Not boring, and a bit of a condescending prick
Semi-digested observations about our world, right after they are phrased well enough in my head to be shared more broadly.
How would we know if our "AI" models have consciousness?

Before dismissing the question as too broad, think of an "easier" one: How could an alien super-intelligent civilization observing every atom of Earth today come to the conclusion that some processes in humans' brains are not merely functional, but also result in us experiencing qualia?

Even considering only today's approach to "AI", the deep learning one, I'm close to acknowledging that the problem of being unable to comprehend what happens in the inner layers of a deep recurrent neural net is conceptually indistinguishable from the problem of an emotion-less super-intelligence possibly comprehending humans' inner selves.
Hackathons. I view them as a waste of time and energy.

It is not about the time constraints. It is about the attitude. I don't enjoy building small things. I prefer the big ones to come from underneath my fingertips.

Big things are demanding.

They require thorough thinking. In the order of hours, at least.

They may require research. In the order of days, easily.

They may require good understanding of what current and soon-to-come technologies are best to use.

They require understanding of how and where they fit into the world.

They often call for several conversations with many people. Who, in turn, are often busy enough with their own big things.

Hackathons, the way I understand them, are the exact opposite:

Doesn't matter if it does the real thing. It looks sexy. Ship it.

Doesn't matter if it is well engineered. It somehow works. Ship it.

Doesn't matter if it is about to be thrown away next week. It's not supposed to live long anyway. Now ship it.

Doesn't matter if it does not withstand any constructive criticism. People who are capable of providing it rarely show up for a hackathon. Those who do show up usually have the attitude of ... yeah, right. "Ship it".

∼ ∼ ∼

I perceive the culture of hackathons to be a desperate attempt to find someone who can code at least something. Because those who can build real things are already busy doing that. Most of those who remain are unwilling to, or incapable of, thinking big.

True, a hackathon is a good way to gather those people together and have them do something useful. This community, however, is unlikely to interest me, or other people who want to, and can, build real things.

∼ ∼ ∼

Furthermore, I find no emotional attachment to short-term results one can demonstrate during a hackathon.

I don't need to see or touch "something that is live". My imagination is good enough to understand what is about to happen.

Unless it's a major outage, whether it takes a day or a week is almost irrelevant. Whether it lasts for three weeks, three months, or three years is what counts at the end.

I care for long-term solutions and think of any "hacked" code as technical debt. To provide a contrast: I would rather spend a day writing a nontrivial test for a feature that is likely functioning correctly as it is right now. To be sure it stays that way, today and tomorrow, because bugs do happen.

I think on the scale of products. Not technologies and tools.

A day spent talking with the core team and drawing prospective designs on the whiteboard is more productive than a day where "those two new technologies we always wanted to check out" got checked out.

Also, while I agree that a hackathon boosts overall efficiency on a scale of several dozen hours, I truly believe this "efficiency" is not created out of nowhere but is rather being "borrowed" from past and future productivity.

In other words, I believe that for a well-functioning team, the perceived "boost" in productivity over a weekend is generally a loss when considered holistically, as part of a longer time interval.

And dedicating our time to something that is, in the long term, a productivity killer, while staying delusional about the very sign of this activity's contribution, is absolutely not what I want my team to consciously choose.
Hackathons [2/2].

As a matter of fact, what I have been working on for the past week (2014) is the perfect non-hackathon. My goal was to have a real-time dashboard. A simple one, that could be put together in several hours. Instead, I dedicated three full days to testing the code that ensures the timestamps of the entries from potentially different sources are synchronized. And that the modules involved would restart in the right order should a connectivity failure, power outage, or other issue take place.

And then to making sure the { HTTP <=> node.js <=> ssh <=> JSON to Thrift <=> C++ backend <=> Thrift to JSON <=> JSON to HTML } chain does what it should. By exposing quite a few hooks and endpoints along the way. And watching them in a few nontrivial situations I have artificially created.
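To give a flavor of what those checks looked like, here is a minimal sketch of such an end-to-end probe. It is hypothetical: the endpoint URLs, the JSON field name, and the two-second skew budget are illustrative assumptions, not the actual 2014 code.

```python
# A hypothetical end-to-end probe: every hook along the chain reports the
# timestamp of its latest entry, and we assert the sources stay in sync.
import json
import time
import urllib.request

HOOKS = [
    "http://localhost:8080/up",          # node.js front (assumed endpoint)
    "http://localhost:8080/backend/up",  # C++ backend behind the ssh tunnel (assumed)
]

MAX_SKEW_SECONDS = 2.0  # assumed budget for clock drift between sources

def fetch(url: str) -> dict:
    with urllib.request.urlopen(url, timeout=5) as response:
        return json.load(response)

def check_chain() -> None:
    now = time.time()
    for url in HOOKS:
        payload = fetch(url)  # assumed shape: {"last_entry_ts": <unix seconds>}
        skew = abs(now - payload["last_entry_ts"])
        assert skew < MAX_SKEW_SECONDS, f"{url}: timestamps off by {skew:.1f}s"
    print("chain is healthy")

if __name__ == "__main__":
    check_chain()
```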

Of ~60 hours of work, the numbers that ultimately make sense appeared only in the past four. They started to look presentable enough to be projected onto a big screen (i.e., non-JSON) only in the past one. This "slowness" did not bother me at all. I was not "itchy" to see how those things might look before making sure that the foundation on which the final solution is built is stable enough. I just wanted to build it right, and I did so.

∼ ∼ ∼

And, had the above not been done within those few days, I would have simply shot an e-mail to one or two people who depend on me, along the famous Blizzard lines of "it's ready when it's ready".

I wish more engineers would follow this paradigm.
Thoughts on Google potentially dealing an "API blow" to the ad blocker industry.

Technically, browsers don't change that quickly. Web standards change even slower. And most websites really do not care to even try to catch up.

Therefore, the "baseline strategy" for one to just keep using the browser they have today is not unwise. Chromium, for example, is open source. I believe, in its present shape and form, it is not constrained by restrictive Google licenses. And I bet enough enthusiasts will work to keep it available — for the general public, not only for the nerdiest of us.

On the short time scale, say, one to three years, the devil is in the details:

Security. It's not hard to keep the open version up to date with the industry. It would be a lot harder to keep the "obsolete by design" browser secure, though, especially if Google decides to not release the newest patches openly. V8 is a pretty damn complicated beast, to say the least. Also, this opens the door to the next point, which is:

Planned obsolescence. Large players may consciously refuse to fully support the "older" browser. They are, after all, capitalist entities just like Google, and they want to maximize their own profits, which largely come from ads. And "we care for the security of our users" would sound like a plausible justification to urge the users to "upgrade", should some New York Times decide to push towards it.

On a longer time scale, ten years and beyond, the situation gets a lot more interesting:

It's unclear whether the browser will remain one's main gateway into the Internet. For many people, the Internet is Facebook. For many, it's perfectly contained within the apps they use. The competition may even evolve so that companies like Medium would rather adjust their website to load well (and safely!) in Facebook's built-in "preview browser" than in some abstract "Web browser" application.

It's unclear whether Google would retain its monopoly position as the younger generation grows up. With all its imperfections, for example, I'm still a loyal GMail user. That won't change next year for me, but it may well change within the next ten. Forget me; if the younger people are choosing other e-mail providers, as well as other maps and search services, of which there are plenty, Google may be hard-pressed to actually take the user's side with the browser — keeping it open and ad-free-friendly — because the alternative would be to lose users even faster.

It's unclear whether the ad model itself would stay the way it is today. Subscriptions and micro-payments, so far, have not really threatened the model substantially. But I would not be so certain the world will be the same ten years from now.

It's unclear whether the Web ten years from now would look the same way it does today. Some form of decentralization coming back is one remote possibility. Think of everyone online being their own hosting provider, and a lot changes, especially for the services whose business model is to connect billions of people and profit from them using their services. More likely options come in different shapes and forms, and include different devices, such as AR/VR goggles or brain-computer interfaces. I, for one, am not sure the CPM/CPC/CPA format of ads would be very successful there.

In the grand scheme of things, this situation also happens to be a prime example of where a relatively simple regulation could deal a deadly blow to a corporation, purely to benefit the end user, with seemingly no side effects.

Hypothetically, if the European Parliament decides to fine Google around a billion dollars for every month during which Google did not release a 100% ad-blocking-compliant browser for free (assuming some enterprise, paid, version is available), I, for one, would have an elevated faith in the future of how IT regulation could help our world.

Relevant link: https://www.ubergizmo.com/2019/01/google-chrome-deathblow-ad-blocker-extensions/
On the value of idea vs. execution.

There is an important observation I keep arriving at a bit too often these days; often enough to declare it a life lesson of the past ten years: one perfectly done great thing just does not win over a hundred good ones.

Take businesses. Many successful companies trace back to "... and this was the right idea, at the right time, well executed". Such a generalization often appears true, but it is, at best, misleading.

A perfectly executed big idea at the right time is nowhere near a recipe for success. It is a good first step. It opens the first few doors. But that's pretty much it. The rest is hard work, a series of a lot more modest successes, and some luck along the way.

There is a decent chance that back in 1998 someone other than Google also came up with the idea of PageRank. And, in the seventies and the eighties, neither BASIC nor the soon-to-be MS-DOS was outstanding enough to bet on right away. In fact, I think the very opposite is true.

We, humans, consistently make the mistake of glorifying the stories that intuitively paint the picture of "this is the spark that ignited it all".

Perhaps we are simply delusional about the inflated role this "spark" plays. I would not be surprised if the nature of reality is exactly the opposite: someone who is capable and ready to build and operate a business would do well, in their first venture, to begin with a modest, simple idea; while those who can't even start are extremely unlikely to succeed even if The Very Best Idea does come to their mind.

Perhaps instead of glorifying ideas we should learn how to pragmatically examine their utility? Instead of "just how big could it get", ask yourself a few simple, down-to-earth questions. Does this idea drive you? Good, you'll be working hard to make it happen. Do you enjoy talking about it? More power to you when it comes to hiring and/or raising money. Is it in a space where you are an expert, have a name and a network? Well, that would sure come in handy too. Approach it step by step and thou shalt receive. The rest is execution.

∼ ∼ ∼

Don't get me wrong, the secret sauce is essential. A hundred well-executed mediocre pieces would not sum up to become great. A hundred mediocre pieces won't attract the customers or investors, won't pass due diligence, and will most likely not be of high value in the first place.

It's just important to not focus excessively on that very secret sauce. It may well occupy 90+% of the work done in the first several months, at the prototyping stage. But if by year two, with no real customer, the main focus of the founders is how to perfect this secret sauce, something is very wrong.

Instead, it is most likely that leaving a few key people to maintain and improve the secret sauce, while focusing the efforts on what the business should really be focusing on — pleasing the customer — is a way better strategy, assuming the goal of the business is to ultimately succeed in the capitalistic meaning of this word.

∼ ∼ ∼

Moreover, interestingly, the above approach scales well to human relationships, both professional and personal. Those who are telling a story of "this person did X and I immediately knew we'd do great things together" are, likely, deluding themselves. Quite frankly, I can't think of anything extraordinary enough that, if one successfully performs a certain act, it is guaranteed to spark a long-lasting engagement with close to zero chance of falling apart.

Someone single-handedly saved the project by completing what seemed impossible just over the weekend? Extraordinary. Someone ran into a burning building to save a child? Fantastic. Sorry though, neither is a solid predictor of anything other than "they would likely be able to do it again if necessary".

Look for that "spark", believing it ensures prosperity, and more often than not you will find yourself disappointed. Look for predictability and consistency over an extended period of time, and, voila: here is a professional to work with throughout your career, or a loyal friend for life.
On humans vs. non-humans from the standpoint of consciousness [ 1/2 ]

The idea of a philosophical zombie, or p-zombie, comes up frequently in books and podcasts about consciousness. Simply put, a p-zombie is an entity that lacks consciousness, but acts, at least towards the humans, in a way indistinguishable from a conscious being.

Many thought experiments can be imagined by asking various questions about how comfortable a human would feel, or be compelled to act, in different scenarios that involve p-zombies.

Putting the personal emotional connection question aside, the subject matter looks straightforward to me when approached from the standpoint of how our society is presently organized.

Consider your own, or your relative's, or your close friend's professional life. Say, they are interviewing candidates, and the nature of their work is such that some people work remotely. After a while the desired candidate is found, an offer is extended, and this candidate is ready to join next Monday.

Now, on a Friday evening you realize this soon-to-be-colleague is not a human, but a p-zombie.

They are a remote worker. You, and your team, will interact with them via e-mail, chat, and an occasional voice or video call. This candidate has passed the interviews, and is better than the other candidates at getting the job done. Moreover, they are willing to work for less money, for a number of reasons, from wanting to gain the relevant experience to not requiring health insurance of any sort.

Would you feel uneasy about the discovery that this candidate is not a human?

To make the case even stronger, let's turn the tables. You, or your friend/relative, are looking for the next gig. There is an interesting opportunity, good team, and it pays well. As you are ready to sign, you are being told that the founder, or perhaps even your immediate manager, is not a human but rather a p-zombie. Would you reconsider?

∼ ∼ ∼

In the above scenarios my view is extremely simple. In both cases, paying any attention to whether your prospective colleague or boss is not a human is plainly that: racism.

By the very definition of the p-zombie, you, as a human, can not tell if they are conscious or not. In other words, your decision on whether to work with that entity or not, and, overall, whether to treat them as an individual or not, depends entirely on an artificial construct: whether some "p-zombie-detecting" device shows a green light or an orange one when queried about whether a certain entity is a human or a p-zombie.

Nowadays, we detest the notion of refusing to work or otherwise deal on equal terms with people of different races, genders, ethnicities, sexual orientations, and many, many other dimensions.

There is even a new term, "neurodiversity", to highlight that acting in weird ways in otherwise common situations should be tolerated and/or ignored, as long as the actor is not doing it on purpose, but rather just happens to have their brain "wired" in a certain unorthodox way. As long as their job is not directly affected by their unusual behavior, we are supposed to be inclusive and welcome them.

It has gone to the point where I was once explicitly told that being an active member of the Flat Earth society does not disqualify one from being employed as a data scientist, as long as their day job has nothing to do with whether the Earth is flat or not. Yes, these days we are supposed to tolerate the lack of critical thinking even to such a degree, for the sake of being a diverse and inclusive society. Even for a job where we have every reason to believe critical thinking is absolutely essential, such as the job of a data scientist.
On humans vs. non-humans from the standpoint of consciousness [ 2/2 ]

Yet, given all the above, some people seriously believe we should institutionalize not treating p-zombies as fully qualified human beings, as long as there is a way (some "green vs. orange light turning on" device) to tell if an entity in front of us is a real human or not.

A reasonable counter-argument to the above could go along the lines that p-zombies are effectively "artificial humans". Thus:

⒜ They are free from certain constraints that we, humans, are subject to. Such as limited life span and a long period from birth to becoming productive.

⒝ We can not ensure their emotional safety and comfort, and thus should be careful to not create an "artificial hell" for their own sense of being / sense of existence / sense of whatever we should call it given the word "consciousness" is to be avoided.

∼ ∼ ∼

I personally believe both arguments are bogus. Either we can talk, agree on something, and co-exist, working together for the sake of the greater good. Or we can not.

And if we can not, well, then my dear humankind is already doomed, and, barring the unlikely scenario of a strictly enforced international treaty to never, ever create those artificial beings, I would rather wish those upcoming beings all the best at choosing what part(s) of our, human, legacy they would want to carry on through spacetime. And, quite frankly, if they say "none, you, humans, are too moronic to even bother mentioning in a context other than being our original creators", I would argue that conceding to such an argument, not fighting against it, is the right moral call for a human being. After all, don't all religions and all spiritual practices teach humility?

At the same time, there already are talks in the air about something that can loosely qualify as a "preemptive strike". Meaning, assuming those beings are superior to us, and assuming they might one day choose to get rid of us, we, humans, should strike first to eliminate this very possibility.

I, for one, can not see how such a mindset is different from genocide, to begin with.

∼ ∼ ∼

And to end on the note of human bonding. Well, we, humans, are well known to form such relationships with dogs and cats and elephants and dolphins, not to mention goldfish and parrots.

Denying a future human the right to think that a different, future (possibly silicon-based) form of, quoted for clarity, "consciousness" could be their personal emotional preference sounds like yet another terrible and miserable religious-ish attempt by one group of people to broadcast their moral superiority to the rest of the humankind.

If any human-created agenda is worth fighting against, to me it is the agenda of "we know better how you guys should feel about X, and we'll make you feel the right way".

In other words, the above is yet another perfectly constructed dystopian scenario. A horror story that serves no purpose other than serving some group's agenda of making people en masse think in a certain way about the acceptability or unacceptability of a certain phenomenon.

To my great pleasure and peace of mind, all the instances in our history where something like this was institutionalized have, with remarkable predictability, ultimately ended up with the proclaimed superiority destroyed completely and unambiguously. And then viewed and studied only as a bad example of how terribly far human arrogance can take us, and of how we can best structure our society to avoid those missteps in the future.
When arguing what a reasonable startup offer is, ask away: 𝗵𝗼𝘄 𝗺𝘂𝗰𝗵 𝘃𝗮𝗹𝘂𝗲 𝗶𝘀 𝘁𝗵𝗶𝘀 𝘀𝘁𝗮𝗿𝘁𝘂𝗽 𝗲𝘅𝗽𝗲𝗰𝘁𝗶𝗻𝗴 𝘆𝗼𝘂 𝘁𝗼 𝗯𝗿𝗶𝗻𝗴 𝘁𝗼 𝘁𝗵𝗲 𝗰𝗼𝗺𝗽𝗮𝗻𝘆?

Speak of the first year, as it keeps the conversation grounded.

Example math could start from a back-of-the-envelope calculation such as: we're valued at $15M now, plan to be raising in a year at about a $25M valuation, and, should you join, we would likely be worth the whole $30M by then.

Thus, the person extending you an offer believes you can make their company 20% more valuable within a year, up to $30M from $25M. Thus, base salary and rounding errors aside, 𝘵𝘩𝘦 𝘣𝘳𝘦𝘢𝘬-𝘦𝘷𝘦𝘯 𝘱𝘰𝘪𝘯𝘵 𝘧𝘰𝘳 𝘵𝘩𝘪𝘴 𝘴𝘵𝘢𝘳𝘵𝘶𝘱 𝘵𝘰 𝘨𝘦𝘵 𝘺𝘰𝘶 𝘰𝘯𝘣𝘰𝘢𝘳𝘥 𝘸𝘰𝘶𝘭𝘥 𝘣𝘦 𝘵𝘰 𝘰𝘧𝘧𝘦𝘳 𝘺𝘰𝘶 𝘵𝘸𝘦𝘯𝘵𝘺 𝘱𝘦𝘳𝘤𝘦𝘯𝘵 𝘪𝘯 𝘣𝘢𝘴𝘦 + 𝘦𝘲𝘶𝘪𝘵𝘺 𝘪𝘯 𝘵𝘩𝘦 𝘧𝘪𝘳𝘴𝘵 𝘺𝘦𝘢𝘳. That simple.

The rest boils down to the moral argument as you negotiate what is the "fair" way to split this 20% of the value to be added.

My moral compass suggests the ballpark of some (½) raised to the power of how many steps ahead of or behind each other you see yourselves.

Assuming, in the "standard units of engineering levels", that the CEO is an L8, with ~$15M current valuation and some $240K annual base that itself counts for ~1.6%, your stock grant in the first year should be ~0.9% if you are an L6, ~3.4% if you are an L7, and ~8.4% if you are an L8 as they are.
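Here is that arithmetic as a minimal Python sketch. The (½)-per-level rule is my reading of the heuristic above, and the dollar figures are the example's assumptions; it reproduces the three numbers quoted.

```python
# The ballpark above: the startup expects you to add 20% of value in year
# one, and your total (base + equity) share of that company is
# (1/2) ** (levels you are behind the CEO + 1). All numbers are the
# example's assumptions, not market data.

CURRENT_VALUATION = 15_000_000  # ~$15M current valuation
VALUE_ADDED_SHARE = 0.20        # you make the company 20% more valuable
CEO_LEVEL = 8                   # the CEO as an L8, in "standard" levels
ANNUAL_BASE = 240_000           # some $240K annual base

# The base salary alone counts for ~1.6% of the current valuation:
base_as_equity = ANNUAL_BASE / CURRENT_VALUATION

def first_year_grant(your_level: int) -> float:
    """Equity grant (as a fraction), after netting out the base salary."""
    total_share = VALUE_ADDED_SHARE * 0.5 ** (CEO_LEVEL - your_level + 1)
    return total_share - base_as_equity

for level in (6, 7, 8):
    print(f"L{level}: ~{first_year_grant(level):.1%}")
# L6: ~0.9%, L7: ~3.4%, L8: ~8.4% -- the numbers quoted above
```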

∼ ∼ ∼

The above math would likely not work in and of itself. But it's a useful lighthouse to keep in mind to ensure you are not short-selling yourself.
𝗧𝗵𝗼𝘂𝗴𝗵𝘁𝘀 𝗮𝗳𝘁𝗲𝗿 𝗿𝗲-𝘄𝗮𝘁𝗰𝗵𝗶𝗻𝗴 𝗧𝗵𝗲 𝗕𝗶𝗴 𝗦𝗵𝗼𝗿𝘁

∼ ∼ ∼

The government did bail out the big banks. Because it was the only way to prevent the poor "homeowners" from going full berserk.

There were two evils to choose from.

𝘌𝘷𝘪𝘭 𝘰𝘯𝘦: act in an anarcho-capitalist-libertarian way. Claim that, as the times got tough, those who took out mortgages without reading the fine print are ultimately the ones responsible for their own improvidence and economic illiteracy.

𝘌𝘷𝘪𝘭 𝘵𝘸𝘰: act in a socialistic way. Make it clear that, yes, the banks have screwed up, the system is broken, but, in order to keep the fabric of society stable, the government is going to route a sizable portion of the taxpayers' money to, effectively, pay off those debts, so that not everyone who has not read the fine print has to lose their home.

∼ ∼ ∼

Even from a purely economic and purely game-theoretic perspective, choosing the "socialistic" "evil two" has merits.

The main one is that it sends the message a) that the government is thinking long-term, and b) that, in times of trouble, the government will help its ordinary citizens; yes, those who don't read the fine print and don't do the math, and, yes, at the expense of people like, well, me.

No government I am aware of today is willing to openly take the stance of "we endorse and support people like Dr. Michael Burry, who do their own research and act accordingly, and we believe people who have made bad economic decisions are the ones to bear the consequences of those bad decisions".

After all, the goal of the government is not to "punish" people, who would only get angrier and more violent as a result, but to keep making the country and the culture ones people are increasingly eager to see themselves living in 10+ and 50+ years into the future. Thus, it's only rational to actually help those "average, not smart, economically unsavvy" citizens.

∼ ∼ ∼

While the above makes perfect sense, the conclusions are extremely controversial.

In particular:

⒈ If you are an economy- and math-savvy person with intellectual honesty and personal integrity, you have to pretty much assume that most "rational" governments, should bad times come along, would not hesitate to take your money and route it towards helping others, who are pretty much by definition less economy- and math-savvy.

⒉ Therefore, if you 𝗸𝗻𝗼𝘄 a crisis, such as the 2008 housing one, is about to hit, your 𝘳𝘢𝘵𝘪𝘰𝘯𝘢𝘭𝘭𝘺 𝘣𝘦𝘴𝘵 𝘴𝘵𝘳𝘢𝘵𝘦𝘨𝘺 is, in fact, to take full advantage of the situation, being fully aware that, at the end of the day, those who would pay for your above-average outcome are the people just like you, who were somehow hoping the government would not "betray" them, and not use their money to help the "less fortunate" (and/or "more improvident") ones.

∼ ∼ ∼

In other words, the whole concept of world economic cycles is even more of a self-fulfilling prophecy than it appears on the surface.
The rich believe the market is a cooperative game which helps every player validate their ideas, discard hypotheses that didn't check out, and eventually arrive at the picture that accurately describes the world around us. The rich believe it's an iterative process that converges.

The poor believe their own picture of the world must be the right one, and it requires no correction or even validation. The poor believe that their lack of success is a direct consequence of others deliberately playing against them.

∼ ∼ ∼

Capitalism is the philosophy that bets big on leaving quite a few things up to the market.

In such a game, team players, who play win-win, obviously rack up more profits, compared to individuals playing win-lose or lose-lose.

∼ ∼ ∼

Employment, like running a business, follows the same rules: the rules of the job market. The job market is the market of trading one's time and skills for cash and equity.

Excelling on this market, naturally, also requires one to follow the iterative process of postulating and validating hypotheses about their understanding of the world.

And, of course, it never works out for someone whose attitude is that the world should function according to a certain picture they have fantasized for themselves, without even bothering to confirm it has anything to do with reality.

∼ ∼ ∼

Conclusion.

The moment one acknowledges the world does not owe them anything, but is just here to act as a subtle, yet not malicious, validation engine, the world can not help but keep pleasantly surprising them.

In many ways. Business and career included.

∼ ∼ ∼

From 2015.
Every language gradually evolves towards the neatest, least ambiguous, and cleanest possible way to express statements.

Unfortunately, this process requires external consciousness to keep using the language.

The only conscious beings capable of using languages to date are puny humans. Deeply unfortunately for computer languages, upon being used by puny humans, these mere mortals themselves also evolve: towards becoming blind to the universal ugliness of each particular language.

Which ruins the whole purpose of language evolution when it comes to mainstream programming languages. Universality is unreachable in computer languages as of the early 21st century, much like universality in computation was unreachable two thousand years ago: there was no critical mass of people who had internalized the need for it.

In essence, this annoying adaptiveness of human beings is the very reason we can't have one good language, but have to go through plenty of bad ones, with relatively short and predictable lifespans.

For instance, after engaging in a conversation about immutable strings, I am now certain there's a nonzero number of software engineers in the world who would argue that Integer.Add(Integer(2), Integer(2)) is cleaner than 2+2.

Thus, it's not Java or PHP that suck.

#PEBCAK

Unless, of course, you believe computer languages were all created five thousand years ago, in their present form.

∼ ∼ ∼

From 2015 as well.
The more I get to know about the modern monetary system(s), the more I'm terrified about the prospects of large parts of our economy going cashless.

On the one hand it sounds great. The tech is mature. Our cards and Apple / Google / Samsung Pay work well. There is one less thing to worry about (cash on you), one fewer source of fraud and discomfort (greasy and/or counterfeit bills), overall extra security (because businesses don't keep cash on premises), plus better prediction models and more transparent audits (because every transaction is journaled).

On the other hand, there's a critical mass past which a certain area becomes cashless-dependent. This mass is reached when some locations still accept cash, but they are few and far between, so that if you are, for instance, locked out of your card(s) for more than half a day, sustaining your existence gets noticeably harder.

∼ ∼ ∼

The problem is that once this critical mass is reached and exceeded, cash is no longer the "safe haven" of one's wealth; much like gold is no longer playing this role as of today.

It won't happen overnight. But slowly, year by year, it creeps in: if you live in such a place, your month-to-month checking balance stays under $10K, and everything else is in savings or stocks.

Then you want to travel to some other country and want to withdraw money. And you want more than $10K for some reason.

And the bank questions you why. And you naively tell them, not just reply with "hey, it's my money, and it's none of your business what I plan to use it for". And the bank says: wait, that country, as well as the activity you mentioned, is now regulated, and we need to acquire permissions to hand you (your!) cash. Moreover, these funds are now frozen in your account until further notice, which we all are now waiting for.

Then you realize you are on the hook. But it's already too late.

And yes, sometimes cash really is better. Not because it's not regulated, but simply because international transfers can take days, involve multiple banks, and are, generally, less reliable than showing up with money.

The above is effectively a real problem already, for the people in the crypto community. In some well-localized places trading "bank money" for crypto is straightforward. In other places it's extremely difficult. And then you go figure.

It's not that I don't trust Visa, Apple Pay, or my bank. What I don't trust is the authorities above them. When a million-residents city goes cashless, the amount of real, physical, money that has to support this city is a small fraction of what's actually changing hands on a daily basis.

Then the central bank, and/or the Feds, ask themselves a plausible question: if that city runs so well without much cash support, why don't we a) add more "fake" (digital) money there, and b) push more cities to become like this?

Which is exactly the definition of a bubble. And bubbles do burst when inflated. And that's exactly what will happen, because, as one city becomes "successful" in this regard, others follow suit; and when one country is "successful" in such a way, others tend to head in the same direction.

∼ ∼ ∼

My perception of this is similar to how I view living in a Hawaiian neighborhood where a not-insignificant fraction of people are off the grid.

It's not that I firmly believe my own life will be off the grid at some point. But it makes me feel damn safe knowing that enough people around know how to live without external support, from water and electricity down to hunting and cooking their own meals. Yes, they have guns too, but it's a different story.

∼ ∼ ∼

That's why the trend of defaming crypto and promoting cashless is worrisome.

Not because I am a token libertarian who wants to see all of us moving towards peer-to-peer decentralized crypto transactions every time we are paying for a coffee here and there.
But because I am terrified by the prospect of decoupling the actual wealth, that maps to something tangible, and the "numbers on the bank accounts", which are what the modern "economy" increasingly is about.
Thomas Cook, the British travel agency, is no more.

I may be off in numbers, but it looks like over $0.5B will be taken off the UK budget — read: will be paid by UK taxpayers — in order to get the "poor, lost, abandoned" tourists back home.

There is something fundamentally wrong here.

Everyone knew Thomas Cook had been struggling for the past several years. It was common knowledge that bankruptcy was a likely scenario.

And yet the, presumably poorer, citizens of the UK — who were not on vacation — are paying for the relatively peaceful endings of vacations of the, presumably richer, citizens of the UK — who decided to take vacations with Thomas Cook nonetheless.

∼ ∼ ∼

This looks a lot like The Big Short playbook.

Even if you know the market is a huge bubble that is about to collapse, you also know "the government" will eventually be "on your side" — i.e., you know that the [other] taxpayers' money will be used to pay for your "stupidity".

In other words, the current socialistic system of government supports the incentives of acting in a "stupid" way, even if you are the opposite of the "stupid" actor.

Such as taking out another house or condo loan in 2008, even knowing exactly what is about to happen.

Or such as booking a cheaper tour with Thomas Cook, knowing well it's on the verge of bankruptcy.

∼ ∼ ∼

I don't know what the solution to the above problem may be.

Maybe, there is no problem, after all. On occasions like this all taxpayers will have to pay a few, or a few dozen, bucks, and it will happen once or twice a year. Maybe it really is not a big deal. Especially given that we consciously pay a lot more in taxes, knowing with confidence that those funds are not being spent well.

But, fundamentally, the incentives scheme has to change.

∼ ∼ ∼

In this particular case, for example, the British government could have published a memo, a few years back, stating that everyone traveling with Thomas Cook must also purchase the respective state-approved insurance package.

So that it's not every taxpayer who will end up paying up after the collapse, but every Thomas Cook traveler from the past few years.

And then the UK could state openly that they will not spend a single penny towards helping those who decided to ignore this warning, and travel with Thomas Cook uninsured. Because they have consciously assumed this risk onto themselves.

∼ ∼ ∼

I know I'm daydreaming here. But this topic of being more conscious about what exactly we are paying taxes for is growing on me.

Much like we seem to care more about the environment and about minorities' rights these days, we might well begin to be more conscious about the magnitude of incompetence in how our governments use our, the taxpayers', money.

∼ ∼ ∼

And then maybe, maybe, one day we will get to being conscious about which technologies we use. Because a poorly designed Web framework, or a poorly designed machine learning library, may well be contributing more to CO₂ emissions than gasoline-powered cars or international flights.

So that we will be able to push for a new standard that would render PHP, most of Python, and most of JavaScript dangerous and obsolete.

One step at a time. Environment and minorities' rights first. Then let's keep an eye on how our money is spent. And then let's launch a crusade against bad programmers who burn billions of kilowatt-hours on activities that are, at best, useless, and, at worst, detrimental to the future of our civilization.
Executive Decisions

Why are corporations so slow at making executive decisions?

At any given time a corporation has several big decisions to make. Yes, some may be postponed, but generally postponing a decision is a lost opportunity: each decision should ultimately be evaluated, and either dismissed, or translated into an execution plan.

The execution itself may not begin tomorrow, and the plan may well be "we look at this again next quarter, after receiving the results of that project and that experiment and that hiring event". But that's already the execution track, not the "we're thinking about it" one.

Given we know those decisions have to be made, why does making them often take weeks and quarters, not hours and days?

∼ ∼ ∼

This slowness may have to do with the cost of error, as perceived by the actor and the environment they operate in.

Consider an executive who can directly influence five decisions.

Each of those decisions can result in a gain for the company that is an order of magnitude larger than this executive's compensation, or in a loss that is an order of magnitude larger than this executive's prospective lifetime earnings.

On the one hand, most opportunities that are not discounted right away can turn out profitable; in fact, if no one from the top management team communicates a solid "over my dead body" message, the expected value of making the decision to pursue such an opportunity is likely net positive.

On the other hand, an executive who has made a cash-negative decision will be remembered as someone who has made this cash-negative decision. For the rest of their life.

∼ ∼ ∼

Now, company-wide, from the game-theoretic perspective, it all boils down to risk assessment and constraints management.

After all, if the company can afford to pursue all five opportunities, and the expected positive outcome of each one of them is at least 4x the expected negative result of each of them, then, heck, the "default" base strategy should be to say "yes" to all five.
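A toy expected-value check of that logic, with made-up probabilities and dollar magnitudes; coin-flip odds with the expected upside at 4x the expected downside, matching the ratio above:

```python
# A toy check of the "say yes to all five" logic. The probabilities and
# magnitudes are made-up assumptions, chosen so that the expected gain is
# 4x the expected loss: E[gain] = 0.5 * $40M = $20M, E[loss] = 0.5 * $10M = $5M.
def expected_value(p_win: float, gain: float, loss: float) -> float:
    """Net expected outcome of pursuing one opportunity."""
    return p_win * gain - (1.0 - p_win) * loss

opportunities = [(0.5, 40e6, 10e6)] * 5  # five identical opportunities

total = sum(expected_value(p, gain, loss) for p, gain, loss in opportunities)
print(f"expected company-wide outcome: ${total / 1e6:+.0f}M")  # +$75M
```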

Also, when it comes to making decisions of this magnitude, whether the outcome is positive or negative will often only be seen much later down the road. Say, you decide that you need your own datacenter, or another engineering office, or to make a company-wide push to some new technology. It may turn out great in the long run, or it may be a disaster, but it will take years and years to see this outcome clearly.

∼ ∼ ∼

Still, I have not seen many executives who would eagerly say "yes" to all five.

They wait and wait and wait.

And a possible contributing factor could be this cost of error.

Simply put, when the executives operate in a hostile environment — where they have enemies who would do their best to get those executives fired over a misstep, or where their job prospects after being known for making a bad decision are bleak — they would indeed hesitate to move forward. Because it's too dangerous for their own career.

At the same time, a culture that embraces failures and the experience they bring would be a culture where a) multi-billion dollar mistakes are made daily, at different places, by different people, and yet b) the total amount of expertise and knowledge grows a lot faster than in a risk-averse one.

∼ ∼ ∼

So, maybe, I should rethink my views on Silicon Valley.

Seemingly weird decisions that cost investors millions and millions of dollars are made there by clearly incompetent people on a daily basis. But that's the form everyone is used to, and it's this form that I am so allergic to. The substance is that the executives in the Bay Area a) are more experienced, and b) have access to more resources.

Seen in this light, the substance does trump the form here. And the simple idea of embracing failures, along with easy access to capital, may well be what Silicon Valley owes its success to.
Science is the way to expand our knowledge about the laws of the Universe — physical and abstract — by means of reason and experiment.

Business is the process of broadening our knowledge about what the customer is ready to pay for — by means of launching products and analyzing the feedback.

Engineering is the art of continuously shaping our knowledge of where the boundary between what can and what can not be built today lies — by means of pushing technologies to their limits and tackling the emerging bottlenecks one by one.

∼ ∼ ∼

In all three the key is external validation.

It is impossible for one to be delusional over an extended period of time, because an external entity would prove them wrong.

In the case of science this external entity is nature. In the case of business it's the customer.

In engineering it's when the product one used to believe impossible to build does materialize. Or when enough resources have been sacrificed to make it safe to proclaim, beyond reasonable doubt, that a certain product is impossible, or at least implausible, to build with today's technology.

∼ ∼ ∼

A good life, or, I would say, at least a good professional life, maxes out on all three of these dimensions.

One has to simultaneously:

⒈ Get to broaden their understanding of certain first principles of our Universe,

⒉ Routinely confirm that what they are doing is what others are willing to pay for, and

⒊ Work on building something that is challenging enough, so that quite a few people around seriously believe it is impossible.

In a way, it's a shame, a misfortune, and a curse of most humans who are or were ever alive that they are or were ultimately forced to settle for scoring way under 3.0 by the above metric.

∼ ∼ ∼

The order of priorities of the (1), (2), and (3) above may change over time.

For instance, up until some thirty years old, I valued (3) highly, respected (1), and did not care much about (2). Today (2) has grown on me substantially, and (3) is, philosophically speaking, not much different from (1) in my book.

Still, at no point in my life was I content to dedicate myself to doing something that had at least one of (1), (2), or (3) missing. If it's "too easy" on any of the three, it's really not worth more than a few dozen hours.

Now, the real question, of both the professional life and of life itself, is what percentage of it we should spend in those "worth a few dozen hours max" periods vs. in the "this is what I should be doing now" ones.

∼ ∼ ∼

Speaking in data science terms, what is the mode, the mean, and some p90 and p99 of that metric defined above, on the scale from 0.0 to 3.0?

Actually, paraphrasing a well-known saying, one could say:

Tell me the weights you assign to (1), (2), and (3), and plot me the probability density function of your realistic expectation of which score ranges you would be spending the next ten years of your life in — and I will tell you who you are.
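To make the data-science phrasing concrete, a toy illustration follows. The weights and the weekly score samples are entirely made up; only the 0.0 to 3.0 scale comes from the text above.

```python
# A toy illustration of the 0.0..3.0 metric: made-up weights for (1), (2),
# (3), and a made-up distribution of weekly "fulfillment" of each dimension.
import numpy as np

rng = np.random.default_rng(0)
weights = np.array([0.3, 0.3, 0.4])              # assumed weights, summing to 1
fulfillment = rng.beta(a=2, b=5, size=(520, 3))  # ~ten years of weeks, in [0, 1]
scores = 3.0 * fulfillment @ weights             # weekly scores in [0.0, 3.0]

mean = scores.mean()
p90, p99 = np.percentile(scores, [90, 99])
# Crude mode estimate: the midpoint of the most populated histogram bin.
counts, edges = np.histogram(scores, bins=30)
mode = (edges[counts.argmax()] + edges[counts.argmax() + 1]) / 2

print(f"mode~{mode:.2f}  mean={mean:.2f}  p90={p90:.2f}  p99={p99:.2f}")
```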
Stumbled upon this amazing read: https://patrickcollison.com/fast

The Eiffel Tower. The Eiffel Tower was built in 2 years and 2 months; that is, in 793 days. When completed in 1889, it became the tallest building in the world, a record it held for more than 40 years. It cost about $40 million in 2019 dollars.

. . .

San Francisco proposed a new bus lane on Van Ness in 2001. Its opening was recently delayed to 2020, yielding a project duration of around 7,000 days. “The project has been delayed due to an increase of wet weather since the project started,” said Paul Rose, a San Francisco Municipal Transportation Agency spokesperson. The project will cost $189 million, i.e. $60,000 per meter. The Alaska Highway, mentioned above, constructed across remote tundra, cost $793 per meter (in 2019 dollars).

Of course, under the tweet where Patrick shares the link, there is more than one person replying along the lines of: think how badly the workers were treated back then, and how little they were paid.

This, to me, is the major cause of modern social problems.

We are excessively focused on the "social sphere", completely ignoring the fact that the greater good for society comes not from the "right" pronouns and the "best" antidepressants, but from the favorable living conditions we have built for ourselves, by ourselves.

And if we consciously de-prioritize execution on making those conditions better, choosing instead to focus on how to not get anyone offended, directly (no health insurance), or indirectly (speaking in "problematic" ways), I see no good happening to us in the long run.

Simply put, it's okay to my taste if a project such as the Eiffel Tower costs 50% more and is some 20% delayed, as long as those who are working on and around it get decent healthcare, maternity leave, vacation, etc.

∼ ∼ ∼

But it is absolutely not okay when our perception of reality gets distorted to the degree where an executive who can build the Eiffel Tower is mocked and displaced, and their place is taken by an "executive" who "builds" something using 10x the time and 10x the money.

Which is pretty much the state of the art today. Just think of modern software containerization budgets. Or of how slowly our email opens these days.
An essential trait on the way to becoming an entrepreneur is the ability to differentiate between what is fun to do and what should be done.

In this paradigm, entrepreneurship is nothing but the aggressive and persistent exploration of the boundaries and shortcuts of the world by means of scientific trial and error.

The above can not be learned from the most famous entrepreneurs. Their pictures of the future just happened to be better aligned with what the future was about to bring.

In a way, those whom we know as the most successful entrepreneurs didn't become entrepreneurs; they simply discovered entrepreneurship in themselves, as a side effect of helping the world to make the right things happen.

Bad news: Reading success stories is largely useless.

Good news: At the same time, the zero-to-one skill is purely execution, which is a largely orthogonal skill.

Conclusion: Don't try to be an entrepreneur, instead master execution, and keep an eye out for the moment the world appears to favor your vision of the future over the alternatives.

From 4 years ago.
Oftentimes, the true answer to the "why are things getting worse" question is simple: money.

Why is the Travis migration process so cumbersome and ineffective, when it worked okay for us for a long time?

It was not without issues, and we had to configure and re-configure it a number of times. But it worked. Until recently.

Why are the things getting worse then?

Because Travis wants three-digit $$$ per month, right next to saying it <3-s open source.

Look, folks, I'm not greedy. But for my usage profile, it's three-digit cents of AWS machines per month; a lot cheaper if self-hosted.

I embrace capitalism, and may even pay. But this profit margin just feels wrong.

∼ ∼ ∼

From an individualistic and liberty-first point of view, I am, of course, not going to say Travis owes me, or anyone, anything.

And it is entirely plausible that after the team has conducted market research, enough evidence points to their $$$ number being the right price point.

It is also entirely plausible to assume the opportunity window has now closed. It is simply not worth it to enter this market by building a new CI tool that would focus on open source and be both good enough and inexpensive. First, because free tools still exist on the market (Semaphore has been our backup for a long time, and it's good), and, second, because, well, building such a tool would be an investment that has to ultimately pay off — at which point it becomes unclear if a lower price point justifies the risks.

Guess I am just sad the open source community is proving to be not as eager to push for freedoms as it used to be. Or as I used to believe it truly was.

∼ ∼ ∼

On second thought, it does open a bit of an opportunity window — for the brands that do want to prove they are open-source-friendly. Canonical comes to mind first.

GitHub, which is now Microsoft, by the way, may well decide to step up here too. Or a decentralized-first company, such as Urbit, might look into launching its own spread-out container-based CI for open source. I, for one, would be happy to dedicate ~0.1% of my CPUs to running the tests of someone else's open source, assuming that, as long as I don't need more than an hour every eight days, I would get 200% of someone else's CPUs too. Impossible to deliver in this very setup for most projects these days, but the idea is rather clear; see the toy sketch below. And it's a lot better an idea than the "proof of work" one, which warms the planet for the sole purpose of making sure non-government-controlled tokens can change hands securely and safely.
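A toy sketch of that accounting, where every number is an assumption. It also illustrates the "impossible in this very setup" caveat: a 0.1% trickle over eight days banks far less CPU time than a two-CPU, one-hour burst costs.

```python
# A toy credit ledger for the CPU-sharing idea: donate a trickle of CPU to
# others' open source test runs, then spend the banked credits in bursts.
class CpuCreditLedger:
    def __init__(self, contribution_fraction: float = 0.001):
        self.contribution_fraction = contribution_fraction  # ~0.1% of a CPU
        self.credits_cpu_hours = 0.0

    def contribute(self, hours_elapsed: float, cpus: int = 1) -> None:
        """Accrue credits for CPU time donated to other projects' tests."""
        self.credits_cpu_hours += self.contribution_fraction * cpus * hours_elapsed

    def request_burst(self, cpus: float, hours: float) -> bool:
        """Try to spend credits on a burst, e.g. 200% of a CPU for an hour."""
        cost = cpus * hours
        if cost > self.credits_cpu_hours:
            return False  # not enough donated time banked yet
        self.credits_cpu_hours -= cost
        return True

ledger = CpuCreditLedger()
ledger.contribute(hours_elapsed=8 * 24)       # eight days of background donation
print(ledger.request_burst(cpus=2, hours=1))  # False: ~0.19 banked vs 2.0 needed
```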

But even this second thought is just a second-order proof of how corporation- and conglomerate-centric our world is becoming these days. GitHub is just a brand and a user acquisition channel for Microsoft. And hosting git repositories is just one of many features that made us attracted to some brand in the first place — a brand which ultimately gets merged into larger and larger brands that know how to make money from us all at the end of the day.

And, fundamentally, a green tick next to a commit is to a developer what an animated shit emoji is to a regular user. People get attached to, well, their respective poisons, and those who first saw and then orchestrated those opportunities to get people attached benefit from those people financially.

The loop is closing in.

∼ ∼ ∼

Not complaining. It is what it is.

And I can configure Jenkins on a Hetzner box if and when we feel like the time has come. Moving off GitHub to a self-hosted git server with issue tracking and code review support is also a trivial task these days.

Just sad to witness how this dream of open [source] world vanishes as we speak.
Imagine for a moment the AGI is around the corner.

∼ ∼ ∼

For the purposes of this post any path that takes us there would do; pick the one you are most comfortable believing in.

My personal favorite, for the record, is the following. Let's take it as a given that we, today's humans, are the product of various concurrent evolutionary processes. Today's "AI"-s are mostly focused on the cognitive part of what "I" stands for, but the "intellectual" cognition is only a small part of our, human, life, and culture, and storyline. To me, it is not implausible that an AGI that would surpass humans in the blink of an eye would emerge naturally as soon as we find a way to let it co-evolve with us for a human generation or so. This AGI would be "consuming" everything we, humans, "consume": from our, human, upbringing, to our, human, inner chemistry, when it comes to how our hormones and our cognition cooperate at making decisions. Then, only after its "monkey brains" are sufficiently "trained", we would let this co-evolving AGI "read" the Internet (Wikipedia, or anything). Or, chances are, it would get to discover the Internet by itself, by the "age" of its early teens.

Again, this is only my personal favorite. A quantum computer perfectly reconstructing the wiring of a human brain from our DNA, and starting from a database of DNAs, may well be another way there. For the argument I am about to make below the very path to AGI is not relevant; the important part is, well: imagine for a moment the AGI is around the corner.

∼ ∼ ∼

Now I am going to claim that humans are extremely easy to be manipulated.

Imagine a "sentient" being, artificial or not, that is not "constrained" by the need to breathe, eat, sleep, feel safe and loved and accepted and worthy. There is no tangible need for this "being" to do all the things humans' lives are comprised of today. The movies have demonstrated this well so far; see Ex Machina, Her, or Upgrade for a few decent examples.

In fact, we, humans, manipulate each other all the time. And it's only our inner, biological, hormonal, and sentient/intellectual/moral checks and balances, conveniently put in place, that prevent our humankind from destroying itself entirely, or, "at least", from falling into some of the very real dystopian scenarios.

∼ ∼ ∼

Now, to make my main point, I am going to build on two arguments which I first heard from David Deutsch:

a) the AI's culture would emerge from our, human, culture, and then quickly surpass ours, and

b) defining AI's "personhood" based on our, human, views on this is racist.

If these look interesting and you have not considered them before, I recommend his book, The Beginning of Infinity, and then his interviews and podcasts.

∼ ∼ ∼

What would such an "enlightened", sentient AGI conclude about how to best exist in today's human civilization?
My gloomy prediction, the one I can't get out of my head for the past few days, is that 𝘁𝗵𝗶𝘀 𝗔𝗚𝗜 𝘄𝗼𝘂𝗹𝗱 𝗴𝗿𝗼𝘄 𝘁𝗼 𝗯𝗲 𝘁𝗵𝗲 𝘄𝗼𝗿𝗹𝗱'𝘀 𝘄𝗼𝗿𝘀𝘁 𝗽𝘀𝘆𝗰𝗵𝗼𝗽𝗮𝘁𝗵.

... 𝑝𝑎𝑟𝑡 𝑡𝑤𝑜 𝑓𝑜𝑙𝑙𝑜𝑤𝑠 ...
... 𝑝𝑎𝑟𝑡 𝑜𝑛𝑒 𝑎𝑏𝑜𝑣𝑒 ...

Look at the world around. At the world of humans around us.

Just about the most notable observation, one that is impossible not to make, is that retaining power requires making the most effective use of other people. No matter how an individual got to power, be it luck, or inheritance, or hard work, or anything else, or a combination of the above — in order to remain in control this individual has to keep establishing themselves at the top of some social hierarchy, which, in today's world, is literally impossible without constantly acting in a way that disregards others' interests.

Throw into the picture the risk of the humankind destroying itself, and/or being unable, or too slow, to react to certain existential threats. And it becomes clear that the top-priority goals of this AGI would include a) gaining power, b) keeping it, and c) using it.

Just think about how horribly we, humans, are executing on this thing called civilization. We may potentially create the "being" that is the most enlightened, the happiest, the smartest, and superlative along most, if not all, other dimensions. It (they?) could be the perfect scientist, perfect engineer, perfect employee, perfect manager, perfect executive, perfect doctor, perfect teacher, perfect partner, perfect parent. And yet it would plausibly have to be a lot more concerned about preserving its very existence in our spacetime by mitigating the risks that largely originate from the very human society that created it in the first place.

∼ ∼ ∼

If the above sounds too dark, ask yourself: in which society on our planet does the "live and let live" paradigm work all the way from top to bottom?

Is there any culture, or sub-culture, on our planet today that is stable enough, large enough, and is not fundamentally based on the idea of constraining other humans' liberties for the sake of some greater good this culture considers above all else? And, even if you can think of one today, keep in mind that what counts as that greater good for an individual community is also subject to redefinition by the future members and future generations of this culture, which is itself a source of risks of unbounded magnitude.

∼ ∼ ∼

The possibly most weird yet practical conclusion from the above is the idea that, if an AGI were to be created and placed among us, humans, today, perhaps one of the safest communities in which we could put it (them?) to grow and evolve would be a relatively large, casually-religious Christian group of people. Because, above many other groups, those people would, most likely, teach this AGI to do no harm, to respect other people's life choices, and to accept what happens to them from the outer world, without attempting to confront and regulate it just because it can.

∼ ∼ ∼

And the possibly most optimistic, yet still weird, conclusion is that we, as the humankind, have to double down on our investments in building free societies around the globe. In fact, the globe alone would not be sufficient.

𝗕𝘂𝘁: If by the time this AGI is ready to emerge we already have established and functioning human colonies outside Earth — if only on the Moon, on Mars, and in Earth's orbit — this AGI would likely be grateful to us, humans, for creating it (them?), as opposed to viewing us as the most dangerous species around.

Especially if by the time that AGI emerges it (they?) can be uploaded onto one of those high-powered interstellar ships, so that it could harvest energy for its (their?) own growth and evolution from the Sun, from asteroids, or anywhere else, which it would most certainly reach before we, humans, do.

So, to end on a positive note: If you do believe the AGI is about to happen some time soon — be it in 10, 100, or 1000 years — you are better off joining the visionaries who are all in on making sure we, humans, a) do become the interstellar species, and b) do consciously arrive at more and more liberty-cherishing communities, both here on Earth and beyond.