Not boring, and a bit of a condescending prick
Semi-digested observations about our world, right after they are phrased well enough in my head to be shared more broadly.
How soon do you think the deep learning bubble will burst? [2/2]

In the grand scheme of things, though, quite frankly, deep learning has yet to leave a noticeable dent.

To get back to the bubble subject: I don’t even think there’s a bubble in neural nets these days, as the market is still not saturated. If you have a teenage kid who’s into computer science, pushing them towards learning how to operate neural nets is a relatively safe bet: the industry still needs experts in the field, and there are not enough qualified people. Yours truly is a good example here: I have never trained a deep network in my life, but my academic/mathematical knowledge of how they work, multiplied by my hands-on experience with “non-deep” machine learning, keeps proving itself useful to hardcore deep learning teams over and over again, simply because the problems those teams face are still more a matter of hands-on engineering and common sense than of rocket-science mathematics or computer science.

Analogies are a terrible way of explaining things, but here’s one more: servicing cars. On the one hand, with electric vehicles taking over, jobs like fixing the generator or diagnosing the gearbox are going to go extinct soon. On the other hand, those jobs have been around for well over fifty years as far as I can tell, long enough that generations of people successfully bet their careers on servicing cars.

If anything, becoming thoroughly proficient with neural nets, given that we are still in the early stages of their adoption (think of all those custom-made chips built specifically to train and/or apply certain nets!), will likely remain a lucrative career for some 10 to 15 years to come.

I would be among the first to argue that if and when we get close to AGI, this particular knowledge will likely become obsolete. But that’s not going to happen in the next ten years, and, very likely, not in the next twenty.

∼ ∼ ∼

So, with all my lack of interest in deep learning (so far), and while admitting wholeheartedly that the field is hyped beyond comprehension, I am unable to believe deep learning is in a bubble right now.

If you like it, dive in, and have fun!
Even though computers are getting faster, why do they barely keep up with most software written today? What makes software bulkier and slower these days, and how can we fix it?

Never attribute to malice that which is adequately explained by stupidity.

Not by the stupidity of software developers, or of the managers of those developers, or of the executives of those managers. Don’t underestimate the stupidity of the average user.

And then remember we live in the world of capitalism, so the user is king.

See where I’m going with this?

People will buy new pieces of hardware. People will install new flashy apps. People will pay for those in-app merch. People will click on ads in those apps.

And people, evidently, don’t mind slow and bulky software. People want software that does something they want or need it to do, and they are comfortable waiting several seconds at a time. Every. Single. Fucking. Time.

Yes, some people are not like this. But we are the minority. We use terminals and vi to keep things fast. We like it.

But we are not the customer who is paying.

Do you want to know what an average customer looks like? Look no further. Here it is. Quod erat demonstrandum.
What "unpopular work advice" actually helped you in your career? Why?

Good question. After giving it a long thought: it’s not really advice, but rather something that keeps proving useful over time.

Don’t push it.

When the task is creative, or otherwise requires some state of mind, there’s a good chance that trying to get it done “against the mood” would only make things worse.

Example tasks that are a perfect way to keep oneself busy when the sparks of creativity are out of fuel:

∙ File an expense report (yes, those suck).
∙ Book flights or hotels.
∙ Re-read and polish some instructions / documentation.
∙ Fix that broken test.
∙ Just go through the documents / manuals / drafts you said you would go through, even if it’s low-priority.
∙ Clean up some e-mail or task trackers or old bugs.
∙ Take a walk outside (not in the heat though, sorry, although I love working from Thailand too).

Example tasks on which I, personally, would most likely not be productive enough; to the degree that putting them aside even for several days would still be a net win once I get to them with the right peace of mind:

∙ Outline some nontrivial ideas in a presentable way.
∙ Implement a creative piece of an algorithm.
∙ Have an important conversation, be it technical or managerial.
∙ Test some machine learning idea for some corner cases to isolate potential issues.
∙ Schedule meetings or conversations for a trip whose duration and time frame I have yet to decide.

∼ ∼ ∼

Another, and perhaps cleaner, way to look at the above is through the lens of the expectations of others that we are trying to fulfill, and through the lens of our own ego that we feed by getting work things done. Upon closer examination, neither of the two is a good motivator.

Others expecting us to deliver by a certain date and time are, in most cases, just projecting their own perception of the pace they would like to see us perform at. If that pace is uncomfortable, it’s virtually always better to indicate it up front rather than to try to “earn” their “trust”. And this is one of the aspects where a conversation could make things slightly worse, while just saying “it’ll be done by next week [implying: sorry, not today]”, and actually getting it done by next week, is a path to a healthier business relationship.

And one’s own ego, as the pride and satisfaction we enjoy after getting things done, is, while generally a very positive emotion to experience, just not a good enough carrot to make oneself work towards. It’s the end, not the means, at least in my book these days.

∼ ∼ ∼

Obviously, if one senses the urge to take a break and browse Instagram or Dilbert for several hours, it’s a lot better to make oneself focus on the tasks from the first bullet-pointed list above.

The best advice that helps me quite a bit, while it can be considered unpopular, is to not even try to get to the tasks from the second bullet-pointed list above until I am comfortably in the mood to get to them.

Do virtually anything else, as long as it is even mildly productive, and/or as long as it contributes to getting myself back into the flow of creativity in some reasonable time window.

Yes, this time window can be several days, or even “after this trip/conference takes place” or “after that conversation scheduled for next week is held”. Just don’t push it when the moment is not right.

∼ ∼ ∼

The above, certainly, applies to creative parts of jobs only. If you’re a pilot, or a dentist, or a bartender, then, chances are, your priorities are different.

But, trust me: much like you may sometimes dream of changing your career into something more creative, quite a few creative people around quite often dream of having the somewhat repetitive yet rewarding job that you have now, as opposed to being effectively at the mercy of their immediate mood and depth of inspiration.
Why aren't full day programming interviews structured around a single real world project, each round focusing on one aspect of the project (system design, interface design, algorithm design, programming, testing)?

Here’s my now-controversial, albeit perfectly reasonable, view on this.

Because, despite what everyone keeps saying, programming interviews are not broken.

Let’s approach programming interviews rationally, as a case of game-theory-based negotiations, and otherwise as a multivariate optimization problem. It’s simple.

∼ ∼ ∼

The employer (corporation, startup, school, etc.) wants to hire an engineer (intern, student, partner, co-founder) in order to accomplish certain goals.

Speaking meritocratically (cisgender white straight male privilege and overall a prime example of problematic patriarchy, I know, I know), the employer primarily cares about three aspects of such a hire:

Increasing the value of their venture thanks to the hire (be it revenue, valuation, connections, etc.),

Minimizing the risks associated with the hire (be it the risk of non-delivery, the risk of the team possibly not enjoying working with this person, or the reputational risks, including the risk of being mocked by SJWs or otherwise accused of something non-meritocratic by random morons), and

Making this hire as efficiently as possible (because opportunity cost is real, and, for most positions, a good enough candidate tomorrow is far better than a perfect one a quarter later).

∼ ∼ ∼

Now, I’m about to throw in several assumptions. They reflect my experience of being an engineer and working with engineers for twenty years; they may, of course, change later in my life, but I’d bet they won’t be altered by 180 degrees.

The assumptions are:

Talented people tend to be talented in many ways.

In other words, intelligence, both fluid and crystallized, and both purely analytical and social, exists and can be measured, or at least approximated.

And one full day of various conversations with different people gives the employer some 95+% of the information they need to make the hiring decision; 99+% once the references are checked.

The cost of a bad hire is often not as high as the cost of no hire.

True, with modern SJW trends it’s dangerous to hire a minority (or just a woman) knowing well that the chance they will not fit in and may have to be let go some half a year down the road is above some point one percent: the second-order, non-meritocratic risk of having to deal with the BS consequences of such an unpopular decision is far greater than it has ever been.

Still, the skill set and work ethic required to get a certain job done are the substance, which, in a well-functioning organization, is always more essential than the form of keeping the “optics” “clean”. So healthy organizations are simply open to taking those risks.

The cost of spending too much time on a possibly underqualified candidate is just not worth it.

The trend of doing a “micro-internship” for a day or two, working alongside the candidate to see how well they do, technically and as part of the team, is indeed emerging these days. I have done two of those as a candidate, and a few as the hiring person, and I’ve enjoyed them greatly.

Still, in my view, it’s clearly a value-subtractor for the company to proceed to this stage with a candidate unless the chances of wanting to hire them are some 98% at least.

And, with the desire to get the candidate already this high, spending a day working with them, in a limbo state between evaluating them more deeply and selling them on the idea of joining the company, just means leaving value on the table: value that could be realized, in different ways, had the candidate already been given a clear “yes” or a clear “no”.

∼ ∼ ∼

That’s about it. Very simple, if you ask me.
The human brain operates with models. Yes, all of them are wrong, but some prove to be extremely useful.

Different authors prefer different terms for those models: hypothesis, imagination, or delusion are used often. Regardless of what we call them, the very process of human communication — and human thinking, as they co-evolved — is the process of being able to:

⒈ Generate new models,
⒉ Validate and discard the "more wrong" ones, and
⒊ Explain existing models to other humans.

The last point has two distinct sub-points. The goals of communication are:

⒊⒜ To distill the model in the very speaker's head, and
⒊⒝ To make more people aware of this model (so that good models "evolve" within more brains, I'll probably write more about this later).

∼ ∼ ∼

I believe it only takes half a step from the above to solve the AGI and explainable-AI problems: the machine should include, in its fitting cost function, the ease with which it can "train humans".

That's it. Yes, so straightforward.

Here is an example implementation that looks plausible to me (a toy code sketch follows the list):

∙ Take a simple domain of problems. Say, geometry as Euclid saw it, or basic computer science as Knuth or CLRS teach it.

∙ Use a form of adversarial learning for the machine to simultaneously grow its knowledge of the domain: by postulating problems it can formulate but not yet solve, and by solving problems it previously could not. This approach was tried 50+ years ago, and it does yield tremendous results (example from the late 70s / early 80s). With modern ideas such as GANs and RNNs, the fitting process can be extremely formal yet powerful.

∙ Now, add "regularization" to that model training: make sure humans are involved. Take a class of volunteers and have the machine "explain" to them a few problems that are currently "at the frontier" of this machine's knowledge. These volunteers, by the way, can be part of some large-scale MMORPG, but that's a separate, much longer, conversation.

∙ Thus, not only will the domain of problems this machine is able to postulate and solve grow, but, alongside the growing set of problems and the means to solve them, the machine will also grow its knowledge of how to guide humans along the path of comprehending those problems and the methods to solve them.
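
To make the list above a bit more concrete, here is a minimal toy sketch, in Python, of a training loop whose cost function includes such a "teachability" term. Every component in it (the toy integer "domain", the stand-in solver, the simulated volunteers, and the weight of the regularization term) is an illustrative assumption of mine, not a claim about how the real system would be built.

```python
import random

# Toy sketch only: every component below is a stand-in, chosen to keep the loop runnable.
# The point is the shape of the cost function: reward solving newly postulated problems,
# and penalize frontier problems whose explanations the simulated "volunteers" cannot follow.

def propose_frontier_problem(knowledge):
    """Postulate a problem slightly beyond what is currently known (toy domain: integers)."""
    return max(knowledge) + random.randint(1, 3)

def try_to_solve(problem, knowledge):
    """Stand-in solver: a problem is solvable if it is adjacent to something already known."""
    return min(abs(problem - k) for k in knowledge) <= 1

def human_comprehension_score(problem, knowledge):
    """Simulated volunteers: the further a problem is from shared knowledge, the worse the explanation lands."""
    distance = min(abs(problem - k) for k in knowledge)
    return 1.0 / (1.0 + distance)

def training_step(knowledge, teach_weight=0.5):
    """One adversarial step: postulate, attempt to solve, and pay a 'teachability' penalty."""
    problem = propose_frontier_problem(knowledge)
    solved = try_to_solve(problem, knowledge)
    teachability = human_comprehension_score(problem, knowledge)
    cost = (0.0 if solved else 1.0) + teach_weight * (1.0 - teachability)
    if solved:
        knowledge.add(problem)  # the frontier grows whenever a postulated problem gets solved
    return cost

if __name__ == "__main__":
    knowledge = {1, 2, 3}       # the machine's initial "frontier"
    for _ in range(20):
        training_step(knowledge)
    print(sorted(knowledge))
```

A real system would, of course, replace these stand-ins with GAN/RNN-style machinery and actual volunteers; the sketch only shows where the "train humans" term would enter the cost.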

∼ ∼ ∼

My intuition says that in the field of number theory, for example, starting from plain Peano arithmetic this { machine "learning" the world + humans "validating" its "explanations" } system could easily reach the concepts of multiplication and division, prime numbers, factorization, rational numbers, Diophantine equations, and so on. In the field of computer science I would bet sorting algorithms, Dijkstra's algorithm, spanning trees, and maximum flows are examples of what would be discovered automagically relatively quickly.

As a minor result, the above approach would likely lead to paradigm shifts in how these concepts are taught to humans. Having built some model of the human mind within itself, the above machine would most likely be able to act like Socrates or Archimedes, postulating solvable problems of increasing complexity to the scholar, so that the scholar both enjoys the progress and rapidly improves their skill set.

As a major result, well, I believe what the above would result in is indeed the AI that finds "satisfaction" in communicating with humans in the human language, helping us, humans, solve the problems we are presently facing.

The above may sound overly optimistic, but I am seriously open to believing we are this close to a form of "machine intelligence" that can help us in our daily lives.
A genie asks you to describe a programming language, which he will then create. What do you tell him?

Here’s what a software architect in me says:

⒈ Clean, short, and unambiguous syntax.

⒉ Easy to read and understand by human engineers.

⒊ Strongly typed, both at the programming and at the metaprogramming level.

⒋ Can be both compiled (C++-grade performance) and evaluated on the fly (zero compilation time).

⒌ Not requiring an IDE to use effectively (“vi-friendly”). But, at the same time, it should be easy to use lightweight browser-based development tools, including interactive notebooks with visualizations, DB connectors, and other nice-to-have features integrated seamlessly.

⒍ Libraries and modules support without security holes or dependency hell.

⒎ An integrated build and package management system.

⒏ Cross-platform, of course.

⒐ Mathematics-friendly: to realize my dream of developers, production engineers, and scientists/mathematicians speaking the same programming language.

⒑ Network effect friendly: the language must enter this world in a way that makes more and more people use it.

And here is what a rationalizing philosopher in me says:

• It would be nice to have a language that can leverage modern technologies, such as AR/VR, when it comes to having people learn it, program in it, and collaborate using it. No clear idea of what this could look like, but IDEs with method browsing, code review tools, or linters could clearly become much better in 3D.

• It would be nice for the language to indeed be AI-friendly, so that tools such as ReSharper would not only be useful, but potentially boundless in power, augmenting human engineers dramatically, leaving the creative part to the human brain, while hiding the complexity of the underlying implementation from this very human for as long as possible.
How would we know if our "AI" models have consciousness?

Before dismissing the question as too broad, think of an "easier" one: How could an alien super-intelligent civilization observing every atom of Earth today come to the conclusion that some processes in humans' brains are not merely functional, but also result in us experiencing qualia?

Even considering only today's approach to "AI", the deep learning one, I'm close to acknowledging that the problem of being unable to comprehend what happens in the inner layers of a deep recurrent neural net is conceptually indistinguishable from the problem of an emotionless super-intelligence possibly comprehending humans' inner selves.
Hackathons. I view them as a waste of time and energy.

It is not about length constraints. It is about the attitude. I don't enjoy building small things. I prefer the big ones to come from underneath my fingertips.

Big things are demanding.

They require thorough thinking. In the order of hours, at least.

They may require research. In the order of days, easily.

They may require good understanding of what current and soon-to-come technologies are best to use.

They require understanding of how and where they fit into the world.

They often call for several conversations with many people. Who, in their turn, are often busy enough with their own big things.

Hackathons, the way I understand them, are the exact opposite:

Doesn't matter if it does the real thing. It looks sexy. Ship it.

Doesn't matter if it is well engineered. It somehow works. Ship it.

Doesn't matter if it is about to be thrown away next week. It's not supposed to live long anyway. Now ship it.

Doesn't matter if it does not sustain any constructive criticism. People who are capable of providing it would rarely show up for a hackathon. Those who do show up would usually have the attitude of ... yeah, right. "Ship it".

∼ ∼ ∼

I perceive the culture of hackathons to be a desperate attempt to find someone who can code at least something. Because those who can build real things are already busy doing that. Most of the people who remain are those unwilling to, or incapable of, thinking big.

True, a hackathon is a good way to gather those people together and have them do something useful. This community, however, is unlikely to interest me, or other people who want and can build real things.

∼ ∼ ∼

Furthermore, I find no emotional attachment to short-term results one can demonstrate during a hackathon.

I don't need to see or touch "something that is live". My imagination is good enough to understand what is about to happen.

Unless it's a major outage, whether it takes a day or a week is almost irrelevant. Whether it lasts for three weeks, three months, or three years is what counts in the end.

I care for long-term solutions and think of any "hacked" code as technical debt. To provide a contrast: I would rather spend a day writing a nontrivial test for a feature that is likely functioning right as it is right now. To be sure, for today and for tomorrow, because bugs do happen.

I think on the scale of products. Not technologies and tools.

A day spent in talking with the core team and drawing prospective designs on the whiteboard is more productive than a day where "those two new technologies we always wanted to check out" got checked out.

Also, while I agree that a hackathon boosts overall efficiency on a scale of several dozen hours, I truly believe this "efficiency" is not created out of nowhere but is rather being "borrowed" from past and future productivity.

In other words, I believe that for a well-functioning team, the perceived "boost" in productivity over a weekend is generally a loss when considered holistically, as part of a longer time interval.

And dedicating our time to something that is long-term a productivity killer, and, furthermore, staying delusional about the very sign of the contribution of this activity, is absolutely not what I want my team to consciously choose.
Hackathons [2/2].

As a matter of fact, what I have been working on for the past week (2014) is a perfect non-hackathon. My goal was to have a real-time dashboard. A simple one, that could be put together in several hours. Instead, I dedicated three full days to testing the code that ensures the timestamps of the entries from potentially different sources are synchronized. And that the modules involved would restart in the right order should a connectivity or power outage or another issue take place.

And then to making sure the { HTTP <=> node.js <=> ssh <=> JSON to Thrift <=> C++ backend <=> Thrift to JSON <=> JSON to HTML } chain does what it should. By exposing quite a few hooks and endpoints along the way. And watching them in a few nontrivial situations I have artificially created.
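
For a flavor of what that kind of check can look like, here is a minimal sketch, in Python, of the timestamp-synchronization part. The function names, the skew tolerance, and the test values are my illustrative assumptions; this is not the actual code behind that dashboard.

```python
# Illustrative sketch only; not the actual dashboard code.
# Entries arriving from different sources carry their own timestamps. Before merging them
# into one stream, verify the sources are not drifting apart beyond an assumed tolerance.

from typing import Dict, List

MAX_SKEW_SECONDS = 2.0  # assumed tolerance between sources

def sources_in_sync(latest_timestamp_per_source: Dict[str, float]) -> bool:
    """True when the newest timestamps seen from all sources agree within MAX_SKEW_SECONDS."""
    timestamps = latest_timestamp_per_source.values()
    return max(timestamps) - min(timestamps) <= MAX_SKEW_SECONDS

def merge_entries(entries_by_source: Dict[str, List[dict]]) -> List[dict]:
    """Merge per-source entry lists into one stream, ordered by the 'ts' field."""
    merged = [entry for entries in entries_by_source.values() for entry in entries]
    return sorted(merged, key=lambda entry: entry["ts"])

# A test along the lines described above: simulate one source falling behind and make sure
# the check catches it before anything is rendered on the dashboard.
assert sources_in_sync({"backend": 100.0, "frontend": 101.5})
assert not sources_in_sync({"backend": 100.0, "frontend": 110.0})
assert [e["ts"] for e in merge_entries({"a": [{"ts": 2}], "b": [{"ts": 1}]})] == [1, 2]
```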

Of the ~60 hours of work, the numbers that ultimately make sense appeared in the last four. They started to look presentable enough to be projected onto a big screen (i.e., non-JSON) in the last one. This "slowness" did not bother me at all. I was not "itchy" to see how those things might look before making sure that the foundation on which the final solution is built is stable enough. I just wanted to build it right, and I did so.

∼ ∼ ∼

And, had the above not been done within those few days, I would have simply shot an e-mail to one or two people who depend on me, along the famous Blizzard lines of "it's ready when it's ready".

I wish more engineers would follow this paradigm.
Thoughts on Google potentially dealing an "API blow" to the ad blocker industry.

Technically, browsers don't change that quickly. Web standards change even slower. And most websites really do not care to even try to catch up.

Therefore, the "baseline strategy" for one to just keep using the browser they have today is not unwise. Chromium, for example, is open source. I believe, in its present shape and form, it is not constrained by restrictive Google licenses. And I bet enough enthusiasts will work to keep it available — for the general public, not only for the nerdiest of us.

On the short time scale, say, one to three years, the devil is in the details:

Security. It's not hard to keep the open version up to date with the industry. It would be a lot harder to keep the "obsolete by design" browser secure though, especially if Google decides not to release the newest patches openly. V8 is a pretty damn complicated beast, to say the least. Also, this opens the door to the next point, which is:

Planned obsolescence. Large players may consciously refuse to fully support the "older" browser. They are, after all, capitalist entities just like Google, and they want to maximize their own profits, which largely come from ads. And "we care for the security of our users" would sound like a plausible justification to urge the users to "upgrade", should some New York Times decide to push towards it.

On a longer time scale, ten years and beyond, the situation gets a lot more interesting:

It's unclear whether the browser will remain one's main gateway into the Internet. For many people Internet is Facebook. For many it's perfectly contained within the apps they use. The competition may even evolve so that companies like Medium would rather adjust their website to load well (and safely!) in Facebook's built-in "preview browser", than in some abstract "Web browser" application.

It's unclear whether Google would retain its monopoly position as the younger generation grows up. With all its imperfections, for example, I'm still a loyal GMail user. That won't change next year for me, but it may well change within the next ten. Forget me; if younger people are choosing other e-mail providers, as well as other maps and search services, of which there are plenty, Google may be hard-pressed to actually take the user's side with the browser — keeping it open and friendly to ad-free browsing — because the alternative would be to lose users even faster.

It's unclear whether the very ad model would stay the way it is today. Subscriptions and micro-payments have, so far, not really threatened the model substantially. But I would not be so certain the world will be the same ten years from now.

It's unclear whether the Web ten years from now would look the same way it does today. Some form of decentralization coming back is one remote possibility. Think of everyone online being their own hosting provider, and a lot changes, especially for the services whose business model is to connect billions of people and make a profit off them using their services. More likely options come in different shapes and forms, and include different devices, such as AR/VR, or goggles, or brain-computer interfaces. I, for one, am not sure the CPM/CPC/CPA format of ads would be very successful there.

In the grand scheme of things, this situation also happens to be a prime example of where a relatively simple regulation could deal a deadly blow to a corporation, purely to benefit the end user, with seemingly no side effects.

Hypothetically, if the European Parliament decides to fine Google around a billion dollars for every month during which Google did not release a 100% ad-blocking-compliant browser for free (assuming some enterprise, paid, version is available), I, for one, would have an elevated faith in the future of how IT regulation could help our world.

Relevant link: https://www.ubergizmo.com/2019/01/google-chrome-deathblow-ad-blocker-extensions/
On the value of idea vs. execution.

There is an important observation I am coming to a bit too often these days; often enough to declare it a life lesson of the past ten years: One perfectly done great thing just does not win over a hundred good ones.

Take businesses. Many successful companies trace back to "... and this was the right idea, at the right time, well executed". Such a generalization often appears true, but it is, at best, misleading.

A perfectly executed big idea at the right time is nowhere near a recipe for success. It is a good first step. It opens the first few doors. But that's pretty much it. The rest is hard work, a series of a lot more modest successes, and some luck along the way.

There is a decent chance that back in 1998 someone besides Google came up with the idea of PageRank. And, in the seventies and the eighties, neither BASIC nor the soon-to-be MS-DOS was outstanding enough to bet on right away. In fact, I think the very opposite is true.

We, humans, consistently make a mistake of glorifying the stories that intuitively paint the picture of "this is the spark that ignited it all".

Perhaps we are simply delusional about the inflated role that the "spark" plays. I would not be surprised if the nature of reality is exactly the opposite: someone who is capable of and ready for building and operating a business would do well, in their first venture, to begin with a modest, simple idea; while those who can't even start are extremely unlikely to succeed even if The Very Best Idea does come to their mind.

Perhaps instead of glorifying ideas we should learn how to pragmatically examine their utility? Instead of "just how big could it get", ask yourself a few simple, down-to-earth questions. Does this idea drive you? Good, you'll be working hard to make it happen. Do you enjoy talking about it? More power to you when it comes to hiring and/or to raising money. Is it in a space where you are an expert, have a name and a network? Well, that would sure come in handy too. Approach it step by step and thou shalt receive. The rest is execution.

∼ ∼ ∼

Don't get me wrong, the secret sauce is essential. A hundred well-executed mediocre pieces would not sum up to become great. A hundred mediocre pieces won't attract the customers or investors, won't pass due diligence, and will most likely not be of high value in the first place.

It's just important to not focus excessively on the very secret sauce. It may well occupy 90+% of the work done in the first several months, at the prototyping stage. But if by year two, with no real customers, the main focus of the founders is how to perfect this secret sauce, something is very wrong.

Instead, it is most likely that leaving a few key people to maintain and improve the secret sauce, while focusing the efforts on what the business should really be focusing on — pleasing the customer — is a way better strategy, assuming the goal of the business is to ultimately succeed in the capitalistic meaning of this word.

∼ ∼ ∼

Moreover, interestingly, the above approach scales well to human relationships, both professional and personal. Those who tell a story of "this person did X and I immediately knew we'd do great things together" are, likely, deluding themselves. Quite frankly, I can't think of anything extraordinary enough that, if one successfully performs a certain act, it is guaranteed to spark a long-lasting engagement with close to zero chance of falling apart.

Someone single-handedly saved the project by completing what seemed impossible just over the weekend? Extraordinary. Someone ran into a burning building to save a child? Fantastic. Sorry though, neither is a solid predictor of anything other than "they would likely be able to do it again if necessary".

Look for that "spark", believing it ensures prosperity, and more often than not you would find yourself disappointed. Look for predictability and consistency over an extended period of time, and, voila: here is a professional to work throughout your career, or a loyal friend for life.
On humans vs. non-humans from the standpoint of consciousness [ 1/2 ]

The idea of a philosophical zombie, or p-zombie, comes up frequently in books and podcasts about consciousness. Simply put, a p-zombie is an entity that lacks consciousness, but acts, at least towards the humans, in a way indistinguishable from a conscious being.

Many thought experiments can be imagined by asking how comfortable a human would feel, or how they would be compelled to act, in different scenarios that involve p-zombies.

Putting the personal emotional connection question aside, the subject matter looks straightforward to me when approached from the standpoint of how our society is presently organized.

Consider your own professional life, or that of a relative or a close friend. Say they are interviewing candidates, and the nature of their work is such that some people work remotely. After a while the desired candidate is found, an offer is extended, and this candidate is ready to join next Monday.

Now, on a Friday evening you realize this soon-to-be-colleague is not a human, but a p-zombie.

They are a remote worker. You, and your team, will interact with them via e-mail, chat, and an occasional voice or video call. This candidate has passed the interviews, and is better than the other candidates at getting the job done. Moreover, they are willing to work for less money, for a number of reasons, from wanting to gain the relevant experience to not requiring health insurance of any sort.

Would you feel uneasy about the discovery that this candidate is not a human?

To make the case even stronger, let's turn the tables. You, or your friend/relative, are looking for the next gig. There is an interesting opportunity, good team, and it pays well. As you are ready to sign, you are being told that the founder, or perhaps even your immediate manager, is not a human but rather a p-zombie. Would you reconsider?

∼ ∼ ∼

In the above scenarios my view is extremely simple. In both cases, paying any attention to whether your prospective colleague or boss is a human or not is just that: racism.

By the very definition of the p-zombie, you, as a human, cannot tell whether they are conscious or not. In other words, your decision on whether to work with that entity, and, overall, whether to treat them as an individual, depends entirely on an artificial construct: whether some "p-zombie-detecting" device shows a green light or an orange one when queried about whether a certain entity is a human or a p-zombie.

Nowadays, we detest the notion of refusing to work or otherwise deal on equal terms with people of different races, genders, ethnicities, sexual orientations, and many, many other dimensions.

There is even a new term, "neurodiversity", to highlight that acting in weird ways in otherwise common situations should be tolerated and/or ignored, as long as the actor is not doing it on purpose, but rather just happens to have their brain "wired" in a certain unorthodox way. As long as their job is not directly affected by their unusual behavior, we are supposed to be inclusive and welcome them.

It has gone to the point where I was once explicitly told that being an active member of the Flat Earth society does not disqualify one from being employed as a data scientist, as long as their day job has nothing to do with whether the Earth is flat or not. Yes, these days we are supposed to tolerate the lack of critical thinking even to such a degree, for the sake of being a diverse and inclusive society. Even for a job where we have every reason to believe critical thinking is absolutely essential, such as the job of a data scientist.
On humans vs. non-humans from the standpoint of consciousness [ 2/2 ]

Yet, given all the above, some people seriously believe we should institutionalize not treating p-zombies as fully qualified human beings, as long as there is a way (some "green vs. orange light turning on" device) to tell whether the entity in front of us is a real human or not.

A reasonable counter-argument to the above could go along the lines that p-zombies are effectively "artificial humans". Thus:

⒜ They are free from certain constraints that we, humans, are subject to. Such as limited life span and a long period from birth to becoming productive.

⒝ We can not ensure their emotional safety and comfort, and thus should be careful to not create an "artificial hell" for their own sense of being / sense of existence / sense of whatever we should call it given the word "consciousness" is to be avoided.

∼ ∼ ∼

I personally believe both arguments are bogus. Either we can talk, agree on something, and co-exist, working together for the sake of the greater good. Or we can not.

And if we can not, well, then my dear humankind is already doomed, and, barring an unlikely scenario of a strictly enforced international treaty to never, ever create those artificial beings, I would rather wish those upcoming beings all the best at choosing which part(s) of our human legacy they want to carry on through spacetime. And, quite frankly, if they say "none, you humans are too moronic to even bother mentioning in a context other than being our original creators", I would argue that conceding to such an argument, not fighting against it, is the right moral call for a human being. After all, don't all religions and all spiritual practices teach humility?

At the same time, there already are talks in the air about something that can loosely qualify as a "preemptive strike". Meaning, assuming those beings are superior to us, and assuming they might one day choose to get rid of us, we, humans, should strike first to eliminate this very possibility.

I, for one, can not see how such a mindset is different from genocide, to begin with.

∼ ∼ ∼

And to end on the note of human bonding relationships. Well, we, humans, are well known to form those relationships with dogs and cats and elephants and dolphins, not to mention goldfish and parrots.

Denying a future human the right to think that a different, future (possibly silicon-based) form of, quoted for clarity, "consciousness" could be their personal emotional preference sounds like yet another terrible and miserable religious-ish attempt by one group of people to broadcast their moral superiority to the rest of humankind.

If any human-created agenda is worth fighting against, to me it is the agenda of "we know better how you guys should feel about X, and we'll make you feel the right way".

In other words, the above is yet another perfectly constructed dystopian scenario. A horror story that serves no purpose other than serving some group's agenda of making people en masse think in a certain way about the acceptability or unacceptability of a certain phenomenon.

To my great pleasure and peace of mind, all the instances in our history where something like this was institutionalized do, with remarkable predictability, ultimately end up with the proclaimed superiority destroyed completely and unambiguously. And then viewed and studied only as a bad example of how terribly far human arrogance can take us, and of how we can best structure our society to avoid those missteps in the future.
When arguing what a reasonable startup offer is, ask away: 𝗵𝗼𝘄 𝗺𝘂𝗰𝗵 𝘃𝗮𝗹𝘂𝗲 𝗶𝘀 𝘁𝗵𝗶𝘀 𝘀𝘁𝗮𝗿𝘁𝘂𝗽 𝗲𝘅𝗽𝗲𝗰𝘁𝗶𝗻𝗴 𝘆𝗼𝘂 𝘁𝗼 𝗯𝗿𝗶𝗻𝗴 𝘁𝗼 𝘁𝗵𝗲 𝗰𝗼𝗺𝗽𝗮𝗻𝘆?

Speak of the first year, as it keeps the conversation grounded.

Example math could start from a back-of-the-envelope calculation such as: we're valued at $15M now, we plan to be raising in a year at about a $25M valuation, and, should you join, we would likely be worth the whole $30M by then.

Thus, the person extending you an offer believes you can make their company 20% more valuable within a year, up to $30M from $25M. Thus, base salary and rounding errors aside, 𝘵𝘩𝘦 𝘣𝘳𝘦𝘢𝘬-𝘦𝘷𝘦𝘯 𝘱𝘰𝘪𝘯𝘵 𝘧𝘰𝘳 𝘵𝘩𝘪𝘴 𝘴𝘵𝘢𝘳𝘵𝘶𝘱 𝘵𝘰 𝘨𝘦𝘵 𝘺𝘰𝘶 𝘰𝘯𝘣𝘰𝘢𝘳𝘥 𝘸𝘰𝘶𝘭𝘥 𝘣𝘦 𝘵𝘰 𝘰𝘧𝘧𝘦𝘳 𝘺𝘰𝘶 𝘵𝘸𝘦𝘯𝘵𝘺 𝘱𝘦𝘳𝘤𝘦𝘯𝘵 𝘪𝘯 𝘣𝘢𝘴𝘦 + 𝘦𝘲𝘶𝘪𝘵𝘺 𝘪𝘯 𝘵𝘩𝘦 𝘧𝘪𝘳𝘴𝘵 𝘺𝘦𝘢𝘳. That simple.

The rest boils down to the moral argument as you negotiate what is the "fair" way to split this 20% of the value to be added.

My moral compass suggests a ballpark of (½) raised to the power of how many steps ahead of or behind each other you see yourselves.

Assuming, in the "standard units of engineering levels", that the CEO is an L8, with ~$15M current valuation and some $240K annual base that itself counts for ~1.6%, your stock grant in the first year should be ~0.9% if you are an L6, ~3.4% if you are an L7, and ~8.4% if you are an L8 as they are.
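
For what it's worth, here is the same back-of-the-envelope math spelled out as a tiny Python sketch. The 20% of added value, the $240K base, and the halve-per-level split come from the numbers above; treating the base salary as a fraction of the current $15M valuation, and having a same-level hire split the 20% with the CEO 50/50, is my reading of how the ~0.9% / ~3.4% / ~8.4% figures come out.

```python
# A back-of-the-envelope sketch of the rule above. Inputs are the numbers from the text;
# the exact form of the halving rule is my interpretation that reproduces the quoted figures.

current_valuation = 15_000_000      # what the startup is worth today
next_round_valuation = 25_000_000   # the planned valuation at the next raise
valuation_with_you = 30_000_000     # what they believe it will be worth if you join

value_added = valuation_with_you / next_round_valuation - 1      # 0.20, i.e. 20%
base_salary = 240_000
base_share = base_salary / current_valuation                     # ~1.6% of today's valuation

CEO_LEVEL = 8
for your_level in (6, 7, 8):
    steps_behind = CEO_LEVEL - your_level
    your_share = value_added * 0.5 ** (steps_behind + 1)         # halve once per level below the CEO
    equity = your_share - base_share
    print(f"L{your_level}: ~{equity:.1%} equity in year one")

# Prints roughly 0.9% for an L6, 3.4% for an L7, and 8.4% for an L8.
```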

∼ ∼ ∼

The above math would likely not work in and of itself. But it's a useful lighthouse to keep in mind to ensure you are not short-selling yourself.
𝗧𝗵𝗼𝘂𝗴𝗵𝘁𝘀 𝗮𝗳𝘁𝗲𝗿 𝗿𝗲-𝘄𝗮𝘁𝗰𝗵𝗶𝗻𝗴 𝗧𝗵𝗲 𝗕𝗶𝗴 𝗦𝗵𝗼𝗿𝘁

∼ ∼ ∼

The government did bail out the big banks. Because it was the only way to prevent the poor “homeowners” from going into “full berserk” mode.

There were two evils to choose from.

𝘌𝘷𝘪𝘭 𝘰𝘯𝘦: act in an anarcho-capitalistic-libertarian way. Claim that, as the times got tough, whoever took out mortgages without reading the fine print is ultimately the one responsible for their own improvidence and economic illiteracy.

𝘌𝘷𝘪𝘭 𝘵𝘸𝘰: act in a socialistic way. Make it clear that, yes, the banks have screwed up, the system is broken, but, in order to keep the fabric of the society stable, the government is going to route a sizable portion of the taxpayer money to, effectively, pay off those debts, so that not everyone who has not read the fine print has to lose their home.

∼ ∼ ∼

Even from a purely economic and purely game-theoretic perspective, choosing the "socialistic" "evil two" has merits.

The main one is that it sends the message a) that the government is thinking long-term, and b) that, in times of trouble, the government will help its ordinary citizens; yes, those who don't read the fine print and don't do the math, and, yes, at the expense of people like, well, me.

No government I am aware of today is willing to openly take the stance of "we endorse and support people like Dr. Michael Burry, who do their own research and act accordingly, and we believe people who have made bad economic decisions are the ones to bear the consequences of those bad decisions".

After all, the goal of the government is not to "punish" people, who would only get angrier and more violent as a result, but to keep making the country and the culture ones that people are increasingly eager to see themselves living in 10+ and 50+ years into the future. Thus, it's only rational to actually help those "average, not smart, economically unsavvy" citizens.

∼ ∼ ∼

While the above makes perfect sense, the conclusions are extremely controversial.

In particular:

⒈ If you are an economy- and math-savvy person with intellectual honesty and personal integrity, you have to pretty much assume that most "rational" governments, should bad times come along, would not hesitate to take your money and route it towards helping others, who are pretty much by definition less economy- and math-savvy.

⒉ Therefore, if you 𝗸𝗻𝗼𝘄 a crisis, such as the 2008 housing one, is about to hit, your 𝘳𝘢𝘵𝘪𝘰𝘯𝘢𝘭𝘭𝘺 𝘣𝘦𝘴𝘵 𝘴𝘵𝘳𝘢𝘵𝘦𝘨𝘺 is, in fact, to take full advantage of the situation, being fully aware that, at the end of the day, those who would pay for your above-average outcome are the people just like you, who were somehow hoping the government would not "betray" them, and not use their money to help the "less fortunate" (and/or "more improvident") ones.

∼ ∼ ∼

In other words, the whole concept of world economy cycles is even more of a self-fulfilling prophecy than it appears on the surface.
The rich believe the market is a cooperative game which helps every player validate their ideas, discard hypotheses that didn't check out, and eventually arrive at a picture that accurately describes the world around us. The rich believe it's an iterative process that converges.

The poor believe their own picture of the world must be the right one, and it requires no correction or even validation. The poor believe that their lack of success is a direct consequence of others deliberately playing against them.

∼ ∼ ∼

Capitalism is the philosophy that bets big on leaving quite a few things up to the market.

In such a game, team players, who play win-win, obviously rack up more profits compared to individuals playing win-lose or lose-lose.

∼ ∼ ∼

Employment, like running a business, follows the same rules: the rules of the job market. The job market is the market for trading one's time and skills for cash and equity.

Excelling on this market, naturally, also requires one to follow the iterative process of postulating and validating hypotheses about their understanding of the world.

And, of course, it never works out for someone whose attitude is that the world should function according to certain picture they have fantasized for themselves, without even bothering to confirm it has anything to do with reality.

∼ ∼ ∼

Conclusion.

The moment one acknowledges that the world does not owe them anything, but is just here to act as a subtle, yet not malicious, validation engine, it cannot but keep pleasantly surprising them.

In many ways. Business and career included.

∼ ∼ ∼

From 2015.
Every language gradually evolves towards the neatest, least ambiguous, and cleanest possible way for statements to be expressed.

Unfortunately, this process requires external consciousness to keep using the language.

The only conscious beings capable of using languages to date are puny humans. Deeply unfortunately for computer languages, upon being used by puny humans, mere mortals themselves also evolve — towards becoming blind to the universal ugliness of each particular language.

Which ruins the whole purpose of language evolution, when it comes to mainstream programming languages. Universality is unreachable in computer languages as of the early 21st century, much like universality in computation was unreachable two thousand years ago: there was no critical mass of people who had internalized the need for it.

In essence, this annoying adaptiveness of human beings is the very reason we can't have one good language, but have to go through plenty of bad ones, with relatively short and predictable lifespans.

For instance, after engaging in a conversation about immutable strings, I am now certain there's a nonzero number of software engineers in the world who would argue that Integer.Add(Integer(2), Integer(2)) is cleaner than 2+2.

Thus, it's not Java or PHP that suck.

#PEBCAK

Unless, of course, you believe computer languages were all created five thousand years ago, in their present form.

∼ ∼ ∼

From 2015 as well.
The more I get to know about the modern monetary system(s), the more I'm terrified about the prospects of large parts of our economy going cashless.

On the one hand it sounds great. The tech is mature. Our cards and Apple / Google / Samsung Pay work well. There is one less thing to worry about (cash on you), one fewer source of fraud and discomfort (greasy and/or counterfeit bills), overall extra security (because businesses don't keep cash on premises), plus better prediction models and more transparent audits (because every transaction is journaled).

On the other hand, there's a critical mass past which a certain area becomes cashless-dependent. This is the point when some locations still accept cash, but they are few and far between, so that if you are, for instance, locked out of your card(s) for more than half a day, sustaining your existence gets noticeably harder.

∼ ∼ ∼

The problem is that once this critical mass is reached and exceeded, cash is no longer the "safe haven" of one's wealth; much like gold is no longer playing this role as of today.

It won't happen overnight. But slowly, year by year, if you live in such a place, you end up with a month-to-month checking balance under $10K, and everything else in savings or stocks.

Then you want to travel to some other country and want to withdraw money. And you want more than $10K for some reason.

And the bank questions why. And you naively tell them why, instead of just replying with "hey, it's my money, and it's none of your business what I plan to use it for". And the bank says: wait, that country, as well as the activity you mentioned, is now regulated, and we need to acquire permissions to hand you (your!) cash. Moreover, these funds are now frozen in your account until further notice, which we are all now waiting for.

Then you realize you are on the hook. But it's already too late.

And yes, sometimes cash really is better. Not because it's not regulated, but simply because international transfers can take days, involve multiple banks, and are, generally, less reliable than showing up with money.

The above is effectively a real problem already, for the people in the crypto community. In some well-localized places trading "bank money" for crypto is straightforward. In other places it's extremely difficult. And then you go figure.

It's not that I don't trust Visa, Apple Pay, or my bank. What I don't trust is the authorities above them. When a million-residents city goes cashless, the amount of real, physical, money that has to support this city is a small fraction of what's actually changing hands on a daily basis.

Then the central bank, and/or the Feds, ask themselves a plausible question: if that city runs so well without much cash support, why don't we a) add more "fake" (digital) money there, and b) push more cities to become like this?

Which is exactly the definition of a bubble, and which is exactly what tends to burst. And which is exactly what does burst when inflated. And that's exactly what will happen, because, as one city becomes "successful" in this regard, others follow suit; and when one country is "successful" in such a way, others tend to head in the same direction.

∼ ∼ ∼

My perception of this is similar to how I view living in a Hawaiian neighborhood where a non-insignificant fraction of people are off the grid.

It's not that I firmly believe my own life will be off the grid at some point. But it makes me feel damn safe knowing that enough people around know how to live without external support, from water and electricity down to hunting and cooking their own meals. Yes, they have guns too, but it's a different story.

∼ ∼ ∼

That's why the trend of defaming crypto and promoting cashless is worrisome.

Not because I am a token libertarian who wants to see all of us moving towards peer-to-peer decentralized crypto transactions every time we are paying for a coffee here and there.
But because I am terrified by the prospect of decoupling the actual wealth, which maps to something tangible, from the "numbers on the bank accounts", which are what the modern "economy" increasingly is about.
Thomas Cook, the British travel agency, is no more.

I may be off in the numbers, but it looks like over $0.5B will be taken out of the UK budget — read: will be paid by UK taxpayers — in order to get the "poor, lost, abandoned" tourists back home.

There is something fundamentally wrong here.

Everyone knew Thomas Cook had been struggling for the past several years. It was common knowledge that bankruptcy was a likely scenario.

And yet the, presumably poorer, citizens of the UK — who were not on vacation — are paying for the relatively peaceful endings of vacations of the, presumably richer, citizens of the UK — who decided to take vacations with Thomas Cook nonetheless.

∼ ∼ ∼

This looks a lot like The Big Short playbook.

Even if you know the market is a huge bubble that is about to collapse, you also know "the government" will eventually be "on your side" — i.e., you know that the [other] taxpayers' money will be used to pay for your "stupidity".

In other words, the current socialistic system of government supports the incentive to act in a "stupid" way, even if you are the opposite of a "stupid" actor.

Such as taking out another house or condo loan in 2008, even knowing exactly what was about to happen.

Or such as booking a cheaper tour with Thomas Cook, knowing well it was on the verge of bankruptcy.

∼ ∼ ∼

I don't know what the solution to the above problem may be.

Maybe, there is no problem, after all. On occasions like this all taxpayers will have to pay a few, or a few dozen, bucks, and it will happen once or twice a year. Maybe it really is not a big deal. Especially given that we are consciously paying a lot more in taxes, knowing with confidence that those funds are not being spent well.

But, fundamentally, the incentives scheme has to change.

∼ ∼ ∼

In this particular case, for example, the British government could have published a memo, a few years back, stating that everyone traveling with Thomas Cook must also purchase the respective state-approved insurance package.

So that it's not every taxpayer who will end up paying up after the collapse, but every Thomas Cook traveler from the past few years.

And then the UK could state openly that they will not spend a single penny towards helping those who decided to ignore this warning, and travel with Thomas Cook uninsured. Because they have consciously assumed this risk onto themselves.

∼ ∼ ∼

I know I'm daydreaming here. But this topic of being more conscious about what exactly are we paying taxes for is growing on me.

Much like we seem to care more about the environment and about minorities' rights these days, we might well begin to be more conscious about the magnitude of incompetence in how our governments spend our, the taxpayers', money.

∼ ∼ ∼

And then maybe, maybe, one day we will get to being conscious about which technologies we use. Because a poorly designed Web framework, or a poorly designed machine learning library, may well be contributing more to CO₂ emissions than gasoline-powered cars or international flights.

So that we will be able to push for a new standard, that would render PHP, most of Python, and most of JavaScript dangerous and obsolete.

One step at a time. Environment and minorities' rights first. Then let's keep an eye on how our money is spent. And then let's launch a crusade against bad programmers who burn billions of kilowatt-hours on activities that are, at best, useless, and, at worst, detrimental to the future of our civilization.
Executive Decisions

Why are corporations so slow at making executive decisions?

At any time a corporation always has several big decisions to make. Yes, some may be postponed, but generally postponing a decision is a lost opportunity: each decision should ultimately be evaluated, and either dismissed, or translated into an execution plan.

The execution itself may not begin tomorrow, and the plan may well be "we look at this again next quarter, after receiving the results of that project and that experiment and that hiring event". But that's already the execution track, not the "we're thinking about it" one.

Given we know those decisions have to be made, why does making them often take weeks and quarters, not hours and days?

∼ ∼ ∼

This slowness may have to do with the cost of error, as perceived by the actor and the environment they operate in.

Consider an executive who can directly influence five decisions.

Each of those decisions can result in a gain for the company that is an order of magnitude larger than this executive's compensation, or a loss that is an order of magnitude larger than this executive's prospective lifetime earnings.

On the one hand, most opportunities that are not discounted right away can turn out profitable; in fact, if no one from the top management team communicates a solid "over my dead body" message, the expected value of deciding to pursue such an opportunity is likely net positive.

On the other hand, an executive who has made a cash-negative decision will be remembered as someone who has made this cash-negative decision. For the rest of their life.

∼ ∼ ∼

Now, company-wide, from the game-theoretic perspective, it all boils down to risk assessment and to constraints management.

After all, if the company can afford to pursue all five opportunities, and the expected positive outcome of each one of them is at least 4x the expected negative result of each of them, then, heck, the "default" base strategy should be to say "yes" to all five.
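
As a sanity check on that argument, here is the arithmetic in a few lines of Python. The 4x upside-to-downside ratio comes from the paragraph above; the probability of success is purely my illustrative assumption, and the point is that anything above a 20% hit rate already makes each bet net positive in expectation.

```python
# Toy expected-value check for the "say yes to all five" argument.
# Units: one unit equals the possible loss on a single decision.

upside = 4.0        # gain if a decision pays off (at least 4x the possible loss, per the argument above)
downside = 1.0      # loss if it does not
p_success = 0.3     # assumed probability that any single bet pays off (illustrative only)

ev_per_bet = p_success * upside - (1.0 - p_success) * downside
print(f"expected value per decision: {ev_per_bet:+.2f}")      # +0.50 with these assumptions
print(f"across all five decisions:   {5 * ev_per_bet:+.2f}")  # +2.50

# Break-even: p_success * 4 = (1 - p_success) * 1  =>  p_success = 0.2.
# Any success rate above 20% makes the portfolio of five bets net positive in expectation.
```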

Also, when it comes to making decisions of this magnitude, whether the outcome is positive or negative will often only be seen much later down the road. Say, you decide that you need your own datacenter, or another engineering office, or to make a company-wide push to some new technology. It may turn out great in the long run, or it may be a disaster, but it will take years and years to see this outcome clearly.

∼ ∼ ∼

Still, I have not seen many executives who would eagerly say "yes" to all five.

They wait and wait and wait.

And a possible contributing factor could be this cost of error.

Simply put, when the executives operate in a hostile environment — where they have enemies who would do their best to get those executives fired over a misstep, or where their job prospects after being known for making a bad decision are bleak — they would indeed hesitate to move forward. Because it's too dangerous for their own career.

At the same time, a culture that embraces failures and the experience they bring would be a culture where a) multi-billion-dollar mistakes are made daily, at different places, by different people, and b) yet the total amount of expertise and knowledge grows a lot faster than in a risk-averse one.

∼ ∼ ∼

So, maybe, I should rethink my views on Silicon Valley.

Seemingly weird decisions that cost investors millions and millions of dollars are made there by clearly incompetent people on a daily basis. But that's the form everyone is used to, and it's this form that I am so allergic to. The substance is that the executives in the Bay Area a) are more experienced, and b) have access to more resources.

Seen in this light, the substance does trump the form here. And the simple idea of embracing failures, along with easy access to capital, may well be what Silicon Valley owes its success to.