I want to coin the term “Reverse Instagram Effect.”
The Instagram Effect is real. I’m not young enough to experience it firsthand — I don’t even have an Instagram account — but here’s how I see it: people are far too eager to showcase the best parts of their lives.
Put simply, the content people publish is deliberately curated to look far more glamorous than their ordinary, mundane lives.
That has obvious downsides for mental health. For the consumers of content it feeds envy and longing — “wow, what a life; I want that too.” For the producers it breeds exhaustion and guilt — “I’m tired of pretending I live like this when really I’m struggling with X, Y, Z.”
(I’m no psychologist, but I hope that’s not too far off.)
Now: the Reverse Instagram Effect is the very same content bias, viewed through the lens of what AI models are trained on.
With text, even conversational speech tends to be treated with some generosity: speaking precisely is still valued. When AI trains on news articles, podcast transcripts, essays, it sees a lot of the styles we humans read every day. That exposure helps models approximate the kinds of language and arguments we actually use.
But images and videos are different.
Pleasantly positive: AI-generated images and videos are, bluntly speaking, enjoyable for humans to watch. That’s expected — the training corpora for visual media are heavily skewed toward content people already like to engage with.
Existentially positive: because the visual corpora are so biased toward polished, shareable content, models fail to learn what “ordinary life” actually looks and feels like. They don’t absorb the quieter, messier, unedited footage of everyday existence — and that’s a good thing for us as the species!
Imagine training an AI on continuous surveillance feeds. Imagine feeding it minute-by-minute biometric streams from people’s lives. The kinds of inferences and manipulations that could follow would be deeply alarming. If a model truly understood ordinary human behavior at that granularity, the risk of it being able to “get under our skin” would rise dramatically.
So far, we’re not handing models those datasets — for legal, compliance, and ethical reasons. Thank goodness. For a Skynet-style emergence you’d need models to be intimately familiar with ordinary human responses to ordinary stimuli. That level of access would make large-scale manipulation far more plausible.
Even excellent long-form writing hasn’t altered daily behavior at scale the way a viral slogan or a perfectly timed short message can. Short, amplified messages shape events. Thankfully, that kind of real-time, intimate behavioral feed is not what AIs are being fed today.
A deeper way to say this: text is only the outer shell of richer interpersonal dynamics. Even if an AI masters that shell, it will lack the insight into how people actually perceive and react to certain signals in context. And for visual media, the Instagram Effect — the polished, curated bias we dislike in people — inadvertently protects us by depriving models of the raw, ordinary data they’d need to weaponize influence at scale.
So yes: the Instagram Effect is bad for people. But as a structural quirk of our visual data, it’s oddly useful — a thin, accidental firewall between current AIs and the kind of intimate control we should all fear. For now, at least, we’re safer than we sometimes fear.
Published my thoughts on building high-throughput consensus to produce total order for events or commands — blending Raft, Web3, and secure enclaves: https://dimakorolev.substack.com/p/building-high-throughput-consensus
I generally try to avoid commenting on political and near-political topics, but the H-1B visa controversy sounds more like social news than political news to me.
What surprises me most is that after spending ~20 minutes scrolling through the news and comments and feeds, I have seen ZERO mentions of Canada and Australia.
The UK and EU are there. I think I even saw Dubai / UAE.
But somehow the sentiment is disproportionately that "the USA is letting us down big time".
Folks, moral judgement aside — and yes, I am speaking from a position of privilege, which I do acknowledge — if wannabe-H-1B talent is such a powerful force, shouldn't we expect the next Silicon Valley to emerge in Vancouver or Melbourne?
By the first-order logic of everyone commenting, it totally should. Because if the US is presumably the biggest loser in this game, some other major country may well pick up the tab and reap the benefits.
And yet — again, ZERO mentions of Canada and Australia.
I, for one, find it suspicious.
PS: For the record, my take was and remains that over the past ~5 years the world has been becoming more sane — from a game-theory standpoint, that is. There were major imbalances, and a lot of institutions and state actors had abused the status quo one way or another.
This is now changing. Many, if not most, short-term changes will feel negative in the moment.
Long term though, I'm all for sustainable progress and innovation, through effective monetary and human resources allocation, which works best under competitive capitalism.
If the end result of this H-1B situation is that three months from now the UK, EU, Dubai, Canada, Australia, plus perhaps Thailand, Japan, and Argentina begin competing to be the prime candidate for "Silicon Valley 2.0" — that's a great thing to my taste!
(And if you believe this is unlikely to happen, then let's first acknowledge that the "original" Silicon Valley does have an unfair advantage that it has been routinely under-capitalizing on — and this is what's being corrected as we speak.)
Although my mid-term prediction is that it's all power games. Ultimately, the H-1B visa was indeed very much abused. What we need is to lower the threshold on the talent visas (O-1, EB-1), and to keep the fees reasonably high — to close the loopholes of the kind Infosys is abusing, to keep Google et al. effective, and to make properly venture-backed startups want to innovate in the US. All of this can be done, and something tells me it will get done soon-ish.
The term sound money is well known these days. Fiscal conservatives, libertarians, and the Web3 crowd are equally fond of it. The idea is simple: every unit of money belongs to someone and is backed by something tangible. Printing money is impossible, and every loan has a consenting party consciously taking the risk.
Related Web3 concepts are staking, bounty hunting, and slashing. To participate as a miner — thus contributing to the ledger and earning rewards — one must post a meaningful stake as collateral. If they cheat or fail their SLA, others will notice. Cheaters are penalized by losing part of their stake — that's slashing. Whoever proves the cheating earns a fraction of the slashed stake — that's bounty hunting.
Transparency and auditability make this work. Transparency doesn’t mean every message is readable; it only means the system continuously publishes verifiable checksums of its own state, making retroactive edits impossible or prohibitively costly.
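The "verifiable checksums" idea can be sketched as a simple hash chain — a minimal illustration, assuming SHA-256 over opaque event blobs; a real system would layer Merkle trees and signed checkpoints on top:

```python
import hashlib

def chain_digest(prev_digest: bytes, event: bytes) -> bytes:
    """Fold one event into the running checksum of the system's state."""
    return hashlib.sha256(prev_digest + event).digest()

# Build the published chain of checksums over a stream of events.
digests = [hashlib.sha256(b"genesis").digest()]
for event in [b"event-1", b"event-2", b"event-3"]:
    digests.append(chain_digest(digests[-1], event))

# Retroactively editing any past event changes every later checksum,
# so a forgery is immediately detectable against the published chain.
forged = chain_digest(digests[0], b"event-1-forged")
assert forged != digests[1]
```

Anyone holding the published digests can re-derive the chain and spot tampering, without ever reading the events themselves.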
I’d like to introduce the term sound software.
Software is sound when the service provider is contractually committed to its users — and when provable misbehavior leads to compensation for whoever caught it. Users and platforms should have aligned, verifiable incentives to keep each other honest.
Consider a social network. Publishing a post yields a proof. Followers are guaranteed to receive that post in their feeds — filtering and ranking happen on the client side.
If the author deletes or edits the post, there’s public proof.
If moderation removes or flags the post, there’s public proof — traceable to a moderator, not necessarily by name, but auditable.
If a court order requires removal, there’s proof of that too — the order ID and a permalink. If the case is sealed, disclosure follows later; but eventually everyone can verify the order existed and demanded removal.
If a user fails to receive a post they should have, they can present a compact proof: { A posted X, B follows A, B requested a feed update, X was not delivered }.
Upon such a proof, the platform must present a counter-proof or have its stake slashed. If there is no counter-proof, the detector is rewarded.
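A toy sketch of that settlement rule — everything here is hypothetical: in a real system the claim fields would be cryptographic proofs rather than booleans, and the stake and bounty numbers are made up:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DeliveryClaim:
    author_posted: bool       # proof that A published X
    follower_of_author: bool  # proof that B follows A
    feed_requested: bool      # proof that B requested a feed update
    delivered: bool           # whether X appeared in B's feed

def settle(claim: DeliveryClaim, platform_counter_proof: bool,
           stake: int, bounty_fraction: float = 0.1) -> tuple[int, int]:
    """Return (remaining_platform_stake, detector_reward)."""
    violation = (claim.author_posted and claim.follower_of_author
                 and claim.feed_requested and not claim.delivered)
    if violation and not platform_counter_proof:
        reward = int(stake * bounty_fraction)
        return stake - reward, reward  # slash the platform, pay the detector
    return stake, 0                    # no violation proven; nothing happens

# A proven non-delivery with no counter-proof slashes the stake:
print(settle(DeliveryClaim(True, True, True, False), False, 1000))  # → (900, 100)
```

The point of the sketch: the platform's only two moves upon a valid claim are to present a counter-proof or to lose stake — there is no silent third option.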
This resolves shadowbanning — for good. No more tampering with, well, anything.
Wouldn’t it be great to live in a world of sound software?
Also, competition matters. I don’t want companies forced into this; I want enough people to prefer it — and then let the market do its job.
Much like Bitcoin today. In many places, the ban era is over. Barring a major hack — likely by state actors or a Big Tech supply-chain compromise — Bitcoin, as sound money, competes with fiat that’s prone to printing and helicoptering. In that market, more people choose Bitcoin. That looks right to me.
I’d love to see the same for every online service — from messengers and social platforms to driver’s licenses, plates, and travel documents.
Unlike sound money, where Satoshi arrived like Prometheus, sound software probably won’t emerge spontaneously. The current giants are powerful and protective of their oligopoly. But history has reversals: unsustainable systems collapse, and the rebuilders choose better foundations.
So I don’t expect the US, EU, or UK to lead here soon — the status quo is too strong. A country under pressure might adopt it first; a place that needs credibility and openness could make the bold move.
We need more free communication platforms that cannot be shuttered by a single state actor — and cannot quietly align with one powerful owner’s narrative. That’s the bar for sound software.
Perhaps the future role of humans in the AI- and robotics-powered economy is that of weird scientists from the past?
AI has already assumed the role of shamans. The next step is to learn "directional creativity", also known as science and engineering. Hear me out.
AI models as we know them today are good at operating within the environment they were trained on — AlphaGo is a case in point. They are doubly good when they can simulate their environment such that the objective function is evaluated correctly — behold AlphaZero.
They are increasingly good when they a) can operate in a real environment, which is nearly impossible to simulate, b) are in the space of extrapolation and linear optimization, not prolongation, and c) the objective function is still evaluated correctly. This is modern-day, "post-Boston-Dynamics" robotics.
When it's about, say, walking over rough terrain without falling by learning to orchestrate a bunch of "muscles" the right way — there's nothing better than modern-day neural networks. In fact, nature first evolved neural networks for exactly this purpose.
Going off on a tangent a bit, my personal belief for what's big to come is learning ultra-low-power-consumption models for robotics. Because, mathematically speaking, we know it is possible to, say, walk straight by applying a very small amount of very high-lag, low-throughput "computation". But I digress.
So, of the three points above, (a) is relatively straightforward — already done in 100% simulated environments, about to be done with general-purpose robots. The last one, (c), is also somewhat clear, since if we're talking about, say, manufacturing, the QA processes are already well established — or they will be, inevitably, since we're not letting humans fly on planes assembled by robots without thoroughly checking those planes first.
Thus, (b), remaining in the realm of extrapolation, not prolongation — that's where modern-day AI hits the wall.
Simply put, it lacks creativity. The ability to invent. To not just "hallucinate and filter", but to do so "in the right direction".
That's what human genius and prodigy look like. For the record, I am not discounting the possibility that we, humankind, stumble upon a way to make "our AI" this smart. I just don't think it's likely. Much like it's not likely for a remote human tribe to invent or discover supersonic travel. The shamans of that tribe have done marvelous work researching medicine, poisons, and the altered — induced and controlled — states of human consciousness. Good. That's what I refer to as "wandering creativity", a.k.a. hallucination and filtering. This is what modern-day AI models can already do well.
The next frontier is "directional creativity". And that's exactly the role a human can play.
Just imagine. A factory where tons of robots are building stuff. Humanoid and whatnot. From very small to very large. And the factory is actually an open-air setup of vast dimensions, whose area includes everything, from resources to mine to the test-flight range.
These robots can somewhat make sense of what they're doing. They have a lot of CPU/GPU/*PU, after all, individually and collectively.
They will also very likely report to some human-controlled entity. At least for now. They lack agency, to begin with. And I'd even argue that "learning" this "directional creativity" is the necessary precondition to agency.
And this is exactly where the human comes in handy. To give this sense of direction.
And it's not a brutal dictator who wants to build an army and destroy the world. In fact, such a human would very likely fail at the task. Because other "AI-managing" humans would, in fact, invest in science and R&D, and, as a side effect, they would be able to quickly produce what it takes to stop whatever "army" that dictator could unleash on them.
The sense of direction will have to be more scientific in nature. Driven by curiosity. And fueled by our human desire to learn more about the universe around us.
Including, of course, our own planet, and our very human role on it. It's not just technocracy and space exploration I'm talking about. It's also magnificent cities where humans thrive, and want to live.
If only — IF ONLY! — for the sole purpose of finding those future one-in-a-billion human beings who would take this craft of AI-assisted exploration of the universe and human nature further.
Sure, this will eventually lead to human extinction. Since in a few "generations", the next prodigy-in-residence would be far too psychopathic, and would actually want to "rule" the world, with an iron fist. Well, first, that's a risk we all have to take. Second, hopefully, enough human-crewed space vessels will be en route to other parts of the Galaxy by that time. And third — and I believe the most likely scenario — is simply that "sustaining" humans "in their natural habitat" would be as cheap and as fun as it is to sustain many animals today.
Think of how the colonizers treated Native Americans a couple hundred years ago vs. now. And think of cruel animal performances in zoos centuries and even decades ago, compared to wildlife today actively seeking assistance from humans in the more developed parts of our civilization, where empathy does thrive. Something like this may well happen once the "chief scientist in charge" assumes powers that are beyond what we have been able to imagine thus far.
There's also a techno-libertarian take to this. If there are multiple isolated parts of the planet — much like what we're seeing now — there will be multiple "chief scientists". And, as I like to repeat, competition that does not turn into oligopoly does fuel progress big time. The AI Cold War, if it were to happen, would very likely be extremely fruitful for most ordinary citizens, much like the first Cold War was. Sure, it may elevate our chance of destroying ourselves — but we did have those moments in the past, ref. the Cuban Missile Crisis; and between going planet-wide Amish and living our human lives in a "pro-progress" way, I'd definitely pick the latter.
So, to conclude. I do believe the engineering and scientific mindset is the most important one to possess these days. Since, from first principles, this mindset is what is most logical to develop.
If anything, it's a nice model of my world which I choose to live in. It's also a healthy — I think — coping mechanism to deal with the miserable fact that the control over wealth and technology these days is ... far from ideal, on this planet, and I am increasingly saddened to realize that this state of affairs a) is unlikely to change any time soon, and b) would likely change for the worse if it were to change.
Therefore, I conclude that my best course of action in navigating this life is to remain a techno-optimist and techno-accelerationist, who can prove himself useful at making things happen when and if we do enter the acceleration phase.
Including, of course, our own planet, and our very, human, role in it. It's not just technocracy and space exploration I'm talking about. It's also magnificent cities where humans thrive, and want to live.
If only — IF ONLY! — for the sole purpose of finding those future one-in-a-billion human beings who would take this craft of AI-assisted exploration of the universe and human nature further.
Sure, this will eventually lead to human extinction. Since in a few "generations", the next prodigy-in-residence would be far too psychopathic, and would actually want to "rule" the world, with an iron fist. Well, first, that's a risk we all have to take. Second, hopefully, enough human-crewed space vessels will be en route to other parts of the Galaxy by that time. And third — and I believe the most likely scenario — is simply that "sustaining" humans "in their natural habitat" would be as cheap and as fun as it is to sustain many animals today.
Think of how the colonizers treated Native Americans a couple hundred years ago vs. now. And think of cruel animal performances in zoos centuries and even decades ago, compared to wildlife today actively seeking assistance from humans in the more developed parts of our civilization, where empathy does thrive. Something like this may well happen once the "chief scientist in charge" assumes powers that are beyond what we have been able to imagine thus far.
There's also a techno-libertarian take to this. If there are multiple isolated parts of the planet — much like what we're seeing now — there will be multiple "chief scientists". And, as I like to repeat, competition that does not turn into oligopoly does fuel progress big time. The AI Cold War, if it were to happen, would very likely be extremely fruitful for most ordinary citizens, much like the first Cold War was. Sure, it may elevate our chance of destroying ourselves — but we did have those moments in the past, ref. the Cuban Missile Crisis; and between going planet-wide Amish and living our human lives in a "pro-progress" way, I'd definitely pick the latter.
So, to conclude. I do believe the engineering and scientific mindset is the most important one to possess these days. Since, from first principles, this mindset is what is most logical to develop.
If anything, it's a nice model of my world which I choose to live in. It's also a healthy — I think — coping mechanism to deal with the miserable fact that the control over wealth and technology these days is ... far from ideal, on this planet, and I am increasingly saddened to realize that this state of affairs a) is unlikely to change any time soon, and b) would likely change for the worse if it were to change.
Therefore, I conclude that my best course of action in navigating this life is to remain a techno-optimist and techno-accelerationist, who can prove himself useful at making things happen when and if we do enter the acceleration phase.
Perhaps the future role of humans in the AI- and robotics-powered economy is that of weird scientists from the past?
It has already assumed the role of shamans. The next step is to learn "directional creativity", also known as science and engineering. Hear…
Recently I wrote about how I “predicted” LLMs circa 2005. Now I want to touch on how I suggested vector search circa 2012.
TL;DR: Embeddings were taking shape. Very early, in fact — word2vec and GloVe weren’t there yet — but I was at Google and then Microsoft, trying to see where the wind was blowing. So I argued we should build a search engine on, well, meaning, not words.
The algorithm I proposed looks trivial in hindsight; whether it was revolutionary is not for me to judge.
Take the embeddings — or whatever we called them then. Apply them to smaller chunks of content: paragraphs for Web search, or sessions of user preferences for feed ranking. These embeddings should already be roughly orthogonal; if not, reduce dimensionality with PCA.
Then select a few random orthogonal vectors, say N = 42. Think hyperplanes that split the “document” space into two parts, “positive” and “negative.” Each hyperplane yields a bit: 0 for negative, 1 for positive. Now each document is an N-bit binary vector.
The Hamming distance — the number of differing bits — becomes a proxy for semantic similarity.
Next, use a meet-in-the-middle trick to traverse the space quickly. Start from a query, which can be one or several points. When building the index, for each point add: itself, everything 1, 2, and 3 bit flips away — not too much bloat. At query time, replace subsets of one / two / three / four bits with a mask (“this bit doesn’t matter”). Final step: check intersections.
Voilà. The intersection is the candidate set; then you rank within it. The Hamming distance itself is also a useful signal.
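The signature scheme above can be sketched in a few dozen lines. This is an illustrative toy, not the original implementation: Gaussian random vectors stand in for the "random orthogonal vectors" (in high dimensions they are nearly orthogonal anyway), and all names and parameters here are made up.

```python
import math
import random

N = 16    # number of hyperplanes / signature bits (the post's example uses 42)
DIM = 8   # embedding dimensionality (toy-sized)

random.seed(7)  # deterministic toy run

def random_unit_vector(dim):
    v = [random.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

# Each hyperplane is defined by its normal vector; Gaussian vectors are
# nearly orthogonal in high dimensions.
HYPERPLANES = [random_unit_vector(DIM) for _ in range(N)]

def signature(embedding):
    """N-bit signature: bit i is 1 iff the embedding lies on the
    'positive' side of hyperplane i."""
    bits = 0
    for i, h in enumerate(HYPERPLANES):
        if sum(a * b for a, b in zip(h, embedding)) >= 0:
            bits |= 1 << i
    return bits

def hamming(a, b):
    """Number of differing bits — the proxy for semantic similarity."""
    return bin(a ^ b).count("1")

doc = [random.gauss(0, 1) for _ in range(DIM)]
near = [x + random.gauss(0, 0.05) for x in doc]  # slightly perturbed copy
far = [random.gauss(0, 1) for _ in range(DIM)]   # unrelated embedding

# A near-duplicate should flip few hyperplane bits; an unrelated vector
# should typically flip many more.
print(hamming(signature(doc), signature(near)))
print(hamming(signature(doc), signature(far)))
```

The meet-in-the-middle part would then index each signature alongside its 1–3-bit-flip neighborhood and intersect candidate sets at query time; that bookkeeping is omitted here.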
The example N = 42 can be tuned. For up to five or six “bits of separation,” I estimated a couple hundred hits. Too many? Increase N. Too few? Decrease.
You can also adjust the “bloat factor” and “search depth.” My thinking then: keep bloat small to control index size, keep depth small for latency. If you need more hits, the natural lever is lowering N.
This scales almost perfectly. It’s easy to shard the index into near-disjoint parts with minimal overlap, so most queries hit very few — sometimes just one — node.
Ironically, and unrelated to my name, Yandex rolled out its Korolev engine in 2017. That was probably the first large-scale ranker showing results with zero term overlap. As long as the semantic match was there — and the page was good — “word similarity,” even with synonyms and stemming, became obsolete that day.
We’ve since moved on, as GPUs and massive parallelism took over. Still, from a linear-algebra perspective, what I suggested was a simplified version of how vector databases query today.
Granted, I approached it from the implementation side — which I often do. Mathematically, it was random hyperplane projections and Hamming-space neighborhoods — a pragmatic take on approximate nearest-neighbor search, where the ultimate metric is the same cosine similarity vector DBs use today.
Unlike my previous “retroactive prediction,” though, this one has a follow-up: a prediction for the future.
I believe we need to focus on less computationally expensive ML/AI models.
From economics, of course we’ll keep burning more GPU power. It’s ROI-positive, winner-takes-all, so no sane actor stops.
From intellect, though, we’re simulating far more “thinking” than the entire human race — even the animal kingdom — combined. We’ve reached the plateau.
Instead, imagine algorithms that compete with each other — and humans — at Chess and Go (and StarCraft). On Arduino. Use massive LLMs to evolve better algorithms, sure. But the actual programs should be trivial enough to move within maybe thousands — or even hundreds — of CPU cycles. Likely a mix of pattern matching and alpha-beta-pruned search. That’s how expectation-maximization should converge.
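To make the "trivial program" claim concrete: the alpha-beta core really is tiny. Here is a minimal sketch on a Nim variant (take 1–3 stones; taking the last stone wins) rather than chess, purely to keep the example self-contained — the pruning structure is the same one a tiny chess program would use.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def negamax(pile, alpha=-1, beta=1):
    """+1 if the player to move wins with perfect play, -1 otherwise.
    Scores are only ever ±1, so the (alpha, beta) window is tiny."""
    if pile == 0:
        return -1  # the previous player took the last stone and won
    best = -1
    for take in (1, 2, 3):
        if take > pile:
            break
        score = -negamax(pile - take, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # alpha-beta cutoff: no need to examine further moves
    return best

# Known result for this game: piles divisible by 4 lose for the side to move.
print([negamax(p) for p in range(1, 9)])  # [1, 1, 1, -1, 1, 1, 1, -1]
```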
Then the real challenge: how cheap and tiny can hardware be, yet reach Grandmaster in Chess? And how few watts can it use?
Solve this, and we’ll be better equipped for problems that truly require intelligence. Cracking physics. Solving energy, food, and housing crises.
Did Durov and Lex talk monetization?
From what I’ve seen, Durov once promised “no promoted channels, ever” — which is clearly no longer true. So either there’s an honest conversation about breaking promises, or the phrase “we made a promise to our users” carries no weight. Did Lex ask? Did Pavel answer?
For the record: I’m fine with Telegram as a product. But it is neither decentralized nor secure.
Security
A real “secure Telegram” would let users bring their own encryption devices. Just like you can set a proxy, why not set an “encryption router”? Some would use secure chips, others a Raspberry Pi running OpenSSL. The idea: only the recipient’s device can decrypt. Same reliability as tokens on a blockchain — zero trust in intermediaries. I’d gladly pay a few dollars a month for that.
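To make the trust model concrete, here is a toy sketch of recipient-only decryption: textbook Diffie-Hellman over a Mersenne prime with a hash-based keystream. It is deliberately insecure and purely illustrative — a real "encryption router" would use vetted primitives (X25519, AES-GCM) on dedicated hardware — but it shows why the relay in the middle never needs to be trusted.

```python
import hashlib
import secrets

# Toy group parameters: a Mersenne prime, fine for a demo, useless for
# real security.
P = 2**127 - 1
G = 3

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_key(my_priv, their_pub):
    # Both sides compute G^(a*b) mod P and hash it into a symmetric key.
    return hashlib.sha256(str(pow(their_pub, my_priv, P)).encode()).digest()

def xor_cipher(key, data):
    # SHA-256 in counter mode as a keystream — illustration only.
    out = bytearray()
    for block, start in enumerate(range(0, len(data), 32)):
        stream = hashlib.sha256(key + block.to_bytes(4, "big")).digest()
        out.extend(a ^ b for a, b in zip(data[start:start + 32], stream))
    return bytes(out)

alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# Alice encrypts for Bob; the relay (Telegram) only ever sees `ct`.
ct = xor_cipher(shared_key(alice_priv, bob_pub), b"meet at noon")
# Only Bob's device, holding bob_priv, can derive the key and decrypt.
pt = xor_cipher(shared_key(bob_priv, alice_pub), ct)
print(pt)  # b'meet at noon'
```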
Decentralization
A step further: let Telegram run entirely on private clusters, offline from the Internet. Not free, but maybe $25/month for a frozen version that only talks to my own servers.
Reality check
Of course, Telegram is a business. And businesses work with regulators, not against them. Blockchain shows it’s possible to give regulators the finger and survive — but I don’t expect Telegram to go that far.
Web3 bridge
Today, Web3 fees are already minuscule. Sending a short message securely could cost $0.001 on-chain. Slower and pricier than centralized apps, but worth it for true privacy. Telegram doesn’t have to become that service, but it could at least be friendly to those that are.
Transparency
Telegram does publish some aggregate stats on government requests — good, but insufficient. Imagine if all such requests were logged on-chain: IDs immediately, aggregates within a month, full texts after five years. That would make “privacy first” more than marketing.
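One hedged sketch of how "IDs now, full texts years later" could be made verifiable without trusting the platform: publish a salted hash commitment per request immediately, then check the eventual disclosure against it. A plain dict stands in for the chain here; all names are illustrative.

```python
import hashlib

def commit(request_text, salt):
    # Salted hash commitment: publishable today, reveals nothing by itself.
    return hashlib.sha256(salt + request_text.encode()).hexdigest()

# Year 0: only an opaque commitment per request ID goes on-chain.
salt = b"random-per-request-salt"  # kept secret until disclosure
chain = {"req-2025-001": commit("Court order #123: hand over metadata", salt)}

# Year 5: the full text (plus salt) is disclosed; anyone can verify it
# matches what was committed years earlier — no retroactive editing.
disclosed = "Court order #123: hand over metadata"
verified = commit(disclosed, salt) == chain["req-2025-001"]
print(verified)  # True
```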
Censorship
Same with group removals: if Telegram refuses a takedown, those groups should be listed publicly. History is clear: those who censor at scale never win long-term. Content banned in one country should remain visible elsewhere — just like laws differ across borders.
Bottom line
Telegram today feels no more private than Slack or Discord. Any authority with minimal paperwork can get in. Unless it embraces real user-controlled encryption, decentralization, and transparent court-order logging, it won’t qualify as a “privacy first” platform.
PS: I'm growing into using AI to shorten posts so that they fit the limit.
Just realized we did pass a major milestone recently, and somehow it went largely unnoticed.
We used to compare blockchain payments to Visa payments under the assumption that Visa is a huge player, and crypto "could be promising", "has a chance to grow from its minuscule market share today", etc.
These days we are talking about Web3-operated and Visa-operated payments as equal partners.
I just stumbled upon a piece where the author boasts (!) that Visa can be "almost on par" with blockchain-native payments.
It took 10+ years, but it's the future. We are indeed living it.
Perhaps it's now safe to say that Visa cards will be retired in some 10+ years?
As in, they will live on as legacy for God knows how long — we're still seeing Americans pay for groceries with paper checks sometimes.
By retiring Visa I mean we could be living in a world where the whole family of payment protocols is decentralized by design. So if some region or business decides to launch "their own payment method", it will be blockchain-based.
And our smartphones and smart watches will not "support Apple Pay", but support Web3-first means of payment.
So that teenagers boasting that they "hacked their PlayStation to pay for bus rides" will be the norm, not the exception.
(My fingers almost wrote "hacked their GameBoy" about teenagers, that's age showing.)
However, hear me out. It is becoming realistic as we speak. Some ten years from now, for payments under, say, $50, such as coffee, setting up the bridge for a newly created payment method would be common knowledge.
Not only Visa and banks. Products such as Venmo or Zelle will be a thing of the past.
They may still exist as consumer products and as gateways for regulators and tax forms — one cannot, for instance, run a business by paying their "employees" via Zelle.
But when it comes to actual payments, those projects should be out of business. There'll be cash, but not all places would accept it. There will be credit cards and Apple Pay and what not. And there will be stablecoins, or whatever acts as stablecoins in 10+ years.
Many businesses today still add an extra percentage for credit card payments, because card payments are slow to hit one's bank account, and because transactions can be disputed and reversed, forcing the business to do extra paperwork.
Businesses in 10+ years may prefer stablecoin payments, since they are instant, ultra low-fee, and friendly with regulators either way.
As in, hopefully most developed countries will live in a world where tips are tax-free as long as they constitute only a small fraction of the individual's or business's income. So a lot of average Johns and Janes will not pay cash for their haircut or coffee — they will pay with their own Apple-Pay-like device, indistinguishable from cash for all intents and purposes.
For small transactions, people who know each other well enough would just not bother.
And this will be a brave new world taxation-wise. Hopefully, we could get back to a model where the business can choose to be taxed based on size, not transaction volume.
As in, a coffee shop owner may just file their taxes based on the number of tables, not the coffees they make. Reasonable countries should appreciate this approach — if anything, with comparable tax revenue, it incentivizes businesses to serve more customers, better.
Although I’m afraid this may not end well. More governments are struggling to justify their revenue models as we speak. So they may well perceive further proliferation of Web3-first payment methods as an existential threat — which it is for those who refuse to serve people well and instead optimize for KPIs the public doesn’t endorse.
Time will tell. All I'm saying is it'd be nice to have inter-governmental competition at this level, instead of wars and tariffs.
And I, for one, find it easier to envision myself in a place where the government takes this Web3-first signal as a call to become more effective — not as a threat to its existence, since effectiveness has long been off this government's menu.
What’s the Most Lightweight Blockchain to Run In-House with Fast Finality?
This question probably needs some context. First, it’s for our weekend project — something that deals with consistency and durability. Second, it’s about being lightweight to set up and run, not about the logic itself.
Another way to phrase it: Is there something like etcd or SQLite, but designed for Byzantine consensus — ideally in Rust?
In fact, etcd itself would work quite well, and its API is comfortable to use. Even a simpler API would suffice: just an atomic swap of values for given keys, as long as the current bodies and versions of those keys match what the request specifies. Add passthrough mutation epoch numbering, a majority consensus among nodes, and a small data footprint (say, under ~1GB total, in-memory is fine).
The only thing etcd lacks for this particular use case is request signing with the nodes’ private keys.
The other missing piece is that etcd’s API is too tightly coupled to etcd itself. Eventually, my code will need to run across data centers, where nodes join and leave dynamically. It will be configured differently. Maybe even hosted on Substrate someday. Maybe even anchored to a major chain like Ethereum for proofs.
But I want the API to stay the same — so that the client-side code using this consensus engine doesn’t need to change. I might need to rebuild it with a new client library that checks different signatures upon commit, but it should be the same code.
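For concreteness, here is one possible shape for such a backend-agnostic API — a hypothetical multi-key compare-and-swap interface with an in-memory stand-in for the actual consensus layer. The names and the signing hook are my own invention, not an existing library; the point is that the client-facing surface stays fixed while the backend (etcd, Substrate, anything) changes underneath.

```python
from dataclasses import dataclass, field

@dataclass
class Entry:
    value: bytes
    version: int = 0

@dataclass
class InMemoryConsensusKV:
    store: dict = field(default_factory=dict)
    epoch: int = 0  # passthrough mutation epoch numbering

    def atomic_swap(self, expected: dict, updates: dict, signature: bytes = b""):
        """Apply `updates` iff every key in `expected` currently holds the
        stated (value, version). Returns (ok, epoch)."""
        # A real deployment would verify `signature` against the node's
        # public key and run this mutation through Byzantine consensus.
        for key, (value, version) in expected.items():
            cur = self.store.get(key)
            if cur is None or cur.value != value or cur.version != version:
                return False, self.epoch
        self.epoch += 1
        for key, value in updates.items():
            cur = self.store.get(key)
            self.store[key] = Entry(value, (cur.version if cur else 0) + 1)
        return True, self.epoch

kv = InMemoryConsensusKV()
kv.store["leader"] = Entry(b"node-a", version=1)

ok, epoch = kv.atomic_swap(expected={"leader": (b"node-a", 1)},
                           updates={"leader": b"node-b"})
print(ok, epoch, kv.store["leader"])  # True 1 Entry(value=b'node-b', version=2)

# A stale writer loses: the version has already moved on.
ok, _ = kv.atomic_swap(expected={"leader": (b"node-a", 1)},
                       updates={"leader": b"node-c"})
print(ok)  # False
```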
So: is there a quick tool like this? Or a protocol? Or even just a whitepaper or blog post describing something similar?
I could hack it together in half a day on top of etcd, and then another half day over Substrate. But it feels like something this simple and useful must already exist somewhere.
Thanks in advance for any pointers!
When coding complex systems end-to-end, I’m often struck by how simple systems that work nearly 100% of the time scale far better than those that work “almost” 100% of the time.
Like CPU instructions and hard drives — executed gazillions of times on our planet — they’re so reliable that we’ve built incredibly complex, multi-layered systems atop them.
Every small thing works: from TCP/IP windows reassembling packets from physical signals, to compilers producing binaries, to asymmetric signatures matching across every blockchain. None of this would be possible without systems that are both near-perfectly accurate and fast and cheap to operate.
It hit me that the digital world surpassed the physical one long ago. The number of times a transistor was used as a logic gate exceeds the total number of uses of every physical device ever made.
Even sophisticated operations — calling a function, storing a byte — likely outnumber every rotation of every human-made axis: every motor, turbine, or wheel, however small or fast.
The next frontier is organic life. I’m no biologist, but a human body has ~40 trillion cells. Most contain DNA — a bit under 2GB of it. Not all cells divide, but enough do that GPT estimates roughly 2 yottabytes of data are copied over a human's lifespan.
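A back-of-the-envelope check of that figure, with loudly approximate inputs: a ~1.6 GB diploid genome (~6.4 billion bases at 2 bits each) and a lifetime division count that published estimates place somewhere around 10^15–10^16.

```python
GB = 1e9
YB = 1e24

genome_bytes = 1.6 * GB   # ~6.4e9 bases at 2 bits per base: "a bit under 2GB"
divisions = 1.2e15        # lifetime cell divisions, low end of the estimates

copied = genome_bytes * divisions
print(f"~{copied / YB:.1f} YB copied over a lifetime")  # ~1.9 YB
```

The higher-end division estimates (~10^16) would push this up an order of magnitude, so "roughly 2 yottabytes" is the right ballpark rather than a precise figure.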
That’s only about one order of magnitude more than all digital data humans have ever generated! And our data creation is still growing exponentially — even before the AI boom that started single-digit years ago.
Nothing physical — no spinning, turning, or shuttling — comes close to biological replication in sheer cycle counts. At least not until we reach large-scale nanotech. Or robots reproducing in space from local materials sourced from celestial bodies — but that’s far, far off, both in tech advancement and in astronomical time.
Yet in the realm of bits, we’re nearing that threshold. Probably within a decade or two, barring catastrophe. And given human size and DNA complexity, it’s just a small step from “all humans ever” to “all organic life on Earth, ever”.
That scale makes me shiver. My lifetime might cross that before/after point.
And all of this is only possible because we’ve mastered precise computation — and built an industrial machine that keeps making it cheaper to produce and to power.
Maybe the singularity really is near. Simulating all organic life could soon be plausible — energy-intensive, yes, but achievable. And we all know who’s most eager to be the ones making that planetary-scale computation happen.
Perhaps another way to look at it is through the lens of the “effective Kardashev scale”. If computation hadn’t grown exponentially more energy-efficient since 1964, training today’s LLMs would indeed require a Dyson Sphere around our Sun.
And on an esoteric note. If we do live in a simulation, that simulation has to run on some hardware. The technicians who operate this hardware for our planet must be running like crazy now — since nothing "organic" except intelligence multiplied by industrial scale can possibly come close to the spike this large in computational resources required!
Serious question.
So I’m reading that in The Big Bang Theory they have tried hard to portray the characters’ intelligence accurately.
To put it bluntly, various online sources say that Sheldon and Leonard are supposed to have Einstein and Hawking levels of IQ.
Penny, on the other hand, is meant to have an IQ of 100 — presumably the average of the general population.
Here’s what I want to ask: is this portrayal accurate?
When it comes to out-of-this-world autistic savants, I can confirm that Sheldon and Leonard do look quite authentic.
But when it comes to the general public?
Penny’s verbal intelligence is definitely greater than that of your average air travel passenger — and that’s quite above the average if you ask me.
I’m trying to think where I can possibly interact with an average Joe or Jane, since I don’t frequent churches. Perhaps the SSA office or the DMV, or a grocery store — as long as it’s not Whole Foods in Bellevue.
And, oh boy, Penny would be leaps and bounds beyond the average visitor of the Social Security Office. Heck, she’d be ahead of most Social Security Office staff members.
What am I missing, as in, where’s the catch? Perhaps Penny is much smarter in the series than the lore suggests her to be? Or perhaps the show only gives us a glimpse into her behavior in her presumably best moments?
Would love to know the “official” position, as well as what the fans think.
TIL that when googling for [tahoma font] or [comic sans font] you do get results in this font!
I officially hate data regulations with passion.
From the majority of countries and territories, one can create an account with Amazon Web Services, Google Cloud Platform, Microsoft Azure, or many other options. Then one can choose to upload an archive of their data — say, a hundred gigabytes — to the location of their liking.
Encrypted data, I should add. Client-side encrypted. So that the cloud provider would not know what’s inside that archive.
This works. Except when it’s not “the archive of data” but, say, your Gmail.
How on Earth can I not just select my preferred storage location? Manually. I’d probably just pick where it’s cheaper; probably Japan or Ireland.
But hell no! Somehow the government bodies of some countries and territories have decided their residents are too stupid to decide for themselves, and will ultimately be taken advantage of. By, well, communication services.
What’s more, not only is it not in the interest of the user — it’s straight-up collusion. It’s regulatory capture designed to make it more difficult to build a competitor to, well, established communication service providers.
Once again: if you understand anything at all about technology, you know you can send your data anywhere. And in many locations there will be a cloud provider eager to take your buck to store that data. With reliability and durability and availability guarantees.
And privacy guarantees too. Although given your data is encrypted, the privacy guarantee is frankly redundant.
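To make “the cloud provider would not know what’s inside” concrete, here is a toy sketch of client-side encryption: an HMAC-SHA256 counter-mode keystream. This illustrates the principle only — in practice you would use a vetted AEAD cipher from an audited library (e.g. AES-GCM from the `cryptography` package), never hand-rolled code like this:

```python
# Toy client-side encryption sketch: HMAC-SHA256 as a PRF in counter mode.
# Illustration only — use an audited AEAD library in real life.
import hmac, hashlib, secrets

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR `data` with a PRF-derived keystream; the same call decrypts."""
    out = bytearray()
    for offset in range(0, len(data), 32):
        counter = (offset // 32).to_bytes(8, "big")
        block = hmac.new(key, nonce + counter, hashlib.sha256).digest()
        chunk = data[offset:offset + 32]
        out.extend(b ^ k for b, k in zip(chunk, block))
    return bytes(out)

key = secrets.token_bytes(32)    # stays on the client, never uploaded
nonce = secrets.token_bytes(16)  # stored alongside the ciphertext
archive = b"my hundred gigabytes of personal data (abridged)"

ciphertext = keystream_xor(key, nonce, archive)
# The provider stores only (nonce, ciphertext) and learns nothing useful.
assert keystream_xor(key, nonce, ciphertext) == archive
```

The point of the sketch: the key never leaves the client, so whatever jurisdiction the bytes physically land in is irrelevant to privacy.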
Most people do not know this and/or do not think about this. Which is exactly the ignorance those regulatory-captured collusions are feeding on as we speak.
I’m longing for the world where we all collectively understand how useless and absurd these “rules” are.
It’s literally as if you could wear any outfit anywhere — but you must wear a certain color as you pass through certain automated gates. Everyone knows it; it’s security by obscurity at best. And we’re just used to changing our clothes on the go so those gates don’t yell at us.
Remove the gates. Nothing would change for the worse. Things would become marginally faster, which is good. Competing communication services would begin to pop up, which is great. And digitally tyrannical regimes such as Denmark would not be able to access your email and chat messages — which is doubly great.
There are literally no losers if we remove those gates altogether. Except the very folks maintaining those gates and plotting for further means to monetize on the fact that they, well, control those gates.
Well, they, to my taste, deserve to lose and should lose — they are rent-seeking parasites who bring in no value to begin with.
And yet we’re talking seriously about how digital privacy is so important that we have to keep those extra sets of clothes to change into every once in a while.
Ah, and don’t even get me started on those VPN services!
What happened to 3G? Why is it not usable these days?
When I was younger, we literally lived on 3G for quite a while.
It should be more than enough to send and receive messages and check email. 3G is at least hundreds of kilobytes per second, after all.
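A sanity check on those rates — nominal figures from the UMTS/HSPA generations, not field measurements:

```python
# Rough check: nominal 3G rates vs what a tiny payload actually needs.
# Rates are standard nominal figures (UMTS Release 99 through HSPA+).
RATES_BPS = {
    "UMTS R99 (384 kbps)": 384_000,
    "HSPA (3.6 Mbps)": 3_600_000,
    "HSPA+ (21 Mbps)": 21_000_000,
}
PAYLOAD_BYTES = 5 * 1024  # a small git push or a chat message body

for name, bps in RATES_BPS.items():
    seconds = PAYLOAD_BYTES * 8 / bps
    print(f"{name}: {seconds * 1000:.0f} ms for {PAYLOAD_BYTES} bytes")
```

At face value, even the slowest 3G tier moves a few kilobytes in about a tenth of a second — so whatever stalls that payload today, it isn’t the nominal bandwidth.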
And yet these days, 3G is barely enough to receive a push notification. When it comes to the body of that message or the contents of that LinkedIn update, it's stuck forever.
I'm sure the younger generation already equates "my phone only has 3G" with "my phone has no Internet."
Oftentimes, when I personally see "3G," I assume that the browser will not work, but I should still be able to git push. But no, that's just not the case — not even for a few kilobytes.
Is 3G just a lie now? As in, do mobile operators pretend to support it to include a certain area under "some coverage" instead of under "no coverage"?
Or perhaps there's a better explanation?
Enlighten me, please.
I’m thinking this thought again — only now I’m in my early forties instead of my early twenties.
What do people truly own in the modern economic system?
When it comes to hard possessions like real estate or cars, they’re fully subject to whatever a country decides is the “best use” for them. Sure, quartering is illegal in most places these days, but your vehicle? In much of the world, law enforcement can simply use it for the state’s needs — and saying “no” can get you arrested.
There’s also no upper bound on taxes for either real estate or cars. Insurance companies lump government actions together with wars and natural disasters — “acts of God.” You can’t insure yourself against property taxes going up. Perhaps Web3 and Polymarket-grade products could change that one day.
But in most first-world countries, it’s nearly impossible to estimate the effective tax rate your grandchildren will face when they sell the property they inherit from you. It may well be over 100% with all the inheritance and wealth taxes. Honestly, there are places today where your kids and grandkids renting instead of buying is just a better fiscal choice.
On the global stage, government bonds might seem the safest thing to hold. And yet, governments do default from time to time.
Shares of companies, public and private? Don’t get me started. Public shares are subject to exchanges’ rules and endless regulation. Private company stock? I’ve seen how the sausage is made — assuming 1% ownership equals 1% of its market cap is absurd. Add taxes on unrealized gains and on wealth itself, and good luck explaining to some European countries that many “paper millionaires” are actually broke af.
Gold used to be a good example — until confiscations happened. We all know the drill.
Crypto feels more real and tangible, as long as we believe in encryption — and as long as no coalition of state actors decides to severely tax or outright ban it. Most of us trust the math. What’s less certain is whether governments will let crypto remain untouchable, except for the coins they can fully control and tax. A full ban may well happen within my lifetime.
Interestingly, preppers have a point. Water, food, ammo, guns, generators — these things have intrinsic value. But after thinking about it long and hard, I still believe owning a small condo in a quiet, moderately populated area is a safer bet than prepping.
The status quo is bleak when viewed holistically. But here’s the deal: I wouldn’t even be opposed to it, as long as it’s a two-way street.
For example, I don’t mind if some countries choose to have 24/7 video surveillance — as long as people are free to be there or not, and vote with their feet. And as long as that surveillance actually helps the people. If my rental car is scratched or stolen, I should get an apology and fair compensation — not the other way around. If you have 24/7 cameras everywhere, they should serve the residents and guests, right? Somehow, though, neither Dubai nor Singapore has this, to the best of my knowledge.
If with all those cameras you still can’t do something as basic as prosecuting every car theft, maybe you’re not the ones to trust with surveillance in the first place. As a private citizen, I’d assume that taking full responsibility for car theft would be a basic condition of my consent to being filmed 24/7. But governments have clearly chosen a different path — convincing people it’s “for their safety” while applying almost no accountability to themselves.
So the question stands.
What do we really own?
Is there still a social contract we can reasonably expect to be honored? Or should we all admit that “the consent of the governed” is no longer a real thing?
If it’s the latter — and given that most people in the developed world have food and shelter — the grassroots pitchforks are off the table.
But in that case … where exactly are we steering this machine?
Are we in the endgame of human civilization — or just the closing act of one of its Renaissance or Enlightenment periods?
Here’s a realization from a conversation with a friend yesterday.
The career honeypot for software engineers is, to a large degree, a giant filtering mechanism — filtering who can see through the charade.
Interns and junior engineers are deliberately fed lies. Lies about how the industry works, their future growth, their impact, their compensation, and the dent they’re leaving in the world of technology — and the human world at large.
I used to see this as almost a personal insult. Why tell people, with a straight face, that their job is to ship better software, faster, with fewer bugs — maintained by smaller teams enjoying high development velocity?
All of this is provably false in the vast majority of companies with over a hundred engineers.
For years, even thinking about this enraged me. I wondered why people didn’t talk about it, what could change, and how to make things more fair. I thought of all the betrayed engineers — myself included early on.
Then it hit me.
This is the system. It works like this by design. It’s the Matrix — only most developers are geeks who’d rather take the blue pill: the pill of coding more, for fun and profit.
And it’s a win-win. Talented, hard-working, non-contrarian engineers get stable, well-paid jobs that are at least somewhat rewarding.
Most geeks I know would quietly say, “Yeah, it’s dull, but there are still interesting problems here and there.” And that’s okay. The industry runs on these people.
We’re not lying so corporations can profit; we’re simply spotlighting the pleasant parts of software careers — and glossing over the rest.
Geeks, myself included, are great at selling ourselves a nice promise. So we stay — building careers, raising families, buying homes, paying taxes. And in a way, the biggest benefactors of this institutionalized deceit might be geeks themselves.
That’s the gist. But let’s end on a positive note. Awareness helps — and if you’re reading this, you probably seek it.
The career of a software engineer is wonderful if you’re comfortable keeping the same geeky interests in your thirties and forties that you had in your teens and twenties. (I assume the same holds for fifties and sixties — though I’m not there yet.)
If your preferences might change, there are plenty of forks ahead: tech lead, manager, architect, evangelist, founder — on either the product or tech side. Many products are by engineers, for engineers. A lot of data and analytics work can be surprisingly fun — your clients will be younger versions of you, hungrier and more foolish.
Pick consciously. A geek staying in engineering for 30+ years can be a happy person — and I, for one, would be happy for you.
What I’m warning against is that moment when you realize you’re no longer a geek, but most of your career is behind you — too late to pivot, too dreary to keep running the same loop.
I’m lucky — privileged, in hindsight — to have avoided that trap, mostly by accident. I was literally yesterday years old when I realized the promises we make to young geeks are both a lie and a self-fueling lie. But I’ve been acting as if I knew this since my early thirties, developing the parts of me that spark joy as an Architect, Evangelist, or founding engineer. And I like those parts.
So here’s the bottom line: it’s probably for the best that we keep lying to younger folks about the joys of software engineering. Most will buy it happily.
Just don’t forget — the best of them will eventually see through it. Throw them hints. Show them glimpses of how this Matrix looks from the other side.
Because one in a hundred — or a thousand — bright-eyed engineers will become a terrific architect, evangelist, or founder after seeing through the charade in ten years, not thirty. And if we keep up this illusion, it’s on us to guide the best of them toward something greater.
Yesterday I almost lost it, talking about investors and their so-called “reputation.”
The case: an ex-employee decides to claim extra money from the company, after signing an agreement that clearly defined the separation terms — two months of pay, maybe a bit more depending on tenure. Yet they come back waving some obscure European or Californian labour law, demanding another half-year of salary on top.
My view is simple: a company has a fiduciary duty to protect its funds. No sane investor should endorse bleeding more money from the company’s budget — shortening its runway, increasing its risk — just to please someone exploiting legal loopholes.
Turns out, that’s only half-true. Legally, investors do expect founders to manage such risks upfront. But then comes the absurd part: reputational risk. Investors apparently dislike being seen as “harsh” — so they quietly suggest founders settle. Pay up. Move on.
And I’m genuinely confused.
Publicly, these same investors preach pro-business values, fiscal discipline, capitalism, efficiency. But privately? They whisper: “Just pay the jerk and make it go away.” Without, of course, writing an extra cheque to cover it. The founder eats the loss — the company bleeds, the opportunist wins — all in the name of investor reputation.
There’s no shortage of posts from VCs lamenting how hard it is to build in Europe. Fine. Then take a stand. Fund the fight. Write the cheque. Show that reputation actually means something.
If it’s just a few hundred grand, and your brand is so precious, pay it yourselves. Then be proud: “We burned money to appease a regulator we despise — because we stand by our principles.”
But no. The same investors who love to rail against anti-capitalist regulation are, in practice, siding with it — quietly enforcing it when it’s inconvenient to resist. Sharks, indeed.
If anything, investors should form a coalition — a collective stance against over-regulated labour traps. Back a company like Deel, but better: one that enforces globally fair, contract-based employment, protecting both sides — and retaliates, legally and reputationally, against bad-faith actors.
Honestly, I might just belong in Web3 after all. At least there, people still mean what they say about fairness, risk, and skin in the game.
One of my all-time favorite books is The Righteous Mind by Jonathan Haidt. Combined with recent conversations about the Four Happy Hormones, it made me reflect again on how emotions shape judgment.
And here’s what I rediscovered about myself.
Disgust is real. It's a chemical reaction that’s nearly impossible to “fix.” The best way to deal with it is to avoid triggers altogether.
Thankfully, I’m immune to many “standard” disgust triggers.
Some people would feel uneasy if there were an orgy or same-sex activity next door. Personally, I feel only positive emotions when people enjoy themselves as consenting adults. Same with substances. Some are more dangerous than others, but if my neighbors are tripping or smoking pot, I don’t care.
Alcohol is riskier in large doses — fights may erupt — but if something truly unsafe happens, I’ll focus on removing myself and the people I care for from the situation. Perhaps I'll consider leaving the place — for the night or for good — but I’m not interested in telling people what they should do, unless my family is in immediate danger and I’m forced to act for protection.
Sanctity triggers are similar. Burning flags, stepping on sacred symbols — none of this moves me emotionally. If people around me are doing Satanic rituals, I wouldn’t join, but I might even laugh with them afterwards. Why did you draw that pentagram upside down in red again?
In short, I’m comfortable around most forms of human expression.
Except one.
What triggers my disgust — deeply, physically — is inefficiency. Especially when paired with people who refuse to fix it.
Example: the airline I often fly offers bonus miles if your checked bag is late by 20 minutes. Fair policy. But claiming it is a nightmare — calls, forms, no confirmation, and weeks of “processing.” Zero transparency.
Or hotels: if I have a working key, clearly I’ve checked in. If you upgraded my room due to my status, clearly there’s no ambiguity about who I am. Yet nights sometimes fail to post. I find it harder to design a system that sometimes fails than one that always works!
Bad-faith actors fall into the same category — this is what my previous post was about. I want systems where what’s owed is always paid, and what isn’t never is. To the point where anyone contesting it in court is guaranteed to lose. Even typing this paragraph triggers that same familiar disgust.
I'll tell you more. I was proofreading this post with ChatGPT. At some point I cut-and-pasted it into a new window — and the newlines were gone. And oh boy, what I felt was indeed disgust! How dare you ship a product that fails to copy newlines? Why do you hate your users so much?
The sober realization from this round of [over-]thinking is: I should channel my disgust to where it helps me deliver — and avoid situations where it hurts me.
Basically, I'm better off partnering with people who are aware of this trait and want to leverage it for good.
If a system tolerates inefficiency, I should stay far away — or have explicit clauses compensating me for exposure to it. Ideally, exponentially. Instead, I should work with people and projects that promote clarity.
In my Search Quality days, every new model had to prove it beat the previous one. When it didn’t, I’d dig into why until it did. It's an incremental process where each step counts. And walking those steps gave me deep meaning.
Web3 has a similar vibe. Not perfect, but its protocol-level precision scratches that same itch for structure and truth. Working on those protocols has been one of my emotional highlights.
So I wonder: how common is this? Surely, many geeks share this pro-clarity, anti-ambiguity mindset.
Are there best practices for living with it?
Or is the quiet consensus still that the world isn’t ready for us — and we should focus less on improving this far-from-perfect world, and more on protecting our sanity?
Would love to hear your thoughts.
And here’s what I rediscovered about myself.
Disgust is real. It's a chemical reaction that’s nearly impossible to “fix.” The best way to deal with it is to avoid triggers altogether.
Thankfully, I’m immune to many “standard” disgust triggers.
Some people would feel uneasy if there were an orgy or same-sex activity next door. Personally, I feel only positive emotions when people enjoy themselves as consenting adults. Same with substances. Some are more dangerous than others, but if my neighbors are tripping or smoking pot, I don’t care.
Alcohol is riskier in large doses — fights may erupt — but if something truly unsafe happens, I’ll focus on removing myself and the people I care for from the situation. Perhaps I'll consider leaving the place — for the night or for good — but I’m not interested in telling people what they should do, unless my family is in immediate danger and I’m forced to act for protection.
Sanctity triggers are similar. Burning flags, stepping on sacred symbols — none of this moves me emotionally. If people around me are doing Satanic rituals, I wouldn’t join, but I might even laugh with them afterwards. Why did you draw that pentagram upside down in red again?
In short, I’m comfortable around most forms of human expression.
Except one.
What triggers my disgust — deeply, physically — is inefficiency. Especially when paired with people who refuse to fix it.
Example: the airline I often fly offers bonus miles if your checked bag is more than 20 minutes late. Fair policy. But claiming it is a nightmare — calls, forms, no confirmation, and weeks of “processing.” Zero transparency.
Or hotels: if I have a working key, clearly I’ve checked in. If you upgraded my room due to my status, clearly there’s no ambiguity about who I am. Yet nights sometimes fail to post. Frankly, it must be harder to design a system that fails intermittently than one that works every time!
Bad-faith actors fall into the same category — this is what my previous post was about. I want systems where what’s owed is always paid, and what isn’t never is. To the point where anyone contesting it in court is guaranteed to lose. Even typing this paragraph triggers that same familiar disgust.
Let me tell you more. I was proofreading this post with ChatGPT. At some point I cut and pasted it into a new window, and the newlines were gone. Oh boy, what I felt was indeed disgust! How dare you ship a product that fails to copy newlines? Why do you hate your users so much?
The sober realization from this round of [over-]thinking is: I should channel my disgust to where it helps me deliver — and avoid situations where it hurts me.
Basically, I'm better off partnering with people who are aware of this trait and want to leverage it for good.
If a system tolerates inefficiency, I should stay far away — or have explicit clauses compensating me for exposure to it, ideally scaling exponentially with that exposure. Better yet, I should work with people and projects that promote clarity.
In my Search Quality days, every new model had to prove it beat the previous one. When it didn’t, I’d dig into why until it did. It was an incremental process where each step counted, and walking those steps gave me deep meaning.
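That gating loop can be sketched in a few lines. This is a made-up illustration, not the actual Search Quality pipeline — the function name, scores, and threshold are all hypothetical:

```python
# Hypothetical sketch of the "new model must beat the old one" gate.
# Scores would come from an offline eval set; here they are invented.

def should_ship(new_scores, old_scores, min_gain=0.0):
    """Ship only if the candidate's mean eval score beats the incumbent's."""
    new_mean = sum(new_scores) / len(new_scores)
    old_mean = sum(old_scores) / len(old_scores)
    return new_mean - old_mean > min_gain

# The candidate wins on this eval set, so it may ship.
print(should_ship([0.82, 0.79, 0.85], [0.80, 0.78, 0.81]))  # → True
```

The appeal is exactly the clarity I’m describing: a single unambiguous criterion, and when the gate says no, you dig until it says yes.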
Web3 has a similar vibe. Not perfect, but its protocol-level precision scratches that same itch for structure and truth. Working on those protocols has been one of my emotional highlights.
So I wonder: how common is this? Surely, many geeks share this pro-clarity, anti-ambiguity mindset.
Are there best practices for living with it?
Or is the quiet consensus still that the world isn’t ready for us — and we should focus less on improving this far-from-perfect world, and more on protecting our sanity?
Would love to hear your thoughts.