It’s remarkable how many solid language-design choices emerge once you commit to treating types as a zero-overhead runtime abstraction.
🥰3👍1🤔1
The major cause of the software crisis is that the machines have become several orders of magnitude more powerful! To put it quite bluntly: as long as there were no machines, programming was no problem at all; when we had a few weak computers, programming became a mild problem, and now we have gigantic computers, programming has become an equally gigantic problem.
Edsger Dijkstra, The Humble Programmer (EWD340), Communications of the ACM, 1972.
🔥9❤1
Writing YAML config files used to be my least favorite activity — no schema, often no quick way to check their validity, no tab completion, no typing (of course).
On the bright side, doing this with LLMs is surprisingly pleasant.
I would not be surprised if one of the first industries to completely switch to English as the definition language is that of automatic container build / publish / put-together / test / retention-policy / etc.
Sure, people who self-identify as Engineers, like me, would still want English to not be the Source-of-Truth of those "scripts".
But I'm already quite comfortable with a development practice where English instructions are pushed right to the repo alongside the very LLM-generated "code".
So that a good code reviewer — me — would, of course, first and foremost check the correctness of what will be executed; but then pay at least as much, if not more, attention to how it was described, a.k.a. prompted.
And then the very history of prompts and "code reviews" and conversations on and about them, in a simple git repo / GitHub review tool, will become the very source-of-truth for the future LLMs to keep improving those scripts.
Definitely not bulletproof on DevOps scale. Nowhere near. But very, very good for smaller projects run by smaller teams.
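On the no-quick-way-to-validate complaint above: once a config is loaded into plain Python types (e.g. via PyYAML's yaml.safe_load), even a minimal hand-rolled schema check catches the most common mistakes. This is a sketch; the schema and field names below are invented for illustration.

```python
# Minimal structural check for a loaded config dict. The schema and the
# field names (name, replicas, image) are illustrative, not any standard.
REQUIRED = {"name": str, "replicas": int, "image": str}

def validate(cfg: dict) -> list[str]:
    """Return human-readable problems; an empty list means the config is valid."""
    errors = []
    for key, typ in REQUIRED.items():
        if key not in cfg:
            errors.append(f"missing key: {key}")
        elif not isinstance(cfg[key], typ):
            errors.append(f"{key}: expected {typ.__name__}, got {type(cfg[key]).__name__}")
    return errors

print(validate({"name": "svc", "replicas": "3"}))
# → ['replicas: expected int, got str', 'missing key: image']
```

Ten lines of this in CI already beats "no quick way to check validity" by a wide margin.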
👍3
Very good talk by Simon Peyton Jones.
He pretty much first describes how the idea of “consuming” instances, that culminated in Rust’s lifetimes and borrow checking, is very much a first-principles idea.
Then he says how, if framed just right, this “consuming” concept can be made “immutable”, naturally comparable to how IO abstracts away “mutating the World”, and thus almost monadic, but not quite.
Which allows integrating this concept into Haskell somewhat natively, leveraging what we should have been calling polymorphism of types in the first place.
And then “the rest”, that follows from the above in literally one step when looking at it from the right angle.
I also find slightly amusing, and can totally relate to, the mild bitterness that it turned out to be Rust, not Haskell, that took this very typesystem-centric idea mainstream.
https://youtu.be/t0mhvd3-60Y
YouTube: Simon Peyton Jones - Linear Haskell: practical linearity in a higher-order polymorphic language
👍3
Of many thoughts I have about this train of Agentic AI that I am currently on, a very positive one keeps standing out.
Capitalism f*cking works!
For many years I used to believe the world of products for tech people was so bad, that it's highly unlikely I'd be paying much for anything.
Effectively, the status quo was that mobile apps were under $5 a month, and other online services, except hosting, were under $10 a month.
Sure, I've been saying for years that products such as Facebook should offer an ad-free, API-first experience for some $20 per month, so that we could use our own custom clients. This was a utopia with Facebook, but Twitter, now X, is actually exploring a similar path.
Nonetheless, I remained under the assumption that my "work & life setup" would not get substantially better over the years. I'll still be coding in vim, creating and reviewing pull requests from the browser, and occasionally opening up an IDE to debug some nontrivial behavior where debug-prints are not enough.
And most definitely I was under the assumption that I would not be paying anything substantial for any "dev tools", if you'd asked me ~five years ago. Perhaps some $5 monthly donation for a "better vim" or "better code review tool", something Superhuman-like. But nothing game-changing for sure.
Fast forward to today, and AI-assisted coding is here. And it is booming. And I am using the AI every day. And I am paying more for my model usage than for all other online services. Combined.
(Except perhaps my personal hosting, domains and the cloud, but that's beside the point. And it's changing quickly as we speak.)
Very soon I may be paying so much more for models that my very computer becomes a commodity! Personally, I'm quite attached to the idea of having my own device, but the thought of it becoming unnecessary is more and more real as we speak.
And the best part is: nobody is forcing me to use the AI. This is the perfect Invisible Hand in action.
A decade or so ago I said "screw you, Market, you can't offer me anything".
The Market seemingly said "meh, well, I don't care about you".
We were content with each other.
Until we were not! Until the Market found a way into my soul. By offering, gently and with no pushing whatsoever, something that I truly want to be using — the AI.
And offering it, I should add, at a very lucrative price point. That is, so far. But, given local models are getting better and better, I believe early-2026-grade AI coding assistants have a strict upper bound on their effective monthly price, and this upper bound is both already low enough and is going down rapidly.
What say I? I say: all hail the Market! The Market — delivers.
❤5🔥2
TIL that in DB schema design world, this "standard" solution exists:
“Identifying relationship via composite primary key”.
For cases such as one-to-one-to-many in databases, where the obvious solution is the third normal form, with a dedicated table to ID one-to-one pairs on ...
... instead of actually imposing 3NF under the hood, ORM systems (including SQLAlchemy!) will actually keep those non-3NF UNIQUE constraints in junction tables.
This may be a trivial piece of knowledge for you, if you're working in this space. Or, more likely, it may be totally new to you, like it was for me.
But oh my God. There exists a perfectly legal solution, and the very job of the ORM is to create a thin wrapper layer for the user, while keeping the data model clean. And instead the ORMs are quietly creating a big mess under the hood, which creates all sorts of problems down the road. Problems that would not exist at all if the ORMs were actually designed well from day one.
Oh well. Hopefully not the last big revelation of my professional life.
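For the curious, here is a minimal sketch of the pattern in plain SQL via Python's stdlib sqlite3 (table and column names are invented for illustration): the child table's primary key embeds the parent's key, so each child row is identified through its parent, with no surrogate ID or extra UNIQUE constraint needed.

```python
import sqlite3

# "Identifying relationship via composite primary key": order_items rows
# are identified by (order_id, line_no), i.e. *through* their parent order.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (
    order_id INTEGER PRIMARY KEY
);
CREATE TABLE order_items (
    order_id INTEGER NOT NULL REFERENCES orders(order_id),
    line_no  INTEGER NOT NULL,
    sku      TEXT NOT NULL,
    PRIMARY KEY (order_id, line_no)  -- composite PK: child identified via parent
);
""")
conn.execute("INSERT INTO orders VALUES (1)")
conn.execute("INSERT INTO order_items VALUES (1, 1, 'ABC')")
conn.execute("INSERT INTO order_items VALUES (1, 2, 'DEF')")

# The same (order_id, line_no) pair cannot appear twice -- the composite
# primary key enforces uniqueness without any separate UNIQUE constraint.
try:
    conn.execute("INSERT INTO order_items VALUES (1, 1, 'XYZ')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False
print(duplicate_allowed)  # → False
```

The point of the post stands: this is textbook-clean schema design, and an ORM could expose it behind a thin wrapper instead of inventing surrogate keys plus UNIQUE constraints.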
I'm quite happy my career includes both ML/AI and Web3.
In the day and age of more and more exploits coming up in AI, the concept of human accountability is becoming increasingly important.
And what's better for individual accountability than having a proof that one's private key was used to sign some transaction?
I can't wait for the world where these two branches converge.
The engineer deploying code, or accessing production data, must use their Yubikey to sign off on their change. It's all tracked and journaled — not necessarily on-chain, but definitely in ways that enable proving, later on, who did what.
And then, a few years from now, we can tell who was nice and who was naughty when it comes to diligence vs. negligence.
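The shape of such a journal entry is simple. A real setup would use an asymmetric key on a hardware token (the Yubikey above); the sketch below substitutes stdlib HMAC with a per-engineer secret just to show the sign-then-verify flow, and every name in it is invented.

```python
import hashlib
import hmac
import json

# Hypothetical per-engineer secret; in practice the private key never
# leaves the hardware token, and the signature is asymmetric.
SECRET = b"per-engineer-secret"

def sign_change(author: str, change: str) -> dict:
    """Produce a journal entry: canonical payload plus its signature."""
    payload = json.dumps({"author": author, "change": change}, sort_keys=True)
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def verify(entry: dict) -> bool:
    """Later on, anyone holding the key material can prove who did what."""
    expected = hmac.new(SECRET, entry["payload"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, entry["signature"])

entry = sign_change("alice", "deploy build 1234")
print(verify(entry))  # → True
```

Any tampering with the payload after the fact invalidates the signature, which is exactly the accountability property the post is after.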
~ ~ ~
Back in late 90s and early 2000s, accountability in software engineering was no big thing.
I vividly recall that the C# runtime had a bug where quick-sort would be O(N^2) on a particular corner case — which was found by the judges of some competition, and it cost a strong programmer his first place, and a decent prize; a laptop computer IIRC.
We were late teens back then, back in Russia, and we were all wondering — Microsoft sure knows who wrote that buggy code, when, and why, right?
Many years later I know it for a fact — yes, Microsoft sure does know. But it most likely did not care, because it could absolutely afford to not care. No offense taken — it's totally understandable Microsoft should not have cared back then.
(Although if I were the CEO, I'd definitely make sure to gift that young programmer a brand new laptop "as a courtesy for helping us isolate the bug". And perhaps give the problemsetters and the judges good prizes too — since they indeed were the ones who uncovered the bug.)
But with AI exploits here, there, and everywhere, I'm optimistic that I'll live long enough to witness the world where not caring will no longer be an option.
Accountability — matters.
And the Web3 folks know best when it comes to how to institutionalize it.
👍2
Folks, a silly question — how do you use Telegram from MacOS?
I used to be on the browser app. Moved to the native one recently.
Couldn't turn words into clickable URLs, no context menu option, and Cmd+K did not work. Cmd+U, thankfully, did.
But it keeps pasting screenshots as files, not as images.
I've tried both the AppStore version and the one downloaded directly from Telegram's website.
What's the solution? Is there any?
Thx in advance!
PS: Also, the UX with folders on the left looks like it's quite outdated. Like my Ubuntu setup with Telegram in a Docker container, frozen on some old version. The modern-day Web UX is so much better, and it mirrors the iPhone interface — but somehow the MacOS native app is quite backwards.
😢1
I found a surprisingly effective way to turn AI-written Python from very bad into moderately bad.
The workflow looks like this:
⒈ Ask the AI to rewrite your Python code in Rust.
⒉ Split the Rust output into two repos: service + tests.
⒊ Clean up the Rust code (it’s usually much cleaner by default).
⒋ Once the Rust looks sane, run two AIs in parallel to independently translate:
∙ Rust service → Python service
∙ Rust tests → Python tests
⒌ Painfully but systematically verify:
∙ Rust tests pass against the Python service
∙ Python tests correctly exercise the Rust service
⒍ Merge the Python service + tests into a single repo.
Optional but emotionally damaging:
⒎ Admire the Rust code and suffer, asking yourself why you didn’t start in Rust in the first place.
Why does this work? Elementary, my dear Watson.
Forcing a round-trip through a strongly typed language acts as a spec extractor.
The Rust version becomes a de facto executable spec: clearer interfaces, explicit types, and fewer implicit assumptions. Translating back to Python then inherits that structure.
This is especially effective when:
⒈ You have good tests.
⒉ You’re not token-constrained.
⒊ You want the AI to “settle” its understanding by expressing the same system in multiple type systems.
TL;DR:
Parallel translation across languages is a powerful way to force shared understanding, surface ambiguities, and accidentally design your system better than you originally intended. I’d recommend more of this — especially if your goal is to write advanced Python that is not shitty.
👍7
Lol, so I'm using this Wispr Flow, as a good friend suggested, and it's addictive. Thus, more and more often my coding instructions are voice-based. Since I like the pace of it, some messages are also me talking to the AI first, asking it to phrase my thoughts for further proofreading.
This time I've asked to format it as a nice chat message, broken into short sentences, easy to read.
And this AI overlord
Presented my thoughts
As a goddamn beautiful haiku
Of multiple verses
All making perfect sense
Flowing beautifully as a river
Soon we'll see AI-generated rap battles over microservices architecture and its design flaws. Can't wait.
❤1👏1
My most popular Cursor query, unsurprisingly.
Pull the main branch from origin, carefully merge with it, ask me if not 100% sure. Explain to me what was merged in. Staying on the original branch, rebase the current diff as a single commit on top of what was merged from main. Commit it with a clear yet detailed description.
Not shy at all.
Thx @arsenyinfo for a hint several months ago. You called it!
🔥4❤1
We need a worldwide class action suit, or something similar, for WiFi networks that require you to install a custom certificate.
Let's make a list of countries that are exempt. That's okay. Outskirts of civilization do exist, after all.
But if a business center that you are paying money to enter is requiring you to install their WiFi certificate to use the Internet — they should literally be reimbursing EVERY SINGLE USER some $1000 per hour that they have suffered. Starting from the point where we have explicitly expressed this concern. And counting.
Yes, I'm looking at you, WeWork.
PS: I'm not a security expert, but, clearly, any decent VPN service, or even an SSH tunnel proxy, makes one's system secure from man-in-the-middle attacks. Or one can carry an Android phone that can connect to one network, act as the VPN wireless proxy, and tether a brand new network that is safe.
Point is, it's security through obscurity. It adds nothing except pain. Let's end it once and for all, like we're hopefully outlawing those "Accept Cookies" banners some time soon.
Rant over.
❤6
These were meant to be two or three posts, but they fit into one.
First: people underappreciate how much modern AI-assisted coding removes small but extremely annoying toil. A repo that doesn’t build, mismatched versions, poorly documented setup — this used to cost engineers hours or even full days, especially when entering a new domain. Today, you can usually ask an AI to debug, fix, or at least guide you through the setup. If you want to learn, it makes the process faster and less painful; if you’re pragmatic, you can often just ask it to make things work and move on. Either way, the experience is dramatically better.
Second: disposable software is becoming the norm. A couple of months ago, I built a small tool to transcribe video files into text. Today, I probably wouldn’t bother building or maintaining that tool at all. I’d just use an AI directly, or ask my IDE to transcribe the next file for me "based on the code in this repo". If it fails, I’d ask it to experiment, then document the working approach in a markdown file so the next “agent” (me next week, or someone else later) can just pick it up and repeat the process. The friction to creating and throwing away small tools is collapsing to near-zero as we speak.
Third: people grossly underestimate how much software-driven work is now accessible to non-engineers. In the past, even basic CLI instructions felt risky to many users (“this Win+R and cmd might break my computer”). Now we’re getting sandboxed AI assistants and safer environments, and a meaningful fraction of non-engineers are willing to pay a monthly subscription to unlock capabilities that used to require a developer. This fundamentally changes UX: instead of “install this repo, set up deps, run these commands,” the UX becomes “describe what you want in plain English.”
This creates space for an entire product category: thin frontends over public APIs, where users click buttons or type requests in natural language and the system handles the plumbing. Maybe there’s a huge business opportunity here, like there was with chatbots ten years ago; maybe it just becomes table stakes, as it did back then. Either way, the barrier to using powerful software primitives is dropping fast.
Even before this fully takes off, this shift changes expectations inside companies. A good product owner in 2016 did not need to know how to build the project. A good product owner in 2026 probably must: IDE installed, dependencies working, and an AI coding assistant available to prototype features, explore data, and test ideas directly. For most “explore the data” tasks — dashboards, link traversal, lightweight analysis — all you need is a clean API, a repo with clear markdown docs describing the data model and access patterns, and a cheap AI assistant subscription.
My bet is that user-facing tools may well begin to ship as a link to a git repository with Markdown instructions for humans and agents. And Cursor is the new browser.
👍1
A while back, I was complaining about how stupid it is for the Acrobat PDF reader to disable copying and pasting of text if the document says so, in the day and age of OCR being so good.
And now if you send yourself a voice message in Telegram, you can transcribe it, but you cannot copy the final text.
Which I would like to call out as stupid as well, because this particular text is parsed from my voice by Telegram, presented as un-copyable text on my screen, and then OCR’d by ChatGPT.
👍4😁2🔥1
Funny how we still struggle with terminals, coloring, keyboard shortcuts, escape sequences, etc ...
... while products such as Claude Code are "just good enough", with no strings attached.
Perhaps it is about time to ship that Hypertext Fidonet once and for all.
This is almost a trivial thought, but for me it’s quite a revelation.
In the day and age of AI, it may well be the case that the most valuable “learning” or “knowledge” a human can have is a well-internalized pattern of wrong ways of doing things.
An expert musician knows not just where a student is wrong. They anticipate exactly why they are wrong in a hundred other ways just by hearing three tiny mistakes. And they can devise a learning plan to mitigate those mistakes.
A great competitive programmer knows right away why some “standard” algorithm or approach will not work on a problem at hand. In fact, their brain has already thought of the worst possible case for every standard approach, and they already know the pitfalls of many non-solutions before exploring them.
An expert mathematician has a decent idea these days how to prove Fermat’s Last Theorem. An expert mathematician also knows very well why 99.9999% of otherwise brilliant schoolkids won’t be able to prove it. They don’t need to go through their entire proof; just from their basic set of opening ideas, the expert would know where exactly they will get stuck.
So, a human expert, with a wealth of knowledge accumulated over their entire lifetime, is really good at exactly this: seeing where some approaches will inevitably get stuck — even if this getting-stuck part is literally years ahead.
See where I’m going with this?
This knowledge is captured deep within human brain neurons. Maybe one day we’ll have the technology to scan those neurons from live humans. Maybe one day we’ll learn to simulate some environments well enough.
In fact, I am quite positive that some MathsZero, akin to AlphaZero, will be able to prove Fermat’s Last Theorem for just under a million dollars in computational resources, spent from scratch. But maths is very much a simulated domain.
When it comes to the intuition of how long certain things in the human world take to happen — here humans are still irreplaceable. And this is exactly the skill that will be the most valuable one in the near future.
I still think it may well happen that AI development will be so hard that we’ll literally have physical AI-powered robots sit through every single class of elementary school, then middle school, then university, then some post-graduate degree — not to “learn the facts,” but to observe, very, very carefully how humans reason, what mistakes we make, and how the best of us have mastered the art of sharing with others how to avoid those mistakes.
That, or we’ll find other ways to have our AI overlords surpass us.
In the meantime, I truly love being useful when it comes to helping thinking machines shape the direction in which they should be thinking — literally in real time. There’s an insane amount of instant gratification in making big things happen, and for some big things, what used to take me weeks can now be hours.
What a fascinating time to be alive.
👍2🔥1
Someone should say it out loud, may well be me.
If Opus 4.6 is already good enough for coding, perhaps that is the model worthy of encoding in hardware silicon.
17K+ tokens per second for coding is most definitely the next big thing.
And then some offline open-source Cursor or Claude Code or Junie, coupled with a sub-$1000 chip one can buy to use offline — that's the superpower we're talking about.
Although at today's pace of progress, the above is probably the bare-minimum expectation within single-digit years. Hope nothing bad happens and we get to see this go full-blown very soon.
And then having a large 120Hz monitor for coding will indeed become the reality. Because how else can a human process this bandwidth of pure value?
🔥4
It's still beyond me that someone out there is being paid a lot of money to carefully phrase this message.
I booked a flight a few hours ago. To a different airport in Baja California. I've cancelled it, of course.
I saw the warning while booking the flight.
I can visualize the map of the region.
I asked the AI where the hurricane was headed. The answer was: East.
I assumed that next week it would most definitely be safe.
It did not even cross my mind that these are TWO announcements in one: an announcement about the hurricane, and ANOTHER REASON that is, well, at least something along the lines of "social unrest".
Because some people out there are VERY careful to phrase things in the most "neutral" way possible, to avoid any and all "trigger words". And they are literally paid for it.
How we got to this state of affairs is beyond me. Seriously, in my book, it's conscious concealment of facts, bordering on the illegal.
These "positive PR" vibes can literally cost lives. And yet we keep pretending like the world is all unicorns and rainbows.
Hope everything ends well soon for you, dear Mexicans.
😢3
Here's an observation that's somewhat controversial. I've phrased it here and there, but never properly collected it.
The AI's take on how to use browsers and frontend-first interfaces is surprisingly close to mine.
As in: Browsers are good. Visuals often help a lot. Making things clickable, having things highlight as you hover over them, selecting and de-selecting items — there is tremendous value in this.
Adaptive search controls, where helpful suggestions complete what you want to phrase in a semi-formal language — those kick ass.
And demos become ultra-slick. Both external and internal.
In theory, much of this functionality can be done in the CLI. In practice, for lightweight helper tools, the ROI of exposing something in a browser-friendly and browser-native way is just too high. Even before the era of AI-assisted coding, and most certainly today.
In fact, I used to advocate — as early as ten years ago — that a JSON-returning endpoint should inspect the headers of the request, and present itself in a more browser-friendly way if queried from the browser. At the very least, make hypermedia links clickable and add an "up" button. And further: offer visualizations, interactive query building, and much more.
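A minimal sketch of that header-inspection idea, in Python. The `render` helper and the sample payload are made up for illustration; the point is just that one endpoint inspects the client's Accept header and serves the same data either as plain JSON or as hypermedia with clickable links and an "up" button:

```python
import json


def render(payload: dict, accept_header: str) -> tuple[str, str]:
    """Return (content_type, body) based on the client's Accept header.

    Browsers send Accept headers mentioning text/html; API clients
    typically ask for application/json (or nothing specific).
    """
    if "text/html" in accept_header:
        # Browser-friendly presentation of the same data: values that
        # look like paths become clickable links, plus an "up" link.
        rows = []
        for key, value in payload.items():
            if isinstance(value, str) and value.startswith("/"):
                value = f'<a href="{value}">{value}</a>'
            rows.append(f"<li><b>{key}</b>: {value}</li>")
        body = '<a href="..">up</a><ul>' + "".join(rows) + "</ul>"
        return "text/html", body
    # Default: plain JSON for programmatic clients.
    return "application/json", json.dumps(payload)


# One endpoint, two presentations:
data = {"name": "build-42", "logs": "/builds/42/logs"}
print(render(data, "application/json")[0])  # application/json
print(render(data, "text/html")[0])         # text/html
```

Wiring this into any web framework is a one-liner per route; the negotiation logic itself stays framework-free.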
The controversial observation is that the AI tends to agree with me on this nearly 100%.
TL;DR:
* Absolutely no React or Vue.
* No jQuery either — vanilla JavaScript.
* Design with CSS in mind, but oftentimes it's just not needed.
I find myself literally saying "okay, let's make this API and this CLI tool's functionality browser-friendly." I keep asking simple questions, and it generates exactly what I would have done myself ten years ago. Only it knows CSS far better than I do, and it takes minutes for what would have taken me hours.
Seriously, I think it's a crime against the geek community to not have browser-first visualization and drilldown tools for virtually anything long-running. From Docker and Git, all the way to a C++ compiler taking a while to make sense of some code. The language servers are already there — it's honestly not too much work to expose that data in human-consumable formats too. Especially if done via a standalone daemon that doesn't make the original tool sloppier or heavier.
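One lightweight shape such a sidecar view could take — `drilldown_page` and the sample events are hypothetical, but the pattern is just stdlib HTML generation: each event of a long-running tool becomes a collapsible `<details>` row, so a human can skim summaries and expand only what matters. Serving it is then a few lines of `http.server` on a side port, leaving the original tool untouched:

```python
import html


def drilldown_page(tool: str, events: list[str]) -> str:
    """Render a long-running tool's event log as a simple drilldown page.

    The first line of each event becomes a clickable summary; the full
    event text sits inside, collapsed by default. No framework needed.
    """
    rows = "".join(
        f"<details><summary>{html.escape(e.splitlines()[0])}</summary>"
        f"<pre>{html.escape(e)}</pre></details>"
        for e in events
        if e  # skip empty events
    )
    return f"<h1>{html.escape(tool)}</h1>{rows}"


# Example: two build steps, skimmable at a glance, expandable on click.
page = drilldown_page(
    "docker build",
    ["Step 1/9\nFROM python:3.12", "Step 2/9\nRUN pip install -r reqs.txt"],
)
```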
And yes. I still think React is literally useless. The only place where it genuinely helps is if your business really needs a "Web Super-App", where the "Ads Feed" — sorry, the "News Feed" — absolutely must live next to the Profile, Notifications, Chat, Alerts, and autoplaying Shorts or Reels. In other words: React helps with exactly what I want zero of in my life. I can open separate tabs or apps for news feeds, chats, and videos when I need to. And I can totally live without notifications.
[ I considered ending with "Prove me wrong", but this take is too personal to argue about. So posted without CTA (c) ]