Not boring, and a bit of a condescending prick
Semi-digested observations about our world, right after they are phrased well enough in my head to be shared more broadly.
While I dislike Python (and prefer Rust, hehe), one thing it teaches you is that the old "enterprise-grade" Java-world "skillz" are long obsolete.

Because stuff should really a) be short, simple, and descriptive, and b) "just work" (c).

Seriously, I sincerely believe good software engineering taste is that short and clean code with fewer dependencies is generally what we need.

Cargo kicks ass in most aspects here. Python, especially with uv — which is a Rust tool by itself! — is surprisingly okay.

Did not expect myself to say this, but having to deal with TeamCity configuration via .teamcity/* gives me shivers. It's been hours, and I can't make "Hello, world!" work. On my own TeamCity instance.

I remember the times when I argued how bad Github's Yaml-based Actions config is. Well, sure, strong typing would be great there — have you considered Cargo and Rust?

But boy, using the JVM and Gradle to run "workflows", so that I'm getting dozens of unreadable "Kotlin compilation errors" while all I need is to run echo 'Hello, world!'? Call me crazy, but my take is that the truly crazy side is the one that accepts how convoluted this whole thing is.

Challenge accepted. I'll do it. But it's painful af so far.
🔥2
This post carries a trivial message, but I learned the hard way that its implications are not at all obvious.

The trivial message is: Fixing LLM hallucinations is fundamentally no different from fixing similar failure modes in the human brain.

Corollary: The human brain has basic, low-level failure modes that trace back to a few misfiring neurons.

Here's my mental model. I do not claim it is correct, only that it maps reality reasonably well.

Humans share a tiny set of deeply hard-coded concepts: “good,” “fair,” “just,” “divine,” “love,” “duty,” “pleasure,” “dignity,” “loyalty,” “sanctity,” “disgust,” and a few more. They fit on two hands.

But modern civilization is far too complex and contradictory. Worse, countless actors today are aggressively “prompt-engineering” every human being for their own agendas. The cost of experimentation is near zero and the payoff enormous, so state and non-state actors have no reason not to try to “[re-]program” us. This mass-scale “civilizational programming” has reached heights unthinkable a decade or two ago. And it works.

Many things follow from this model; I will outline one minor and one major.

Minor: Remember that every person’s political and moral views reflect the nonstop nonsense they ingest. Reasonable people can debate the degree of personal responsibility to resist propaganda. But one thing is clear: most people simply repeat talking points without applying any critical scrutiny.

This is not new; what is new is the scale. Our echo chambers and propaganda engines now produce large populations who appear completely deranged—advocating agendas detached from their own lived reality and even harming themselves and their families. Activism can be noble; self-sacrifice for something worthless is emotional deficiency, not virtue.

Major: This applies to you as well — perhaps less than to most if you are reading this, but the logic stands.

No one is immune to stimuli aimed at the inner neurons of “happiness,” “safety,” “self-actualization,” etc. The only viable strategy, if sanity is a priority, is to consciously pick your echo chambers and aggressively filter emotionally charged content.

You also need resistance mechanisms — real ones, not coping mechanisms.

For example, I often find myself caring too much about the emotional state of the average human. It arguably damages my personal life. My rational brain knows exactly what restores balance: recognizing how unsalvageable many people are. Walking past a row of slot machines in Vegas and seeing hundreds of empty eyes pouring millions of dollars into pure uselessness forces me to internalize a basic truth: I cannot meaningfully extend compassion to everyone.

(Yes, gambling addiction is a real disease, and regulations exist for a reason. But most people at those machines are not addicts — they are just “regular humans,” as a friend succinctly puts it. Acknowledging that fact helps me care less emotionally, which is one of many mental tricks I utilize to stay sane.)

The takeaway is: There is nothing wrong or shameful in maintaining an arsenal of mental tricks. To live one’s own life in our increasingly hostile informational environment, we will need stronger internal tools. Begin building them early on, if only to Live Long and Prosper!
🔥3
The more I think about where the world is going, the more I’m convinced its trajectory is almost exclusively determined by the answer to one question.

Is unconstrained communication the property of the Universe, or is it a social construct?

If it’s a property of the Universe, that would simply mean that any and all at-scale censorship and speech-control mechanisms will fail. We can assume they are all ephemeral and temporary, like the Prohibition. Humankind may well eventually give up alcohol altogether, but we appear to have collectively agreed that trying to outright ban it does more harm than good.

If it’s a social construct, we have to declare that the days of free Internet are gone for good as of some ten years ago. Orwell then just happened to predict the future by generalizing a few observations well.

I know I personally would prefer to live in the world of free communication. Just imagine mesh networks that work at any reasonable distance, below any reasonable signal-to-noise ratio, completely undetectable except to the very entity to which / to whom this particular piece of communication is directed.

Yes, I get it, such a world presents major challenges — from tearing apart the social fabric all the way to literal military risks unheard of before. But if we manage to sustain our civilization, we’d be off to a great start, ready to conquer the Solar System and beyond.

And yes, I also get it that if the goal is purely to create a “safe and flourishing” world, collectively agreeing that free and unconstrained communication was just a fluke may well be the best first step.

Thankfully, we don’t have to decide any time soon. Various experiments, from European regulations to swarms of self-flying drones, are underway as we speak. We may well have time to course-correct at multiple bifurcation points if and as needed.

But I have to confess declaring free communication dead is something I would feel quite bitter about. And in quite a few corners of the world it can and should be pronounced dead today.
👍42
It’s remarkable how many solid language-design choices emerge once you commit to treating types as a zero-overhead runtime abstraction.
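One way to see the idea in miniature, even in Python (this is my illustrative analogue, not an example from the post): `typing.NewType` gives static checkers a distinct type while compiling down to nothing at runtime.

```python
from typing import NewType

# A distinct type for the checker, but the identity function at runtime:
# UserId(42) literally returns 42, so the abstraction costs nothing.
UserId = NewType("UserId", int)
OrderId = NewType("OrderId", int)

uid = UserId(42)
print(uid == 42)         # → True: same runtime value
print(type(uid) is int)  # → True: no wrapper object exists
```

A checker such as mypy will reject passing an `OrderId` where a `UserId` is expected, yet the running program carries no wrapper objects at all.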
🥰3👍1🤔1
The major cause of the software crisis is that the machines have become several orders of magnitude more powerful! To put it quite bluntly: as long as there were no machines, programming was no problem at all; when we had a few weak computers, programming became a mild problem, and now we have gigantic computers, programming has become an equally gigantic problem.

Edsger Dijkstra, The Humble Programmer (EWD340), Communications of the ACM, 1972.
🔥91
Writing Yaml config files used to be my least favorite activity — no schema, often no quick way to check their validity, no tab completion, no typing (of course).

On the bright side, doing this with LLMs is surprisingly pleasant.

I would not be surprised if, among the first industries to switch completely to English as the definition language, we see the one of automatic container build / publish / put-together / test / retention policy / etc.

Sure, people who self-identify as Engineers, like me, would still want English to not be the Source-of-Truth of those "scripts".

But I'm already quite comfortable with a development practice where English instructions are pushed right to the repo alongside the very LLM-generated "code".

So that a good code reviewer — me — would, of course, first and foremost check the correctness of what will be executed; but then pay at least as much, if not more, attention to how it was described, a.k.a. prompted.

And then the very history of prompts and "code reviews" and conversations on and about them, in a simple git repo / Github review tool, will become the very source-of-truth for the future LLMs to keep improving those scripts.

Definitely not bulletproof on DevOps scale. Nowhere near. But very, very good for smaller projects run by smaller teams.
👍3
Very good talk by Simon Peyton Jones.

He pretty much first describes how the idea of “consuming” instances, which culminated in Rust’s lifetimes and borrow checking, is very much a first-principles idea.

Then he says how, if framed just right, this “consuming” concept can be made “immutable”, naturally comparable to how IO abstracts away “mutating the World”, and thus almost monadic, but not quite.

Which allows integrating this concept into Haskell somewhat natively. Leveraging what we should be referring to as polymorphism of types in the first place.

And then “the rest”, that follows from the above in literally one step when looking at it from the right angle.

I also find it slightly amusing, and can totally relate to the mild bitterness, that it turned out to be Rust, not Haskell, that took this very typesystem-centric idea mainstream.

https://youtu.be/t0mhvd3-60Y
👍3
Of many thoughts I have about this train of Agentic AI that I am currently on, a very positive one keeps standing out.

Capitalism f*cking works!

For many years I used to believe the world of products for tech people was so bad that it's highly unlikely I'd be paying much for anything.

Effectively, the status quo was that mobile apps were under $5 a month, and other online services, except hosting, were under $10 a month.

Sure, I've been saying for years that products such as Facebook should offer an ad-free, API-first experience for some $20 per month, so that we could use our own custom clients. This was a utopia with Facebook, but Twitter, now X, is actually exploring a similar path.

Nonetheless, I remained under the assumption that my "work & life setup" would not get substantially better over the years. I'll still be coding in vim, creating and reviewing pull requests from the browser, and occasionally opening up an IDE to debug some nontrivial behavior where debug-prints are not enough.

And most definitely I was under the assumption that I will not be paying anything substantial for any "dev tools", if you'd asked me ~five years ago. Perhaps some $5 monthly donation for a "better vim" or "better code review tool", something Superhuman-like. But nothing game-changing for sure.

Fast forward to today, and AI-assisted coding is here. And it is booming. And I am using the AI every day. And I am paying more for my model usage than for all other online services. Combined.

(Except perhaps my personal hosting, domains and the cloud, but that's beside the point. And it's changing quickly as we speak.)

Very soon I would be paying so much more for models that my very computer would become a commodity! Personally, I'm quite attached to the idea of having my own device, but the thought of it becoming unnecessary is more and more real as we speak.

And the best part is: nobody is forcing me to use the AI. This is the perfect Invisible Hand in action.

A decade or so ago I said "screw you, Market, you can't offer me anything".

The Market seemingly said "meh, well, I don't care about you".

We were content with each other.

Until we were not! Until the market found a way to my soul. By offering, gently and with no pushing whatsoever, something that I truly want to be using — the AI.

And offering it, I should add, at a very lucrative price point. That is, so far. But, given local models are getting better and better, I believe early-2026-grade AI coding assistants have a strict upper bound on their effective monthly price, and this upper bound is already low enough and going down rapidly.

What say I? I say: all hail the Market! The Market — delivers.
5🔥2
TIL that in the DB schema design world, this "standard" solution exists:

“Identifying relationship via composite primary key”.

For cases such as one-to-one-to-many relationships, where the obvious solution is the third normal form, with a dedicated table to identify the one-to-one pairs ...

... instead of actually imposing 3NF under the hood, ORM systems (including SQLAlchemy!) will keep those non-3NF UNIQUE constraints in junction tables.

This may be a trivial piece of knowledge for you, if you're working in this space. Or, more likely, it may be totally unfamiliar to you, like it was for me.

But oh my God. There exists a perfectly legal solution, and the very job of the ORM is to create a thin wrapper layer for the user, while keeping the data model clean. And instead the ORMs are quietly creating a big mess under the hood, which creates all sorts of problems down the road. Problems that would not exist at all if the ORMs were actually designed well from day one.
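To make the pattern concrete, here is a minimal sketch in plain sqlite3 (table and column names are invented for illustration): in an "identifying relationship", the child's primary key contains the parent's key, so uniqueness comes from a composite primary key rather than a dedicated 3NF table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (
    order_id INTEGER PRIMARY KEY
);
-- Identifying relationship: the child's PK *contains* the parent's key.
CREATE TABLE order_lines (
    order_id INTEGER NOT NULL REFERENCES orders(order_id),
    line_no  INTEGER NOT NULL,
    sku      TEXT    NOT NULL,
    PRIMARY KEY (order_id, line_no)  -- composite PK, no surrogate id
);
""")
conn.execute("INSERT INTO orders VALUES (1)")
conn.execute("INSERT INTO order_lines VALUES (1, 1, 'WIDGET')")

# The composite PK enforces uniqueness of (order_id, line_no):
try:
    conn.execute("INSERT INTO order_lines VALUES (1, 1, 'GADGET')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False
print(duplicate_allowed)  # → False
```

The uniqueness guarantee lives entirely inside the junction-style table here, which is exactly the shortcut ORMs take instead of materializing a separate pair-identifying table.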

Oh well. Hopefully not the last big revelation of my professional life.
I'm quite happy my career includes both ML/AI and Web3.

In the day and age of more and more exploits coming up in AI, the concept of human accountability is becoming increasingly important.

And what's better for individual accountability than having a proof that one's private key was used to sign some transaction?

I can't wait for the world where these two branches converge.

The engineer deploying code, or accessing production data, must use their Yubikey to sign off their change. It's all tracked and journaled — not necessarily on-chain, but definitely in ways that enable proving, later on, who did what.

And then, a few years from now, we can tell who was nice and who was naughty when it comes to diligence vs. negligence.

~ ~ ~

Back in late 90s and early 2000s, accountability in software engineering was no big thing.

I vividly recall that the C# runtime had a bug where quicksort would be O(N^2) on a particular corner case — which was found by the judges of some competition, and it cost a strong programmer his first place, and a decent prize; a laptop computer IIRC.

We were late teens back then, back in Russia, and we were all wondering — Microsoft surely knows who wrote that buggy code, when, and why, right?

Many years later I know it for a fact — yes, Microsoft sure does know. But it most likely did not care, because it could absolutely afford not to care. No offense taken — it's totally understandable that Microsoft should not have cared back then.

(Although if I were the CEO, I'd definitely make sure to gift that young programmer a brand new laptop "as a courtesy for helping us isolate the bug". And perhaps give the problemsetters and the judges good prizes too — since they indeed were the ones who uncovered the bug.)

But with AI exploits here, there, and everywhere, I'm optimistic to live long enough and witness the world where not caring will no longer be an option.

Accountability — matters.

And the Web3 folks know best when it comes to how to institutionalize it.
👍2
Folks, a silly question — how do you use Telegram from MacOS?

I used to be on the browser app. Moved to the native one recently.

Couldn't turn words into clickable URLs, no context menu option, and Cmd+K did not work. Cmd+U, thankfully, did.

But it keeps pasting screenshots as files, not as images.

I've tried both the AppStore version and the one downloaded directly from Telegram's website.

What's the solution? Is there any?

Thx in advance!

PS: Also, the UX with folders on the left looks quite outdated. Like my Ubuntu setup with Telegram in a Docker container, frozen on some old version. The modern-day Web UX is so much better, and it mirrors the iPhone interface — but somehow the MacOS native app is quite backwards.
😢1
I found a surprisingly effective way to turn AI-written Python from very bad into moderately bad.

The workflow looks like this:

⒈ Ask the AI to rewrite your Python code in Rust.
⒉ Split the Rust output into two repos: service + tests.
⒊ Clean up the Rust code (it’s usually much cleaner by default).
⒋ Once the Rust looks sane, run two AIs in parallel to independently translate:
  ∙ Rust service → Python service
  ∙ Rust tests → Python tests
⒌ Painfully but systematically verify:
  ∙ Rust tests pass against the Python service
  ∙ Python tests correctly exercise the Rust service
⒍ Merge the Python service + tests into a single repo.

Optional but emotionally damaging:

⒎ Admire the Rust code and suffer, asking yourself why you didn’t start in Rust in the first place.

Why does this work? Elementary, my dear Watson.

Forcing a round-trip through a strongly typed language acts as a spec extractor.
The Rust version becomes a de facto executable spec: clearer interfaces, explicit types, and fewer implicit assumptions. Translating back to Python then inherits that structure.

This is especially effective when:

⒈ You have good tests.
⒉ You’re not token-constrained.
⒊ You want the AI to “settle” its understanding by expressing the same system in multiple type systems.

TL;DR:

Parallel translation across languages is a powerful way to force shared understanding, surface ambiguities, and accidentally design your system better than you originally intended. I’d recommend more of this — especially if your goal is to write advanced Python that is not shitty.
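As an illustration of what the round trip tends to leave behind (a hedged sketch; all names are invented, not from any real project): the Python translated back from Rust usually arrives with explicit dataclasses mirroring the Rust structs, instead of loose dicts and implicit None paths.

```python
from dataclasses import dataclass

# Hypothetical shapes that a Rust round trip would pin down as structs,
# then carry back into Python as frozen dataclasses.

@dataclass(frozen=True)
class User:
    id: int
    email: str

@dataclass(frozen=True)
class SignupResult:
    user: User
    is_new: bool

def sign_up(existing: dict[int, User], user_id: int, email: str) -> SignupResult:
    """Mirrors a Rust `fn sign_up(...) -> SignupResult`: every outcome is
    an explicit field, with no implicit None or untyped dict escape hatch."""
    if user_id in existing:
        return SignupResult(user=existing[user_id], is_new=False)
    return SignupResult(user=User(id=user_id, email=email), is_new=True)

result = sign_up({}, 1, "a@example.com")
print(result.is_new)  # → True
```

The point is not this toy logic but the inherited structure: the interfaces, types, and invariants that Rust forced into the open survive the translation back.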
👍7
Lol, so I'm using this Wispr Flow as a good friend suggested, and it's addictive. Thus, more and more often my coding instructions are voice-based. Since I like the pace of it, some messages are also me talking to the AI first, asking to phrase my thoughts for further proofreading.

This time I've asked to format it as a nice chat message, broken into short sentences, easy to read.

And this AI overlord
Presented my thoughts
As a goddamn beautiful haiku

Of multiple verses
All making perfect sense
Flowing beautifully as a river


Soon we'll see AI-generated rap battles over microservices architecture and its design flaws. Can't wait.
1👏1
Hohoho, I wrote this in 2020.

With AI, some dreams do come true!
👍2
Quora is so dead. I can't even.
😁1
My most popular Cursor query, unsurprisingly.


Pull the main branch from origin, carefully merge with it, ask me if not 100% sure. Explain to me what was merged in. Staying on the original branch, rebase the current diff as a single commit on top of what was merged from main. Commit it with a clear yet detailed description.


Not shy at all.

Thx @arsenyinfo for a hint several months ago. You called it!
🔥41
We need a class action suit worldwide, or something similar, for WiFi networks that require you to install a custom certificate.

Let's make a list of countries that are exempt. That's okay. Outskirts of civilization do exist, after all.

But if a business center that you are paying money to enter is requesting you to install their WiFi certificate to use the Internet — they should literally be reimbursing EVERY SINGLE USER some $1000 per hour that they have suffered. Starting from the point where we have explicitly expressed this concern. And counting.

Yes, I'm looking at you, WeWork.

PS: I'm not a security expert, but, clearly, any decent VPN service, or even an SSH tunnel proxy, makes one's system secure from man-in-the-middle attacks. Or one can carry an Android phone that can connect to one network, act as the VPN wireless proxy, and tether a brand new network that is safe.

Point is, it's security through obscurity. Adds nothing except pain. Let's end it once and for all, like we're hopefully outlawing those "Accept Cookies" banners some time soon.

Rant over.
6
These were meant to be two or three posts, but they fit into one.

First: people underappreciate how much modern AI-assisted coding removes small but extremely annoying toil. A repo that doesn’t build, mismatched versions, poorly documented setup — this used to cost engineers hours or even full days, especially when entering a new domain. Today, you can usually ask an AI to debug, fix, or at least guide you through the setup. If you want to learn, it makes the process faster and less painful; if you’re pragmatic, you can often just ask it to make things work and move on. Either way, the experience is dramatically better.

Second: disposable software is becoming the norm. A couple of months ago, I built a small tool to transcribe video files into text. Today, I probably wouldn’t bother building or maintaining that tool at all. I’d just use an AI directly, or ask my IDE to transcribe the next file for me "based on the code in this repo". If it fails, I’d ask it to experiment, then document the working approach in a markdown file so the next “agent” (me next week, or someone else later) can just pick it up and repeat the process. The friction to creating and throwing away small tools is collapsing to near-zero as we speak.

Third: people grossly underestimate how much software-driven work is now accessible to non-engineers. In the past, even basic CLI instructions felt risky to many users (“this Win+R and cmd might break my computer”). Now we’re getting sandboxed AI assistants and safer environments, and a meaningful fraction of non-engineers are willing to pay a monthly subscription to unlock capabilities that used to require a developer. This fundamentally changes UX: instead of “install this repo, set up deps, run these commands,” the UX becomes “describe what you want in plain English.”

This creates space for an entire product category: thin frontends over public APIs, where users click buttons or type requests in natural language and the system handles the plumbing. Maybe there's a huge business opportunity here, like there was with chatbots ten years ago; maybe it just becomes table stakes, exactly as happened back then. Either way, the barrier to using powerful software primitives is dropping fast.

Even before this fully takes off, this shift changes expectations inside companies. A good product owner in 2016 did not need to know how to build the project. A good product owner in 2026 probably must: IDE installed, dependencies working, and an AI coding assistant available to prototype features, explore data, and test ideas directly. For most “explore the data” tasks — dashboards, link traversal, lightweight analysis — all you need is a clean API, a repo with clear markdown docs describing the data model and access patterns, and a cheap AI assistant subscription.

More than that: user-facing tools may well begin to ship as a link to a git repository with Markdown instructions for humans and agents. And Cursor is the new browser.
👍1
A while back, I was complaining about how stupid it is for the Acrobat PDF reader to disable copying and pasting text if the document says so, in the day and age of OCR being so good.

And now if you send yourself a voice message in Telegram, you can transcribe it, but you cannot copy the final text.

Which I would like to call out as stupid as well, because this particular text is parsed from my voice by Telegram, presented as un-copyable text on my screen, and then OCR’d by ChatGPT.
👍4😁2🔥1
Funny how we still struggle with terminals, coloring, keyboard shortcuts, escape sequences, etc ...

... while products such as Claude Code are "just good enough", with no strings attached.

Perhaps it is about time to ship that Hypertext Fidonet once and for all.