Not boring, and a bit of a condescending prick
Semi-digested observations about our world, shared as soon as they are phrased well enough in my head.
TIL about:

git config --global branch.sort -committerdate

So that git branch lists the branches with the most recent commits at the top.

Thanks? Thanks.
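If you want to see the effect in a scratch repo first, something like this works (the repo and branch names are made up; the pinned dates just make the ordering deterministic):

```shell
# Scratch repo with one stale branch and one fresh branch.
git init -q demo && cd demo
git config user.email you@example.com && git config user.name you
GIT_COMMITTER_DATE="2024-01-01T00:00:00" git commit -q --allow-empty -m old
git branch stale-work
GIT_COMMITTER_DATE="2024-06-01T00:00:00" git commit -q --allow-empty -m new
git branch fresh-work
# Most recently committed-to branches first; stale-work lands last.
git -c branch.sort=-committerdate branch --format='%(refname:short)'
```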
πŸ‘5πŸ”₯2
Here come the ads
Doo-doo-doo
And I say
It's not alright

Seriously, I am not the type of person who'd say "sama bad, delete chatgpt"

But given all the events unfolding, the timing is just insanely bad
This will sound like a retrograde, cyberpunk-ish idea, but hear me out.

When I walk around with my MacBook, I often close it. When I close it, it goes to sleep, so it does not run any background processes.

At the same time, terminal-based coding agents, such as Claude Code, are getting better and better. You could say I may as well run Claude Code on some server of mine and just reconnect to it through a tmux or screen session, which is true, but then I get latency when I'm typing, and I don't like that. Meanwhile, the requirements of Claude Code itself are quite low: it's just talking to the model and changing files.

For a while I contemplated the idea of having a screenless, keyboardless server in my backpack, like a small mini PC that would keep doing this work. I never convinced myself it's worth it, although I might, once we have local models that are fast enough and do not consume too much power.

For now, I looked into whether I could use a mobile device that's always with me as the host for this Claude Code terminal that is always up and running.

Turns out, on the iPhone that's pretty bad: its shell is only a simulation, and the OS puts backgrounded apps to sleep. But! On Android, Termux is a good app with wake-lock support, so it will not go to sleep in the background.

I could configure my work setup so that my Android device, phone or tablet, which is with me even when it's not taken out, just keeps running Claude Code. It doesn't need to build the code, and it doesn't need to be powerful; some simple unit tests, if it's Rust or Python, are perfectly doable. My flow then is: I open the laptop, I reconnect to the session on that device, and I'm good to go.
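The Termux side of that is just a handful of setup commands. A sketch (sshd listens on port 8022 by default in Termux; the tmux session name is arbitrary):

```shell
# On the Android device, inside Termux:
pkg install -y tmux openssh
termux-wake-lock      # keep Termux alive while backgrounded
sshd                  # Termux sshd listens on port 8022 by default
tmux new -s agent     # long-lived session to run the coding agent in

# From the laptop (reconnect any time; <phone-ip> is your device's address):
#   ssh -p 8022 <phone-ip>
#   tmux attach -t agent
```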

I've done most of it before as an experiment, just without the coding agent.

The next step I'm thinking of taking, as a weekend project maybe, is to configure it such that this perpetual session with that device is in my browser. I literally only need to have the browser open, and some of its tabs are sessions to my mobile device keeping Claude Code running, or to multiple mobile devices running Claude Code independently. I can do whatever I want with them, and if I close the laptop, it all just keeps working.

The more I think about it, the more it feels like an interesting idea to try out; and yeah, I kind of want to make it happen.

As a bonus, if that Android device is a tablet, and if I also have my Bluetooth keyboard with me, and if that tablet has SSH keys to some of my servers (as a non-prod user, of course), I might as well code something up in a coffee shop right from the tablet, without even taking the compute out of the backpack. But this is definitely too nerdy; every time I've tried this before, the counter-argument of "why don't you just take out the laptop" proved solid enough.
Parkinson's Law is alive and well: the human mind fills available cognitive capacity the same way work expands to fill time. Give yourself more headroom, and something new rushes in to claim it.

In my daily software engineering routine, I'm observing this vividly with git rebases. I used to avoid them: too much friction, too much mental overhead to slice a messy diff into logical commits. But now, with AI handling the mechanical part, I do rebases constantly.

Not because rebases suddenly became more valuable, but because they (seemingly!) stopped costing me much.

Which raises the actual question: were they worth it before? Probably not at that price. Are they worth it now? Maybe: a cleaner history helps future reviewers and helps future AI agents parse logical changes. But honestly, I'm still not sure the ROI is there. Might be smarter to just keep changes small, do a `git reset main`, ask the AI to group the diff into commits, and review after.
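Minus the AI step, that flow is a couple of commands. A sketch, using `--soft` so the whole diff stays staged (a plain `git reset main` would leave it unstaged in the worktree instead):

```shell
# On a messy feature branch: collapse its history into one staged diff.
git reset --soft main        # keeps every change staged, drops the commits
git diff --cached --stat     # what the agent will be asked to group
# ...have the AI split the staged diff into logical commits here...
```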

The rebase thing is just one example. The deeper pattern is that we're wired to fill our lives with interesting, complex problems. Freed from one cognitive load, we immediately pick up another. That's not obviously good for long-term mental health. Sometimes the right move is to not reclaim that capacity, to let it stay empty.

The wisdom I'm distilling for myself here: just because you can do something nicer with better tools doesn't mean you have to. "This, too, would do" is likely one of the best antidotes to burnout.

Our wet brains need to learn when to limit themselves. Otherwise we'll hit an epidemic of burnout in single-digit months, and demand for "touch the grass" workshops for seasoned veterans will skyrocket.
πŸ‘2❀1
So I'm reading this thread and can't decide what amuses me more.

https://github.com/chardet/chardet/issues/327

Of course we're living through times when a lot of hard work can be re-done by applying AI credits to "re-solve" the problem "from scratch". And of course it would not really be "from scratch", because the original, open code was used in that AI's training process.

However, of course, if someone paid a team of developers to lock themselves in a room for a month and produce an artefact that does a logically similar thing, then, as long as those developers were not copy-pasting the very lines from the open-source repo, we should all be fine.

What concerns me is that we're looking at a project with a decades-long history, and it has under 1,000 commits. I downloaded the repo and looked: the first commit dates to 2011.

That's just far too small a number. In other words, it's a small auxiliary project. But somehow no one is mentioning this.

And the conversation drifts into the notion that API semantics are subject to intellectual-property rules. And this, to my taste, is shaky ground.

Because rewriting the module from scratch, changing its APIs, and switching other modules over to the new APIs should be unconditionally legal, to my taste!

Because otherwise we should literally argue that whoever invented the steering wheel, or pedals, should be receiving royalties for those inventions.

And here I recall my own thoughts from 20+ years ago. How come some algorithm is patented? What do you mean, bzip2 was deliberately made worse by replacing arithmetic coding with Huffman coding because the former was patented back then?

Does it mean I myself am at risk just by writing and shipping code, all by myself? There were no LLMs back in 2005. Heck, there was no StackOverflow. I may know an algorithm from elsewhere, or I may well have re-invented it myself; that's what the human brain is known for, after all: inventing things.

And then I may find myself liable, or at least having to defend myself to a certain degree, because it may turn out that the very algorithm I have [re-]invented is someone's intellectual property.

I don't like it, but I have learned to live with this knowledge. Especially after, more than a decade later, I looked into the history of the cypherpunks, with all those hairy details of printing source code on T-shirts and in books to prove that strong encryption algorithms are not something that can be legitimately "controlled".

What a bizarre world we are living in, after all.
Crazy idea of the day.

If post-quantum fears are real and the days of encryption are truly over, plus all the deep fakes become indistinguishable from authentic content, we'd be living in a very interesting time.

Imagine having to carry golden coins with you, with everybody in the family knowing how to verify their authenticity.

And then to get a large distributed project done we literally have to send someone overseas, since faking a physical person is still impossible.

For truly important projects we send over several people. And re-invent erasure coding.

I'm not saying I want to live in such a world. Hell no.

But I can imagine a dystopian world with technology (i.e. no private communication, every means of cryptography outlawed, government backdoors omnipresent) from which I personally would rather escape to a post-digital world.

Would be fun if people my age and my belief systems would invent a synergy of Bali hacking vibes times Amish physical sustainability. Such a community would also outbreed the status quo by the way. And I bet they'd have organic food and amazing coffee.
πŸ‘1
I've been converging on a realization: software engineers are still very much needed for the foreseeable future. Let me explain why, and why I think the current AI hype cycle is no different from every previous one.

First, of course, there will be a specialized, narrow set of skills that lets people build AI-assisted software without touching the code much β€” guidelines and best practices that make it possible to maintain a project by AI alone for more than a couple of weeks, with proper storylines, tests, and compatibility layers between components.

The code would get messier. It would accumulate repetition. With today's models, it probably wouldn't be secure enough, though this is changing quickly. But for a small-ish project, it would be maintainable.

Being a fan of the Unix philosophy (small, targeted tools that do one thing well), I think this approach may actually fly. Quick detour: if the OS kernel is stripped down to its core and you have a compiler, tools like binutils could in principle be AI-built on top of well-documented syscalls. Lightweight, correct, pleasant to use, and never touched by human hands.

However, whether you like it or not, money is concentrated in large-scale, enterprise-grade products. And those require exactly what AI still struggles with: long-term context maintenance. Acquiring quality context that is well internalized in operational memory is everything. Perhaps better code-annotation tools and new reasoning techniques will produce a leap forward here, but so far I wouldn't bet on it.

For prototyping, AI wins. For refactoring: unclear. I've tried a couple of times to hack something up with AI first: get a working prototype, extract the story, then refactor cleanly. Does it help the overall development process? Inconclusive. Once you have the prototype, it's not clear it's of much use beyond helping you visualize what the result should look like and surfacing imperfections in the original approach. That's about it.

Tests help, sure, though even that is less clear-cut than it sounds. A well-defined set of acceptance criteria in plain English may already be competitive with a suite of unit tests. We might port tests conceptually rather than literally: summarize the intent, then ask an LLM to reconstruct the test in spirit, not character-for-character.

So we're back to the same conclusion. AI helps build prototypes and set direction. There will be a narrow niche of engineers who can scale what's buildable without touching code, from trivial to somewhat less trivial. That I believe.

And yet: most code today lives inside large companies, with complex business logic and long sales cycles. I'd prefer these companies to transform or die out; I like lightweight, fast, simple experiences. But this machine will take a decade to turn, at minimum. If it turns at all.

Much like large companies have crowded out open source by offering better end-user products, we may see the same happen with indie projects. People will keep doing business with the big players, tolerating the clumsiness, because for what they pay they get predictable quality. Think of OpenOffice: it never competed with the giants, while those giants replicated its functionality many times over. The same dynamic may play out for AI-built SaaS.

So if you're a good software engineer, your job is safe. We will need more people who can apply sustained intelligence over extended periods to legacy-heavy codebases: people who can articulate which changes can be made quickly, which require careful planning, and which are too dangerous to attempt without a full redesign.

"Programmer" once meant someone who flipped switches or punched holes in cards. That changed long ago, and the industry didn't suffer. We're going through the same transformation now β€” faster, but not dramatically so. We've seen a long series of tools each promising 10x productivity gains. Ruby on Rails is a fine example. At the end of the day, every other approach proved at least as effective, and the industry didn't move much faster in the long run.
πŸ‘4
OAuth2. Third time in ten years. Facepalming again.

We have SSH with authorized_keys. I'm young enough to say it's been here since the beginning of time. We all know how good it is and how to deal with it.

We have Ethereum signatures. Literally one fixed-length 0x1234...abcdef private key to rule them all and never give away. Literally one public key, perfectly derivable from the private one, to show publicly.

Why don't we have widespread adoption of this for auth?

The public key is published, literally in a DNS record if you wish, in which case TLS isn't even needed. Or if you prefer not to depend on DNS, use ENS: it's as good as good old DNS, plus more reliable and far more difficult to attack due to its decentralization.

The authentication handshake is just verifying one signature.

Because the Ethereum ecosystem is so mature, the protocol is literally as secure as it gets by design. The signer is just whoever can prove possession of a private key by signing requests. Some users will tie it to their Google or GitHub or Telegram account. Some will use an air-gapped device to exchange QR codes back and forth. Some will use a hardware wallet. Some will even use multisig with trustees.

None of this is the concern of the service that needs auth. Zero. The service just needs to know how to verify 1 (ONE) signature of 1 (ONE) type. That's it.

It almost feels like some forces deliberately want people to keep being afraid of how reliable Ethereum signatures are. Which is kind of weird, because it's elliptic-curve cryptography of the same strength class as the SSH keys DevOps and SREs everywhere are using to access the very heart of the most valuable production systems in the world.

Amazon and Google and Microsoft and Oracle servers are protected by the same Ed25519 curve. We trust it. We cherish it. Verification takes under a millisecond.
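In fact the signing primitive already ships with OpenSSH (8.0+): `ssh-keygen` can sign and verify arbitrary payloads with the same Ed25519 keys, which is essentially the whole handshake. A hedged sketch (file names, the namespace string, and the identity are all made up):

```shell
# One-time: the user generates a keypair; only the .pub side is ever shared.
ssh-keygen -q -t ed25519 -N '' -f id_auth

# The service issues a challenge; the user signs it with the private key.
echo "login-challenge-12345" > challenge.txt
ssh-keygen -Y sign -f id_auth -n auth challenge.txt   # writes challenge.txt.sig

# The service verifies against the published public key. That's the auth.
echo "user@example.com $(cat id_auth.pub)" > allowed_signers
ssh-keygen -Y verify -f allowed_signers -I user@example.com -n auth \
  -s challenge.txt.sig < challenge.txt
```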

Except for end-user auth. There we make users wait seconds, for literally no good reason except design-by-committee OAuth2 and JWT.
πŸ‘3
For the record, I hear this take loud and clear, but I still disagree.

AI models need to talk to one another. And AI models need to talk to services that users β€” human users β€” tell them to talk to.

And AI models are really good with text these days. They are also reasonably good with semi-structured data, but text is where they truly excel.

Sergei Burkov said, many years ago, that English may well become the de facto standard for how disparate computer systems talk to one another. And I think he was spot on.

I'm not a fanboy of Claude, although I do respect Anthropic a lot. The thing is, the AI chat is the new Google search bar. And we, humans, need a native way to tie other services to our chat sessions with AI.

Claude Desktop is really good at it. ChatGPT is okay (you have to turn on Developer Mode, though).

I also found Witsy (closed-source) and LibreChat (open source), which are good at just that:

• Offering the AI chat session.
• With proper support for custom external tools.
• So that these external tools are defined well by the human user when it comes to permissions, etc.
• And all of the above is powered by text, i.e., prompts.

So the tools just "declare," in plain English (or in any human language, really), what they can do. And the "human-level OS," personified in Claude or whatnot, is all the human operator sees.

Screw the context window. Most users don't care about it for mundane tasks.

What people do care about is that if it needs to know how to turn on the heater in my guest bedroom, I can just ask in any chat. Or that I can schedule some offline appointment from a chat session.

And MCP is, "unfortunately," nearly perfect for just this task. So, whether we like it or not, it will probably stick around for a while.

Maybe part of the reason Garry is not happy with MCP is that it's largely outside YC's ecosystem. But this is total speculation on my part.

My TL;DR is that I think MCP is a poorly designed beast. But it solves the right problem, and it's not a bloated, design-by-committee piece of crap, at least not yet. So, personally, I'm delighted I have mastered how to use MCP, and I will keep leveraging this knowledge.
Good "intro" (pun intended) to how bad our security already is. And this is even without OpenClaw installed on every other laptop.

https://www.youtube.com/watch?v=ZrD9MC_BXGk

Also, litellm==1.82.7 and litellm==1.82.8 on PyPI have a vulnerability in their litellm_init.pth file. That's an explicit attack, similar to xz. DO CHECK FOR THIS VERSION IN YOUR `uv.lock` NOW!

Hat tip to the Temporal Slack where I saw this first. You guys rock.
In all seriousness, with modern-day exploits and supply chain attacks, using a) two-factor, b) a passkey, and c) an external signing device is probably the correct solution.

Weird how GitHub grants SSH keys more permissions than the web login.

I'd love to have to confirm every pushed commit by tapping something on my mobile phone, or a YubiKey, or at least a passkey in the browser.

Or maybe, just maybe, the noble Web3 crew will see the renaissance of their field, since an immutable ledger with fine-grained controls to one's key is something those folks have truly mastered.

Why not just disable the AWS console for production environments except from cleanrooms, and use multisig "at least three of five must sign" for Terraform- or API-based configuration changes, with every single action journaled forever?

That's the spirit, to my taste. Arguably, for the past 10+ years it has not landed well with the field, since "move fast and break things" dominates the mindset.

But I can see the light at the end of the tunnel if just enough things do break. Fast.

At that point, every request is signed, and the protocol itself guarantees that no action is taken unless it can be traced to a specific set of approvers. I, for one, would embrace such a design.

And, to reiterate: the Web3 community has had ALL the necessary bits and pieces for a while now. Publishing a signature to a public ledger costs zero point zero zero something US cents. Blockchain listeners are free when the desired throughput is low.

If an org wants its commits and production changes signed in an immutable, perpetual way, this is under one day to set up. Including rolling keys that never last more than 15 minutes unless explicitly refreshed, the very act of which is also journaled on-chain.

We literally have everything. And we were, and are, literally ignored these recent years, because in the minds of pundits blockchain is still NFTs and ICOs all the way down. While in reality the technology is right there to take Visa and MasterCard out of business, because any fee above zero point zero zero something cents is just way too high.

Maybe I'm daydreaming, but the tide may well turn this time. And then it might be an avalanche, since the first mover's advantage would be too high, and it would gain momentum before other players figure out how to best respond.
This is worthy of a post to share in English.

Three Harms of Russian Literature

The first harm was noted by Rozanov: for an entire century, Russian literature mocked and humiliated the very people who form the backbone of a normal society (the civil servant, the officer, the priest, the entrepreneur, the merchant) and, in general, the bourgeois, the solid, respectable citizen.

The second harm was observed by Turgenev, when he spoke of Dostoevsky's "inverted clichés": the thief is invariably honorable, the murderer a walking conscience, the drunkard and libertine a philosopher, the prostitute a great soul, the idiot the wisest of all.

The third harm, wrote Tyutchev, is the constant, stubborn conviction of everyone, and the self-persuasion, that we are special. That no law is written for us: neither European, nor Slavic, nor Christian, nor, God forbid, any law common to all people, such as international law. Why? Because we are just like that: unique, apart, like no one else in the world.

Russian literature long nurtured this deep-seated adolescent complex. It nurtured and nurtured it, and at last nurtured it to fruition.

[ This is a direct translation of a post, link in the comment. Claude is really good at translating btw. ]

And my immediate comment to this in a chat with a friend, also reverse-translated from Russian, also by Claude:

It always grates on me when people treat Russian literature as some kind of supreme treasure. Sure, it's a treasure, but the planet has many treasures. Praising the Hermitage without ever having been to the Louvre, or Angkor Wat for that matter, misses quite a bit of the point, to my taste.
πŸ‘3
Another thing I will definitely do once I have time is a chain of agents re-writing test cases from scratch, incrementally, with no feedback loops whatsoever.

Layer one: clean English description of test cases. Feed this, plus the autogenerated API spec, to the agent.

Layer two: take detailed descriptions produced by the first agent, convert them into code.

Layer three: confirm the code matches the English text in spirit. Remove any and all uncertainty. Repeat a small number of times until there's no ambiguity.

Layer four: run the tests. Only at this point. With no "back-propagation" of errors whatsoever.

LLMs are non-deterministic by nature; at least the cloud ones. So, sure, sometimes this test will fail. But I'm happy to burn some tokens every night to run this test process ten times from scratch with different random seeds and then look at the result.

Point is, without iterating on fixing the error, no malicious / erroneous / cheeky detail will go unnoticed. The API will have to work correctly according to the very description of the original test case, spec'd in English.

And if it fails more than once in ~ten times then perhaps some documentation β€” of the product in general or of API endpoints in particular β€” should be updated.

And at this point I am actually in favor of using the AI: burn a bit more tokens every other night to suggest improvements to the documentation that make the AI-rewritten end-to-end tests pass on the first try 90+% of the time.

This literally looks like something I can vibe-code in a few hours. But these weeks are crazy in terms of cognitive load on my side. So: some time soon!
πŸ€”5πŸ‘2
Folks, do not share this more broadly just yet. And remember: you saw it here first. Check it out.

And before you say it's too easy: this was a lot of work to make it work reliably. Keep in mind that what your proprietary paid chat app can do is far more than what the bare model can handle.

So it's quite a step from "Claude is smart enough to do something" to "I can have my app do this something by calling the LLMs". And we believe we can help many products make this step smoother.

Not to mention that structuring data has been a major pillar of my career for well over a decade, and nothing beats a product that guarantees a solid, well-defined schema behind the scenes.

What do you think? Good enough short video for such a CTA?

I'm in NY this coming week and in SF the week after that, showcasing our xmemory, asking for feedback, and looking for customers. Drop me a note if you or someone you know might be interested; we'd love to chat!
πŸ”₯12πŸ‘3
Dear Anthropic, thank you for making Claude ~20x faster just today or yesterday.

An incredibly pleasant surprise.

Now that I have the harness to run tests overnight, they complete in well under an hour. Me wow.
πŸ‘2😱2πŸ”₯1
Exactly as predicted: https://aistudio.google.com/apps/bundled/flash_lite_browser

Websites, and other services, may well be generated on the fly, tailored to this particular user's preferences.

Clean APIs and data models are, finally, becoming the most important part of complex systems.

And authorization, harnesses, and security: sweet!
So, I don't often ask for this, probably once in a few years max.

But xmemory, the startup I'm a co-founder of, is out of stealth, which is kind of a big deal.

I shared a preview of the video a few days ago; thank you for your early feedback!

Now is the right time to get more people who are in the space to see this. So your help with distribution, targeted and carpet alike, is much appreciated.

Announcement links:

* https://www.linkedin.com/feed/update/urn:li:ugcPost:7447314223149305856/
* https://x.com/UniqueDima/status/2041548925639762409
* https://www.facebook.com/share/v/1CisgzNfx9

I promise to share more once the insights and product thoughts and experiences from talking to people crystallize in my head.

And we're in NYC this week and in SF/BA next week, mostly for networking purposes. The calendar is somewhat packed, but business first; don't be shy to reach out!
πŸ”₯3❀1πŸ‘1
So I did buy a throwaway monitor with a stand yesterday. Just so that my neck does not hurt while at this WeWork, since I'm far too used to standing up and looking at the screen at eye level.

And this was a great idea, since I now have a few revelations.

Revelation one: screens are really light these days. This piece of Full HD is literally a pound. My notepad weighs more.

Revelation two: a single USB-C port is enough. Not very bright, I'll grant that, but definitely workable.

Revelation three: this stand with a negative angle (facing down) does work. Before buying the stand I used to put this screen on top of a few books conveniently available here, and it's quite enjoyable to work looking up at it, with the screen tilted slightly downwards.

Revelation four: the stand is actually quite stable. It's not too heavy, but with a large enough desk-touching surface area, the screen stays firmly where it should be.

What I need now is pretty much three things:

1) A much larger screen. Ideally, foldable. But about as light, at least per square inch.

2) A stand that has a large power bank built into it. Both for stability and as a power station, with perhaps a USB hub, so that it's just one cable.

3) A "computer without screen" to work from.

For text-only work (i.e. where no mouse is needed, a.k.a. vim), I believe I can work with this setup on an external keyboard on my lap, with the CPU/GPU powering this screen being my phone or my tablet.

But for heavy workloads, some "keyboard + touchpad" combo is absolutely a must. It is also a must when I eventually buy into working from VR headsets while on a plane.

Somehow Apple does not sell its keyboard + trackpad as a combo. So for now I still keep my laptop open, on the table or on my lap.

But if there exists a big, lightweight, foldable monitor on a portable stand like this, count me in. I'll give it a shot. My travel backpack is rather large, so it's not impossible for me to imagine a tri-fold 30" screen that I can pack with me at all times; that would be quite a game-changer for my away-from-home office days.
πŸ‘2πŸ”₯1
Seriously, the best use case may well be to have a compact yet powerful phone that I can plug into this external monitor and use as a touchpad.

Somehow the iPhone can neither stream full-resolution video nor act as a touchpad.

But if some Android is good for this use case, I'd be totally sold. I already travel with my keyboard anyway, so the setup with a phone-shaped device acting as my workstation-in-a-touchpad, augmented with an external screen and an external keyboard, would definitely be cyberpunk enough to my taste!

(This Android phone would need to run some window manager, at least for me to have a terminal and a web browser, but these appear to be well-solved problems as of 2026.)
πŸ‘1
So I implemented a fairly large end-to-end UI harness test using Playwright over the past several days.

Even got yet another compliment from the CEO that I'm indeed a weird engineer. Which is fair: most engineers can't be made to write UI tests, and I literally volunteered to build one. Ask forgiveness, not permission. My take was and still is: if you actually care about data isolation across user accounts and system boundaries, end-to-end tests are the best tool we have.

Here comes the punchline though.

Playwright is enormously good in the age of AI. So good that I'm starting to think instrumented Chromium may be one of the most overlooked security risks.

Take online banking or brokerage accounts. Leaking a password is not that scary (sic!), because:

- there's two-factor
- a new device or location triggers extra checks
- even with access, moving funds to new accounts requires more verification

Now imagine the attacker acting on your behalf from your own browser. Your own headless browser. Which most humans have no idea can exist.

Headless browsers can open your email, grab the 2FA code, complete the login, and delete that email.

And no alarm will ring. Because from the system's perspective, this is your device. Your browser. Your session. We don't use CAPTCHAs for bank logins, after all.

And you won't notice anything. Until it's way too late.

So, three thoughts. First: I'm scared.

Not so much for myself: my personal paranoia (separate browsers, isolated cookies, etc.) probably protects me from most unsophisticated attacks.

But I am scared for the industry. Once this kind of attack becomes widespread, it's going to be a disaster.

Second: I'm annoyed.

Because this is exactly the kind of problem the Web3 folks solved at the protocol level a decade ago.

Air-gapped device. QR code. Explicit confirmation. Signed response.

You see exactly what you approve.

Why aren't we doing this for GitHub commits, pull requests, AWS production changes, anything high-impact?

No idea. Guess we'll learn the hard way. The industry has framed the Web3 crowd as a bunch of unsophisticated enthusiasts, unwisely dismissing all the great things built there.

And third: the upside.

Security in the age of AI is going to become a huge deal, very quickly. And that is actually a good thing!

Because this is one of the few areas where first-principles thinking really matters. Security is always an arms race, and the ability to reason clearly about systems will be in very high demand.

As for me, with all due disrespect to things like Kubernetes and Terraform, I can kind of see where this is going.

Less writing code.

More defining invariants, reviewing (semi-AI-generated) rules, and building harnesses that ensure no higher-order policy can be violated by any lower-level implementation.

That seems like a good place to invest the time, energy, and passion of hardcore geeks like yours truly.
πŸ‘7