Not boring, and a bit of a condescending prick
Semi-digested observations about our world right after they are phrased well enough in my head to be shared broader.
Dear Europe,

I thought we were done with this shit, since one Finnish guy did his job well.

(He also lives in America now, by the way. Which also serves as a wake-up call, so that you guys have more motivation to set things straight from now on.)

Sincerely,
Dima
Unpopular / controversial opinion: if Musk doesn't get his act together, I'm quite seriously willing to bet on him being pressured from the top.

Along the lines of "don't mess with our ways to run social ads / experiment with political content distribution / exercise influence here and there, or we'll make sure both SpaceX and Tesla lose most of their value, to begin with".

He's not an idiot, and his ego is not unreasonably huge for the tasks he's tackling. So, my bet would be that the true explanations lie elsewhere. The only ones I have, unfortunately, are quite sad.
Some thoughts on Quiet Quitting.

From the game theory standpoint, those who consciously decide to "not participate" in the labor market "until the conditions improve" are doing no more and no less than participating in the act of collective bargaining.

The economy today is indistinguishable from a situation where surprisingly many workers across surprisingly many industries have suddenly and successfully unionized, and decided to go on strike until their conditions improve. And they are standing their ground so far.

Quite a few people seriously believe that they deserve better conditions (pay, benefits, WFH, etc.) than the market can offer today. It's hard to deny that, to some extent, these people are right. The market is indeed not treating them well. And they can indeed improve their collective well-being by acting in a "united" way.

The future, however, is uncertain.

First, the recession is about to continue. So, in absolute terms, many people's situations will get worse. Many may well crack under pressure, because money is, well, quite an effective motivator, to put it bluntly.

Second, automation and robotics are far more mature today than they were ~50 years ago. In many industries, including agriculture, manufacturing, and retail, fewer workers are needed to keep things running.

Third, "bullshit jobs". If a sufficient fraction of people who are involved in non-bullshit jobs decide to quit what they are doing, the world may well change. As of today, however, it is not clear how many of the "quitters" truly qualify for "essential workers".

In a perfect world, we might expect the status quo to change, towards people willing to pay more for what we are used to getting for less. And we might expect to see wages increase.

After all, many of the price breakdowns we are used to are just situational market equilibria. Who said a car requires a certain number of cents of maintenance per dollar of gas? Who said keeping one's wardrobe and shoes in a certain shape should consume a certain percentage of the family budget, and how should this percentage compare to what we spend on gadgets or online subscriptions? It is not unreasonable to expect that, in a perfect world, some large-scale rebalancing of values happens "thanks to" the current Quiet Quitting trend, and we wake up to a state of the economy where the people who participated in this movement are better off than before it started.

Intuitively, I doubt it though. For two reasons.

One, one can't do anything about the supply/demand imbalance. The economy needs production. If we produce less than we consume, depression kicks in. It's just too hard to change people's spending habits when everyone begins spending less. If anything, expect federal programs across the globe to boost spending. And these programs would immediately steer the "global worker strike experiment" the wrong way; likely, to help it "return" to where the economy was before: where the workers are not happy, but keep working, because that's the best option they have.

Two, the scale of global markets. We, Americans, outsource a lot. Most certainly, if Americans buy more American products (and, consequently, fewer Chinese et al. products), the American economy improves. Most certainly, if the outcome is that Americans buy more American products, the wages of American workers increase. But the lockdown situation has been going on for a while now, and it doesn't seem to me that the share of American products on the American market is growing.

I can't think of any big conclusions to definitively draw from the above. Except that we are receiving yet another confirmation that market downturns inevitably hit the average Joe and Jane the hardest. And, on top of that, a (somewhat weaker) confirmation that "global unionization" of workers does not necessarily result in better conditions, at least not in the short term.
Why doesn’t Ubuntu let me install Android apps natively?

My laptop has a SIM card slot. I do buy local phone numbers in various places. In fact, mobile Internet is so good these days that I often work, take my video calls, and even host my meetup events over the Internet connection coming through this in-laptop SIM card.

For many local 4G/LTE providers, the best way to manage my account is via their native mobile app.

Why can’t I “install” their native “app” into Ubuntu and let it manage my “mobile internet” on my laptop? Up to and including receiving the confirmation text message, etc.? I expect this app to 100% believe it is running on some non-standard yet popular Android phone.

Why can’t I write a one-liner script to check my balance via a USSD code every hour or so?
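
In principle, on a Linux laptop with ModemManager, something close to this exists. Here is a sketch, assuming the modem exposes USSD via mmcli; the modem index "0" and the balance code "*100#" are made-up examples:

import subprocess

def check_balance() -> str:
    # Ask mmcli to send the USSD code to the modem; it prints the network's reply.
    result = subprocess.run(
        ["mmcli", "-m", "0", "--3gpp-ussd-initiate=*100#"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(check_balance())  # run this from cron every hour or so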

Where’s technology when we need it most? =)
Folks, how realistic is the following plan?

• Ask people to pay my nonprofit, not me myself or my company, for my extracurricular activities,

• Call out publicly for some open source help, and

• Pay those who are kind enough to help with this open source stuff from this nonprofit?

My rationale is that whatever I'd be paying for is in the public domain -- as long as I ask the folks to sign a waiver releasing their code into the public domain -- so this is well in line with what nonprofits are about.

Moreover, I would not have to worry about spending all of this nonprofit's earnings by the EOY, as I would have to with an LLC or a corp. Because, as far as I understand, nonprofits don't pay tax on their annual income.

Obviously, I do not plan to profit directly from those open source contributions. I'll look into making sure this open source work can be leveraged alongside other, proprietary code that is not paid for by this nonprofit. But everyone is welcome to use the open source pieces freely, as permitted by the license under which these contributions are released.

Is this a good pattern, or am I walking on shaky ground if I choose to pursue it?

PS: The extracurricular activities would be my mock interviews / prep sessions / talks I give that people are comfortable paying for / etc.
Late last year I was involuntarily subjected to an interesting psychological experiment.

I misplaced my credit card, my US Passport Card, and some 20 USD.

This made me relatively upset for about two days. Even though 20 USD is nothing to worry about. Even though I have more credit cards on me. Even though I could block this lost card with one message on my phone, which I did. And even though Apple Pay, tied to this card, kept working perfectly fine, so my daily routine was literally unaffected.

Clearly, what made me sad is the loss of the Passport Card.

Sparing the details, here's the thought process that got me back to peace.

This is *exactly* the purpose of the Passport Card!

And it has served its purpose well.

Why did I get a passport card in the first place? To have an alternate ID, in the form & shape of a credit card, an alternate ID that is "safe" to lose, just in case.

What would I carry with me if I did not have this card? My driving license, of course. Which is a far worse document to lose while overseas. Granted, I have another one, from Switzerland, but it's falling apart after 10+ years, so it really, really is just a backup.

Would I "emotionally prefer" to have lost ~100 USD instead of ~20 USD and not lose the Passport Card? Hell yeah, I'd get over losing ~100 USD far quicker than over losing the Passport Card.

Would I have paid that 100 USD to obtain the passport card in the first place, had it been something to order specifically, not something that came with the passport? Likely no; at least, it would have been a close decision.

So, rationally speaking, I concluded that I was worried far too much compared to the true "damage done". I got a passport card to keep on me mostly as a backup ID so that I don't lose the driving license, I kept it on me mostly for this very purpose, and — whoa! — I did not lose the driving license.

The Universe is working its ways just as it should. Why am I disappointed to begin with?

And I am happy to report this logic & reason did bring me back to peace.

PS: After ~six rounds of human interaction, my credit card and my Passport Card were found. In the IMAX theater. Although it required two trips there: one to leave the "lost & found" request, and one more to manually check on its status, as they did not call back. Karmically weirdly, the cards were found exactly on that one last trip, the one that had the smallest chance of success, because why would you go somewhere if they would call you on the phone number you left there, right?

PS2: Avatar 2 has amazing visual effects. A distant second would be its effort in autism awareness. Hard to say what would be a distant third. It's still a nice movie, at least for someone like me who visits movie theaters with a cadence of approximately once in three years.
My understanding of Docker, docker compose, and other higher-level concepts of modern software architecture has just been upgraded to a new level.

I realized I adore them deeply. For a convoluted and controversial reason.

Not only do these concepts enable us, engineers, to ship more complex software. That's nice, that's important, but that's not what fascinates me in modern technology.

What makes me love these technologies dearly is their unique property of requiring even deeper thought from whoever uses them.

Let me explain. Some twenty years ago, software was "simple". From first principles, any savvy kid could learn enough to qualify as a Jr. engineer by 18 and as a Sr. engineer by some 25. And those "simple" technologies were not very deep in nature.

Today, much as I hate saying this, a modern GPT-* model can handle most of the "complexity" of the technology of the early 2000s.

The technologies that today's ~18..25-year-olds have to master, miraculously, are the exact opposite of what GPT-* can be “proficient at”.

Simple things became simpler to build, but the complexity of nontrivial things has grown exponentially. A good engineer today is not someone who merely understands algorithms and data structures. It's someone who is capable of holding in their head the architecture of a system with dozens of moving parts.

From the engineering side, we, modern engineers, are exercising the most powerful part of our brain: managing layers of abstraction. Jr. engineers know how to reason about them. Sr. engineers know how to prefer some to others. Architects know when to break the rules and when to create new abstractions.

Great products emerge through engineers navigating these abstractions, and the engineers are paid well along the way.

I could not dream of a better world. With all its imperfections, software architecture today is almost intelligently designed to self-select for the people who are good at specifically this: correctly and seamlessly managing multiple layers of abstraction.

So, next time you feel annoyed by a nontrivial counterintuitive behavior of git, or docker-compose, or some hook / trigger / lambda / GitHub action, or whatever tool you use to collect and crunch logs in real time, take a deep breath and relax. That's all how it should be. Had it been "easier", working on software would not require this much intelligence, would not pay well enough, and would not be this much fun.

PS: I played quite a bit with GitHub Actions, docker compose, and wasm/Emscripten in the past several months, and quite enjoyed how things come together.
What is the "breakthrough" of Quantum Advantage, formerly known as Quantum Supremacy?

I can build a machine with N nodes and M wires. Each node would have an adjustable electrical resistance, a digital voltmeter, and a pair of connectors, in and out.

With N dynamically adjustable electrical resistances and M mechanical arms or motors or whatnot, it would be an O(1) operation to get to any topology of how these wires connect to one another.

(A full mesh, where M ≈ N^2, or a grid, where M ≈ 2N so that only adjacent nodes can be connected, might be easier to visualize.)

Now, I let electric current flow between the first node and the last node, and measure the voltages on all nodes.

Voila! I have reproducibly, and relatively accurately, solved a real-life problem in O(1), using O(N+M) "computing elements". Solving the same problem on a modern ("von Neumann") computer means solving a linear system: on the order of O(N^3) operations with a dense solver, far more "computations" than the physical process itself.
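
To make the comparison concrete, here is a sketch -- made-up topology and conductances -- of what the von Neumann machine has to grind through: assemble the weighted graph Laplacian from Kirchhoff's laws and solve the linear system for the node voltages.

import numpy as np

N = 5
# (node, node, conductance) wires; an arbitrary example topology.
edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 4, 1.0), (0, 3, 0.5), (3, 4, 1.5), (1, 3, 1.0)]

# Kirchhoff's laws in matrix form: L @ v = i, where L is the weighted
# graph Laplacian, v are the node potentials, i are the injected currents.
L = np.zeros((N, N))
for a, b, g in edges:
    L[a, a] += g
    L[b, b] += g
    L[a, b] -= g
    L[b, a] -= g

i = np.zeros(N)
i[0], i[N - 1] = +1.0, -1.0  # one ampere in at the first node, out at the last

# Ground the last node to make the system nonsingular, then solve.
# A dense solver takes O(N^3) operations; the physical network needs O(1).
v = np.zeros(N)
v[:-1] = np.linalg.solve(L[:-1, :-1], i[:-1])
print(v)  # the voltages the N voltmeters would display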

Matter can compute. We all know this. The "traditional" computer is just one way to build a "universal evaluator".

Despite what the Church-Turing thesis tells us about computability, there is no fundamental law stating that using the ALU (arithmetic logic unit) the way modern CPUs do is the most efficient way to make matter perform computation.

I just outlined an example of a "computer" that would vastly outperform a "traditional" CPU on a real-life problem. How is it not Q.E.D.?

Quantum computers make clever use of far less intuitive laws of physics to arrive at presumably more efficient "hardware" to run more efficient "algorithms". But quantum effects by no means have a monopoly on "cheating" the modern CPU architecture.

Electricity, optics, heck, even gasoline & metal in engine tests can, and arguably do, do a better job at solving some computational problems than the good old CPUs.

Friendly reminder: an optical array can perform a Fourier transform of an image. Or even apply a ("hard-coded") neural network to it. It's not hard to imagine a zero-watt, emission-free, "solar-powered" "crystal ball" that will show a green light when pointed at a dog and a red light when pointed at a cat. In the same O(1) "computational complexity". So what?

And, at least to me, finding the prime factors of 21 looks far less sophisticated than solving the system of equations to compute electric current.

So, what's the big deal about that Quantum Advantage? After all, we already have Kirchhoff's Advantage, Maxwell's Advantage, and Navier-Stokes Advantage, to name a few.
Expectation: Developers are essential, and ChatGPT will not affect our lives in meaningful ways, because what we do for a living is about as far from mimicking simple tasks as possible, and about as close to Strong AI as possible.

Reality: Attending a training on GitHub Actions, and the format is literally how one would train ChatGPT, in a "monkey see, monkey do" way.
My new favorite mid-level / senior coding interview question: Write a function that returns a random compact JSON of a given length.

The JSON should be compact in the sense that it has no extra whitespace; whatever JSON.stringify(object) returns is compact by definition.

The output of the function should be a string, of length equal to the argument provided to the function.

Acceptance criteria: imagine there will be a forum of developers looking for some random-json function to use random JSONs in their tests. You should strive to make them want to choose your implementation.

Do your best for ~25..30 minutes, then we'll talk. Now go.
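
For calibration, a minimal sketch of one possible answer -- in Python rather than JS, with json.dumps(obj, separators=(",", ":")) playing the role of JSON.stringify, and the {"data":...,"pad":...} skeleton being just one arbitrary way to hit the exact length:

import json
import random
import string

def _random_scalar():
    return random.choice([
        random.randint(-10**6, 10**6),
        random.choice([True, False, None]),
        "".join(random.choices(string.ascii_letters, k=random.randint(0, 8))),
    ])

def _random_value(depth):
    # Randomly nest arrays and objects, capped at depth 3.
    if depth >= 3 or random.random() < 0.5:
        return _random_scalar()
    if random.random() < 0.5:
        return [_random_value(depth + 1) for _ in range(random.randint(0, 4))]
    return {
        "".join(random.choices(string.ascii_lowercase, k=random.randint(1, 6))): _random_value(depth + 1)
        for _ in range(random.randint(0, 4))
    }

def random_compact_json(n):
    if n < 1:
        raise ValueError("no JSON document is shorter than one character")
    if n == 1:
        return str(random.randint(0, 9))  # a single digit is valid JSON
    # Try to fit a random structure plus a padding string into exactly n chars.
    for _ in range(100):
        body = json.dumps(_random_value(0), separators=(",", ":"))
        pad = n - len(body) - 18  # 18 == len('{"data":') + len(',"pad":"') + len('"}')
        if pad >= 0:
            return '{"data":' + body + ',"pad":"' + "a" * pad + '"}'
    # Fallback for small n: a plain JSON string of exactly n characters.
    return '"' + "a" * (n - 2) + '"'

# Sanity check: the result parses as JSON and has the requested length.
s = random_compact_json(100)
assert len(s) == 100
json.loads(s)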
We really need some torrent-style worldwide mesh distribution network for Docker and Ubuntu.

Just unthinkable how much traffic is wasted daily by moving the very same bits over and over again.
Not intending to start a holy war [probably], but I'm finding myself comparing programming languages to one another again.

To set the stage: I'm talking about backend development, where high throughput, low latency, and high reliability are essential.

In addition to the above, I am at the stage of my career where "design for refactoring" is a big theme. So just "we can hack this up quickly" and/or "we can make it work super-quickly in just a week of work" are not strong enough arguments, because what would the company do after you've shipped it and left?

The above immediately disqualifies languages such as JavaScript and Python. Too slow, and pretty much impossible to refactor, or even maintain.

The pros and cons of the "standard" languages are sort of well known. Let me summarize them quickly; again, this is not intended to start a pointless discussion, and I do not claim to be unbiased; I'm just setting the stage to actually ask what I want to ask.

• C++. Pros: The code will be efficient, and, if the team is strong and C++-focused, maintainable and relatively error-free. Cons: You pretty much have to be a C++ company, or, at least, you need a task so niche and so performance-critical that it's okay for this particular piece to be done in C++.

• Java and other JVM languages, most notably Kotlin. Pros: Easy to find people who speak the language. Cons: I personally am terrified by a) code bloat, b) garbage collection that kills latency at high percentiles, and c) slow warm-ups, so that autoscaling becomes a pain in and of itself.

• C# and the .NET family. Pros: About the same as Java, plus, personally, I'm more comfortable with .NET runtime vs. the JVM one, even on Linux. Cons: Same as with JVM.

Now, the question. In the setup above, how should one gauge Rust and Golang?

My view is rather straightforward so far.

• Rust: Still young. Unclear if it delivers enough to win over C++; i.e., even if support- and refactoring-wise it turns out to be better than C++ for "simple" services in the long run, Rust people are still rather difficult to find.

• Golang. Here, I'm on the fence. There's a large crowd of people who would argue Go is unfit for large projects, mostly due to the lack of proper type safety. There's also a large crowd of people who would argue Go is quite effective thanks to its simple syntax.

Ultimately, my question then is: should Go be taken seriously in 2023?

Is it worth my time to learn it and [help] build a production service in it?

Or would this knowledge be rather useless, given I speak C++ well, and for "simple" low-latency high-throughput gRPC services, it likely is and will remain a better choice?

If the latter is true, I should not really invest much into Golang, and should wait for an opportune moment to educate myself on Rust. If the former is true, the opportune moment is now.

Advice welcome -- and many thanks in advance!
Folks, any experience with the "best practice" of different `USER`s in Docker?

Per this link -- https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#user -- the recommended way is:

RUN groupadd -r postgres && useradd --no-log-init -r -g postgres postgres

I struggle to understand:

a) Why change the user at all? Isn't Docker secure enough that "just" using the default "root" user is good enough for most intents and purposes?

b) If some service would rather not be run as root -- which I admire -- why not build everything as root in an AS builder container, and then only switch to some app user within the last, runner, container? (See the Dockerfile sketch after these questions.)

c) Why would the Docker team build this feature in such a way that it works very differently on Linux and on macOS, so that cross-religion dev teams have to struggle?

d) On a related note, why doesn't the Linux binary of Docker offer the host.docker.internal hostname by default?
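
Regarding (b), here is a minimal sketch of the pattern I have in mind; the image tags and the "app" user name are illustrative, not a recommendation:

FROM golang:1.20 AS builder
WORKDIR /src
COPY . .
# Build as root inside the throwaway builder stage.
RUN go build -o /app .

FROM debian:bullseye-slim
# Drop privileges only in the final, runner stage.
RUN groupadd -r app && useradd --no-log-init -r -g app app
COPY --from=builder /app /usr/local/bin/app
USER app
ENTRYPOINT ["/usr/local/bin/app"]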

Edit: To clarify, postgres is actually a great example of an application that I believe should run under its own user! But when it comes to building & running custom Golang apps, for instance, my original question holds.

Edit 2, via @Komzpa: root is not safe, because root is really something like if (!uid) {} in many code paths in the kernel. If you manage to create a device file in your namespace, you can remount it, and there's nothing you can do about it, as you're root. All the security enclosures only work if you're not root; there's no "root of this namespace".
"Dima, I worked in many companies that were at risk of downsizing and/or closing down. The lesson I thus learned early on is to open source more, sooner."
Put together some thoughts about Docker[file]s and Make[file]s: https://dimakorolev.substack.com/p/docker-and-make

If the topic gains interest, we might even consider a meetup episode about it. A tad too hands-on and low-level for a SysDesign episode, but if it helps us, The Engineers -- so be it.
Our SysDesign guide with interviewing.io is out!

A lot of work, hope it pays off. The content is really freaking awesome there, easily the best material on the subject available out there, and likely for quite some time.

Enjoy responsibly!
Wrote a piece with some ideas on the long-term future of Schema Registries.

https://dimakorolev.substack.com/p/beyond-the-schema-registry

Depending on a number of factors (mostly on whether you agree with my views), this can be a vision roadmap for the future, or a long-ish rant about the sad state of affairs.

But it's something I'm passionate about, so public these thoughts go. Enjoy responsibly!
A follow-up to my previous Substack post.

It's not really about schema registries and API contracts. It's about cutting slack from our software development practices.

And, as Jos points out too, the easiest way to cut this slack is to stop viewing "software development processes" as necessary guardrails, and to critically examine each and every one of them. Defaulting to "approve" rather than "need more proof" is a heuristic he suggests. It's a good one, because, in my experience, good high-level engineers do tend to be the people who ask for forgiveness, not permission.

My view is probably even bolder. Demand a cost-benefit analysis for each "best practice", and aggressively cut the ones that a) are not on the critical path, and b) do not deliver the benefits we should demand given the costs.

In fact, I'm quite comfortable making an even stronger statement.

We have over-stigmatized "ninja engineers" in favor of "team players" who are perfectly fungible. And these "perfectly fungible" engineers, in fact, do get the job done. But the pace is 10x slower, while the rate of missteps is ... well, most certainly not lower.

So it's time we take a sober look and get back to basics. No company would depend on one or two key people who know everything; that's too low of a bus factor. But three key people plus another five solid performers is _probably_ a far better solution for many, if not most, tasks.

We do, after all, have successful large products used by millions, and even billions, of people, and maintained by just a few dozen. That's the spirit to get back to.
Docker is deleting open source orgs.

Can't believe the Docker folks didn't realize name-squatting would be a real issue in the very immediate future. At least freeze those names, ffs. Terrible move, IMHO.

During the golden times of open source, someone would have built a torrent plugin for those free images. So that Docker would not be able to plausibly claim they are cutting costs, while the developer community would keep thriving, albeit with somewhat slower docker pull times.

H/T @Max_Losev for the link.