Not boring, and a bit of a condescending prick
Semi-digested observations about our world, right after they are phrased well enough in my head to be shared more broadly.
Both from first principles and from experience, I know that load testing is not trivial.

One of the fundamental lessons is that testing for maximum throughput and testing for minimum latency are two very different things. Simply put, it's almost always possible to extract a few more QPS at a massive cost to high-percentile latency, and it's almost always a bad idea to do so with a production service. Thus, "maximum QPS" measurements are generally worthless, as they are not far from "spherical cows in a vacuum".

A solid, holistic approach is:

1) To agree on the SLA / SLO / SLI of the service. This is a product / user capacity planning exercise. In a way, this part is about postulating the problem.

2) To agree on what we consider the acceptable range of operational parameters for the service and its environment. This is an exercise in software architecture and in site reliability engineering.

We answer the questions about the expected usage of our service in (1). Then we plan how to best build and ship this service in (2).

It is (2) where we answer questions such as "local caching vs. Dynamo", "lambda or EC2", "how to leverage elasticity", or "to service mesh or not to service mesh".

Ideally, the service itself (its individual instances integrated into some environment) would always remain within its operating mode defined by (2). The service accomplishes this goal by simply rejecting the excess requests that would take it out of this mode.

For load testing, it is important to understand that only after (2) is established, and only once we have the means to spin the service up in some test environment, can we confirm that it conforms to (1).

An example of (1) might be: hold 1K QPS, with a certain number of nines, with median latency under 5ms, p99 latency under 10ms, and p99.9 latency under 25ms. Because we believe 1K QPS is our peak traffic during the busiest hours, and we postulate that latencies above these figures would deteriorate the user experience and result in lost business.

An example of (2) might be: use up to four nodes/servers/pods of certain parameters, max. 90% CPU load, max. 70% RAM utilization on each node, run on Kubernetes, within a certain service mesh.

This is not all it takes to properly load test the system. In order to confirm (1), we also need a well-defined understanding of the expected user traffic. Such as: we expect a Poisson distribution of requests averaging 1K per second during our peak hour. We assume we can model such a load using N=100 "virtual users", even though a perfect load test would send all these requests from different IP addresses.
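To make the last point concrete, here is a minimal sketch of such a load generator (my own toy code, not a recommendation of any tool; call_service() is a stub, and the numbers are the ones from the example above). It relies on the fact that the superposition of N independent Poisson streams is itself a Poisson stream, so each "virtual user" simply sleeps for exponentially distributed intervals:

import asyncio
import random
import time

TARGET_QPS = 1000  # the 1K QPS from the example above
N_VUSERS = 100     # the N=100 "virtual users"
DURATION_S = 10

async def call_service():
    # Stub; replace with a real request to the service under test.
    await asyncio.sleep(0.002)

async def virtual_user(latencies, stop_at):
    # Each virtual user emits a Poisson stream at 1/N of the target rate;
    # N such streams superimpose into a Poisson stream at the target rate.
    rate = TARGET_QPS / N_VUSERS
    while time.monotonic() < stop_at:
        await asyncio.sleep(random.expovariate(rate))
        start = time.monotonic()
        await call_service()
        latencies.append(time.monotonic() - start)

async def main():
    latencies = []
    stop_at = time.monotonic() + DURATION_S
    await asyncio.gather(
        *(virtual_user(latencies, stop_at) for _ in range(N_VUSERS)))
    latencies.sort()
    for q in (0.5, 0.99, 0.999):  # the SLO percentiles from (1)
        print(f"p{100 * q:g}: {1000 * latencies[int(q * (len(latencies) - 1))]:.2f} ms")

asyncio.run(main())

(One caveat: each virtual user here is closed-loop, i.e. it waits for the response before scheduling the next arrival, so this only approximates an open-loop Poisson stream as long as responses are much faster than the inter-arrival gaps.)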

~ ~ ~

Here's a trick question. How do you communicate the above to the people who think along the lines of:

• "Based on our load testing so far we believe we are network-bound"?

And:

• "This is what the documentation to this load testing tool says, it has tons of flags to play around with, and it can simulate any load?"

And:

• "Dima, what you are saying makes sense, but it's too much and hard to follow. Is there an article I can read and understand all this?"

~ ~ ~

Lazy web, I have two questions from my side:

1) Am I making things too complicated, or does it sound reasonable so far?

2) Any good articles out there which I could use as references?

3) Bonus question: Would the above be worthy of a SysDesign meetup episode?
Discovery of the day: docker-compose ... and docker compose ..., while seemingly identical, are two entirely different things under the hood!

TL;DR: better to upgrade Docker and use docker compose ....

(Edit: Or so is my conclusion so far. Do feel free to correct me if it's the wrong one.)

One major difference I noted: when health checks are used, containers take a while to start, and they should be started in the right order. Here, docker compose (with a space) nicely prints the output of a container while it is getting healthy, while docker-compose stashes the output until the container is healthy, and then dumps it out in one piece.
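For reference, here is the minimal kind of setup where this difference shows (the image names and the health check below are made up for illustration):

services:
  db:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 2s
      retries: 30
  app:
    image: my-app  # hypothetical; must wait for `db`
    depends_on:
      db:
        condition: service_healthy

Run it with docker compose up, then with docker-compose up, and watch when db's startup logs appear.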

Not that it's life-changing; docker logs -f ... does the job.

But, intuitively, I would much rather be reading the logs of a container that is somehow taking a while to become healthy. Or is there some alternate logic that I'm missing here?

PS: This is what a productive Sunday looks like to me.
C++ folks, did you know inline constants exported into an .so are actually not inline at all?

This was quite a surprise for me today!

TL;DR: If you ::dlopen() two .so-s, like this:

// lib1.so
namespace nuance {
inline int N = 42;  // `inline` => external linkage; exported as a weak symbol
}
extern "C" int Get42() {
return nuance::N;
}


// lib2.so
namespace nuance {
inline int N = 101;  // the same weak symbol name as in lib1.so!
}
extern "C" int Get101WithANuance() {
return nuance::N;
}


Then calling Get101WithANuance() will return ... drum rolls ... 42!

(Yes, order matters, etc., etc. The whole bouquet of issues you could possibly imagine, straight into your face. What a lovely footgun indeed!)

So much for yours truly sincerely believing that "inline is just a cleaner C++ way to #define symbols, with no symbols leaking wherever".
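For the record, here is the workaround sketch that I believe should work (not necessarily the exact fix from the PR below): give the constant internal linkage, so that no weak symbol is exported to be unified across the .so-s.

// lib2.so, take two: nothing exported, nothing to unify.
namespace nuance {
namespace {  // anonymous namespace => internal linkage
constexpr int N = 101;
}  // namespace
}  // namespace nuance
extern "C" int Get101WithANuance() {
return nuance::N;
}

(Compiling with -fvisibility=hidden should, as far as I understand, have a similar effect of keeping each .so's N to itself.)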

#LiveAndLearn

Ref: https://github.com/C5T/Current/pull/924
Seems like the Twitter debate is the new black. The topic appears to be polarizing like hell, and I found that only a few people can reason about the events calmly.

There's "it's a death sentence for a civic conversation" on the one end, and "finally, we'll be able to see what people truly care about" on the other. With not much in between.

Both sides, of course, are right. Free speech absolutism invites all sorts of extremes, which are generally not desirable for most users. The other extreme, which, IMHO, Twitter was quite leaning towards in the past few years, is to excessively censor whatever does not fit "the media narrative", which inevitably results in various viewpoints being silenced, despite some of them being proven true soon thereafter.

My view here remains the same: opening up the algorithm is generally a great thing. I wrote about it at length before.

The TL;DR is: if you want to use The Algorithm to help solve some Massive Social Problem, but then you tweak the results of this algorithm because it does not fit your definition of right, it's you, not the algorithm, who is to blame.

Sure, an algorithm would miss important real-life implications. With crime prevention, for example, The Algorithm gets into Orwellian, Minority Report-style biases that should absolutely be corrected for.

But my strong belief is that it is important to correct for these biases BEFORE, not AFTER The Algorithm. In other words, these corrections should be the inputs to the algorithm, not post-overrides.

Thus, if, IF, Elon is going to do what he claims to want to do -- the Algorithm-first approach to content selection -- then I do believe it will be a step forward.

Whether he will actually work towards making it so, or whether this can be implemented with our current state of technology, or whether the regulators and/or other big players would allow Twitter to be such a platform -- only time will tell, I guess.

I do keep my fingers crossed though. And am hoping for the best.
Folks, what do people mean today when they say "Zero Trust" APIs?

My understanding -- which might be wrong -- is that each service needs to securely validate each request, as if the request were coming from the outside world. Because security, microservices, etc.

But in this model, if serving a request requires 20+ downstream requests, the only solution is to make 20+ zero-trust requests, each validated as if it came from, well, zero trust. Which is both a colossal waste of resources and a huge toll on end-to-end user-perceived latency.
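To illustrate my (possibly wrong) understanding in code, here is a minimal sketch of what each of those 20+ hops would do, assuming JWT-based service-to-service auth; the token payload and the shared secret are made up for the example:

import jwt  # pyjwt; assuming JWT-based service-to-service auth

SECRET = "demo-secret"  # for the sketch; real setups would use asymmetric keys

def handle_request(headers):
    # Zero trust: do not trust the network, the mesh, or the caller.
    # Every service cryptographically re-validates the credential,
    # even for purely internal service-to-service calls.
    token = headers["Authorization"].removeprefix("Bearer ")
    return jwt.decode(token, SECRET, algorithms=["HS256"])

# A caller (another service!) must attach a valid token to every hop:
token = jwt.encode({"sub": "service-A", "scope": "read"}, SECRET, algorithm="HS256")
print(handle_request({"Authorization": "Bearer " + token}))

# ... and only then does the actual work begin, which may itself fan out
# into 20+ downstream calls, each of which repeats this validation.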

What am I missing?
Dear Europe,

I thought we were done with this shit after one Finnish guy did his job well.

(He also lives in America now, by the way. Which should also serve as a wake-up call, so that you guys have more motivation to set things straight from now on.)

Sincerely,
Dima
Unpopular / controversial opinion: if Musk doesn't get his act together, I'm quite seriously willing to bet on him being pressured from the top.

Along the lines of "don't mess with our ways to run social ads / experiment with political content distribution / exercise influence here and there, or we'll make sure both SpaceX and Tesla lose most of their value, to begin with".

He's not an idiot, and his ego is not unreasonably huge for the tasks he's tackling. So, my bet would be that the true explanations are elsewhere. The only ones I have, unfortunately, are quite sad.
Some thoughts on Quiet Quitting.

From the game theory standpoint, those who consciously decide to "not participate" in the labor market "until the conditions improve" are doing no more and no less than participating in the act of collective bargaining.

The economy today is indistinguishable from a situation where surprisingly many workers across surprisingly many industries have suddenly and successfully unionized. And decided to go on strike until their conditions improve. And are standing their ground so far.

Quite a few people seriously believe that they deserve better conditions (pay, benefits, WFH, etc.) than the market can offer today. It's hard to deny that, to some extent, these people are right. The market is indeed not treating them well. And they can indeed improve their collective well-being by acting in a "united" way.

The future, however, is uncertain.

First, the recession is about to continue. So, in absolute terms, many people's situations will get worse. Many may well crack under pressure, because money is, well, quite an effective motivator, to put it bluntly.

Second, automation and robotics are far more mature today than they were ~50 years ago. For many industries, including agriculture, manufacturing, and retail, fewer workers are needed to keep things running.

Third, "bullshit jobs". If a sufficient fraction of people who are involved in non-bullshit jobs decide to quit what they are doing, the world may well change. As of today, however, it is not clear how many of the "quitters" truly qualify for "essential workers".

In a perfect world, we might expect to see the status quo change, towards people willing to pay more for what we are used to having for less. And we might expect to see wages increase.

After all, many of the price breakdowns we are used to are just situational market equilibria. Who said a car requires a certain number of cents of maintenance per one dollar of gas? Who said keeping one's wardrobe and shoes in a certain shape should require a certain percentage of the family budget, and how should this percentage compare to what we spend on gadgets or online subscriptions? It is not unreasonable to expect that, in a perfect world, some large-scale rebalancing of values happens "thanks to" the current Quiet Quitting trend, and we wake up to a state of the economy where the people who participated in this movement are better off compared to before it started.

Intuitively, I doubt it though. For two reasons.

One, one can't do much about the supply/demand mismatch. The economy needs production. If we produce less than we consume, depression kicks in. It's just too hard to change people's spending habits when everyone begins spending less. If anything, expect federal programs across the globe to boost spending. And these programs would immediately steer this global "worker strike experiment" the wrong way: likely helping it "return" to where the economy was before, where the workers are not happy, but keep working, because that's the best option they have.

Two, the global markets scale. We, Americans, outsource a lot. Most certainly, if Americans buy more American products (and, consequently, fewer Chinese et al. products), the American economy improves. Most certainly, if the outcome is that Americans buy more American products, the wages of American workers increase. But the lockdown situation has been around for a while now, and it doesn't seem to me that the share of American products on the American market is growing.

I can't think of any big conclusions to definitively draw from the above. Except that we are receiving yet another confirmation that market downturns inevitably hit the Average Joe and Jane the hardest. And, on top of that, a (somewhat weaker) confirmation that "global unionization" of workers does not necessarily result in better conditions, at least not in the short term.
Why doesn’t Ubuntu let me install Android apps natively?

My laptop has a SIM card slot. I do buy local phone numbers in various places. In fact, mobile Internet is so good these days that I often work, take my video calls, and even host my meetup events over the internet that is coming through this in-laptop SIM card.

For many local 4G/LTE providers the best way to manage my account is via their native mobile app.

Why can’t I “install” their native “app” into Ubuntu and let it manage my “mobile internet” on my laptop? Up to and including receiving the confirmation text message, etc.? I expect this app to 100% believe it is running on some non-standard yet popular Android phone.

Why can’t I write a one-liner script to check my balance via a USSD code every hour or so?

Where’s technology when we need it most? =)
Folks, how realistic is the following plan?

• Ask people to pay my nonprofit, not me myself or my company, for my extracurricular activities,

• Call out publicly for some open source help, and

• Pay those who are kind enough to help with this open source stuff from this nonprofit?

My rationale is that since whatever I'd be paying for is in the public domain, and as long as I ask the folks to sign a waiver stating that they release their code into the public domain, this is well in line with what nonprofits are for.

Moreover, I would not have to worry about spending all of this nonprofit's earnings by the EOY, as I would have to with an LLC or with a corp. Because, as far as I understand, nonprofits don't pay tax on their annual income.

Obviously, I do not plan to profit directly from those open source contributions. I'll look into making sure this open source work can be leveraged with other, proprietary, code that is not paid for by this nonprofit. But everyone is welcome to use these open source pieces freely, as permitted by the license under which they are released.

Is this a good pattern, or am I walking on shaky ground if I choose to pursue it?

PS: The extracurricular activities would be my mock interviews / prep sessions / talks I give that people are comfortable paying for / etc.
Late last year I was involuntarily subjected to an interesting psychological experiment.

I misplaced my credit card, my US Passport Card, and some 20 USD.

This made me relatively upset for about two days. Even though 20 USD is nothing to worry about. Even though I have more credit cards on me. Even though I could block this lost card with one message on my phone, which I did. And even though Apple Pay, tied to this card, kept working perfectly fine, so my daily routine was literally unaffected.

Clearly, what made me sad is the loss of the Passport Card.

Sparing the details, here's the thought process that got me back to peace.

This is *exactly* the purpose of the Passport Card!

And it has served its purpose well.

Why did I get a passport card in the first place? To have an alternate ID, in the form & shape of a credit card, an alternate ID that is "safe" to lose, just in case.

What would I carry with me if I did not have this card? My driving license, of course. Which is a far worse document to lose while overseas. Granted, I have another one, from Switzerland, but it's falling apart after 10+ years, so it really really is just a backup one.

Would I "emotionally prefer" to have lost ~100 USD instead of ~20 USD and not lose the Passport Card? Hell yeah, I'd get over losing ~100 USD far quicker than over losing the Passport Card.

Would I have paid that 100 USD to obtain the passport card in the first place, had it been something to order specifically, not something that came with the passport? Likely no; at least, it would have been a close decision.

So, rationally speaking, I concluded that I am worried far too much compared to the true "damage done". I got a passport card to keep it on me mostly as a backup ID so that I don't lose the driving license, I kept it on me mostly for this very purpose, and — whoa! — I did not lose the driving license.

The Universe is working its ways just as it should. Why am I disappointed to begin with?

And I am happy to report this logic & reason did bring me back to peace.

PS: After ~six rounds of human interaction, my credit card and my passport card were found. In the IMAX theater. Although it required two trips there: one to leave the "lost & found" request, and one more to manually check on its status, as they did not call back. Karmically weirdly, the cards were found exactly on that one last trip, the one that also had the smallest chance of success. Because why would you go somewhere in person if they would call you on the phone number you left there, right?

PS2: Avatar 2 has amazing visual effects. A distant second would be its effort at autism awareness. Hard to say what would be a distant third. It's still a nice movie, at least for someone like me who visits movie theaters approximately once every three years.
My understanding of Docker, docker compose, and other higher-level concepts of modern software architecture has just been upgraded to a new level.

I realized I adore them deeply. For a convoluted and controversial reason.

Not only do these concepts enable us, engineers, to ship more complex software. That's nice, that's important, but that's not what fascinates me about modern technology.

What makes me love these technologies dearly is their unique property of requiring even deeper thought from whoever uses them.

Let me explain. Some twenty years ago, software was "simple". From first principles, any savvy kid could learn enough to qualify as a Jr. engineer by 18yo and as a Sr. engineer by some 25yo. And those "simple" technologies were not very deep in nature.

Today, much as I hate saying this, a modern GPT-* model can handle most of the "complexity" of the technology of the early 2000s.

The technologies the ~18yo-s .. ~25yo-s have to master today, miraculously, are the exact opposite of what GPT-* can be “proficient at”.

Simple things became simpler to build, but the complexity of nontrivial things has grown exponentially. A good engineer today is not someone who understands algorithms and data structures. It's one who is capable of holding in their head the architecture of a system with dozens of moving parts.

From the engineering side, we, modern engineers, are exercising the most powerful part of our brain: managing layers of abstraction. Jr. engineers know how to reason about them. Sr. engineers know how to prefer some to others. Architects know when to break the rules and when to create new abstractions.

Great products emerge through engineers navigating these abstractions, and the engineers are paid well in the meantime.

I could not dream of a better world. With all its imperfections, software architecture today is almost intelligently designed to self-select for the people who are good at specifically this: correctly and seamlessly managing multiple layers of abstraction.

So, next time you feel annoyed by a nontrivial counterintuitive behavior of git, or docker-compose, or some hook / trigger / lambda / GitHub action, or whatever tool you use to collect and crunch logs in real time, take a deep breath and relax. That's all as it should be. Had it been "easier", working on software would not require this much intelligence, would not pay well enough, and would not be this much fun.

PS: I played quite a bit with GitHub Actions, docker compose, and wasm/Emscripten in the past several months, and enjoyed how things come together quite a bit.
What is the "breakthrough" of Quantum Advantage, formerly known as Quantum Supremacy?

I can build a machine with N nodes and M wires. Each node would have an adjustable electrical resistance, a digital voltmeter, and a pair of connectors, in and out.

With N dynamically adjustable electrical resistances and M mechanical arms or motors or what not, it would be an O(1) operation to get to any topology of how these wires connect to one another.

(A grid might be easier to visualize: M = N^2 if any pair of nodes can be wired, or M = N*2 if only the adjacent nodes can be connected.)

Now, I would let electric current flow between the first node and the last node. And I'll measure the voltages on all nodes.

Voila! I have reproducibly, and relatively accurately, solved a real-life problem in O(1), using O(N+M) "computing elements". Solving the same problem on a modern ("von Neumann") computer would, if I recall correctly, require far more "computations" than this physical process does.
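For reference, here is what that O(1) piece of hardware "computes", done the slow digital way: a small sketch, with made-up conductance values, that solves Kirchhoff's current law for the node voltages.

import numpy as np

# Conductances (1/R, in siemens) between pairs of the four nodes:
edges = {(0, 1): 1.0, (0, 3): 0.25, (1, 2): 0.5, (1, 3): 0.5, (2, 3): 1.0}

n = 4
L = np.zeros((n, n))  # weighted graph Laplacian = the KCL matrix
for (i, j), g in edges.items():
    L[i, i] += g
    L[j, j] += g
    L[i, j] -= g
    L[j, i] -= g

# Drive 1 A into node 0 and out of node 3; ground node 3 at 0 V.
current_in = np.array([1.0, 0.0, 0.0])
voltages = np.linalg.solve(L[:3, :3], current_in)  # grounded node eliminated

print(voltages)  # what the three "voltmeters" would read, in volts

The wires settle on this answer "in parallel and instantly"; the CPU would do Gaussian elimination in O(N^3).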

Matter can compute. We all know this. The "traditional" computer is just one way to build a "universal evaluator".

Despite what the Church-Turing thesis tells us about computability, there is no fundamental law which would state that using the ALU (arithmetic logic unit) the way modern CPUs do is the most effective way to make matter perform computation.

I just outlined an example of a "computer" that would vastly outperform a "traditional" CPU on a real-life problem. How is it not Q.E.D.?

Quantum computers make clever use of far less intuitive laws of physics to arrive at presumably more effective "hardware" to run more effective "algorithms". But quantum effects by no means have a monopoly on "cheating" the modern CPU architecture.

Electricity, optics, heck, even gasoline & metal in engine tests, can do, and arguably do do, a better job at solving some computational problems than the good old CPUs.

Friendly reminder: an optical array can perform a Fourier transform of an image. Or even apply a ("hard-coded") neural network to it. It's not hard to imagine a zero-watt, emission-free, "solar-powered" "crystal ball" that will show a green light when pointed at a dog and a red light when pointed at a cat. In the same O(1) "computational complexity". So what?
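For the record, and if I remember my optics correctly, this is the textbook Fourier-transforming property of a lens (the notation is mine): an image $U_{in}(x, y)$ placed in the front focal plane of a lens with focal length $f$ yields, in the back focal plane, $U_{out}(u, v) \propto \iint U_{in}(x, y) \, e^{-i 2\pi (xu + yv)/(\lambda f)} \, dx \, dy$ -- a full 2D Fourier transform, computed at literally the speed of light.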

And, at least to me, finding the prime factors of 21 looks far less sophisticated than solving the system of equations to compute electric current.

So, what's the big deal about that Quantum Advantage? After all, we already have Kirchhoff's Advantage, Maxwell's Advantage, and Navier-Stokes Advantage, to name a few.
Expectation: Developers are essential, and ChatGPT will not affect our lives in meaningful ways, because what we do for a living is about as far from mimicking simple tasks as possible, and about as close to Strong AI as possible.

Reality: Attending a training on GitHub Actions, and the format is literally how one would train ChatGPT, in a "monkey see, monkey do" way.