Not boring, and a bit of a condescending prick
Semi-digested observations about our world, right after they are phrased well enough in my head to be shared more broadly.
Embarrassingly, I've known forever that git upstreams can be local dirs, but it always felt like a tricky feature to use.

So, even though we all knew a git proxy would help speed up cloning repos, I was somehow reluctant to look into this.

Well, git upstreams can be /home/dima/.my_git_cache/${repo_name}. In other words:
• Create a .my_git_cache local dir.
• Clone everything you need there.
• (You may even "clone" from your local, already cloned dirs; just make sure to change their upstream to the true ones.)
• Write a cron script to run git fetch in all those .my_git_cache/* repos every few minutes.
• For your new repo, add this .my_git_cache as one of the upstreams, e.g. as cache.
• (You may also want to make it read-only by preventing pushes to it, just to be safe.)
• Now, establish a habit of running git fetch cache every time you need to sync with the "true" upstream.
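A minimal end-to-end sketch of the steps above, runnable as-is. The paths and the upstream_demo repo name are illustrative (in real life the upstream would be a remote URL, and the cache would live at $HOME/.my_git_cache):

```shell
#!/bin/sh
set -e
# Demo area; stands in for $HOME in this sketch.
BASE="$(mktemp -d)"
CACHE="$BASE/.my_git_cache"
mkdir -p "$CACHE"

# Stand-in for the "true" upstream (normally a remote URL).
git init --quiet "$BASE/upstream_demo"
git -C "$BASE/upstream_demo" -c user.name=demo -c user.email=demo@example.com \
    commit --quiet --allow-empty -m "init"

# Clone the upstream into the cache.
git clone --quiet "$BASE/upstream_demo" "$CACHE/upstream_demo"

# In your working clone, add the cache as an extra remote named "cache"...
git clone --quiet "$BASE/upstream_demo" "$BASE/work_demo"
git -C "$BASE/work_demo" remote add cache "$CACHE/upstream_demo"
# ...and block pushes to it, just to be safe.
git -C "$BASE/work_demo" remote set-url --push cache DISABLED

# The habit: sync via the cache instead of the network.
git -C "$BASE/work_demo" fetch --quiet cache

# The cron line (every 5 minutes) would look something like:
#   */5 * * * * for d in $HOME/.my_git_cache/*; do git -C "$d" fetch --all --quiet; done
```

Setting the push URL to a bogus value like DISABLED is a common trick to make a remote effectively read-only.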

Voila. Time saved: hours and hours. A rewarding git experience: priceless.
In the meantime, the System Design Meetup is growing, and we are now 99 people in Slack.

If you are interested in making it a three-digit number, or in joining the meetup, or both, here's the invite link:

To take part in the on-the-record episodes, which we release on YouTube, please also fill out the form.
As part of doing something pure and useful every once in a while, I've spent ~1.5 hours confirming I speak cmake sufficiently well.

Code:, with the README describing what it accomplishes.

I've even added a Makefile to shortcut the popular commands, so that you won't have to worry about cmake at all.

git clone
(cd cmake_playground; make)
(cd cmake_playground; make test)

On both Linux and Windows, in Qt Creator and Visual Studio respectively, this code can be "opened" (as a directory and/or as the CMakeLists.txt file), and the IDE would do the magic down to identifying individual googletest cases. Heck, I can even debug them one by one — it's been a while since I used anything but gdb on Linux for these purposes.

Back to the real work now =)
So, I really love open source.

To the extent that I often mention, in private conversations, that my ideal “engineer” life is to be paid for working in a 100% open fashion, so that my business hours could be screencast.

And that I’d need to do nothing to configure a new laptop or desktop but to copy a bunch of dotfiles and have its public key populated on the repos I work with.

As of now (2021), this is, technically, attainable. But the question is already in the air: what if my open source work benefits the wrong people?

My answer to this concern remains the same: “not my problem”.

Because if that’s what you are worried about, why don’t you focus your energy on making our world more united and more positive-sum? As opposed to contributing to its divisiveness by erecting even more barriers, such as constraining which general-purpose technologies may be open source, with reasoning along the lines of “intellectual property”, “corporate secrets”, and “national security”.

Technology, after all, is the rising tide that lifts all boats. And, when it comes to software, the cost of letting everyone contribute is literally zero. In fact, this cost is negative: controlling access to the code is itself a nontrivial task, and quite an expense for the entity that “owns” it.

And we, each and every human being, undoubtedly benefit from better maps, better messengers, and better wealth preservation and transfer solutions that are near-instant, know no borders, and are mathematically guaranteed to do their job well.

~ ~ ~

And it occurred to me just today that such a point of view may well be cancel-worthy in some not too distant future.

Which is a very sad thought. Because it means I myself do not believe 100% that my dear humankind is interested in playing more and more positive-sum games moving forward.

Seriously, the Golden Age of technology may already be behind us. To the degree that I feel the urge to publish this post today, while it is still safe to hold such a point of view publicly.
Mobile phones have a battery life of about one day.

A bit less if you're lucky, a bit more if you prioritized battery life when choosing the phone, and some 5x more if you have a power bank on you.

Life changes with a charger in your pocket. Your "battery life" is effectively infinite now. In the vast majority of places these days, total strangers would let you charge your phone for free. The cost of power, after all, is minuscule.

In fact, if your phone takes a standard charger, there's a good chance you don't need to carry one at all; in many, if not most, places you find yourself, people would be happy to charge your phone.

If you are a semi-frequent visitor of Starbucks, or virtually any coffee shop, quite frankly, you don't have to worry about charging your phone at all. A daily cup of coffee, virtually by definition, pays this bill.

~ ~ ~

Why can't we, as a civilization, do the same for the Internet?

Forget video. Forget streaming. Three very basic Internet usecases are:

⒈ 𝗔𝗰𝗰𝗲𝘀𝘀 𝗶𝗻𝗳𝗼𝗿𝗺𝗮𝘁𝗶𝗼𝗻. To read Wikipedia, or my favorite book, or today's news.
⒉ 𝗠𝗲𝘀𝘀𝗮𝗴𝗶𝗻𝗴. End-to-end encrypted, and unconditionally free from censorship, of course.
⒊ 𝗣𝗮𝘆𝗺𝗲𝗻𝘁𝘀. A healthy mix of Venmo / Zelle and crypto. Banks are welcome.

Banks, in (3), are, of course, welcome, as are third-party messaging platforms in (2), as long as they provide the basic guarantee of unconstrained wealth transfer and unconstrained message passing.

In other words, most of today's banks and today's messengers are not welcome as they are: the former have a tendency to block innocent payments, and the latter have been caught multiple times failing to deliver messages which the platform "does not endorse". They are still welcome, though, on the proper API terms.

Let's get this Net Neutrality thing working. I don't care about the term; it would likely need to change. Still, there's no reason at all that poor kids all over the planet should be deprived of features as simple as sending a "Hello" message, sending $10 to a friend, or reading today's news.

~ ~ ~

Maybe this would be Elon Musk's largest impact of all: make Starlink available 24/7 for ~1KB/s traffic, unconditionally. Simple as that.
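Back-of-the-envelope arithmetic on what an always-on ~1KB/s allowance amounts to per user (the 1KB/s figure is from this post; I'm reading it as 1024 bytes/s):

```shell
# ~1 KB/s, around the clock, for a 30-day month:
echo "$(( 1024 * 60 * 60 * 24 * 30 )) bytes per user per month"
# That works out to roughly 2.65 GB per user per month.
```

Under 3 GB a month per user, which is why the marginal cost of such a baseline tier would be tiny.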

If he attaches a cryptocurrency to this (a possible future for DOGE, right?), this alone could be cash-positive AF. I sincerely hope this, or something like this, is about to happen some time very soon.
I like tolerant people, and believe I am quite a tolerant person myself. Here's a thought (*) from today.

Personally, I don't pay much attention to holidays, or even weekends, in my own life. My philosophy is and remains that if what I'm dedicating my professional life to is not worth my time and attention on a weekend, why should it be during the "standard" business hours?

That said, I relentlessly and unconditionally respect the personal time of others.

Vacation? You have fun, I won't bother you at all. Weekend with kids? Enjoy, and, if you need a break and you see me online, I'd gladly chat with you about what you have in mind, but, from my end, you'd never receive any ask to have some "urgent" work done.

Want to trade some biz day to get an extra day off? Sure thing, I'd cover you as best I can, and, rest assured, if anything goes wrong it'd be my responsibility — just make sure to get the job done, and we'll handle everything else in a BAU fashion.

Taking this further, I actually enjoy it quite a bit that most of the people I work with are very much used to taking weekends off to disconnect from work 100%. (Most of my peers these days are Canadians and Brits, which most certainly helps.)

In a way, I like to think I'm a big contributor to the culture where anyone can be off at any time, and the whole hive would still perform as it should. It just never crossed my mind that such a mindset should apply equally to myself, because, yes, as I said above — I like what I do, and I don't see the point in working on something that would not excite me enough to get [parts of] it done over a weekend or on a holiday.

Interestingly, this topic relates to an old debate about hiring "superstars".

On the one hand, from a purely corporate standpoint, hiring superstars is a terrible idea, as they are hard to replace, tend to play by their own rules, and would, at times, strongly demand something that the organization has committed to provide but, for organizational and other reasons, is unable to provide at the time. On these grounds, yours truly is making a terrible mistake by refusing to "lay low" and pretend to be part of the "tribe", quite valuable yet perfectly replaceable. (**)

On the other hand, superstars are who help shape the culture, after all, and I firmly believe that good leadership consists of people who can not only find and hire the "right" superstars, but also a) trust them enough to form and sustain the "lead by example" culture, and b) retain them, so that those people feel empowered to keep making those changes, in ways that, quite frankly, do not always follow the chain of command, or the playbooks, or the overall "best corporate practices" of the company.

Thanks for all the congrats btw, two years and flying smoothly at PokerStars!

(*) prompted by the fact that in the past two weeks, more than once, I had calls with someone before 5am their time. "You sure? It's damn early on your end of the world." "Hey, no worries, I'm already enjoying my coffee at the computer, so why not, it'd make my morning."

(**) just realized while writing this that the last two times I actually interviewed for a job (~2 years ago and ~3 years ago) I sincerely argued that my attitude is to never try to be irreplaceable, but rather to be perfectly replaceable — just in the right cohort of people, who are relatively highly paid and relatively hard to recruit.
The next planned system design meetup is about storage and databases.

I'm preparing quite seriously, and this is perhaps the first time the slides are really worth reviewing way ahead of the event: link.

Comments are more than welcome, here in Telegram, or just in the Google Slides documents.

Looking forward to the real meetup, and I'll announce the date once there's a bit more certainty in my calendar (we're planning a team event for the whole of next week; hard to say if I'll be able to find time, but I'll do my best).

If you think Facebook's UI/UX and/or moderation policies are reasonable and without flaws, think again 😊

Also, the admin or moderator of my own feed is yours truly, right?
An educational System Design Episode on TinyURL: video, slides.

I'm trying to find the balance between in-depth conversations on the one hand and making sure our meetups are useful to less experienced engineers on the other. Hope this is a good start; we sure enjoyed it a lot, although it's nearly two hours long 😉
The true complexity of this world is in explaining and understanding stuff.

A lot of things follow from this.

Corollary one, on science/engineering/technology and on engineering jobs. Engineering, especially software engineering, may well be the essence of the art of explaining things in formal terms. After all, computers are just that: ultra-performant entities with zero "prior knowledge" of how to interpret our "instructions". Thus, programming has to be absolutely precise. The ultimate challenge in formal explanations, IMHO, is computer science algorithms. When one can formulate a CS problem well, chances are they can explain its solution well. Even truer the other way around: once one can clearly articulate how a CS algorithm works, their explanation of the problem it solves tends to be sharp and concise as well.

Corollary two, on replacing jobs with AI. There is an emerging trend of "no code" in my industry, the flagship of which today is the demo of Codex by OpenAI. Outside software engineering, self-driving cars are a notable example, along with cashiers and clerks. Here I am going to postulate an unpopular opinion: Codex would not change the world substantially, at least not any time soon. The reason, I think, is simply that the set of assumptions to keep in mind grows exponentially once the problem domain gets broader than putting elements into a DOM tree. Any human being who has gone through a few dozen hours of learning how HTML + CSS + JavaScript result in stuff emerging on the screen would be superior to today's "AI" the moment the problem becomes even a tiny bit less straightforward. My experience of building an NLP engine to convert English utterances into database queries proves this beyond reasonable doubt: human language is inherently ambiguous, and the challenge an automated system faces is not in understanding every word but in capturing their meaning, and this meaning a) is far greater than the sum of the words, and b) is very much context-dependent.

Corollary three, on implicit defaults. One of the biggest killers of quality explanations is when the subject matter itself contradicts our intuitions. This alone may be a good reason why theoretical physics attracts people of certain mindsets: one has to have their intuition calibrated in a particular way to work with seemingly contradictory observations and still see the big picture. When humans communicate, there are two "successful" modes and a death valley in between. The first successful mode is solving a math problem: the conversation is analytical, the arguments are clear, and all parties effectively incorporate them into their mental models. The second "successful" mode is discussing something ambiguous, such as what qualifies as oppression, with the adherents of a certain mindset: their default state is to "agree", and people tend to agree with virtually everything, as the sense of mutual agreement is more important to them than making sure the parties understand what exactly they are claiming to agree with. Oftentimes their defaults are close enough for the differences not to matter. The death valley in between is when two people are communicating in two different "successful" modes and their defaults are out of sync. This is what leads to disasters down the road, and that's why big decisions are best formalized, documented, and cross-checked.

This also happens to be exactly why I believe today's AI can neither understand nor explain stuff (unless it's a very narrow AI, but we're talking AGI here, of course). GPT-3 can easily talk about what qualifies as oppression in the modern day. But GPT-3 would fail miserably at solving anything nontrivial, as "just guessing" would not get one very far there. Try asking it to only talk to you in sentences with an even number of words.
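For the record, the "even number of words" constraint is trivial for a program to verify, which is the point: checking is easy, but generating fluent text under it requires actual counting rather than guessing. A sketch (the sample sentences are illustrative):

```shell
#!/bin/sh
# Report whether each input line (a "sentence") has an even or odd word count.
printf '%s\n' \
  "hello there my dear old friend" \
  "this sentence has five words" |
while read -r line; do
  set -- $line                      # word-split the line into $1..$#
  if [ $(( $# % 2 )) -eq 0 ]; then
    echo "even ($#): $line"
  else
    echo "odd ($#): $line"
  fi
done
```

The first sample line prints as even (6 words), the second as odd (5 words).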

The bright conclusion is that humans still matter, and we will for quite a while from now on.
Many years ago, back at Google, the junior me suggested a crazy idea: every line of code has a half-life.

In other words, the company is committed to keeping no legacy code, by constantly updating its codebase.

Say, your team shipped 20K lines of code in late 2008. At most half of them are allowed to still be part of the codebase by the end of 2010, at most half of the half can remain by the end of 2012, etc. If your code is business-critical, then make sure you re-implement at least half of its logic every two years.
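The arithmetic of that policy, sketched with the numbers from the example above:

```shell
#!/bin/sh
# 20K lines shipped in late 2008; the allowed remainder halves every two years.
lines=20000
for year in 2010 2012 2014 2016; do
  lines=$(( lines / 2 ))
  echo "end of $year: at most $lines of the original lines may remain"
done
```

So by the end of 2016 at most 1250 of the original 20K lines would still be in the codebase.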

Back then we concluded that it's not too bad of an idea, especially if we can keep the tests. Which we, of course, can.

Obviously, companies (and, especially, managers!) would not endorse such an approach. They are mostly in the "move fast, break things" mode regardless, with tech debt piling up year after year being the rule, not the exception.

I still think it's a good idea. If your friends are building a company this way, let me know — I'd love to talk to them, at least.

~ ~ ~

These days I am having déjà vu when it comes to infrastructure as code.

How about introducing the term half-life for infra? At least half of the servers you have configured for prod use today must be re-configured a month from now.

If you are cloud-first, you are, in a way, already halfway there. Servers die, or get decommissioned, and you have to have a process by which your code can be re-staged on fresh machines and fully connected to the rest of the fleet.

Admittedly, in most companies there will be a few servers with fine-tuned manual configuration, for which this job takes more than just a few scripted actions. Those are exactly the points of impact that are not getting the attention they deserve these days, and those are exactly the ones, IMHO, that a new solution, once it emerges, would improve the most.

A big reason I like the cloud is that it's easier to be disciplined if your deployment has to follow certain rules and constraints.

For on-prem deployments, this "luxury" is not there. In a way, that makes the DevOps job much harder in the long run.

~ ~ ~

Curiously, this is where the crypto community might be ahead of the curve.

The crypto folks don't have the luxury of relying on some cloud infra. At the same time, crypto solutions cannot afford manual configuration, mostly because they are, well, decentralized by design.

But designing software to run in a 100% decentralized fashion doesn't solve these problems by itself. Sure, decentralized solutions are a lot harder when it comes to reliability, low latency, and safety from bad actors. But even the more down-to-earth problems, such as fault tolerance and node discoverability, still have to be taken care of.

~ ~ ~

I am going to make a strong claim — and a falsifiable prediction! — that if a new and effective configuration / server management solution for one's own cloud, or for a hybrid cloud, were to emerge, it would come not from the people who are thinking 24/7 about how to improve Chef or Ansible or Kubernetes or CloudFormation.

Rather, it would be designed by those looking for better ways to speed up their smart contracts by offering users the option to run their own support nodes, for a modest income in certain tokens.
Consistent Hashing is the second episode of the Educational Series of the System Design Meetup.