Web After AI
Notes about the web (and not only) in the era of the AI revolution
CI (Continuous Integration) + Trunk-based development

I watched a talk about trunk-based development. It's basically the story of moving a fairly big team (15+ engineers) from GitFlow to working off a single mainline, and how that changed delivery: releases went from "every ~2 weeks" to shipping every day (sometimes several times a day), with fewer bugs over time. The price is discipline: when the main branch can roll to production automatically, you need very high confidence in every change, so you end up with a lot of automated tests. Feature flags let you merge work early while keeping it off for users until it's actually ready (or until the backend is), but flags carry their own operational cost (tooling/service, ownership, cleanup). It's a lot of extra work, so for a small team it might be too heavy: decide for yourself.
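To make the flag idea concrete, here's a minimal sketch: merged code ships "dark" behind a runtime check. Everything below (the flag names, the hardcoded map, the bucketing hash) is illustrative; a real setup would read flags from a config service and use a proper hash.

```javascript
// Minimal feature-flag sketch. Flag names and the hardcoded map are
// made up for illustration; in practice flags come from a service/config.
const flags = {
  "new-checkout": { enabled: false },              // merged to main, off for everyone
  "new-search":   { enabled: true },               // fully rolled out
  "dark-mode":    { enabled: true, rollout: 0.1 }, // on for ~10% of users
};

function isEnabled(flagName, userId) {
  const flag = flags[flagName];
  if (!flag || !flag.enabled) return false;
  if (flag.rollout === undefined) return true;
  // Deterministic per-user bucketing, so a given user stays in the same cohort.
  const hash = [...userId].reduce((h, c) => (h * 31 + c.charCodeAt(0)) >>> 0, 0);
  return (hash % 100) / 100 < flag.rollout;
}

console.log(isEnabled("new-checkout", "user-42")); // false: the code ships, the feature stays off
```

The point is that the unfinished checkout code is already on main (and in production), but no user can reach it until someone flips the flag.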

Then I bumped into the same idea in Refactoring (2nd ed., JavaScript) by Martin Fowler, with Kent Beck: they advocate CI and tie it to trunk-based work. The connection that stuck with me is refactoring itself: even small changes like renaming a function should land in the main branch quickly, because on long-lived branches they turn into merge conflicts. The "work in small steps and integrate continuously" advice suddenly felt very practical.

Links:
Trunk-based development talk (RU): https://www.youtube.com/watch?v=qpGhQXC7ha0
Codex App

OpenAI shipped Codex App: a desktop UI for agentic coding where you can run multiple parallel threads, work directly with the filesystem/repo, review diffs, and keep changes organized per task. It also makes it easy to hand off work to an IDE (including Cursor) when you want to take over manually.

What I’m curious about: the early “skills” concept (especially whether the Figma skill is better than a plain Figma MCP setup). Automations exist too, but so far they feel like a rough first pass — maybe they’ll get interesting once I can define my own workflows.

The constraint for me is model choice: it’s OpenAI-only, and I often get stronger coding output from Claude/Gemini. So I’m not sure I’ll use Codex App for actual development.

Source: https://openai.com/index/introducing-the-codex-app/
Did you know Google Translate can break React?

It turns out that translating an entire page with Google Translate can break React and other apps because it mutates the DOM: instead of text nodes, it inserts elements like <font> and changes the tree structure.

Because of this, you can get errors at the level of DOM operations (removeChild, insertBefore), and sometimes the app simply stops updating UI/data properly — because the framework no longer sees “the same” tree it originally built.
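To make the failure mode concrete, here's a toy simulation in plain JavaScript (no real React or DOM, just a stand-in element class): the framework keeps a reference to the text node it rendered, the translator swaps that node for a font wrapper, and the framework's next removeChild call fails.

```javascript
// Toy simulation of the stale-reference problem. FakeElement is a stand-in
// for a DOM element; none of this is real React internals.
class FakeElement {
  constructor(tag) { this.tag = tag; this.children = []; }
  appendChild(node) { this.children.push(node); return node; }
  removeChild(node) {
    const i = this.children.indexOf(node);
    if (i === -1) throw new Error("NotFoundError: node is not a child of this element");
    this.children.splice(i, 1);
    return node;
  }
}

// 1. The framework renders a bare text node and keeps a reference to it.
const parent = new FakeElement("div");
const textNode = { tag: "#text", data: "Hello" };
parent.appendChild(textNode);

// 2. The translator rewrites the subtree: the bare text node is gone,
//    replaced by a <font> element holding the translated text.
const font = new FakeElement("font");
font.appendChild({ tag: "#text", data: "Bonjour" });
parent.children = [font];

// 3. On the next update, the framework removes *its* node and crashes,
//    because that node is no longer a child of the parent.
let error = null;
try { parent.removeChild(textNode); } catch (e) { error = e; }
console.log(error.message); // same class of failure as React's removeChild crash
```

Real React hits the same wall via the actual DOM APIs: its virtual tree still points at nodes the translator has already replaced.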

Meanwhile, the discussion around this problem (see link) turns into a typical holy war: React argues with the Google Translate team about who should change it — the framework or the translator.

And while they argue, users invent workarounds that only cover specific cases. The most common suggestion is wrapping text in a span so the translator doesn't replace "bare" text nodes, but even that isn't universal and doesn't prevent every breakage.

It’s a pretty telling situation: two big systems shift responsibility back and forth, there’s been no universal fix for years — and the real cost of that dispute lands on developers and users.

Source: https://issues.chromium.org/issues/41407169
P95/P99: why dashboards use histograms

When you see P95/P99 on a chart, it's tempting to think the system stores every request latency, sorts the list, and picks the value at the 95th/99th percentile position. In practice, that doesn't scale. At 10,000 requests per second, a 10-minute window already means ~6 million samples. Keeping all of them in memory (per service, per minute) and sorting them repeatedly is a fast way to burn CPU and memory.

So most monitoring stacks don’t ship raw latencies. Instead, each instance reports a compact, mergeable summary of the distribution — basically “how many requests fell into each time range”. For example: 0–10 ms: N, 10–50 ms: M, 50–100 ms: K, and so on. That’s a histogram.

With Prometheus, that’s typically histogram buckets (counts per latency range). Datadog uses t-digest for percentiles. The backend merges these summaries across instances into a single global distribution and only then computes P95/P99 from that combined view.
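A rough sketch of that pipeline, with made-up bucket bounds and counts: merging histograms is element-wise addition, and a percentile estimate is just the bucket where the cumulative count crosses the target fraction. Real systems (e.g. Prometheus's histogram_quantile) interpolate inside the bucket; this simplified version returns the bucket's upper bound.

```javascript
// Sketch of percentile estimation from mergeable histograms.
// Bucket bounds and per-instance counts are invented for illustration.
const bounds = [10, 50, 100, 500, Infinity];   // bucket upper bounds, ms
const instanceA = [420, 300, 200, 70, 10];     // requests per bucket on instance A
const instanceB = [380, 350, 180, 80, 10];     // ...and on instance B

// 1. Merge: histograms are just counts, so merging is element-wise addition.
const merged = instanceA.map((c, i) => c + instanceB[i]);

// 2. Percentile: walk the buckets until the cumulative count reaches p% of total.
function percentileUpperBound(counts, p) {
  const total = counts.reduce((a, b) => a + b, 0);
  const target = total * p;
  let cumulative = 0;
  for (let i = 0; i < counts.length; i++) {
    cumulative += counts[i];
    if (cumulative >= target) return bounds[i];
  }
  return bounds[bounds.length - 1];
}

console.log(percentileUpperBound(merged, 0.95)); // 500: P95 lands in the 100–500 ms bucket
```

Note what this buys you: each instance only ships a handful of counters, and those counters can be merged across any number of instances or time windows without ever touching raw samples. The trade-off is precision, which is bounded by the bucket width.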
After a long pause, I’ve finally published a new article: “Advancing AI-Assisted Engineering Practices”
https://dev.to/dglazkov/advancing-ai-assisted-engineering-practices-phf

In this article, I share how my approach to working with AI in software development has evolved. You'll find my perspective on approaches such as Spec-Driven Development, Human-in-the-Loop, the Ralph Loop, and Harness Engineering.
I also touch on some fairly controversial questions, such as whether AI actually writes code better or faster than humans, and whether AI-generated code should be reviewed.
I'd really appreciate it if you took a look, supported it with some reactions, and shared your thoughts in the comments. I'm also happy to answer questions, either in the comments or via private messages.