If no one’s reviewing your code, that’s not trust. That’s risk.
Even senior developers need feedback.
Even experienced engineers miss things.
Even well-tested code can go wrong if it’s built on bad assumptions.
Code reviews aren’t just about syntax or formatting. They catch:
● Gaps in business logic
● Missed edge cases
● Overcomplicated solutions
● Misaligned decisions that could impact others
Reviews protect the system, not your ego.
And if your team avoids reviewing your work because you're "senior," it might be time to ask: am I really leading - or just moving fast alone?
© LinkedIn
The worst bugs I’ve seen weren’t caused by bad code.
They were caused by assumptions.
- Assuming a status could never be null
- Assuming a third-party API would always respond
- Assuming a queue job wouldn’t retry at the wrong time
- Assuming a user would never submit the same form twice
- Assuming “this should never happen” actually means it won’t
Clean code doesn’t protect you from flawed thinking.
Tests won’t help if you test the wrong scenario.
And documentation rarely keeps up with real-world behavior.
The deeper I go in backend work, the more I value this:
Don’t code based on what you assume.
Code based on what the system might do - especially when things go wrong.
Because when assumptions fail, users pay.
©️ LinkedIn
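As a small illustration of coding against those assumptions, here is a minimal TypeScript sketch. The order shape, the payment API URL, and the in-memory idempotency cache are all made up for the example; a real system would use a persistent store and its own domain types.

// Hypothetical example: defend against the assumptions listed above.
type Order = { id: string; status?: string | null };

// In-memory idempotency cache (illustrative only; use a durable store in practice).
const processed = new Set<string>();

async function callPaymentApi(orderId: string): Promise<Response> {
  // Don't assume the third-party API always responds: bound the wait.
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), 5_000);
  try {
    return await fetch(`https://payments.example.com/charge/${orderId}`, {
      method: "POST",
      signal: controller.signal,
    });
  } finally {
    clearTimeout(timer);
  }
}

async function submitOrder(order: Order, idempotencyKey: string): Promise<void> {
  // Don't assume the same form is never submitted twice.
  if (processed.has(idempotencyKey)) return;

  // Don't assume the status can never be null.
  const status = order.status ?? "unknown";
  if (status !== "pending") {
    // "This should never happen" still gets handled explicitly.
    throw new Error(`Unexpected status: ${status}`);
  }

  const res = await callPaymentApi(order.id);
  if (!res.ok) throw new Error(`Payment API failed: ${res.status}`);

  processed.add(idempotencyKey);
}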
Every messy legacy system was once someone's "best idea at the time".
It probably solved a real problem.
It probably worked under different constraints.
It might’ve even been impressive - for its moment.
Years later, we inherit it.
We complain about the code.
We rewrite it, rearchitect it, modernize it.
And one day, our new system becomes the next legacy.
That’s the cycle. And it’s normal.
Legacy isn’t failure.
It’s just time, business pressure, and change - layered into code.
©️ LinkedIn
Building APIs is easy.
Designing them is not.
Anyone can expose a controller and return some JSON.
But building an API that survives long-term use - that’s different.
Here are a few lessons I learned the hard way:
● Never skip pagination - you'll regret it once the dataset grows
● Plan for versioning from day one - breaking changes are expensive
● Be strict about request/response schemas - even for internal clients
● Add idempotency for critical POST requests - retries will happen
● Design with unknown clients in mind - today it's your frontend, tomorrow it's a third party
● Think in use cases, not database tables - your API is not your ORM
Good APIs are more than HTTP wrappers around services.
They’re contracts. They shape how others build on top of your work.
Build them like they'll outlive your current stack.
Because they often do.
©️ LinkedIn
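To make a few of those lessons concrete, here is a rough sketch using Express (my own choice of framework, not something the post prescribes). The routes and the in-memory idempotency store are purely illustrative.

// Versioned routes, forced pagination, and an idempotency key on a critical POST.
import express from "express";
import { randomUUID } from "node:crypto";

const app = express();
app.use(express.json());

// Hypothetical in-memory idempotency store; a real service would persist this.
const seenKeys = new Map<string, unknown>();

// Versioned from day one: /v1/...
app.get("/v1/orders", (req, res) => {
  // Never skip pagination: clamp the page size even if the client omits it.
  const page = Math.max(1, Number(req.query.page) || 1);
  const perPage = Math.min(100, Number(req.query.per_page) || 20);
  res.json({ data: [], meta: { page, per_page: perPage } });
});

app.post("/v1/orders", (req, res) => {
  // Idempotency for critical POSTs: retries return the original result.
  const key = req.header("Idempotency-Key");
  if (!key) return res.status(400).json({ error: "Idempotency-Key required" });
  if (seenKeys.has(key)) return res.status(200).json(seenKeys.get(key));

  const order = { id: randomUUID(), ...req.body };
  seenKeys.set(key, order);
  return res.status(201).json(order);
});

The strict request/response schema part is deliberately left out here; that is where a validator library earns its keep.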
I'm trying out https://arktype.io/ and I'm liking it 🙂
ArkType: TypeScript's 1:1 validator, optimized from editor to runtime
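For context, here is roughly what basic usage looks like, based on ArkType's documented basics (a minimal sketch assuming ArkType 2.x; the user shape is made up):

import { type } from "arktype";

// Define the expected shape once; ArkType infers the TypeScript type from it.
const User = type({
  name: "string",
  age: "number",
});

const out = User({ name: "Alice", age: 30 });

if (out instanceof type.errors) {
  // Validation failed at runtime; summary is a readable error message.
  console.error(out.summary);
} else {
  // out is fully typed here: { name: string; age: number }
  console.log(out.name);
}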
If you’re designing a database for something small today but expect it to grow, think beyond just "what works now".
Start with clear naming conventions so new developers can understand it instantly.
Normalize where it makes sense, but don’t overcomplicate - some duplication can save you pain at scale.
Use UUIDs or other non-sequential IDs if you anticipate sharding or merging data later.
Plan indexes early, but don’t add too many - they speed up reads but slow down writes.
Think about how you’ll handle archiving old data, because tables that just keep growing will eventually hurt performance.
And always leave room for optional fields or relations you might not need today, but will probably need tomorrow.
©️ LinkedIn
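As one way to apply several of those points, here is an illustrative Knex migration (Knex is my choice of tool here, not implied by the post, and the orders table is hypothetical):

import type { Knex } from "knex";

export async function up(knex: Knex): Promise<void> {
  await knex.schema.createTable("orders", (t) => {
    // Non-sequential IDs make later sharding or merging less painful.
    t.uuid("id").primary();
    t.uuid("customer_id").notNullable();
    t.string("status").notNullable();
    // Room for fields you don't need today but probably will tomorrow.
    t.jsonb("metadata").nullable();
    t.timestamp("created_at").notNullable().defaultTo(knex.fn.now());
    // Soft archive marker so old rows can be moved out without breaking references.
    t.timestamp("archived_at").nullable();
    // A few deliberate indexes; every extra one slows down writes.
    t.index(["customer_id"]);
    t.index(["status", "created_at"]);
  });
}

export async function down(knex: Knex): Promise<void> {
  await knex.schema.dropTable("orders");
}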
1. What is a Type Guard?
A type guard is how TypeScript narrows a union type or unknown type to something more specific.
Example:
function printLength(value: string | string[]) {
  if (typeof value === "string") {
    // Here TypeScript knows: value is string
    console.log(value.length);
  } else {
    // Here: value is string[]
    console.log(value.length); // array length
  }
}
2. Built-in Type Guards
a) typeof (works for primitive types):

function logValue(x: number | string) {
  if (typeof x === "string") {
    console.log(x.toUpperCase()); // string methods OK
  } else {
    console.log(x.toFixed(2)); // number methods OK
  }
}
b) instanceof (checks if a value is an instance of a class/constructor):

class Dog { bark() {} }
class Cat { meow() {} }

function makeSound(animal: Dog | Cat) {
  if (animal instanceof Dog) {
    animal.bark();
  } else {
    animal.meow();
  }
}
c) Property check: "prop" in obj

type Car = { wheels: number };
type Boat = { sails: number };

function move(vehicle: Car | Boat) {
  if ("wheels" in vehicle) {
    console.log(`Car with ${vehicle.wheels} wheels`);
  } else {
    console.log(`Boat with ${vehicle.sails} sails`);
  }
}
d) Equality checks (when comparing with literals, TS narrows types):

type Direction = "up" | "down";

function move(dir: Direction) {
  if (dir === "up") {
    // dir is "up" here
  } else {
    // dir is "down" here
  }
}
3. Custom Type Guards (val is Type)

function isString(value: unknown): value is string {
  return typeof value === "string";
}

function printUpperCase(value: unknown) {
  if (isString(value)) {
    // value is now string here
    console.log(value.toUpperCase());
  }
}
Complex example:
interface User {
  id: number;
  name: string;
}

function isUser(obj: any): obj is User {
  return typeof obj?.id === "number" && typeof obj?.name === "string";
}

function greet(data: unknown) {
  if (isUser(data)) {
    console.log(`Hello ${data.name}`);
  } else {
    console.log("Not a valid user");
  }
}
4. Real-world Use Cases
a) Parsing JSON
type Product = { id: string; price: number };

function isProduct(obj: any): obj is Product {
  // Optional chaining guards against null, since JSON.parse can return null
  return typeof obj?.id === "string" && typeof obj?.price === "number";
}

const raw = '{"id": "A1", "price": 99}';
const parsed: unknown = JSON.parse(raw);

if (isProduct(parsed)) {
  console.log(parsed.price * 2);
} else {
  console.error("Invalid product data");
}
b) Working with APIs
async function fetchUser(id: number): Promise<User | { error: string }> {
  const res = await fetch(`/users/${id}`);
  return res.json();
}

function isErrorResponse(obj: any): obj is { error: string } {
  return typeof obj?.error === "string";
}

async function showUser(id: number) {
  const data = await fetchUser(id);
  if (isErrorResponse(data)) {
    console.error(data.error);
  } else {
    console.log(`User: ${data.name}`);
  }
}
The real serverless compute to database connection problem, solved!
https://vercel.com/blog/the-real-serverless-compute-to-database-connection-problem-solved
Serverless compute does not mean you need more database connections. The math is the same for serverful and serverless. The real difference is what happens when functions suspend. We solve this issue with Fluid compute.
Sam Altman Has Lost Touch With Reality
And it's happening in plain sight
(Disclaimer: This is not personal. It’s business, economics, and finance. Sanity versus delusion. We are living through a modern update to Charles Mackay’s Extraordinary Popular Delusions and the Madness of Crowds (1841).)
“You should expect OpenAI to spend trillions of dollars... Economists will wring their hands and say, ‘This is crazy, reckless.’ And we’ll just be like, ‘You know what? Let us do our thing.’” — Sam Altman¹
OpenAI lost $5B last year on $3.7B in revenue¹⁰. Now Altman wants to spend $7 trillion on AI servers.
The insane math:
More than Germany’s entire GDP³
13x the global semiconductor industry’s total revenue⁴
Enough to fund U.S. universal healthcare for 2+ years⁵
Meanwhile, reality:
58% of Gen Z graduates can’t find work⁶
U.S. youth unemployment jumped 2.8% in one year⁷
4.3 million young people are NEETs (Not in Employment, Education, or Training)⁸
In China, youth unemployment hit 46.5% before data suppression⁹
That’s the world Sam Altman thinks needs trillions of GPUs, not jobs.
What $7T could actually build:
Clean water access for every human being
Universal healthcare for multiple countries
World-class education systems globally
Comprehensive youth employment programs
Instead? More servers to make chatbots marginally better.
And the kicker: GPT-5, released last week, was widely criticized as merely “incremental improvements”—not revolutionary breakthroughs¹¹. If hundreds of millions produce minor upgrades, what will trillions do beyond enriching hardware manufacturers?
"And this time it's not different".
It never is.
©️ LinkedIn (post by Stephen Klein)
In a recent interview I was asked: "How would you migrate a live platform from Postgres to MySQL without downtime or data loss?"
At that moment, I didn't give a strong enough answer. But after the interview, I dug deeper and here's what I found: the core challenge is that the database is constantly serving reads and writes. You can't just export/import and call it a day.
The solution comes down to two steps:
1. Backfill the data: Take a consistent snapshot of Postgres and load it into MySQL.
2. Stream ongoing changes: Use Change Data Capture (CDC) from the Postgres WAL to MySQL so the replica stays in sync.
Once MySQL is nearly caught up, you can cut over by:
● Using dual-writes in the application for a short period
● Switching reads to MySQL gradually behind a feature flag
● Validating data consistency with counts, checksums, or shadow reads
After confidence builds, Postgres can be retired.
Key lessons I learned while researching:
1) Migration isn't just about moving data, it's about handling constraints, indexes, and type differences.
2) You always need a rollback strategy if things go wrong.
3) Feature flags and dual-writes are powerful tools to reduce risk.
I didn't answer this well in the interview, but I walked away with a much clearer understanding. Sometimes the best lessons come after the hard questions.
©️ LinkedIn
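To make the cutover steps more concrete, here is a sketch of the dual-write and flag-gated read phase in TypeScript. The database clients and feature-flag interface are placeholders (declared, not implemented), standing in for whatever Postgres/MySQL drivers and flag SDK the platform actually uses.

interface OrderRow {
  id: string;
  status: string;
}

// Minimal placeholder interfaces; real code would use pg / mysql2 pools and a flag SDK.
interface SqlClient {
  query(sql: string, params: unknown[]): Promise<{ rows: OrderRow[] }>;
}
interface FeatureFlags {
  isEnabled(flag: string): Promise<boolean>;
}

declare const pgPool: SqlClient;    // existing Postgres connection pool
declare const mysqlPool: SqlClient; // new MySQL connection pool
declare const flags: FeatureFlags;  // feature-flag client

async function saveOrder(order: OrderRow): Promise<void> {
  // Dual-write phase: Postgres remains the source of truth.
  await pgPool.query("INSERT INTO orders (id, status) VALUES ($1, $2)", [order.id, order.status]);
  try {
    await mysqlPool.query("INSERT INTO orders (id, status) VALUES (?, ?)", [order.id, order.status]);
  } catch (err) {
    // Don't fail the user request because the new store lagged; CDC/backfill can repair it.
    console.error("MySQL dual-write failed", err);
  }
}

async function getOrder(id: string): Promise<OrderRow | undefined> {
  // Gradual read cutover behind a feature flag.
  if (await flags.isEnabled("read-orders-from-mysql")) {
    const { rows } = await mysqlPool.query("SELECT id, status FROM orders WHERE id = ?", [id]);
    return rows[0];
  }
  const { rows } = await pgPool.query("SELECT id, status FROM orders WHERE id = $1", [id]);
  return rows[0];
}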
Hey, look what I've cooked :)
YouTube
Codewars | Part 2
Subscribe to our Telegram channel: t.me/scriptjs