TechLead Bits
About software development with common sense.
Thoughts, tips and useful resources on technical leadership, architecture and engineering practices.

Author: @nelia_loginova
Architectural Debt

Let's continue the topic of architectural debt. The previous post focused on the impact and importance of the debt itself, but said nothing about what to do with it.

To fix that, I suggest reading Technical Debt vs. Architecture Debt: Don’t Confuse Them. The article may not fully answer this question, but it provides actionable recommendations on how to measure architectural debt and what strategies can be applied to reduce it.

Architectural debt indicators:
πŸ”Έ Duplicated functionality: Count how many systems perform overlapping functions.
πŸ”Έ Integration complexity: Measure the number of point-to-point connections vs API gateways, enterprise service buses (ESBs) or event-driven models.
πŸ”Έ Principle violations: Track how many systems lack defined owners, documented interfaces or compliance with internal architectural standards.
πŸ”Έ Latency chains: Calculate end-to-end data flow time between multiple hops.
πŸ”Έ Configuration management completeness: Measure the percentage of applications with filled ownership, life cycle, and dependency fields (a small sketch follows the list).
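
For illustration, here is a minimal Go sketch of how the last indicator could be computed over a system catalog. The types and data are hypothetical, not from the article:

// Sketch: computing configuration management completeness over a catalog.
package main

import "fmt"

// App is a hypothetical catalog entry for one application.
type App struct {
	Name      string
	Owner     string   // empty if ownership is not filled in
	Lifecycle string   // e.g. "active", "deprecated"; empty if missing
	DependsOn []string // nil if dependencies are not documented
}

// completeness returns the percentage of apps with all required fields filled.
func completeness(apps []App) float64 {
	if len(apps) == 0 {
		return 0
	}
	complete := 0
	for _, a := range apps {
		if a.Owner != "" && a.Lifecycle != "" && a.DependsOn != nil {
			complete++
		}
	}
	return 100 * float64(complete) / float64(len(apps))
}

func main() {
	apps := []App{
		{Name: "billing", Owner: "payments-team", Lifecycle: "active", DependsOn: []string{"ledger"}},
		{Name: "legacy-crm"}, // no owner, lifecycle, or dependencies recorded
	}
	fmt.Printf("configuration completeness: %.0f%%\n", completeness(apps)) // prints 50%
}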

What can be done:
πŸ”Έ Officially define architecture debt. The first step to fixing a problem is admitting it. :)
πŸ”Έ Build metrics and dashboards.
πŸ”Έ Practice architecture observability. Track system dependencies, integration bottlenecks, and principle compliance in near-real time.
πŸ”Έ Run architecture reviews.
πŸ”Έ Manage debt as a portfolio. Not all debt needs immediate repayment. Like managing a project portfolio, organizations should prioritize the debt by business impact.
πŸ”Έ Link debt to business KPIs.

As you can see, there is no rocket science here: the standard make-the-problem-evident -> measure -> improve cycle.

I think the article is a good starting point to analyze whether you have architectural debt in your organization and to prepare the first steps to work with it.

#architecture
πŸ‘3πŸ‘€1
AI Impact on Developer's Productivity

Over the past year, almost everyone has predicted that AI will replace developers. New tools appear every week, and the hype continues to grow. But where are these tools in real life? Do they really help developers to be more productive?

Anthropic recently published research on how AI tools transformed development in their own company. Anthropic is well known for its Claude Code agent, so it's quite interesting how AI tools impact development in an AI company (they definitely should know how to do it effectively, right?).

Key points:
πŸ”Έ AI is mostly used for fixing bugs and understanding code.
πŸ”Έ Engineers reported a 50% productivity boost (a subjective estimate).
πŸ”Έ 27% of assisted work consists of tasks that wouldn't have been done otherwise: minor issues, small refactoring, nice-to-have tools, additional tests, documentation.
πŸ”Έ Only 20% of real work can be delegated to the AI assistant. Moreover, engineers tend to delegate tasks that can be easily verified and reviewed.
πŸ”Έ Everyone is becoming more β€œfull-stack”: for example, a backend developer can do simple frontend tasks.
πŸ”Έ Claude Code became the first place to ask questions, weakening mentorship and collaboration.
πŸ”Έ Many engineers show deep uncertainty about their future and what development will look like in several years.

So, according to the survey, AI assistants can significantly help with routine tasks and can act as a knowledge base about the code. But there is still not enough trust to delegate complex tasks or architectural decisions to them.

#ai #engineering
Personal Goals & Well-Being

Do you already have some plans to start doing something from January? πŸ˜‰
December is traditionally a time to sum up the year and start planning the next achievements.

So today I want to share Gallup's key elements of well-being, which can help define the areas for high-level personal goals. The Gallup institute conducted extensive research to identify the aspects of human life that we can actually do something about to make our lives better:
πŸ”Έ Career: You like what you do every day.
πŸ”Έ Social: You have meaningful friendships in your life.
πŸ”Έ Financial: You manage your money well.
πŸ”Έ Physical: You have energy to get things done.
πŸ”Έ Community: You like where you live.

For each area you can define several goals for the year. To make them real, decompose the goals into particular steps (a plan) and activities to start with (it's better to add them to the calendar immediately).

For many years I focused only on career and finance: more expertise, more experience, interesting tasks to solve, enough money to feel safe. As a result, this year I ran into various health issues.
So I learned my lesson, and for the next year I'm preparing a separate plan for the other areas of well-being, especially the physical part.

Take care and be balanced!

#softskills #productivity
❀2πŸ‘2
Dear Friends, Happy New Year! πŸŽ„βœ¨

I wish you motivation that doesn’t burn out, career growth in the direction you want, and progress you can actually be proud of.
Interesting challenges, reasonable deadlines, clean architecture and teams you enjoy working with.

I hope you stay healthy, have enough energy, and keep your closest people nearby.

Take care, rest well, and have a great 2026. πŸŽ„πŸŽ„πŸŽ„

Warm wishes,
Nelia
Stanford Engineering: Transformers & LLMs

The New Year holidays are over, and it’s the perfect time to start learning something new πŸ€”.

Stanford Engineering has fully opened the course CME 295: Transformers & LLMs, which explains LLMs' core components, their limitations, and how to use them effectively in real-world applications.
The course instructors are engineers with work experience at Uber, Google, and Netflix, so they really know what they are talking about.

Topics covered in the course:
- Transformers architecture
- Decoding strategies & MoEs
- LLMs finetuning & optimizations
- Results evaluation & Reasoning
- RAG & Agentic workflows

To really understand a topic, I need to know how everything is organized under the hood and what the core architectural principles are. That's why I really like courses like this: they provide a structured and systematic view of the topic with all the necessary theory.

#ai #engineering
A2UI Protocol

Google has introduced a new protocol for AI - A2UI. It allows agents to generate rich user interfaces that can be displayed in different host applications. Lit, Angular, and Flutter renderers are currently supported; others are on the roadmap.

The main idea is that LLMs can generate a UI from a catalog of predefined widgets and send it as a message to the client.

The workflow looks as follows:
πŸ”Έ User sends a message to an AI agent
πŸ”Έ Agent generates A2UI messages describing the UI (structure + data in JSON lines format)
{"surfaceUpdate": 
{"surfaceId": "booking",
"components": [
{"id": "root", "component": {"Column": {"children": {"explicitList": ["header", "guests-field"]}}}},
{"id": "header", "component": {"Text": {"text": {"literalString": "Confirm Reservation"}, "usageHint": "h1"}}},
{"id": "guests-field", "component": {"TextField": {"label": {"literalString": "Guests"}, "text": {"path": "/reservation/guests"}}}}
]}}


πŸ”Έ Messages stream to the client application
πŸ”Έ Client renders them using native components (Angular, Flutter, React, etc.); a decoding sketch in Go follows this list
πŸ”Έ User interacts with the UI, sending actions back to the agent
πŸ”Έ Agent responds with updated A2UI messages
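
To make the message format more concrete, here is a minimal Go sketch of the client-side decoding step. This is not the official SDK; the struct shapes are my own reading of the sample message above:

// Sketch: decoding an A2UI surfaceUpdate message before mapping
// widget types to native components.
package main

import (
	"encoding/json"
	"fmt"
)

// Component is one flat UI node in a surfaceUpdate message.
type Component struct {
	ID        string                     `json:"id"`
	Component map[string]json.RawMessage `json:"component"` // widget type -> properties
}

// SurfaceUpdate carries the full component list for one surface.
type SurfaceUpdate struct {
	SurfaceID  string      `json:"surfaceId"`
	Components []Component `json:"components"`
}

// Message is the top-level A2UI envelope (only surfaceUpdate modeled here).
type Message struct {
	SurfaceUpdate *SurfaceUpdate `json:"surfaceUpdate"`
}

func main() {
	raw := `{"surfaceUpdate": {"surfaceId": "booking", "components": [
		{"id": "header", "component": {"Text": {"text": {"literalString": "Confirm Reservation"}, "usageHint": "h1"}}}]}}`

	var msg Message
	if err := json.Unmarshal([]byte(raw), &msg); err != nil {
		panic(err)
	}
	// A real renderer would map each widget type to a native component here.
	for _, c := range msg.SurfaceUpdate.Components {
		for widgetType := range c.Component {
			fmt.Printf("render %q as a %s widget\n", c.ID, widgetType)
		}
	}
}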

According to the article, the main benefits of the protocol are:
πŸ”Έ Security: No LLM-generated code, there is only a declaration passed to the client.
πŸ”Έ LLM-friendly: Flat structure, easy for LLMs to generate incrementally.
πŸ”Έ Framework-agnostic: Separation of the UI structure from the UI implementation.

Right now the project is in early public preview. It looks promising, especially once the REST protocol is supported (currently only A2A and AG-UI are).

#ai #news
πŸ‘5πŸ‘€4πŸ”₯1
Make Your Docs Ready for AI

Several years ago, we wrote documents for humans. Now we write documents for AI πŸ˜€.

And, to be honest, machines require much more structured text than humans do. Poorly structured content leads to incorrect chunking and low-quality answers during RAG context retrieval.

Common recommendations for modern docs (a short sample follows the list):
πŸ”Έ Use structured format: md, html, asciidoc.
πŸ”Έ Provide metadata: important dates, tags, context and document goal.
πŸ”Έ Define glossary: describe key terms and abbreviations.
πŸ”Έ Organize content hierarchy: create clear structure using descriptive headings and subheadings.
πŸ”Έ Use lists: prefer bulleted or numbered lists instead of comma-separated enumerations.
πŸ”Έ Include text description for visual information: duplicate important details from diagrams, charts, and screenshots via text.
πŸ”Έ Use simple layout: do not use complex visuals and tables, prefer simple headings, lists and paragraphs.
πŸ”Έ Keep related information together: design content in a way that each section contains sufficient context to be understood independently.
πŸ”Έ Describe external references: when referencing external concepts, provide brief context and explanation.
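
To make this concrete, here is a minimal sketch of how such a doc could start. All names and values are hypothetical; the metadata is expressed as YAML front matter:

---
title: Payment Service Overview
last_updated: 2026-01-10
tags: [payments, architecture, internal]
purpose: Explain how the payment service processes card transactions.
---

## Glossary
- PSP: Payment Service Provider, the external system that executes card payments.
- 3DS: 3-D Secure, an additional cardholder authentication step.

## Payment Flow
1. The client calls the `POST /payments` endpoint.
2. The service validates the request and forwards it to the PSP.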

As you can see, the recommendations are quite simple. Moreover, it feels like I became a machine a long time ago: I have hated long paragraphs without clear structure and preferred well-structured documents for years.

#ai #documentation
AI-Ready Repos: AGENTS.md

Structuring project documentation helps build a good knowledge base, but it's not enough to work effectively with the codebase. In practice, agents also need extra instruction files: AGENTS.md and SKILL.md. Let's start with AGENTS.md: what it is for and how to cook it properly.

AGENTS.md is a markdown file that provides context, instructions, and guidelines for AI coding agents working with the repo.

Its content is added to the initial prompt (system context) when an LLM session is created. What matters is that this is a standard widely adopted by different agents such as Cursor, Codex, Claude Code, and many others.

Common structure:
πŸ”Έ Project overview: project description, tech stack with particular versions, key folders and dependencies.
πŸ”Έ Commands: list of build and test commands with required flags and options.
πŸ”Έ Code Style: describe preferred code style.
πŸ”Έ Testing: commands to run different types of tests and linters.
πŸ”Έ Boundaries: do's and don'ts (e.g., never touch secrets, env configs).
πŸ”Έ Extra: PR guidelines, git workflow details, deployment instructions, etc.

Common recommendations:
πŸ”Έ Keep it short (~150 lines)
πŸ”Έ Continuously update it with code changes
πŸ”Έ Be specific, prefer samples over description
πŸ”Έ Improve it iteratively by adding what really works and removing what doesn't
πŸ”Έ Use nested AGENTS.md files in large codebases. The agent reads the closest file to the work it is doing.

Sample:
# Tech Stack
- Language: Go 1.24+
- API: gRPC
- Database: PostgreSQL 18
- Message Queue: RabbitMQ 4.2, Apache Kafka 4.1.x
- Observability: OpenTelemetry, Jaeger, Prometheus, Grafana
- Security: JWT, OAuth2, TLS
- Deployment: Docker, Kubernetes, Helm

# Build & Test Commands
- Build: `go build -o myapp`
- Test: `go test`

# Boundaries
- Never touch `/charts/secrets/` files.
- Avoid adding unnecessary dependencies.

# PR Submission

## Title Format (MANDATORY)
Issue No: User-facing description


Samples from open-source projects:
- RabbitMQ Cluster Operator
- Kubebuilder
- Airflow
- Headlamp

Additionally, I recommend reading How to write a great agents.md: Lessons from over 2,500 repositories from the GitHub blog. I didn't get how they measured the effectiveness of the analyzed instructions, but the overall recommendations can still be helpful.

#ai #agents #engineering #documentation
πŸ”₯2πŸ‘1
Fighting the Calendar Chaos

The more responsibilities you have, the more meetings you get. Unfortunately, it's a typical story.

At some point there are so many meetings in the calendar that it's not clear how to manage the day.

To bring it back to a manageable state, I use the following tips:
πŸ”Έ Book slots in the calendar for my working tasks
πŸ”Έ Decline meetings that I think I don't need to attend (no clear agenda, already delegated, etc.)
πŸ”Έ Use color coded categories to classify the meetings. I mostly sort them out according to the importance and urgency, for example:
- Black: urgent and/or important, I must attend.
- Red: not urgent but important, need to attend.
- Orange: not so important, makes sense to attend if I have time (e.g., some regular meetings, or ones where I believe my team can make a decision without me).
- Gray: attendance not needed; kept so I know the meeting is scheduled.

You can use whatever categories are convenient for you, but I don't recommend having more than 5: it's hard to keep them all in your head. Additionally, categorizing meetings helps to review the week and check whether there is enough time for important tasks. If not, it's a sign to reflect and change something.

#softskills #tips #productivity
AI-Ready Repos: Skills

Previously we checked how to cook AGENTS.md; today we'll look at another important part of agent configuration: skills.

An agent skill is a detailed workflow description or checklist for performing a specific task. Technically, a skill is a folder containing a `SKILL.md` file, scripts, and additional resources. Most importantly, it's a standard already supported by different AI agents.

Skills structure in a repo:
skills/
└── my-skill/
    β”œβ”€β”€ SKILL.md     # Required: instructions + metadata
    β”œβ”€β”€ scripts/     # Optional: executable code
    β”œβ”€β”€ references/  # Optional: documentation
    └── assets/      # Optional: templates, resources


SKILL.md is a standard md file used to define and document agent capabilities (see the sample below). It consists of:
- Metadata (~100 tokens): name (the skill name) and description (when to use it).
- Instructions (< 5000 tokens recommended): the main part of the file, loaded when the skill is activated.
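
A minimal sketch of what a SKILL.md could look like (the skill name, description, and steps are hypothetical):

---
name: release-notes
description: Use when the user asks to prepare release notes. Collects merged PRs since the last tag and formats a changelog.
---

# Release Notes

1. Run `scripts/list_prs.sh` to collect PRs merged since the last release tag.
2. Group the changes into Features, Fixes, and Breaking Changes.
3. Write the result to `CHANGELOG.md`, following `references/changelog-style.md`.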

The overall flow is as follows:
1. Discovery: At startup the agent loads the name and description of each available skill.
2. Activation: When a task matches a skill’s description, the agent reads the full `SKILL.md` instructions into context.
3. Execution: The agent follows the instructions, loading referenced files or executing bundled code if needed.

The main idea is to load instructions lazily to prevent prompt sprawl and context rot, where LLMs lose focus on specific tasks and start making mistakes.

What I like about skills is that a skill is an artifact with a clear specification and development lifecycle. It not only allows collecting knowledge about tools and procedures, but also makes AI outputs more testable and predictable.

#ai #agents #engineering #documentation
❀3πŸ‘1
The Words You Use

Have you ever been in meetings where people say the same thing in different words and still don’t understand each other? Or maybe you've been in another situation: you explain something, but the audience doesn't get it; then someone else repeats the same idea and surprisingly it clicks.

Strangely enough, it all comes down to the words you use.
Words reflect the picture of the world in someone’s head.

To make communication effective, pay attention to what a person says and how they say it, what definitions they use and what terms they like. If this is someone you work with regularly, you can do a deeper analysis and understand what is important for that person and why.

Then start using the same terms, metaphors, and definitions. If you fit into a person’s picture of the world, it becomes much easier to share your ideas with that person. You start speaking the same language, using the same vocabulary, and stop wasting time on pointless arguments.

#tips #communications
πŸ”₯3✍2πŸ‘1
Can you see the forest for the trees?

How often have you seen developers stuck in code review discussing some method optimization or "code excellence"? Spending hours or even days trying to make code perfect? Did that really help build a well-architected solution?

Developers often get stuck in small details and completely lose sight of the bigger picture. That's a very common mistake I see in engineering teams. As a result, there are perfect classes or functions and a complete mess in the overall structure.

A proper review should always start with a bird’s-eye view:
πŸ”Έ Component structure: Are changes implemented in the components/services that are actually responsible for this logic?
πŸ”Έ Module structure: Are changes in the modules you expect them to be (public vs private, pkg vs internal, etc.)?
πŸ”Έ Public contracts: Review how your APIs will be used by other parties. Are they clear, convenient, easy to use, and easy to extend?
πŸ”Έ Naming: Are module, class, and function names clear and easy to understand? Do they duplicate existing entities?
πŸ”Έ Data model: Is the domain modeled correctly? Does the model follow the single responsibility principle?
πŸ”Έ Testing: Are the main cases covered? What about negative scenarios? Do we have a proper failure handling approach?

In most cases, there’s no point in reviewing code details until the items above are finalized. The code will likely be rewritten, maybe even more than once.
That’s why specific lines of code should be the last thing to check.

Details are cheap to fix.
Structure and contracts are not.

#engineering #codereview
πŸ”₯3πŸ‘1