AI Impact on Developers' Productivity
Over the past year, almost everyone has been predicting that AI will replace developers. New tools appear every week, and the hype keeps growing. But how do these tools perform in real life? Do they actually help developers be more productive?
Anthropic recently published research on how AI tools have transformed development inside the company itself. Anthropic is well known for its Claude Code agent, so it's quite interesting to see how AI tools affect development at an AI company (they definitely should know how to do it effectively, right?).
Key points:
🔸 AI is mostly used for fixing bugs and understanding code.
🔸 Engineers reported a 50% productivity boost (a subjective estimate).
🔸 27% of AI-assisted work consists of tasks that wouldn't have been done otherwise: minor issues, small refactorings, nice-to-have tools, additional tests, documentation.
🔸 Only 20% of real work can be delegated to the AI assistant. Moreover, engineers tend to delegate tasks that are easy to verify and review.
🔸 Everyone is becoming more "full-stack": for example, a backend developer can handle simple frontend tasks.
🔸 Claude Code has become the first place to ask questions, reducing mentorship and collaboration.
🔸 Many engineers show deep uncertainty about their future and what development will look like in a few years.
So, according to the survey, AI assistants can significantly help with routine tasks and act as a knowledge base for the code. But there is still not enough trust to delegate complex tasks or architectural decisions to them.
#ai #engineering
Source: How AI Is Transforming Work at Anthropic (Anthropic)
Personal Goals & Well-Being
Do you already have plans to start something new in January?
December is traditionally a time to sum up the year and start planning the next achievements.
So today I want to share Gallup's key elements of well-being, which can help define areas for personal long-term goals. The Gallup institute conducted extensive research to identify the aspects of human life that we can actually do something about to make our lives better:
🔸 Career: You like what you do every day.
🔸 Social: You have meaningful friendships in your life.
🔸 Financial: You manage your money well.
🔸 Physical: You have energy to get things done.
🔸 Community: You like where you live.
For each area you can define several goals for the year. To make them real, decompose each goal into concrete steps (a plan) and activities to start with (better to add them to the calendar immediately).
For many years I focused only on career and finances: more expertise, more experience, interesting tasks to solve, money to feel safe. As a result, this year I've had various health issues.
So I've learned my lesson, and for the next year I'm preparing a separate plan for the other areas of well-being, especially the physical part.
Take care and be balanced!
#softskills #productivity
Source: The Five Essential Elements of Well-Being (Gallup.com)
Dear Friends, Happy New Year!
I wish you motivation that doesn't burn out, career growth in the direction you want, and progress you can actually be proud of.
Interesting challenges, reasonable deadlines, clean architecture, and teams you enjoy working with.
I hope you stay healthy, have enough energy, and keep your closest people nearby.
Take care, rest well, and have a great 2026!
Warm wishes,
Nelia
Stanford Engineering: Transformers & LLMs
The New Year holidays are over, and it's the perfect time to start learning something new.
Stanford Engineering has fully opened the course CME 295: Transformers & LLMs, which explains the core components of LLMs, their limitations, and how to use them effectively in real-world applications.
The course instructors are engineers with work experience at Uber, Google, and Netflix, so they really know what they are talking about.
Topics covered in the course:
- Transformers architecture
- Decoding strategies & MoEs
- LLM fine-tuning & optimizations
- Results evaluation & reasoning
- RAG & Agentic workflows
To really understand a topic, I need to know how everything works under the hood and what the core architectural principles are. That's why I really like courses like this: they provide a structured, systematic view of the topic with all the necessary theory.
#ai #engineering
Source: Stanford CME295: Transformers and Large Language Models, Autumn 2025 (YouTube)
A2UI Protocol
Google has introduced a new protocol for AI: A2UI. It allows agents to generate rich user interfaces that can be displayed in different host applications. Lit, Angular, and Flutter renderers are currently supported; others are on the roadmap.
The main idea is that LLMs generate a UI from a catalog of predefined widgets and send it as a message to the client.
The workflow looks as follows (a toy client-side sketch follows the list):
🔸 The user sends a message to an AI agent.
🔸 The agent generates A2UI messages describing the UI (structure + data in JSON Lines format), for example:
{"surfaceUpdate":
{"surfaceId": "booking",
"components": [
{"id": "root", "component": {"Column": {"children": {"explicitList": ["header", "guests-field"]}}}},
{"id": "header", "component": {"Text": {"text": {"literalString": "Confirm Reservation"}, "usageHint": "h1"}}},
{"id": "guests-field", "component": {"TextField": {"label": {"literalString": "Guests"}, "text": {"path": "/reservation/guests"}}}}
]}}πΈ Messages stream to the client application
🔸 The client renders it using native components (Angular, Flutter, React, etc.).
🔸 The user interacts with the UI, sending actions back to the agent.
🔸 The agent responds with updated A2UI messages.
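To make the flat, ID-referenced structure more tangible, here is a toy Python sketch of the client side (my own illustration based on the message shape above, not the official renderers; all function names are hypothetical). It consumes JSON Lines messages, upserts components by ID, and rebuilds the nested tree for rendering:

import json

def handle_message(line: str, surfaces: dict) -> None:
    """Upsert the components of one A2UI message into the surface state."""
    update = json.loads(line).get("surfaceUpdate")
    if update:
        components = surfaces.setdefault(update["surfaceId"], {})
        for comp in update["components"]:
            components[comp["id"]] = comp["component"]

def resolve(components: dict, comp_id: str = "root") -> dict:
    """Expand ID references into a nested tree, starting from the root."""
    node = components[comp_id]
    for props in node.values():
        children = props.get("children", {}).get("explicitList")
        if children:
            props["resolved"] = [resolve(components, c) for c in children]
    return {comp_id: node}

# Usage: feed each streamed line, then resolve the surface for rendering.
surfaces = {}
handle_message('{"surfaceUpdate": {"surfaceId": "booking", "components": '
               '[{"id": "root", "component": {"Text": {"text": '
               '{"literalString": "Hi"}}}}]}}', surfaces)
print(resolve(surfaces["booking"]))

Because each message only upserts components by ID, the agent can stream UI updates incrementally without resending the whole tree.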
According to the article, the main benefits of the protocol are:
🔸 Security: no LLM-generated code; only a declarative description is passed to the client.
🔸 LLM-friendly: a flat structure that is easy for LLMs to generate incrementally.
🔸 Framework-agnostic: the UI structure is separated from the UI implementation.
Right now the project is in early public preview. It looks promising, especially once the REST protocol is supported (currently only A2A and AG-UI are).
#ai #news
Source: A2UI, an open-source project for agent-driven, cross-platform generative UI (Google for Developers Blog)
Make Your Docs Ready for AI
Several years ago, we wrote documents for humans. Now we write documents for AI.
And, to be honest, machines require much more structured text than humans do. Poorly structured content leads to incorrect chunking and low-quality answers during RAG retrieval.
Common recommendations for modern docs (a sample skeleton follows the list):
🔸 Use a structured format: Markdown, HTML, AsciiDoc.
🔸 Provide metadata: important dates, tags, context, and the document's goal.
🔸 Define a glossary: describe key terms and abbreviations.
🔸 Organize content hierarchy: create a clear structure using descriptive headings and subheadings.
🔸 Use lists: prefer bulleted or numbered lists over comma-separated enumerations.
🔸 Include text descriptions for visual information: duplicate important details from diagrams, charts, and screenshots in text.
🔸 Use a simple layout: avoid complex visuals and tables; prefer simple headings, lists, and paragraphs.
🔸 Keep related information together: design content so that each section contains enough context to be understood independently.
🔸 Describe external references: when referencing external concepts, provide brief context and explanation.
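As an illustration, a minimal skeleton following these recommendations might look like this (my own sketch; the service, terms, and dates are hypothetical):

---
title: Payments Service Overview
tags: [payments, architecture]
last_updated: 2026-01-10
---
# Payments Service Overview
Goal: describe how payments are processed so the flow can be understood from this document alone.

## Glossary
- PSP: Payment Service Provider, the external card processor.
- Retry window: the period during which a failed payment is retried.

## Payment Flow
1. The API validates the incoming request.
2. The request is forwarded to the PSP.
3. On failure, the payment enters the retry window (see Glossary).

## Architecture Diagram (text description)
The service consists of an API gateway, a payment worker, and a retry queue; the worker consumes requests from the queue and calls the PSP.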
As you can see, the recommendations are quite simple. Moreover, it feels like I became a machine long ago: I've hated long paragraphs without clear structure and preferred well-structured documents for years.
#ai #documentation
AI-Ready Repos: AGENTS.md
Structuring project documentation helps build a good knowledge base, but it's not enough to work effectively with a codebase. In practice, agents also need extra instruction files: AGENTS.md and SKILL.md. Let's start with AGENTS.md: what it is for and how to cook it properly.
AGENTS.md is a Markdown file that provides context, instructions, and guidelines for AI coding agents working with the repo.
Its content is added to the initial prompt (system context) when an LLM session is created. What matters is that this is a standard widely adopted by different agents, such as Cursor, Codex, Claude Code, and many others.
Common structure:
🔸 Project overview: project description, tech stack with specific versions, key folders and dependencies.
🔸 Commands: list of build and test commands with required flags and options.
🔸 Code style: the preferred code style.
🔸 Testing: commands to run different types of tests and linters.
🔸 Boundaries: do's and don'ts (e.g., never touch secrets or env configs).
🔸 Extra: PR guidelines, git workflow details, deployment instructions, etc.
Common recommendations:
🔸 Keep it short (~150 lines).
🔸 Continuously update it along with code changes.
🔸 Be specific; prefer samples over descriptions.
🔸 Improve it iteratively by adding what really works and removing what doesn't.
🔸 Use nested AGENTS.md files in large codebases; the agent reads the file closest to the code it is working on (see the layout below).
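For instance, a large monorepo could place nested files like this (a hypothetical layout):

repo/
├── AGENTS.md             # Repo-wide conventions and commands
├── services/
│   └── payments/
│       └── AGENTS.md     # Payment-service specifics override the root file
└── web/
    └── AGENTS.md         # Frontend commands, style, and test setup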
Sample AGENTS.md:
# Tech Stack
- Language: Go 1.24+
- API: gRPC
- Database: PostgreSQL 18
- Message Queue: RabbitMQ 4.2, Apache Kafka 4.1.x
- Observability: OpenTelemetry, Jaeger, Prometheus, Grafana
- Security: JWT, OAuth2, TLS
- Deployment: Docker, Kubernetes, Helm
# Build & Test Commands
- Build: `go build -o myapp`
- Test: `go test`
# Boundaries
- Never touch `/charts/secrets/` files.
- Avoid adding unnecessary dependencies.
# PR Submission
## Title Format (MANDATORY)
Issue No: User-facing description
Samples from open-source projects:
- RabbitMQ Cluster Operator
- Kubebuilder
- Airflow
- Headlamp
Additionally, I recommend reading How to write a great agents.md: Lessons from over 2,500 repositories on the GitHub blog. I didn't get how they measured the effectiveness of the analyzed instructions, but the overall recommendations can still be helpful.
#ai #agents #engineering #documentation
Source: agents.md: AGENTS.md is a simple, open format for guiding coding agents. Think of it as a README for agents.
Fighting the Calendar Chaos
The more responsibilities you have, the more meetings you get. Unfortunately, it's a typical story.
At some point there are so many meetings in the calendar that it's not clear how to manage the day.
To bring it back to a manageable state, I use the following tips:
🔸 Book slots in the calendar for my own work tasks.
🔸 Decline meetings I don't think I need to attend (no clear agenda, already delegated, etc.).
🔸 Use color-coded categories to classify meetings. I mostly sort them by importance and urgency, for example:
- Black: urgent and/or important, I must attend.
- Red: not urgent but important, need to attend.
- Orange: not so important; makes sense to attend if I have time (e.g., some regular meetings, or meetings where I believe my team can make a decision without me).
- Gray: not needed; scheduled just for my information.
You can use whatever categories are convenient for you, but I don't recommend having more than 5: it's hard to keep them all in your head. Additionally, categorizing meetings helps when reviewing the week to check whether there was enough time for important tasks. If not, it's a sign to reflect and change something.
#softskills #tips #productivity
AI-Ready Repos: Skills
Previously we checked how to cook AGENTS.md; today we'll look at another important part of agent configuration: skills.
An agent skill is a detailed workflow description or checklist for performing a specific task. Technically, a skill is a folder containing a `SKILL.md` file, scripts, and additional resources. Most importantly, it's a standard already supported by different AI agents.
Skills structure in a repo:
skills/
└── my-skill/
    ├── SKILL.md      # Required: instructions + metadata
    ├── scripts/      # Optional: executable code
    ├── references/   # Optional: documentation
    └── assets/       # Optional: templates, resources
SKILL.md is a standard Markdown file used to define and document agent capabilities. It consists of:
- Metadata (~100 tokens): `name` (the skill name) and `description` (when to use it).
- Instructions (< 5000 tokens recommended): the main part of the file, loaded when the skill is activated.
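A minimal example (my own sketch; the skill name, steps, and referenced files are hypothetical):

---
name: release-notes
description: Prepare release notes from merged pull requests. Use when the user asks to draft or update release notes.
---
# Release Notes
1. Collect the PRs merged since the last release tag.
2. Group the changes: features, fixes, breaking changes.
3. Fill in the template from references/template.md.
4. Verify the links using scripts/check_links.sh.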
The overall flow is as follows (a code sketch follows the list):
1. Discovery: At startup, the agent loads the name and description of each available skill.
2. Activation: When a task matches a skill's description, the agent reads the full `SKILL.md` instructions into context.
3. Execution: The agent follows the instructions, loading referenced files or executing bundled code if needed.
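Here is a toy Python sketch of that flow (my own illustration of the lazy-loading idea, not a real agent implementation):

from pathlib import Path

def parse_frontmatter(text: str) -> dict:
    """Minimal parser for `key: value` lines between the --- markers."""
    block = text.split("---")[1]
    pairs = (line.split(":", 1) for line in block.strip().splitlines())
    return {key.strip(): value.strip() for key, value in pairs}

def discover(skills_dir: str) -> dict:
    """Discovery: at startup, load only each skill's name and description."""
    catalog = {}
    for skill_md in Path(skills_dir).glob("*/SKILL.md"):
        meta = parse_frontmatter(skill_md.read_text())
        catalog[meta["name"]] = {"description": meta["description"],
                                 "path": skill_md}
    return catalog

def activate(catalog: dict, name: str) -> str:
    """Activation: read the full SKILL.md into context only when needed."""
    return catalog[name]["path"].read_text()

Only the short description competes for space in the base prompt; the ~5000-token instructions are loaded only when the skill actually fires.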
The main idea is to load instructions lazily, preventing prompt sprawl and context rot, where LLMs lose focus on the specific task and start making mistakes.
What I like about skills is that a skill is an artifact with a clear specification and development lifecycle. It not only collects knowledge about tools and procedures, but also makes AI outputs more testable and predictable.
#ai #agents #engineering #documentation
The Words You Use
Have you ever been in meetings where people say the same thing in different words and still don't understand each other? Or maybe you've been in the opposite situation: you explain something, but the audience doesn't get it; then someone else repeats the same idea and, surprisingly, it clicks.
Strangely enough, it all comes down to the words you use.
Words reflect the picture of the world in someone's head.
To make communication effective, pay attention to what a person says and how they say it, what definitions they use and what terms they like. If this is someone you work with regularly, you can do a deeper analysis and understand what is important for that person and why.
Then start using the same terms, metaphors, and definitions. If you fit into a person's picture of the world, it becomes much easier to share your ideas with that person. You start speaking the same language, using the same vocabulary, and stop wasting time on pointless arguing.
#tips #communications
Can you see the forest for the trees?
How often have you seen developers stuck in code review, discussing some method optimization or "code excellence"? Spending hours or even days trying to make the code perfect? Did that really help build a well-architected solution?
Developers often get stuck on small details and completely lose sight of the bigger picture. That's a very common mistake I see in engineering teams. As a result, you get perfect classes or functions and a complete mess in the overall structure.
A proper review should always start with a bird's-eye view:
🔸 Component structure: Are the changes implemented in the components/services that are actually responsible for this logic?
🔸 Module structure: Are the changes in the modules you expect them to be in (public vs private, pkg vs internal, etc.)?
🔸 Public contracts: Review how your APIs will be used by other parties. Are they clear, convenient, easy to use, and easy to extend?
🔸 Naming: Are module, class, and function names clear and easy to understand? Do they avoid duplicating existing entities?
🔸 Data model: Is the domain modeled correctly? Does the model follow the single-responsibility principle?
🔸 Testing: Are the main cases covered? What about negative scenarios? Is there a proper failure-handling approach?
In most cases, there's no point in reviewing code details until the items above are finalized. The code will likely be rewritten, maybe even more than once.
That's why specific lines of code should be the last thing to check.
Details are cheap to fix.
Structure and contracts are not.
#engineering #codereview