Google just quietly dropped Code Wiki, a platform that transforms any repository into interactive documentation
The tool automatically maps your entire project, generates diagrams, and even creates video walkthroughs using NotebookLM. Plus, you can chat with Gemini to clarify anything you're confused about. Oh, and it's completely free.
Definitely bookmarking this one 🤩
Google Released Their Cursor Alternative
So Google released Antigravity (https://antigravity.google), their answer to Cursor and other AI coding tools.
The interesting part? They're positioning it differently. Instead of just "write code faster," they're saying:
focus on building solutions, not writing individual lines of code. The emphasis is on AI agents and an integrated AI experience throughout the development flow.
UPD: Google has already published an introductory course
Let me know what you think. If there's enough interest, I'll dive deep and share what I find.
💸 Save Money on Expo Builds
Found a really cool tool that helps generate GitHub Actions workflows and build Expo apps on GitHub's free plan.
The basic Expo plan at $19/month gives you at most:
· 30 Android builds or
· 15 iOS builds
With expobuilder, you can use GitHub Actions' 2000 free minutes, which is roughly enough for:
· 200 Android builds or
· 20 iOS builds per month (macOS runners burn free minutes at 10× the rate)
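Rough math behind those numbers, as a quick sketch (the ~10-minute average build time is my assumption; real build times vary by project):

```typescript
// Back-of-the-envelope check of the free-tier math.
// Assumption: one build takes roughly 10 runner minutes (varies per project).
const freeMinutesPerMonth = 2000;  // GitHub Actions free tier
const avgBuildMinutes = 10;        // assumed average build duration
const macosMultiplier = 10;        // macOS runners consume free minutes at 10x

const androidBuilds = Math.floor(freeMinutesPerMonth / avgBuildMinutes);                 // ~200
const iosBuilds = Math.floor(freeMinutesPerMonth / (avgBuildMinutes * macosMultiplier)); // ~20

console.log({ androidBuilds, iosBuilds });
```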
How does it work?
· expobuilder generates a workflow that uses expo-cli --local to build on GitHub runners
· you need an Expo account and a generated Expo Token
· even if you're not familiar with GitHub Actions, you can set up a CI/CD pipeline from scratch by following the instructions
Setting up secrets:
· you only need EXPO_TOKEN to get started
· for notifications and store publishing, add the other keys
Build storage:
· artifacts are saved in GitHub Releases by default, but you can connect your own storage
This is a legit money-saver if you're building frequently.
Have you tried building Expo apps with GitHub Actions? Would love to hear your experience!
Hi guys! Since this channel is growing, I want to get to know you better: what's your stack?
(This helps me know what to share more of)
Anonymous Poll
· 22% Frontend (web)
· 14% Backend
· 14% Mobile (native or cross-platform)
· 39% Full-stack
· 5% DevOps / Infrastructure
· 6% Other
A new interview with Ilya Sutskever has recently been published
Here are the main points he talked about:
1. Models today can crush benchmarks, ace IQ tests, and solve olympiad-level problems. But they struggle with real-world tasks. The current training approach has hit a ceiling.
2. The "throw more compute at it" formula is exhausted.
What matters now isn't adding more power, it's discovering new training methods. We have enough compute, but it's not delivering the exponential gains we used to see. The focus is shifting to new algorithms and research with existing models.
3. The difference between a model and a human? Humans learn fast from tiny amounts of data. We literally build our worldview from fragments of information and self-correct along the way. AI needs to consume entire knowledge bases and still struggles with context. Bottom line: humans are way more efficient learners right now.
4. Interesting take on AGI: it's not a model that knows everything. It's a model that knows how to learn. When that happens, progress will accelerate dramatically. Why? Millions of AI workers learning like humans, but faster. That's going to shake up the job market hard.
5. AI can't verify its own actions yet. Humans have emotions, intuition, that gut feeling when something's off: it's a feedback system. AI just executes functions. Without this mechanism, it's unreliable.
6. Big models will be rolled out gradually so society can adapt. Just like GPT existed and functioned for 3 years before being shown to the public.
The overall picture? We're not racing toward one massive Skynet. Instead, we're heading toward specialized AIs, each mastering its own domain. Once we crack that, we'll clone millions of copies, and that's when the real shift begins. Some roles will vanish, others will survive.
So what should we do? Learn to work with these tools, not against them. And double down on the skills that can't be automated.
Watch the interview
Ilya co-founded OpenAI and was one of the key minds behind GPT. In 2024, he left OpenAI to start Safe Superintelligence Inc, a company focused on building superintelligence with safety at its core.
DeepSeek introduces open-source V3.2 models with Speciale variant matching Gemini-3.0-Pro on hard reasoning
DeepSeek's new V3.2 models arrive like a sequel that actually fixes the plot. The setup is simple: developers want open models that think, plan, and act with the precision of top proprietary systems.
The problem is that long-context reasoning and agent workflows usually break when attention costs spike or post-training budgets run thin.
The insight came from studying where open models fall short: slow attention, weak RL signals, and limited agent data. DeepSeek answers with a redesigned attention layer and a scaled reinforcement learning pipeline that treats reasoning as a first-class target.
The standout moment comes from V3.2-Speciale, which reaches gold-level scores on the 2025 IMO, CMO, ICPC, and IOI and matches Gemini-3.0-Pro on complex reasoning.
Key features and results:
• DeepSeek Sparse Attention reduces long-context compute without hurting accuracy.
• Reinforcement learning uses over 10% of pre-training compute to sharpen reasoning.
• Agent data spans 1,800 environments and 85,000 prompts for stronger generalization.
• V3.2-Speciale matches Gemini-3.0-Pro on demanding reasoning benchmarks.
• Open weights on Hugging Face support fine-tuning with LoRA or full training.
#WeeklyDigest №1
🔹 OpenAI released GPT-5.1 with 400K context, 128K output, priced at $1.25/$10 per million tokens. The model scores 76.3% on SWE-bench Verified, 88.1% on GPQA Diamond, and 94.0% on AIME 2025.
🔹 Codex also got updated, including the most powerful version yet: GPT-5.1 Codex Max. It scores 77.9% on SWE-bench Verified. However, it's currently only available in Codex CLI and Codex plugins. They promise to add it to the API soon.
🔹 xAI released Grok 4.1: the model became much more empathetic and sensitive, with improved creative writing. Though it falls behind GPT-5.1 on major benchmarks and isn't available in the API yet. However, they did add grok-4-1-fast-reasoning and grok-4-1-fast-non-reasoning versions to the API. 2M context window, $0.2/$0.5 per million tokens. Overall, not particularly interesting for programming tasks; waiting for the updated Grok Code.
🔹 One of the most interesting developments: a new data structuring format for working with models is gaining momentum, Token-Oriented Object Notation (TOON). It is more token-efficient than XML, JSON, and even CSV, and there are already plenty of adapters and converters available for it (see the sketch right after this digest).
🔹 Cursor in version 2.1 improved Plan Mode, updated the search functionality (now faster and more accurate), and added AI Code Reviews with the option to run automatically on every commit (can be enabled in settings).
🔹 Qoder released a beta version of their IDE for Linux and launched plugins for JetBrains IDEs.
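To make the token-saving idea concrete, here's a rough TypeScript sketch of a TOON-style encoder for uniform object arrays. It's my own simplification to illustrate the principle (field names are declared once instead of repeated per row), not the official TOON library or exact spec:

```typescript
// Rough illustration of the TOON idea: declare the fields once, then emit compact rows.
// Simplified sketch only - not the official TOON implementation or exact syntax.
type Row = Record<string, string | number | boolean>;

function toToonLike(key: string, rows: Row[]): string {
  const fields = Object.keys(rows[0]);
  const header = `${key}[${rows.length}]{${fields.join(",")}}:`;
  const body = rows.map((r) => "  " + fields.map((f) => String(r[f])).join(","));
  return [header, ...body].join("\n");
}

const users = [
  { id: 1, name: "Alice", role: "admin" },
  { id: 2, name: "Bob", role: "user" },
];

console.log(toToonLike("users", users));
// users[2]{id,name,role}:
//   1,Alice,admin
//   2,Bob,user
// Compare with JSON, which repeats "id", "name", and "role" for every element.
```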
---
P.S. Hey everyone! This is the first edition of our new Weekly Digest. I'll be sharing the most interesting dev and AI news every week. Let me know what you think!
#TipsAndTools
Hey frontenders, this one's for you: the legendary icon library and toolkit just got even better
It recently updated again and grew to an impressive scale: 63,119 icons, 30 styles, a full SVG library, and font ligatures that work like normal text.
What's new:
▫️ Refreshed design with new icon sets
▫️ Official NPM packages for React and other frameworks
▫️ CDN, SVG packages, fonts: use whatever works for you
▫️ Everything rebuilt from scratch, no legacy baggage
Here's the link, enjoy!
I enjoyed Andrej Karpathy's latest insight: clean, simple, and actually useful.
He reminds us that LLMs are simulators, not independent entities. And honestly, this shift in thinking changes how you interact with them.
Instead of asking:
"What do you think about xyz?"
Try this:
"What group of people would be best suited to discuss xyz? What would they say?"
There's no "you" in there. The model doesn't hold opinions or think things through the way we do. It hasn't reflected on the topic and formed a stance. When you force it into a "you" framing, it still responds, but it's essentially adopting a personality vector drawn from training data statistics and simulating it.
That's fine. It works. But there's far less mystery here than people assume when they ask questions to "artificial intelligence."
This is what good prompting advice looks like)
#TipsAndTools
Found an interactive tool for learning TypeScript types: Visual Types.
It's basically everything from the TypeScript handbook, but way more visual and hands-on. Instead of just reading docs, you click through examples: some using objects, others showing Venn diagrams and charts to illustrate how types work.
And it's not just about basic types. There's unknown vs any, conditional types, common patterns, and even mapped types. Pretty comprehensive.
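For a quick taste of the kind of thing covered, here's a small hand-written example (mine, not taken from the site) of a conditional type and a mapped type:

```typescript
// Conditional type: extract the element type of an array, otherwise keep T as-is.
type ElementOf<T> = T extends (infer U)[] ? U : T;

type A = ElementOf<string[]>; // string
type B = ElementOf<number>;   // number

// Mapped type: make every property optional and readonly.
type ReadonlyPartial<T> = {
  readonly [K in keyof T]?: T[K];
};

interface User {
  id: number;
  name: string;
}

const draft: ReadonlyPartial<User> = { name: "Alice" }; // id can be omitted, nothing is mutable
```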
Worth bookmarking if you're into 👩‍💻
#WeeklyDigest №2
🔹 Google launches Workspace Studio, enabling no-code agents that automate tasks across Gmail, Docs, and Sheets
🔹 Google presents Gemini 3 Pro, along with instructions that improve agentic benchmark performance by roughly 5%. It features a 1M token context window, 64K output, pricing of $2-$4/$12-$18 per million tokens, and a training data cutoff in early 2025. Scores: 91.9% on GPQA Diamond, 37.5% on HLE, 95% on AIME 2025, 76.2% on SWE-bench Verified, 2439 Elo on LiveCodeBench Pro, 45.1% on ARC-AGI-2, 54.2% on Terminal-Bench 2.0
🔹 Anthropic launched their new model, Claude Opus 4.5. This is the company's flagship model, specifically optimized for development. It offers 200K token context, 32K output, priced at $5/$25 per million tokens, which is more expensive than competitors. That said, the model shows strong (and in some cases, best-in-class) results: 80.9% on SWE-bench Verified, 87.0% on GPQA Diamond, 59.3% on Terminal-Bench 2.0
🔹 Mistral launches Devstral 2, an open-source coding model, alongside its first autonomous Vibe CLI agent
🔹 OpenAI reports 320× growth in enterprise reasoning tokens as organizations integrate AI into workflows
OpenAI's new State of Enterprise AI report drops like a reality check for anyone building with frontier models. It shows that enterprises now treat AI less like a shiny tool and more like infrastructure.
The standout moment comes from a single number: reasoning token consumption grows 320× in one year, a clear signal that real workloads now run on AI at scale.
Teams experiment with AI, see early wins, but struggle to turn scattered prompts into dependable systems. The report shows how organizations fix that by shifting to structured workflows: predefined steps, shared context, and consistent execution.
The breakthrough appears when companies use these workflows inside engineering, data, and product pipelines. They move from trying AI to relying on it.
Key features and results:
- Projects and Custom GPTs grow 19×, standardizing multi-step tasks.
- ChatGPT Enterprise messages increase 8×, with workers sending 30% more.
- Workers save 40-60 minutes daily through AI-assisted workflows.
- Frontier users send 6× more messages than typical workers.
I found a pretty cool piece of software
Protect your spine: an AI assistant that watches your posture through your webcam 🤨
If you start turning into a shrimp or faceplanting into your monitor, the app will gently (or persistently) remind you to straighten up.
What it does:
• AI analyzes your neck and shoulder angles in real-time through your camera
• When it detects "tech neck syndrome," you get a notification with recommendations
• Assigns your posture a score (0-100) and tracks your progress over time
• Fully configurable sensitivity and check intervals, so it won't spam you every second
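For intuition, here's what the neck-angle part of such a check could look like: a hand-written TypeScript sketch of the general idea, not the app's actual code (the 15°/45° thresholds are my assumptions). It takes ear and shoulder coordinates that a pose-estimation model would supply and turns them into a tilt angle and a 0-100 score:

```typescript
// Simplified "tech neck" check - not the app's real implementation.
// Assumes a pose-estimation model already provides 2D landmarks in image coordinates.
interface Point { x: number; y: number; }

// Angle between the shoulder->ear segment and the vertical axis, in degrees.
// 0 = head stacked over the shoulder; larger = more forward head tilt.
function neckTiltDegrees(ear: Point, shoulder: Point): number {
  const dx = ear.x - shoulder.x;
  const dy = shoulder.y - ear.y; // y grows downward in image coordinates
  return Math.abs((Math.atan2(dx, dy) * 180) / Math.PI);
}

// Map the tilt to a rough 0-100 posture score (thresholds are assumptions).
function postureScore(tiltDeg: number, goodBelow = 15, badAbove = 45): number {
  if (tiltDeg <= goodBelow) return 100;
  if (tiltDeg >= badAbove) return 0;
  return Math.round(100 * (1 - (tiltDeg - goodBelow) / (badAbove - goodBelow)));
}

const tilt = neckTiltDegrees({ x: 260, y: 180 }, { x: 230, y: 300 });
console.log(`tilt ~${tilt.toFixed(1)} deg, score ${postureScore(tilt)}`);
```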
#TipsAndTools
#WeeklyDigest №3
🔹 Cursor releases Debug Mode so agents debug using real runtime logs instead of static guesses
🔹 OpenAI releases GPT-5.2, boosting reasoning, coding reliability, and long-context performance for production agents
🔹 Google launches Code Wiki, an automated system that keeps repo documentation continuously up to date
🔹 NVIDIA releases Nemotron 3, an open model family built for large-scale multi-agent AI systems
🔹 Anthropic rolls out syntax highlighting, prompt suggestions, and a plugin marketplace in Claude Code
#TipsAndTools
OpenAI just dropped GPT-Image-1.5, and it's genuinely impressive 🔥
The new model follows prompts much more accurately, handles edits without breaking details, and generates images 4× faster than before. It's now free in ChatGPT and available via API.
What I love: you can edit existing images iteratively without losing quality; faces, text, and composition stay intact. Plus there's a dedicated Images panel with preset styles to speed up workflows.
Been testing it for two days now. If you do any work with AI images, definitely worth checking out.
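If you want to try the iterative-edit flow from code, here's a minimal sketch using the OpenAI Node SDK. The model id string is an assumption based on the announcement name (check the docs for the exact id), and the prompts are just placeholders:

```typescript
// Minimal generate-then-edit sketch with the OpenAI Node SDK.
// The model id is assumed from the announcement name - verify it against the API docs.
import fs from "node:fs";
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function main() {
  // 1) Generate a first draft.
  const draft = await client.images.generate({
    model: "gpt-image-1.5", // assumed id; "gpt-image-1" is the previously documented model
    prompt: "Flat-style illustration of a developer reviewing a pull request",
    size: "1024x1024",
  });
  const draftB64 = draft.data?.[0]?.b64_json;
  if (draftB64) fs.writeFileSync("draft.png", Buffer.from(draftB64, "base64"));

  // 2) Iterate on the same image instead of regenerating from scratch.
  const revised = await client.images.edit({
    model: "gpt-image-1.5",
    image: fs.createReadStream("draft.png"),
    prompt: "Keep everything the same, but make the laptop sticker say LGTM",
  });
  const revisedB64 = revised.data?.[0]?.b64_json;
  if (revisedB64) fs.writeFileSync("revised.png", Buffer.from(revisedB64, "base64"));
}

main();
```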
Official prompting guide
Big news
After months of work, my private developer community is finally live 🥹
Ars Dev Hub
Join now to get:
· 5+ practical AI courses
· WeeklyDigest
· Collection of Tools & Resources
· MondaySync
· Real community
· and more...
First 50 members lock in $5/month for life (then $25/month)
Try it risk-free for 7 days
See you inside 🔥