Byte by Byte
Bite your bit of tech information and news here. Discuss in our Chip Chat group!
We got GIMP 3.0 before GTA 6
"Model Context Protocol (MCP) is an AI tool calling standard that has been rapidly gaining adoption over the past few months. MCP tools give LLMs a standardized way to call functions, look up data, and interact with the world. Anthropic created the protocol and built the first GitHub MCP server, which grew to be one of the most popular MCP servers in the expanding ecosystem. We are excited to take ownership of the server and continue its development."

https://github.blog/changelog/2025-04-04-github-mcp-server-public-preview/
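For context, MCP messages are plain JSON-RPC 2.0, with methods like "tools/list" and "tools/call" defined by the spec. A minimal sketch of what a tool-call request looks like on the wire - the tool name and arguments here are made up for illustration, not the GitHub server's actual schema:

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request in the shape MCP uses for tool calls.

    The method name "tools/call" comes from the MCP spec; the tool name
    and arguments passed by the caller are whatever the server advertises
    via "tools/list".
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical example: asking a GitHub-style MCP server for an issue.
msg = make_tool_call(1, "get_issue", {"owner": "octocat", "repo": "hello-world", "issue_number": 7})
```

The response comes back as a matching JSON-RPC result with the same `id`, which is what lets the LLM runtime pair answers to calls.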
OpenAI has just released a FOSS CLI tool for “developers who already live in the terminal and want ChatGPT‑level reasoning plus the power to actually run code, manipulate files, and iterate – all under version control”.

https://github.com/openai/codex
Babe wake up, Microsoft released a 1-bit LLM under MIT that is optimized for running on CPUs: microsoft/bitnet-b1.58-2B-4T
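The "1-bit" is really ~1.58 bits: weights are constrained to {-1, 0, +1}. A toy pure-Python sketch of absmean-style ternary quantization in that spirit (an illustration of the idea, not the model's actual kernels):

```python
def absmean_ternary(weights):
    """Quantize a list of float weights to {-1, 0, +1}:
    scale by the mean absolute value, then round and clip.
    A toy sketch of BitNet b1.58-style quantization, not the real kernel."""
    scale = sum(abs(w) for w in weights) / len(weights) or 1.0
    quant = [max(-1, min(1, round(w / scale))) for w in weights]
    return quant, scale

# Small weights collapse to 0, large ones saturate at +/-1.
q, s = absmean_ternary([0.9, -0.05, -1.2, 0.4])
```

Ternary weights turn matrix multiplication into additions and subtractions, which is exactly why this runs well on plain CPUs.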
Salvatore Sanfilippo (aka antirez) is back!
After stepping away from Redis for some time, he's returned with a major contribution: a brand-new data type called vector sets. This addition brings semantic similarity search to Redis, making it possible to query based on meaning rather than exact matches.

Check it out: https://redis.io/blog/announcing-vector-sets-a-new-redis-data-type-for-vector-similarity
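For those new to the idea: similarity search ranks stored vectors by closeness to a query vector instead of matching exact values. A minimal pure-Python sketch of the concept using cosine similarity - this shows the idea only, not Redis's implementation or its command syntax:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity: 1.0 for vectors pointing the same way, 0.0 for orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def most_similar(query, items):
    """Rank stored (name, vector) pairs by similarity to the query vector."""
    return sorted(items, key=lambda kv: cosine(query, kv[1]), reverse=True)

# Toy 3-d "embeddings"; real systems use hundreds of dimensions.
catalog = [("cat", [1.0, 0.9, 0.0]), ("car", [0.0, 0.1, 1.0]), ("kitten", [0.9, 1.0, 0.1])]
ranked = most_similar([1.0, 1.0, 0.0], catalog)  # "cat" and "kitten" outrank "car"
```

Swap the vectors for real embeddings and this is semantic search in a nutshell; a database's job is doing it fast over millions of items.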
A must-watch for anyone interested in the future of #AI. In this interview for NVIDIA Developer, Yann LeCun - Turing Award winner and Chief AI Scientist at Meta - shares his take (and a very robust contrarian opinion, IMHO) on the limits of today’s language models.

In the interview, he argues that LLMs (like OpenAI's GPT or Meta's LLaMA) are not the path to true artificial general intelligence. They're impressive, yes, but fundamentally constrained by the Transformer architecture. Scaling up won’t solve this. Why? Because human intelligence isn’t just about language or token prediction - it’s about understanding, reasoning, and interacting with the physical world through systems more akin to what psychologists call System 1 and System 2.

Think about a cat: it can leap with precision without any concept of physics, or of the languages - mathematical or natural - that could explicitly describe it. Ask your local orange alley cat about it, if you don't believe me.

LeCun also shares some striking numbers to highlight just how limited language-based learning really is:

• Human language processing has a very low data rate - roughly 12 bytes per second. That's about 4.5 words per second, with each word encoded in roughly 2-3 bytes.

• Vision operates on an entirely different scale. Our two optical nerves transmit a combined stream of roughly 20 megabytes per second, based on the million fibers in each nerve sending about 10 bytes per second.

• Over just four years of being awake, a child accumulates around a petabyte of visual experience - far more than the total training data of even the largest language models.


To put it plainly:

Visual perception delivers over a million times more data per second than reading or listening to language (about 1.7 million, given the rates above), and a preschooler has already taken in 50 times more data than what goes into the largest text-trained LLMs.
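These figures are easy to sanity-check. A quick back-of-the-envelope script, using the rates quoted above plus one assumption of mine (16 waking hours per day):

```python
# Back-of-the-envelope check of the rates above. Inputs: ~12 B/s for
# language, one million fibers per optic nerve at ~10 B/s each, and
# 16 waking hours per day (my assumption for "awake").
LANGUAGE_BPS = 12                        # ~4.5 words/s at ~2-3 bytes/word
VISION_BPS = 2 * 1_000_000 * 10          # two optic nerves: 20 MB/s combined

rate_ratio = VISION_BPS / LANGUAGE_BPS   # how much denser vision is: ~1.7 million

seconds_awake = 4 * 365 * 16 * 3600      # four years at 16 h/day
child_bytes = VISION_BPS * seconds_awake # ~1.7e15 bytes: about a petabyte
```

The numbers line up: 20 MB/s over four waking years does land in petabyte territory.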


LeCun’s takeaway is that the architecture doing the modeling, and the structure of the data itself, matter just as much as the sheer quantity of data. Self-Supervised Learning thrives on redundancy, and sensory inputs (especially vision) are packed with the kind of statistical richness that language alone can’t provide.

Most of what we know - and certainly what animals know - comes from experience, not explanation. Language is a brilliant tool, but it's the final layer, not the foundation.

I explored this in my own way in Hangman and Circles, where I reported some results from research by the Apple AI team, reflected on the limits of linguistic abstraction, and talked about why it may be misleading us in the pursuit of AGI:

https://t.me/bytebaibyte/19

This interview is amazing, and super-fun too - definitely worth your time:

https://youtu.be/eyrDM3A_YFc?si=oMiDKJAXUYIjfjIu

PS. I also found watching the ~2h interview on the Lex Fridman Podcast extremely interesting - it dives deeper into the topics mentioned during the Nvidia Developer interview.
In the upcoming Ubuntu 25.10, Canonical plans to use an alternative to sudo that's being developed by the sudo-rs project and written in Rust. In March, a similar decision was made to replace GNU Coreutils with uutils, which is also written in Rust. There are currently initiatives under consideration to replace zlib and ntpd with zlib-rs and ntpd-rs.

https://www.phoronix.com/news/Ubuntu-25.10-sudo-rs-Default
via Valentina Lenarduzzi on LinkedIn:

"Our paper "Does #Microservices Adoption Impact the Velocity? A #Cohort Study" has been accepted at Empirical Software Engineering Journal

Microservices are often praised for improving development speed thanks to their modular and independent nature. But do they actually lead to faster feature delivery and bug fixing? In our latest study, we explored this question using a retrospective #Cohort design - a methodology widely used in medical research but still rare in software engineering.

What we did: We conducted the first large-scale empirical study comparing GitHub projects built with #Microservices from the start against similar monolithic projects, using a #Cohort study to assess causality, not just correlation.

What we found: Surprisingly, no statistically significant difference in development velocity was observed. Even after controlling for confounding variables, #Microservices adoption didn't show a measurable impact on how quickly projects deliver features or fix bugs.

Why it matters: This study not only challenges assumptions about #Microservices and velocity, but also introduces a powerful empirical methodology to our field. We're excited to contribute one of the first works applying cohort studies in software engineering research."

https://www.researchgate.net/publication/391482952_Does_Microservice_Adoption_Impact_the_Velocity_A_Cohort_Study
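The statistical heart of such a comparison - do two cohorts' velocity numbers differ beyond chance? - can be sketched with a permutation test. The data and the test choice below are mine for illustration, not the paper's actual analysis:

```python
import random

def permutation_test(a, b, n_perm=10_000, seed=0):
    """Two-sided permutation test on the difference of means.

    Shuffles the pooled samples and counts how often a random split
    produces a mean difference at least as extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / n_perm  # p-value estimate

# Toy "days to close an issue" samples for two cohorts (made up, not the paper's data).
micro = [3.1, 4.0, 2.8, 5.2, 3.9, 4.4, 3.5, 4.1]
mono  = [3.4, 3.8, 3.0, 4.9, 4.2, 4.0, 3.6, 4.3]
p = permutation_test(micro, mono)  # large p: no detectable difference
```

With overlapping samples like these the p-value stays far above 0.05 - the same "no significant difference" shape the study reports at scale.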
WOAH

[...] we propose a new RLVR paradigm called Absolute Zero, in which a single model learns to propose tasks that maximize its own learning progress and improves reasoning by solving them, without relying on any external data. [...]

https://arxiv.org/abs/2505.03335
distrust any business or enterprise that claims otherwise. They clearly don't understand the concepts of agency and accountability. And you don't want to deal with anyone who hasn't realized what makes them different from a machine. You'll be disappointed.
One of the people behind TypeScript's recent incredible performance improvements has been laid off by Microsoft, after 18 years at the company.
Me talking with a colleague yesterday vs the news today
Man, nobody told me Gemini Advanced on 2.5 Pro was this good. First drafts always look great, no need for revision. Its answers are just straight to the point, no introductory bootlicking like "oh yours is a very good question". It gulps down whatever context file and doesn't need any iteration or prompt fragmentation.

I love it. Definitely not going back to chat gippity after this.
Yesterday Microsoft killed the paid AI code editor market (in a good way):

"We will open source the code in the GitHub Copilot Chat extension under the MIT license"

https://code.visualstudio.com/blogs/2025/05/19/openSourceAIEditor