Google boosts Gboard’s typing and proofreading by training AI with privacy-preserving synthetic data, never exposing real user info. They use clever prompting and federated learning to make models smarter and safer for everyone.
https://research.google/blog/synthetic-and-federated-privacy-preserving-domain-adaptation-with-llms-for-mobile-applications/
research.google
Synthetic and federated: Privacy-preserving domain adaptation with LLMs for mobile applications
The Synthetic Edge
Practical wins with synthetic data—and a federated path to real-world signal.
• Where synthetic actually helps (and where it breaks)
• Pre-training vs. post-training: when to use synthetic
• FL × Synthetic: Powerful Duo
Read: https://botsnblocks.substack.com/p/the-synthetic-edge
Gemma 3 270M is a super-efficient, compact AI model built for fast, private, and specialized tasks on any device.
https://developers.googleblog.com/en/introducing-gemma-3-270m/
Googleblog
Google for Developers Blog - News about Web, Mobile, AI and Cloud
Explore Gemma 3 270M, a compact, energy-efficient AI model for task-specific fine-tuning, offering strong instruction-following and production-ready quantization.
Google's PH-LLM, a Gemini Ultra-based AI, outperformed human experts in sleep and fitness coaching using wearable data.
It delivers personalized health insights and recommendations, marking a leap in AI-powered personal wellness.
https://www.nature.com/articles/s41591-025-03888-0
Nature
A personal health large language model for sleep and fitness coaching
Nature Medicine - A large language model designed for health monitoring provides personalized sleep and fitness predictions, insights and advice.
Perplexity Is Launching a New Revenue-Share Model for Publishers
https://www.wsj.com/business/media/perplexity-ai-search-publisher-revenue-507987e5
The Wall Street Journal
Perplexity Is Launching a New Revenue-Share Model for Publishers
Media companies will get paid out of a $42.5 million pool when their articles are used by Perplexity’s web browser.
The report examines how four cryptographic primitives (MPC, FHE, TEEs, and ZK proofs) are converging into a programmable privacy infrastructure layer that reconciles onchain transparency with confidentiality.
https://a1research.io/blog/programmable-privacy-the-next-multi-billion-dollar-infrastructure-layer
a1research.io
Programmable Privacy: The Next Multi-Billion Dollar Infrastructure Layer - A1 Research
A1 Research is a research collective focused on producing high-impact, thesis-driven insights across the crypto, DeFi, and capital markets landscape
Apple's FastVLM + MobileCLIP2 are now live on Hugging Face:
→ Up to 85x faster
→ 3.4x smaller
→ Runs in real time, directly in your browser
→ Even does live video captioning 100% locally
https://huggingface.co/apple
huggingface.co
apple (Apple)
Org profile for Apple on Hugging Face, the AI community building the future.
Understanding non-determinism in LLM inference: the true source is not floating-point randomness alone, but the lack of batch invariance in kernels, whose reduction order changes with batch size under varying server load.
https://thinkingmachines.ai/blog/defeating-nondeterminism-in-llm-inference/
Thinking Machines Lab
Defeating Nondeterminism in LLM Inference
Reproducibility is a bedrock of scientific progress. However, it’s remarkably difficult to get reproducible results out of large language models.
For example, you might observe that asking ChatGPT the same question multiple times provides different results.…
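The batch-handling point comes down to floating-point reduction order: float32 addition is not associative, so a kernel that sums the same values in a different order (for example, because the batch was split differently) can return a slightly different result. A minimal NumPy sketch of the effect (illustrative only, not code from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000).astype(np.float32)

# Same numbers, three reduction orders.
s_forward = np.float32(0.0)
for v in x:                      # strict left-to-right accumulation
    s_forward += v

s_reverse = np.float32(0.0)
for v in x[::-1]:                # same values, reversed order
    s_reverse += v

# Chunked reduction, like a kernel processing the data in 100 tiles.
s_chunked = np.float32(0.0)
for chunk in np.split(x, 100):
    s_chunked += np.sum(chunk)

print(s_forward, s_reverse, s_chunked)
```

The three sums typically differ in their low-order bits. In an LLM forward pass, such tiny numeric differences can flip an argmax between two near-tied logits and change the sampled token, which is why fixing the sampler alone does not make inference reproducible.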
Google and Coinbase have integrated the x402 protocol into Google's Agentic Payments Protocol (AP2), empowering AI agents to process payments autonomously using stablecoins for micropayments and automation. Demonstrated via Lowe's Innovation Lab, this enables agents to monetize services, pay each other, and handle tasks like shopping and checkout seamlessly.
https://www.coinbase.com/developer-platform/discover/launches/google_x402
https://www.coinbase.com/developer-platform/discover/launches/google_x402
Coinbase
Google Agentic Payments Protocol + x402: Agents Can Now Actually Pay Each Other
Agents can already talk to each other. And now, with x402 within Google’s new AP2, they can pay each other too. Stablecoins make this possible at the speed of code, unlocking micropayments and new models of automation that legacy rails simply can’t support.
Less is More
With only 7M parameters, TRM obtains 45% test-accuracy on ARC-AGI-1 and 8% on ARC-AGI-2, higher than most LLMs (e.g., DeepSeek-R1, o3-mini, Gemini 2.5 Pro) with less than 0.01% of their parameters.
This paper from Samsung challenges how we design training architectures and how much compute strong reasoning requires.
https://arxiv.org/pdf/2510.04871v1