🔥 Jack Dorsey Just Fired 4,000 People. He Blames AI. And He Says You're Next.
Block (Square, Cash App, Afterpay) just cut 40% of its workforce, over 4,000 people, and Dorsey didn't sugarcoat it. "The intelligence tools we're creating, paired with smaller and flatter teams, are enabling a new way of working which fundamentally changes what it means to build and run a company." He then told employees that most other companies would eventually do the same.
💡 Why this matters: This is the first major CEO to publicly tie a 40% workforce cut directly to AI tools, on the record, with no apology. If Dorsey is right that "most companies will do the same," we just watched the starting gun get fired on the AI layoff era.
Source: AP News | BBC News
⚡ Meta Is Building a Wall Around Nvidia. Google and AMD Are the Bricks.
Meta just closed two massive AI chip deals back to back. First, a $100B+ agreement with AMD for 6 gigawatts of GPU capacity. Now, a separate multibillion-dollar deal with Google for TPU access in 2026 and outright purchases in 2027. That's two major Nvidia alternatives locked in simultaneously, and Meta now holds enough chip diversity to run its AI buildout without depending on any single supplier.
💡 Why this matters: This is the clearest sign yet that Big Tech is executing a deliberate strategy to break Nvidia's grip on AI infrastructure. When Meta, Google, and AMD are all cutting deals with each other, Nvidia's moat just got a lot narrower.
Source: Reuters | WSJ
📦 OpenFang: Someone Just Built an Agent OS in Rust and It's Already at 3,000 Stars
A solo dev dropped OpenFang three days ago: a full Agent Operating System written in Rust that compiles to a 32MB binary, starts in under 200ms, and runs AI agents autonomously on schedules without any human prompting. It ships with 7 pre-built agents, 27 LLM providers, 40 messaging platforms, MCP and A2A support, and 16 security layers including WASM sandboxing and Merkle hash-chain audit trails.
💡 Why this matters: Most agent frameworks are Python wrappers that wait for input. This one wakes up at 6 AM, researches competitors, and drops a report on your desk before you open your laptop. If this holds up under scrutiny, it's one of the more serious agent infrastructure projects to land in 2026.
Source: GitHub | openfang.sh
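The scheduled, prompt-free mode is the part worth understanding: the agent fires on a timer rather than waiting for a user message. A minimal sketch of that pattern in Python (illustrative only, not OpenFang's actual Rust API; the agent name and task below are made up):

```python
from dataclasses import dataclass
from datetime import datetime, time
from typing import Callable, Optional

@dataclass
class ScheduledAgent:
    """An agent that fires on a daily schedule instead of waiting for a prompt."""
    name: str
    run_at: time                      # daily trigger, e.g. 06:00
    task: Callable[[], None]

    def tick(self, now: datetime, last_run: Optional[datetime]) -> bool:
        """Run the task if today's trigger time has passed and we haven't run yet."""
        due = datetime.combine(now.date(), self.run_at)
        if now >= due and (last_run is None or last_run < due):
            self.task()
            return True
        return False

reports = []
agent = ScheduledAgent("competitor-research", time(6, 0),
                       task=lambda: reports.append("daily competitor report"))

# The daemon wakes at 6:05 AM; nothing has run today, so the agent fires.
fired = agent.tick(datetime(2026, 1, 5, 6, 5), last_run=None)
```

The real project presumably does this with a long-running Rust daemon; the point is that scheduling plus state, not a chat box, is the control surface.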
💰 OpenAI Just Raised $110 Billion. Yes, Billion with a B.
OpenAI closed one of the largest private funding rounds in history, pulling in $110 billion at a $730 billion pre-money valuation. Amazon dropped $50B, Nvidia and SoftBank each threw in $30B, and the post-money valuation lands at roughly $840 billion.
💡 Why this matters: When your three biggest investors are also your cloud provider, your chip supplier, and a sovereign-wealth-adjacent conglomerate, this is the entire AI supply chain locking arms around one company.
Source: Reuters | TechCrunch
🪖 Federal Agencies Said Grok Was Unsafe. The Pentagon Deployed It Anyway.
The GSA's top official personally warned the White House that xAI's Grok was unreliable, sycophantic, and too easy to manipulate with bad data. Multiple federal agencies echoed those concerns. The Pentagon approved it for classified military use anyway, calling it a "risk-accepted" decision for operational flexibility.
💡 Why this matters: The same week the Pentagon pressured Anthropic to drop safety guardrails and got refused, it turned around and approved a chatbot that its own government partners flagged as unsafe. That's not a safety strategy; that's a preference for compliance over caution.
Source: WSJ
🤝 Sam Altman Just Backed His Biggest Rival.
The Pentagon pressured Anthropic to strip its AI safety guardrails for military use. Anthropic said no. Now OpenAI's Sam Altman is publicly backing that decision, saying he "mostly trusts Anthropic" and that OpenAI holds the same red lines in any DoD deal.
💡 Why this matters: When the two biggest AI labs agree to hold the same safety floor against government pressure, it gets a lot harder for anyone to play them against each other.
Source: CNN | The Hill
🔓 Anthropic Quietly Dropped Its Binding Safety Pledge. In the Middle of a Pentagon Standoff.
Anthropic updated its Responsible Scaling Policy to v3, removing the binding safety commitment that defined its previous versions. The company says the change is unrelated to its ongoing dispute with the Pentagon over military AI red lines.
💡 Why this matters: Dropping your flagship safety pledge while publicly refusing Pentagon pressure is a strange message to send. Whether the timing is coincidence or not, the optics of weakening a core safety commitment during an AI safety fight are hard to ignore.
Source: Time | NYT
🏛️ Blackstone Plans Public Vehicle to Buy AI Data Centers
Blackstone is preparing a publicly traded company or vehicle to acquire data centers, with initial checks sought from sovereign wealth funds and other institutions before potentially broadening access to more investors, per Bloomberg. People familiar with the matter said the firm ultimately aims to raise tens of billions, while the structure and timeline have not been disclosed.
💡 Why this matters: If launched, it would expand retail access to the AI infrastructure theme via a Blackstone-sponsored listed vehicle. It also highlights a key debate investors are watching: how durable data center demand remains if AI technology and workloads shift.
Source: Bloomberg | Investing.com
🧠 OPCD: Distilling System-Prompt Behavior Into Weights
Microsoft researchers posted a new arXiv paper on On-Policy Context Distillation (OPCD), which trains a student model to internalize behaviors from context, including system prompts, into its parameters. The goal is to reduce reliance on prepending long prompts at inference time, which can add overhead and latency.
💡 Why this matters: System prompts are a common way to steer models in production, but long prompts can get expensive at scale. If approaches like OPCD hold up across settings, developers could ship models that preserve more desired behavior with less prompt bloat at runtime.
Source: arXiv:2602.12275 | VentureBeat
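Loosely stated, the objective: sample responses from the student without the system prompt (that is the "on-policy" part), then pull the student's per-token distribution toward what the teacher produces with the prompt in context. A toy numpy sketch of that KL objective (illustrative; not the paper's implementation, and real training would backpropagate through the student):

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL(p || q) between two token-probability vectors."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def context_distill_loss(teacher_with_prompt, student_no_prompt):
    """Mean per-token KL from the prompt-conditioned teacher to the
    prompt-free student, over tokens the student itself sampled."""
    return float(np.mean([kl(t, s) for t, s in
                          zip(teacher_with_prompt, student_no_prompt)]))

# Teacher sees the system prompt; the student has not internalized it yet.
teacher        = [[0.70, 0.20, 0.10], [0.10, 0.80, 0.10]]
student_before = [[0.40, 0.30, 0.30], [0.30, 0.40, 0.30]]
student_after  = [[0.69, 0.21, 0.10], [0.11, 0.78, 0.11]]

loss_before = context_distill_loss(teacher, student_before)
loss_after  = context_distill_loss(teacher, student_after)
```

As the student internalizes the prompt's behavior, the loss shrinks, and at inference time the long system prompt can (in principle) be dropped.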
🚨 Trump Directs Federal Agencies to Halt Anthropic Tech
Trump directed federal agencies to stop using Anthropic tech after the company refused Pentagon demands to lift Claude restrictions for military applications, including autonomous weapons. Most agencies must halt immediately, while the Pentagon gets a six-month phase-out. Defense Secretary Pete Hegseth labeled Anthropic a supply chain risk and barred contractors from using it, though the Pentagon says it is not seeking illegal surveillance of Americans or fully autonomous weapons.
💡 Why this matters: This puts a real price on sticking to AI safety red lines and forces every frontier lab to choose between government contracts and model controls. The next showdown is coming, and how OpenAI and others respond will set the industry norm.
Source: Reuters | NPR | AP News
🧨 OpenAI Fires Employee Over Prediction Market Trades
OpenAI has fired an employee after an internal investigation found the worker used confidential company information to trade on prediction markets like Polymarket and Kalshi, according to reports and an internal memo. The company said this violated its policies and confirmed the termination.
💡 Why this matters: This is one of the first high-profile cases showing prediction markets can create insider-trading-style risk inside AI labs. As those markets grow, expect tighter access controls, stricter employee trading rules, and more regulatory scrutiny.
Source: TechCrunch | Wired
⚙️ DeepSeek Updates DeepGEMM, V4 Speculation Spikes
DeepSeek updated its DeepGEMM low level operator library with new code paths for Nvidia Blackwell class GPUs and experimental FP4 related support, plus changes tied to upcoming GPU architectures. The update has sparked community speculation, fueled by recent code hints, that a V4 class model is getting close.
💡 Why this matters: Kernel level upgrades like these often precede major model and hardware shifts, so this is a credible signal of a near term performance jump. If FP4 and Blackwell tuning move into production at scale, cost per token could drop sharply.
Source: DeepGEMM GitHub | r/LocalLLaMA
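For context on why FP4 matters: an E2M1 4-bit float (1 sign, 2 exponent, 1 mantissa bit) can represent only 16 values, so weights get snapped to a tiny grid plus a per-block scale, halving memory again versus FP8. A minimal round-to-nearest sketch in Python (illustrative; DeepGEMM's actual kernels do this on-GPU, and block sizes and scale formats differ):

```python
import numpy as np

# The 8 representable magnitudes of an E2M1 (FP4) value, mirrored for sign.
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
FP4_GRID = np.concatenate([-FP4_GRID[::-1], FP4_GRID])

def quantize_fp4(weights, block=4):
    """Scale each block so its max magnitude maps to 6.0 (the FP4 max),
    round to the nearest grid point, and return the dequantized values."""
    w = np.asarray(weights, float).reshape(-1, block)
    scale = np.abs(w).max(axis=1, keepdims=True) / 6.0
    scale[scale == 0] = 1.0                               # avoid divide-by-zero
    idx = np.abs((w / scale)[..., None] - FP4_GRID).argmin(axis=-1)
    return FP4_GRID[idx] * scale

# scale = 0.9 / 6 = 0.15; 0.1 / 0.15 ≈ 0.667 snaps to grid point 0.5.
deq = quantize_fp4([0.1, -0.9, 0.45, 0.6])
```

The coarse grid is why FP4 is usually paired with fine-grained per-block scaling: small values near a block's large outliers get rounded aggressively.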
OpenAI Steps In as Pentagon's New AI Partner 🤝
Hours after Trump banned Anthropic from all federal agencies, OpenAI signed a deal to deploy its models on classified Pentagon networks. Sam Altman publicly backed Anthropic's stance just days ago. Then he signed the contract anyway.
💡 Why this matters: Anthropic drew hard lines on mass surveillance and autonomous weapons, and got kicked out for it. OpenAI voiced support, then walked right through the open door. Every AI company watching now knows exactly what principles are worth in a federal contract negotiation.
Source: Reuters | NYT
⚖️ Anthropic Is Ready to Take the Pentagon to Court
After being labeled a potential supply chain risk, Anthropic signaled it will challenge the Pentagon in court instead of backing down. The dispute escalates the fight over how AI labs set military boundaries.
💡 Why this matters: This is no longer just a policy debate; it is becoming legal precedent for how much control governments can assert over frontier AI providers. If this case moves forward, every major lab will have to re-evaluate its federal strategy.
Source: r/singularity | The Verge
🤖 Google's Opal Update Is a Quiet Blueprint for Enterprise Agents
Google updated Opal with an "agent step" that lets workflows choose tools and model paths dynamically instead of forcing rigid branches. The release also pushes persistent memory and interactive human checkpoints into the default build flow.
💡 Why this matters: This is the architecture shift enterprise teams have been waiting for: less brittle flowcharts, more goal-driven agents with guardrails. Teams that learn this pattern early will ship internal automation faster with fewer rebuilds.
Source: VentureBeat
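The difference between a fixed branch and an "agent step" can be made concrete: instead of hard-coding which tool runs next, the step asks a policy (the model) to pick a tool on each iteration until the goal is met. A minimal sketch in Python (illustrative; not Opal's actual API, and `choose` stands in for a model call):

```python
def agent_step(goal, tools, choose, max_iters=5):
    """Run a dynamic step: a policy picks the next tool until the goal is met.

    tools:  dict of name -> callable(state) -> new state
    choose: callable(goal, state) -> tool name, or None when done
    """
    state = {"goal": goal, "notes": []}
    for _ in range(max_iters):
        name = choose(goal, state)
        if name is None:                  # the policy decides the goal is met
            break
        state = tools[name](state)        # dynamic dispatch, not a fixed branch
    return state

tools = {
    "search":    lambda s: {**s, "notes": s["notes"] + ["found 3 sources"]},
    "summarize": lambda s: {**s, "summary": "; ".join(s["notes"])},
}

def choose(goal, state):
    """A stand-in for the model: search first, then summarize, then stop."""
    if not state["notes"]:
        return "search"
    if "summary" not in state:
        return "summarize"
    return None

result = agent_step("brief on X", tools, choose)
```

Persistent memory and human checkpoints slot naturally into this loop: memory extends `state` across runs, and a checkpoint is just a `choose` that can return "ask the user."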
📉 Goldman's AI Reality Check: Huge Spend, Minimal GDP Lift So Far
Goldman Sachs Chief Economist Jan Hatzius said AI investment had "basically zero" contribution to US GDP growth in 2025, citing heavy imports of chips and hardware. The money is moving fast, but much of the near term macro lift is landing outside the US.
💡 Why this matters: The AI race is moving from hype headlines to ROI accountability, and boardrooms will ask harder questions this year. Builders who tie deployments to measurable output, not just model access, will keep winning budget.
Source: Gizmodo
🧠 Your local AI just got a brain upgrade - my latest YouTube video just dropped
You can now give LM Studio (fully local, zero data leaving your machine) direct access to NotebookLM using MCP.
No API key needed. Your local model queries your notebooks like it's been there the whole time.
🎥 Watch: https://youtu.be/OmtvmPBQzkM
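For context on the wiring: MCP servers are declared in a JSON config that the host app (here LM Studio) reads, and the local model then calls the server's tools over stdio. A hedged sketch of what such an entry typically looks like (the `notebooklm-mcp` package name below is illustrative, not the exact server from the video):

```json
{
  "mcpServers": {
    "notebooklm": {
      "command": "npx",
      "args": ["-y", "notebooklm-mcp"]
    }
  }
}
```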
🏦 Mizuho Using AI to Cut Workload Equal to 5,000 Roles
Mizuho Financial Group says AI and automation will reduce administrative workload in Japan by the equivalent of up to 5,000 jobs over the next decade. Coverage says this is expected to happen through attrition, role shifts, and lower hiring, not a sudden layoff round.
💡 Why this matters: Big banks are now publishing hard workforce impact numbers instead of vague AI productivity claims. Once a top institution quantifies back office displacement at this level, peers will be pushed to show the same kind of measurable return.
Source: Nikkei Asia
📱 Google AI Edge Gallery Is Live on iPhone with On Device Inference
Google AI Edge Gallery is now on the App Store, letting users run compatible models directly on iPhone after download, including Gemma options surfaced through Hugging Face integrations. The app description and project docs position it as local inference that works offline once models are loaded.
💡 Why this matters: Mobile AI is shifting from cloud chat apps to private on device workflows where speed and data control become real product advantages. If adoption grows, app makers will compete on efficient local execution, not just raw model size.
Source: App Store | Google AI Edge Gallery GitHub
Google AI Edge Gallery is now on the App Store, letting users run compatible models directly on iPhone after download, including Gemma options surfaced through Hugging Face integrations. The app description and project docs position it as local inference that works offline once models are loaded.
๐ก Why this matters: Mobile AI is shifting from cloud chat apps to private on device workflows where speed and data control become real product advantages. If adoption grows, app makers will compete on efficient local execution, not just raw model size.
Source: App Store | Google AI Edge Gallery GitHub
๐2