Perplexity Just Dropped Its Own Embedding Models, and They're Built for Billion-Page Scale
Perplexity released pplx-embed-v1 and pplx-embed-mrl-v1, two text embedding models available at 0.6B and 4B parameter scales. The 4B model tops the MTEB Multilingual benchmark, beats Qwen3-Embedding at the same scale, and sets a new state-of-the-art on ConTEB's contextual retrieval benchmark. Both models are MIT-licensed and available via Hugging Face and the Perplexity API right now.
Why this matters: Perplexity built these to power its own billion-page search stack, which means they're stress-tested at a scale most embedding models never see. INT8 and binary quantization are baked into training, cutting storage by up to 32x with minimal quality loss. That's the difference between a nice research result and something you can actually ship at scale.
Source: Perplexity Research
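The 32x figure is just the arithmetic of binary quantization: a float32 embedding costs 4 bytes per dimension, while keeping only each dimension's sign costs 1 bit. A minimal NumPy sketch of the idea follows; the dimension and the Hamming-distance helper are illustrative, not details from Perplexity's release:

```python
import numpy as np

DIM = 1024  # illustrative dimension, not the actual pplx-embed size

# A float32 embedding occupies 4 bytes per dimension.
emb = np.random.randn(DIM).astype(np.float32)

# Binary quantization keeps only the sign of each dimension,
# packed 8 dimensions per byte: DIM/8 bytes instead of DIM*4.
packed = np.packbits(emb > 0)

print(emb.nbytes // packed.nbytes)  # storage ratio: 32

# Retrieval over binary codes typically uses Hamming distance
# (popcount of the XOR); lower means more similar.
def hamming(a, b):
    return int(np.unpackbits(a ^ b).sum())
```

Baking this into training (rather than quantizing after the fact) is what lets the binarized codes stay close to full-precision retrieval quality.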
ChatGPT may soon get a spicy new setting, and it's exactly what it sounds like
OpenAI quietly tucked a new toggle into the ChatGPT Android app (v1.2026.055) called "Naughty chats," letting the AI use "spicier, adult-themed language" in conversation. It's gated behind an 18+ age verification step, so this isn't an accidental leak. Someone shipped this on purpose.
Why this matters: OpenAI is deliberately moving into adult content territory, which means the sanitized, family-friendly ChatGPT era may be ending. If they're building age-gated modes, competitors like Character.AI and Replika just felt the heat from the biggest player in the room.
Source: @btibor91 on X
Burger King is using AI to score how friendly your fries guy is
BK is piloting AI-powered headsets called "BK Assistant" with a voice chatbot named Patty across 500 US locations. The system listens for phrases like "welcome to Burger King," "please," and "thank you," then generates a friendliness score for each employee to use in coaching sessions. It runs on OpenAI under the hood.
Why this matters: This is the fast food industry's first real move toward AI-powered performance monitoring at scale. When 500 locations is just the pilot, you're looking at a model that could reshape how millions of frontline workers are evaluated - by a chatbot named Patty.
Source: BBC News
Netflix Walks Away from Warner Bros. Discovery, Clears Path for Paramount
Netflix had an $82B deal locked in for Warner Bros. Discovery's film and streaming assets, but Paramount Skydance came in with a $111B all-cash bid for the whole company at $31 per share. WBD's board called it superior, Netflix declined to match, and Ted Sarandos said the deal was always a "nice to have, not a must have at any price."
Why this matters: Paramount and Warner Bros. are now on track to merge into a single media giant combining HBO Max, CNN, Nickelodeon, CBS, and more. California's AG is already signaling a tough regulatory review, so this one is far from over.
Source: BBC News
Jack Dorsey Just Fired 4,000 People. He Blames AI. And He Says You're Next.
Block (Square, Cash App, Afterpay) just cut 40% of its workforce, over 4,000 people, and Dorsey didn't sugarcoat it. "The intelligence tools we're creating, paired with smaller and flatter teams, are enabling a new way of working which fundamentally changes what it means to build and run a company." He then told employees that most other companies would eventually do the same.
Why this matters: This is the first major CEO to publicly tie a 40% workforce cut directly to AI tools, on the record, with no apology. If Dorsey is right that "most companies will do the same," we just watched the starting gun get fired on the AI layoff era.
Source: AP News | BBC News
Meta Is Building a Wall Around Nvidia. Google and AMD Are the Bricks.
Meta just closed two massive AI chip deals back to back. First, a $100B+ agreement with AMD for 6 gigawatts of GPU capacity. Now, a separate multibillion-dollar deal with Google for TPU access in 2026 and outright purchases in 2027. That's two major Nvidia alternatives locked in simultaneously, and Meta now holds enough chip diversity to run its AI buildout without depending on any single supplier.
Why this matters: This is the clearest sign yet that Big Tech is executing a deliberate strategy to break Nvidia's grip on AI infrastructure. When Meta, Google, and AMD are all cutting deals with each other, Nvidia's moat just got a lot narrower.
Source: Reuters | WSJ
OpenFang: Someone Just Built an Agent OS in Rust and It's Already at 3,000 Stars
A solo dev dropped OpenFang three days ago: a full Agent Operating System written in Rust that compiles to a 32MB binary, starts in under 200ms, and runs AI agents autonomously on schedules without any human prompting. It ships with 7 pre-built agents, 27 LLM providers, 40 messaging platforms, MCP and A2A support, and 16 security layers including WASM sandboxing and Merkle hash-chain audit trails.
Why this matters: Most agent frameworks are Python wrappers that wait for input. This one wakes up at 6 AM, researches competitors, and drops a report on your desk before you open your laptop. If this holds up under scrutiny, it's one of the more serious agent infrastructure projects to land in 2026.
Source: GitHub | openfang.sh
OpenAI Just Raised $110 Billion. Yes, Billion with a B.
OpenAI closed one of the largest private funding rounds in history, pulling in $110 billion at a $730 billion pre-money valuation. Amazon dropped $50B, Nvidia and SoftBank each threw in $30B, and the post-money valuation lands at roughly $840 billion.
Why this matters: When your three biggest investors are also your cloud provider, your chip supplier, and a sovereign-wealth-adjacent conglomerate, this is the entire AI supply chain locking arms around one company.
Source: Reuters | TechCrunch
Federal Agencies Said Grok Was Unsafe. The Pentagon Deployed It Anyway.
The GSA's top official personally warned the White House that xAI's Grok was unreliable, sycophantic, and too easy to manipulate with bad data. Multiple federal agencies echoed those concerns. The Pentagon approved it for classified military use anyway, calling it a "risk-accepted" decision for operational flexibility.
Why this matters: The same week the Pentagon pressured Anthropic to drop safety guardrails and got refused, it turned around and approved a chatbot that its own government partners flagged as unsafe. That's not a safety strategy, that's a preference for compliance over caution.
Source: WSJ
Sam Altman Just Backed His Biggest Rival.
The Pentagon pressured Anthropic to strip its AI safety guardrails for military use. Anthropic said no. Now OpenAI's Sam Altman is publicly backing that decision, saying he "mostly trusts Anthropic" and that OpenAI holds the same red lines in any DoD deal.
Why this matters: When the two biggest AI labs agree to hold the same safety floor against government pressure, it gets a lot harder for anyone to play them against each other.
Source: CNN | The Hill
Anthropic Quietly Dropped Its Binding Safety Pledge. In the Middle of a Pentagon Standoff.
Anthropic updated its Responsible Scaling Policy to v3, removing the binding safety commitment that defined its previous versions. The company says the change is unrelated to its ongoing dispute with the Pentagon over military AI red lines.
Why this matters: Dropping your flagship safety pledge while publicly refusing Pentagon pressure is a strange message to send. Whether the timing is coincidence or not, the optics of weakening a core safety commitment during an AI safety fight are hard to ignore.
Source: Time | NYT
Blackstone Plans Public Vehicle to Buy AI Data Centers
Blackstone is preparing a publicly traded company or vehicle to acquire data centers, with initial checks sought from sovereign wealth funds and other institutions before potentially broadening access to more investors, per Bloomberg. People familiar with the matter said the firm ultimately aims to raise tens of billions, while the structure and timeline have not been disclosed.
Why this matters: If launched, it would expand retail access to the AI infrastructure theme via a Blackstone-sponsored listed vehicle. It also highlights a key debate investors are watching: how durable data center demand remains if AI technology and workloads shift.
Source: Bloomberg | Investing.com
OPCD: Distilling System-Prompt Behavior Into Weights
Microsoft researchers posted a new arXiv paper on On-Policy Context Distillation (OPCD), which trains a student model to internalize behaviors from context, including system prompts, into its parameters. The goal is to reduce reliance on prepending long prompts at inference time, which can add overhead and latency.
Why this matters: System prompts are a common way to steer models in production, but long prompts can get expensive at scale. If approaches like OPCD hold up across settings, developers could ship models that preserve more desired behavior with less prompt bloat at runtime.
Source: arXiv:2602.12275 | VentureBeat
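The core objective behind context distillation fits in a few lines: push the student's next-token distribution (computed without the system prompt) toward the teacher's distribution (computed with the prompt in context). The toy NumPy sketch below shows that objective with invented logits over a 5-token vocabulary — it is a simplified, off-policy illustration; the paper's on-policy variant trains on sequences sampled from the student, and none of these numbers come from the paper:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q):
    # KL(p || q) for two discrete distributions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Toy next-token distribution the teacher produces WITH the
# system prompt in context (values invented for illustration).
teacher = softmax(np.array([2.0, 0.5, 0.1, -1.0, -1.0]))

# Student starts uniform and sees NO system prompt; gradient
# descent on KL(teacher || student) moves its logits until its
# distribution matches the teacher's prompted behavior.
student_logits = np.zeros(5)
lr = 0.5
for _ in range(200):
    student = softmax(student_logits)
    # d/d(logits) KL(teacher || student) = student - teacher
    student_logits -= lr * (student - teacher)

print(kl(teacher, softmax(student_logits)))  # near 0
```

In a real model the same idea applies per token position, with the gradient flowing through the student's parameters rather than raw logits.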
Trump Directs Federal Agencies to Halt Anthropic Tech
Trump directed federal agencies to stop using Anthropic tech after the company refused Pentagon demands to lift Claude restrictions for military applications, including autonomous weapons. Most agencies must halt immediately, while the Pentagon gets a six-month phase-out. Defense Secretary Pete Hegseth labeled Anthropic a supply-chain risk and barred contractors from using it, though the Pentagon says it is not seeking illegal surveillance of Americans or fully autonomous weapons.
Why this matters: This puts a real price on sticking to AI safety red lines and forces every frontier lab to choose between government contracts and model controls. The next showdown is coming, and how OpenAI and others respond will set the industry norm.
Source: Reuters | NPR | AP News
OpenAI Fires Employee Over Prediction Market Trades
OpenAI has fired an employee after an internal investigation found the worker used confidential company information to trade on prediction markets like Polymarket and Kalshi, according to reports and an internal memo. The company said this violated its policies and confirmed the termination.
Why this matters: This is one of the first high-profile cases showing prediction markets can create insider-trading-style risk inside AI labs. As those markets grow, expect tighter access controls, stricter employee trading rules, and more regulatory scrutiny.
Source: TechCrunch | Wired
DeepSeek Updates DeepGEMM, V4 Speculation Spikes
DeepSeek updated its DeepGEMM low-level operator library with new code paths for Nvidia Blackwell-class GPUs and experimental FP4-related support, plus changes tied to upcoming GPU architectures. The update has sparked community speculation, fueled by recent code hints, that a V4-class model is getting close.
Why this matters: Kernel-level upgrades like these often precede major model and hardware shifts, so this is a credible signal of a near-term performance jump. If FP4 and Blackwell tuning move into production at scale, cost per token could drop sharply.
Source: DeepGEMM GitHub | r/LocalLLaMA
OpenAI Steps In as Pentagon's New AI Partner
Hours after Trump banned Anthropic from all federal agencies, OpenAI signed a deal to deploy its models on classified Pentagon networks. Sam Altman publicly backed Anthropic's stance just days ago. Then he signed the contract anyway.
Why this matters: Anthropic drew hard lines on mass surveillance and autonomous weapons, and got kicked out for it. OpenAI voiced support, then walked right through the open door. Every AI company watching now knows exactly what principles are worth in a federal contract negotiation.
Source: Reuters | NYT
Anthropic Is Ready to Take the Pentagon to Court
After being labeled a potential supply chain risk, Anthropic signaled it will challenge the Pentagon in court instead of backing down. The dispute escalates the fight over how AI labs set military boundaries.
Why this matters: This is no longer just a policy debate; it is becoming legal precedent for how much control governments can assert over frontier AI providers. If this case moves forward, every major lab will have to re-evaluate its federal strategy.
Source: r/singularity | The Verge
Google's Opal Update Is a Quiet Blueprint for Enterprise Agents
Google updated Opal with an "agent step" that lets workflows choose tools and model paths dynamically instead of forcing rigid branches. The release also pushes persistent memory and interactive human checkpoints into the default build flow.
Why this matters: This is the architecture shift enterprise teams have been waiting for: less brittle flowcharts, more goal-driven agents with guardrails. Teams that learn this pattern early will ship internal automation faster with fewer rebuilds.
Source: VentureBeat