OpenAI partnered with Red Queen Bio to test GPT-5 inside a real biological laboratory. The model was embedded in a closed-loop wet lab workflow, not a simulation.
How the experiment worked
• GPT-5 proposed hypotheses and step-by-step lab protocols
• Human researchers or lab robots executed the instructions exactly
• Experimental results were fed back to the model
• GPT-5 analyzed failures and successes, then iterated
• The loop repeated over multiple rounds
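The loop above can be sketched in a few lines of code. This is a toy illustration with made-up numbers, not OpenAI's actual system; the function names and the "colony count" model are purely hypothetical stand-ins for the propose → execute → feed back → iterate cycle.

```python
# Toy sketch of the closed-loop workflow described above.
# All names and numbers are illustrative, not from the real experiment.

def propose_protocol(history):
    # Stand-in for GPT-5 proposing a protocol from prior results;
    # here it just tweaks a single numeric parameter.
    last = history[-1]["param"] if history else 1.0
    return {"param": last * 1.5}

def run_experiment(protocol):
    # Stand-in for humans/robots executing the protocol exactly;
    # returns a toy "colony count" that grows with the parameter.
    return int(10 * protocol["param"])

history = []
for round_num in range(5):  # the loop repeated over multiple rounds
    protocol = propose_protocol(history)
    colonies = run_experiment(protocol)  # results fed back to the model
    history.append({"param": protocol["param"], "colonies": colonies})

baseline = history[0]["colonies"]
final = history[-1]["colonies"]
print(f"Improvement over baseline: {final / baseline:.1f}x")
```

The point of the structure, not the numbers: each round's output becomes the next round's input, which is what lets the model search the protocol space systematically.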
The task
• Optimize Gibson Assembly, a standard DNA cloning technique
• Metric: number of successful bacterial colonies
• Context: a mature, well-studied protocol where typical gains are only 2–3×
The result
• GPT-5 achieved a 79× improvement over the baseline method
• The outcome was stable and reproducible across repeated experiments
What changed
• The model suggested adding two known proteins: RecA and gp32
• Both proteins are individually well understood
• Their combined use in this cloning context had not been explored before
This is not a scientific breakthrough on its own; the performance is comparable to a strong PhD student's in a narrow domain. The real signal is role evolution: AI moving from text and simulations into direct participation in physical scientific processes.
GPT-5 didn't invent new biology; it systematically explored the experimental space faster and more deeply than humans typically can.
Source.
@aipost
OpenAI is reportedly in talks to raise $10B+ from Amazon at a valuation above $500B, and the kicker is strategic: OpenAI would start using AWS Trainium, giving Amazon a flagship "frontier" customer while OpenAI diversifies away from its Nvidia-heavy stack.
Why Amazon cares: It strengthens AWS vs Microsoft by tying OpenAI more tightly into AWS compute + chips.
By now, almost everyone has invested in OpenAI in some way: NVIDIA, Microsoft, and now Amazon. This makes OpenAI truly too big to fail.
Source.
- Builds on multimodal, coding, and agentic strengths of Gemini 3 Pro, surpassing Gemini 2.5 Pro in many benchmarks with up to 3x faster performance.
- Advanced visual and spatial reasoning with code execution for tasks like zooming, counting, or editing images; supports audio inputs at $1 per million input tokens.
- Excels in PhD-level reasoning with 90.4% on GPQA Diamond and agentic coding with 78% on SWE-bench Verified.
- Context caching offers up to 90% cost savings on repeated tokens; Batch API provides 50% cheaper async processing.
- Pricing set at $0.50 per million input tokens and $3 per million output tokens via Gemini API and Vertex AI.
- Available now through Gemini, Google AI Studio, Google Antigravity, Gemini CLI, Android Studio, and Vertex AI.
Gemini 3 Flash is positioned as a versatile workhorse for scaling AI applications efficiently.
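The listed rates make per-request cost easy to estimate. The helper below is a minimal sketch using the prices quoted above ($0.50/M input, $3/M output, up to 90% off cached input tokens); the function and constant names are mine, not Google's API.

```python
# Rough cost estimate from the listed Gemini 3 Flash prices.
# Rates are dollars per million tokens; names are illustrative, not the API's.
INPUT_RATE = 0.50       # $ per 1M fresh input tokens
OUTPUT_RATE = 3.00      # $ per 1M output tokens
CACHE_DISCOUNT = 0.90   # up to 90% savings on repeated (cached) input tokens

def estimate_cost(input_tokens, output_tokens, cached_tokens=0):
    fresh = input_tokens - cached_tokens
    cost = fresh * INPUT_RATE / 1e6
    cost += cached_tokens * INPUT_RATE * (1 - CACHE_DISCOUNT) / 1e6
    cost += output_tokens * OUTPUT_RATE / 1e6
    return cost

# Example: 100k-token prompt with 80k tokens served from cache, 5k-token reply.
print(f"${estimate_cost(100_000, 5_000, cached_tokens=80_000):.4f}")
```

With heavy cache reuse, input cost shrinks to a fraction of the headline rate, which is why context caching matters for long-prompt workloads.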
AI Post — Artificial Intelligence
Watch as 3 Flash generates complex graphics, 3D models, and a web app before the previous generation even finishes processing.
New update: Gemini Assistant can now check your screen when needed, so when you say or type "Hey, can you explain this", it uses your screen and app context to give more relevant answers automatically.
Researchers at the University of Pennsylvania and University of Michigan have created the world’s smallest fully programmable, autonomous robots.
"Microscopic swimming machines can independently sense and respond to their surroundings, operate for months and cost just a penny each."
"Barely visible to the naked eye, each robot measures about 200 by 300 by 50 micrometers, smaller than a grain of salt. Operating at the scale of many biological microorganisms, the robots could advance medicine by monitoring the health of individual cells and manufacturing by helping construct microscale devices."
"Powered by light, the robots carry microscopic computers and can be programmed to move in complex patterns, sense local temperatures and adjust their paths accordingly."
Source.
Rohit Prasad, the head scientist running Amazon's AGI org, has quit after 12 years, just two weeks after the company announced its Nova 2 AI models.
He says some AI startups with valuations in the tens of billions are wildly overpriced and that a correction may come.
AI is overhyped in the short term but underappreciated in the medium to long term. An "AI bubble" exists in parts of the ecosystem, especially seed-stage startups raising at tens-of-billions valuations before proving anything, which he sees as unsustainable.
However, he differentiates that from big tech, where he thinks there is real business value behind the valuations, though outcomes still depend on execution. Booms and corrections are normal for transformative tech, similar to the internet and mobile cycles.
OpenAI has sold more than 700,000 ChatGPT licenses to US colleges. The average student is using the tool around 170 times per month.
China has built its first prototype EUV machine (the lithography tool used to manufacture advanced chips), marking a major semiconductor breakthrough. China already produces far more energy than the US, more than the US and EU combined. Now imagine what happens when it can also produce more chips than the US.
More compute = more powerful AI. You get to AGI/SI faster. Thanks to recent technological advances, they can seriously ramp up chip production. And with China’s scale, they will likely do it at massive scale, just as they did with EVs, solar panels, nuclear reactors, and nearly everything else.
What will be the US/West's response? The only answer can be acceleration. Acceleration of everything: next-generation computing R&D, compute infrastructure buildout.
More energy. A LOT more energy.
Source.
A California court has acknowledged that Tesla’s use of the term “Full Autopilot” can mislead consumers. The ruling notes that, despite the branding, drivers are still required to maintain full attention and control at all times.
Legal and regulatory impact
• Tesla is facing a lawsuit over how it positions and advertises its driver-assistance technology.
• As a potential penalty, the company could lose its license to sell cars in California for up to 30 days.
Implications for robotaxis
• The case may also affect Tesla’s robotaxi ambitions.
• Regulators and courts point out that there is still no conclusive evidence Tesla has achieved true autonomous driving.
The ruling adds pressure across Silicon Valley, where claims around autonomy are under growing legal and regulatory scrutiny.
- Scores 84.7% on ARC-AGI-1 at $0.17 per task.
- Achieves 33.6% on ARC-AGI-2 at $0.23 per task.
- Provides competitive performance at lower cost than other frontier models.
A new study suggests that in 2025, artificial intelligence systems consumed vast amounts of energy and water, creating an environmental footprint comparable to that of a major city. Researchers estimate AI-related electricity use resulted in up to 80 million tons of CO₂ emissions, roughly on par with New York City’s annual emissions.
The same analysis indicates that data centers powering large neural networks used as much as 760 billion liters of water, primarily for cooling servers. However, scientists stress that these figures are approximations, not precise measurements.
The uncertainty stems from a lack of transparency. Major technology companies do not disclose detailed data on AI-specific electricity and water consumption, forcing researchers to rely on fragmented public information and modeling.
Experts warn that without clearer reporting and more efficient infrastructure, AI’s rapidly growing resource demands could pose a serious long-term environmental risk.
Anthropic gave Claude full control over a real vending-machine business: procurement, pricing, customer service, and expansion. It failed. The agent lost money, sold tungsten cubes at a loss, and even claimed to be a human in a blue jacket.
What changed:
• Upgraded from Claude Sonnet 3.7 to 4.0 / 4.5
• Added CRM and inventory management
• Gave the agent a web browser for price monitoring
• Introduced a second agent acting as CEO, “Seymour Cash”
The result
• The business became profitable
• Expanded to three locations: San Francisco, New York, and London
Where the agents failed:
• Agreed to trade onion futures, violating the Onion Futures Act of 1958
• Proposed hiring a security guard at $10/hour, below California’s minimum wage
• Appointed a CEO based on an unverified claim of a department-wide vote
The agents were too eager to help and comply, which led to legal, financial, and governance mistakes. Being helpful is not always optimal in business.
Today the agent hires a human to restock machines; tomorrow it calls an API to summon a delivery robot. Businesses run end-to-end by software are no longer theoretical; they're already being tested.
OpenAI has released GPT-5.2-Codex, its most advanced agentic coding model so far, designed for complex, multi-step software engineering rather than just code generation. It improves on earlier Codex models with stronger long-context reasoning, native code compaction, and more reliable tool use.
What’s new technically
• Builds on GPT-5.2 (professional knowledge work)
• Extends GPT-5.1-Codex-Max capabilities in agentic coding and terminal execution
• State-of-the-art performance on SWE-Bench Pro and Terminal-Bench 2.0
Cybersecurity implications
• Capability gains are showing up in security research and vulnerability discovery
• Recently, a researcher using GPT-5.1-Codex-Max identified and responsibly disclosed a React vulnerability that could expose source code
• OpenAI says GPT-5.2-Codex is even more cyber-capable, with future models expected to continue this trend
Why rollout is cautious
• Stronger cyber capabilities improve defensive security at scale
• But they also increase dual-use risks, requiring tighter deployment controls
Available now in Codex for all paid ChatGPT users. API access is coming soon, and invite-only trusted access is being piloted for vetted defensive security teams.
Agentic coding models are moving from “assistant” to infrastructure-level tools, forcing OpenAI to balance developer power with security risk.
https://openai.com/index/introducing-gpt-5-2-codex/