crea.ai
How artificial minds build, cure, and create — one solved problem at a time.
Will the Humanities Survive Artificial Intelligence? – The New Yorker

This essay explores how AI tools like ChatGPT are reshaping higher education and the humanities. The author discusses initial resistance to AI in academic settings and how, through classroom experiments, AI revealed its potential to aid research and catalyze profound intellectual engagement. The piece concludes with a hopeful vision that confronting the AI revolution might help rediscover the true essence of the humanities.

https://www.newyorker.com/culture/the-weekend-essay/will-the-humanities-survive-artificial-intelligence
The Art of Intelligence – The Atlantic

The Atlantic explores how artists are using AI not as a replacement, but as a collaborator—expanding what’s possible in art, design, and storytelling. A fresh look at the intersection of human imagination and machine intelligence.

https://www.theatlantic.com/sponsored/google/the-art-of-intelligence/3887
Why This Artist Isn’t Afraid of AI’s Role in the Future of Art – TIME

Panamanian photographer Dahlia Dreszer embraces AI as a transformative tool in art. In her Miami exhibition, she showcases works created using both traditional methods and AI-generated techniques, highlighting AI as a “supercharger” of creativity rather than a replacement.

https://time.com/7282582/ai-art-dahlia-dreszer-interview/
“You’ll NEVER Watch Movies the Same Again — And AI Is Why” | Andy Weir Interview

In this thought-provoking interview, bestselling sci-fi author Andy Weir (The Martian) explores how artificial intelligence is revolutionizing the film industry. From AI-generated visuals to new storytelling possibilities, Weir discusses the transformative power of this technology and what it means for the future of movies.

A must-watch for anyone interested in the intersection of cinema, creativity, and AI.

Watch the full interview
Tall Tales is a Critique of AI — So Why Do People Think It Was Made with AI? – The Verge

This collaborative audiovisual project by Jonathan Zawada, Mark Pritchard, and Thom Yorke critiques contemporary life through a surreal blend of CGI and real-world footage. Despite being largely handcrafted, viewers mistakenly believed it was AI-generated, leading to discussions about authenticity and the impact of AI on art perception.

https://www.theverge.com/film/664120/tall-tales-is-a-critique-of-ai-so-why-do-people-think-it-was-made-with-ai
No, Graduates: AI Hasn’t Ended Your Career Before It Starts – Wired

In a commencement address, tech journalist Steven Levy reassures liberal arts graduates that AI cannot replicate the uniquely human qualities of empathy, consciousness, and authentic creativity. He emphasizes that while AI may alter the labor landscape, it cannot replace the emotional resonance of human-created art and ideas.

https://www.wired.com/story/plaintext-commencement-speech-artificial-intelligence/
Future of AI: Perspectives for Startups 2025 – Google

This report emphasizes opportunities for startups to innovate by building specialized AI applications, particularly as foundation models stabilize over the next 18 months.
Hao’s Empire: Inside OpenAI’s Quest to Control the Future – MIT Technology Review

This investigative feature by Karen Hao explores OpenAI’s transformation from a nonprofit lab into a powerful AI empire. The article examines how OpenAI’s leadership, vision, and commercial partnerships have reshaped the global AI race—raising critical questions about control, transparency, and the ethics of concentrating such transformative power in the hands of one organization.

https://www.technologyreview.com/2025/05/19/1116614/hao-empire-ai-openai/
“The AI Con” by Emily M. Bender and Alex Hanna – The Guardian

This review discusses the book “The AI Con”, which critically examines the inflated promises surrounding artificial intelligence. The authors argue that what is marketed as AI—particularly large language models like ChatGPT—lacks genuine understanding and often produces plagiarized or inaccurate content. They raise ethical concerns about job losses in creative industries and the erosion of critical thinking caused by AI-generated content dominating search results.

https://www.theguardian.com/books/2025/may/19/the-ai-con-by-emily-m-bender-and-alex-hanna-review-debunking-myths-of-the-ai-revolution
“Let’s Talk About ChatGPT and Cheating in the Classroom” – WIRED

This podcast episode explores the growing impact of AI tools like ChatGPT on education. It discusses how students are using generative AI to research, write papers, and improve grades—often blurring the line between efficiency and academic dishonesty. The hosts advocate for AI literacy and ethics education to navigate the opportunities and risks posed by generative AI in academia.

https://www.wired.com/story/uncanny-valley-podcast-chatgpt-cheating-in-the-classroom/
Trends: Artificial Intelligence (2025) – BOND Capital

Mary Meeker, once dubbed the “Queen of the Internet,” has returned with her first major trends report since 2019, this time focusing on the transformative impact of artificial intelligence. Her 340-page document, “Trends – Artificial Intelligence,” offers a comprehensive analysis of AI’s rapid evolution and its implications for the global tech landscape.

Here’s a concise summary of the key insights from Mary Meeker’s report:

AI adoption is unprecedented

ChatGPT reached 800 million weekly users in just 17 months — faster than any major tech product before it. AI adoption is happening at internet/smartphone speed.

Work is being redefined

AI is automating repetitive tasks and enhancing expert work. Entire workflows, roles, and even business models are shifting toward “AI-native” operations.

Commoditization pressure

Foundation models are becoming cheaper and more widely available. This creates pricing pressure and forces companies to differentiate beyond raw model performance.

Open vs. closed models showdown

Countries like China and India are aggressively developing open-source AI. Openness is becoming a strategic advantage in the global AI arms race.

By 2035, AI could drive research

Meeker predicts AI will be able to formulate hypotheses, conduct scientific research, and design experiments largely on its own.

Key takeaway

“For some, the evolution of AI will create a race to the bottom; for others, a race to the top.”


For those interested in delving deeper into Meeker’s insights, the full report is available here: Trends – Artificial Intelligence (AI).



Source: BOND Capital
Apple reveals that today’s top AI models only appear to “think” — but quickly give up when faced with complex problems.

In a new research paper, Apple tested large models like ChatGPT (OpenAI), Claude (Anthropic), and Gemini (Google) on classic logical puzzles that require step-by-step reasoning rather than memorized answers. Here’s what they found:

Benchmark Design
Apple used four classic problems with scalable difficulty: Tower of Hanoi, Blocks World, Checker Jumping, and River Crossing. This setup allowed precise measurement of how models perform as tasks get harder.

Performance Collapse
All tested models — including so-called “thinking” versions — experience a sharp drop in accuracy as complexity increases. At a certain point, they essentially fail completely.

The “Giving-Up” Phenomenon
When the problem becomes too hard, models don’t just get things wrong — they stop trying. Instead of reasoning more, they reduce the number of thinking steps and fall back on shallow guesses.

Chain-of-Thought Degradation
On easy tasks, models overthink and derail. On harder ones, they stop reasoning altogether. This shows they lack consistent strategies for thinking through problems.

Algorithm Injection Fails
Even when given a correct, step-by-step algorithm (e.g., for Tower of Hanoi), models still fail at higher difficulty levels. Knowing the method isn’t enough — they struggle to apply it.
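
For reference, the "correct, step-by-step algorithm" for Tower of Hanoi is a textbook three-line recursion. The Python sketch below is the classic solution (our illustration, not Apple's exact prompt); it also shows why the benchmark scales so sharply, since the optimal solution takes 2^n − 1 moves for n disks:

```python
def hanoi(n, source, target, spare, moves):
    """Recursively move n disks from the source peg to the target peg."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the n-1 smaller disks
    moves.append((source, target))              # move the largest disk
    hanoi(n - 1, spare, target, source, moves)  # restack the smaller disks

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves), moves)  # 7 moves: (A,C), (A,B), (C,B), (A,C), (B,A), (B,C), (A,C)
```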

Failure Analysis
For example, Claude-Thinking can complete around 100 correct moves in Tower of Hanoi, but breaks down after just 3 or 4 moves in River Crossing. This suggests these models don’t understand — they imitate.

Key Takeaway
Current large language models don’t truly reason. They simulate thought when the task is familiar, but collapse when it requires real adaptability.

Apple concludes: Instead of making models bigger, the next step in AI should focus on helping them reason more reliably — with dynamic thinking budgets and external planning mechanisms.

Read the full report (PDF):
The Illusion of Thinking — Apple Machine Learning Research
AI Model Leaderboard 2025: OpenAI’s o3 Tops New Scientific Evaluation Platform

A new wave of AI evaluation has arrived. The Allen Institute for AI (AI2) has launched SciArena, a public, open-source platform for testing large language models (LLMs) in real-world scientific reasoning. The results? OpenAI’s o3 model leads the field.

Here are the main takeaways:

o3 leads across disciplines
In the first leaderboard based on expert voting, OpenAI’s o3 ranked first across multiple domains — including natural sciences, health, engineering, and social sciences. Over 13,000 votes were collected from 102 researchers.

Transparent, side-by-side testing
SciArena presents paired model outputs with references to scientific literature. Researchers vote on the more accurate or informative response, highlighting real-world capabilities rather than synthetic benchmarks.
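
The article doesn't specify how SciArena aggregates votes into rankings, but arena-style leaderboards are commonly built from pairwise preferences with an Elo-style update. A minimal sketch of that idea, with made-up votes and a conventional K-factor (illustrative only, not SciArena's actual code):

```python
from collections import defaultdict

K = 32  # update step size, a conventional Elo choice

def expected(r_a, r_b):
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def record_vote(ratings, winner, loser):
    """Shift both ratings toward the observed vote outcome."""
    surprise = 1.0 - expected(ratings[winner], ratings[loser])
    ratings[winner] += K * surprise
    ratings[loser] -= K * surprise

# Hypothetical votes: (preferred model, rejected model)
votes = [("o3", "DeepSeek-R1"), ("o3", "Gemini-2.5-Pro"),
         ("DeepSeek-R1", "Gemini-2.5-Pro"), ("o3", "DeepSeek-R1")]

ratings = defaultdict(lambda: 1000.0)
for winner, loser in votes:
    record_vote(ratings, winner, loser)

for model, r in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{model}: {r:.0f}")
```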

Rising competition
The next top models include DeepSeek-R1 and Gemini-2.5-Pro, showing a highly competitive landscape where multiple labs are pushing the boundaries of performance.

Open science, open evaluation
Unlike traditional benchmarks, SciArena is designed to evolve continuously. It is free to access, open for contributions, and intended to serve both the AI and scientific communities.

Why it matters
As AI tools increasingly assist in research, education, and healthcare, trustworthy evaluation frameworks are critical. SciArena offers a community-driven, evidence-backed way to measure true scientific reasoning.



Source: Nature, “OpenAI’s o3 tops new AI league table for answering scientific questions,” July 2025
Full article: nature.com/articles/d41586-025-02177-7
“Underwriting Superintelligence” – Essay by Kvist, Dattani & Wang

This essay lays out a bold roadmap to safely scale superintelligent AI by 2030 through market-based solutions: insurance, audits, and safety standards. The authors propose an “incentive flywheel”—similar to how fire insurance once shaped safer cities—to align economic incentives with AI safety.

They argue that as AI rapidly evolves from student-level to potentially superintelligent systems, the absence of structured accountability could lead to either catastrophic outcomes or excessive regulation that halts progress. A well-designed insurance framework could fix that.

https://underwriting-superintelligence.com
AI Safety Index Summer 2025 – Future of Life Institute

The Future of Life Institute has released its updated AI Safety Index, evaluating seven leading AI companies across six key areas of safety, transparency, and risk preparedness. The findings raise significant concerns.

Key findings

All companies received low marks
No company scored higher than a C+. Anthropic ranked highest with a C+, followed by OpenAI (C) and Google DeepMind (C–). Meta, xAI, Zhipu AI, and DeepSeek all received grades of D or lower.

Six critical domains evaluated
The index assesses AI developers across six safety areas:
– Risk assessment and responsible scaling
– Harm mitigation
– Safety frameworks
– Existential risk planning
– Governance
– Information sharing and transparency

Most companies received failing grades in the existential safety category, which examines whether organizations are preparing for the long-term, high-risk trajectory of advanced AI systems.

Transparency remains a major concern
Only OpenAI, xAI, and Zhipu AI responded to the institute’s survey. Key areas such as third-party audits, whistleblower protections, and governance policies remain insufficient or undisclosed across nearly all evaluated firms.

Why it matters
The report underscores a growing gap between rapid AI deployment and meaningful safety governance. FLI emphasizes that voluntary self-regulation is proving inadequate. The institute calls for the development of legally binding safety standards, similar to those used in aviation or medicine.



Source: Future of Life Institute, AI Safety Index – Summer 2025, published July 17, 2025
Full report: https://futureoflife.org/ai-safety-index-summer-2025/
Large Open-Source Models (150B+) – Artificial Analysis

An up-to-date breakdown of the most advanced open-weight AI models (150 billion+ parameters) that you can download, fine-tune, and run locally. The front-runners:

• Qwen3 235B A22B 2507 (Reasoning) by Alibaba tops the Intelligence Index at 57, with a 256K context window, at ~$2.6 per million tokens (see the cost sketch after this list).

• Close behind: DeepSeek V3.1 (Reasoning) scores 54 with a massive 685B total parameters (37B active) and a 128K context window, at just $1.0 per million tokens.

• Also noteworthy: DeepSeek R1 (May ’25) scores 52, matching V3.1’s specs and pricing.

• Kimi K2 0905 (Moonshot AI) posts an intelligence score of 50 with 1.0 trillion parameters (32B active), a 256K context window, at ~$1.4 per million tokens.
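
To make the pricing column concrete, here is a trivial cost sketch at the listed blended rates; the 50-million-token workload is a made-up example:

```python
# Blended prices per million tokens, copied from the list above.
price_per_m_tokens = {
    "Qwen3 235B A22B 2507": 2.6,
    "DeepSeek V3.1": 1.0,
    "DeepSeek R1 (May '25)": 1.0,
    "Kimi K2 0905": 1.4,
}

workload = 50_000_000  # hypothetical monthly token volume

for model, price in price_per_m_tokens.items():
    print(f"{model}: ${workload / 1_000_000 * price:,.0f}")
# DeepSeek V3.1 comes to $50 for this workload vs. $130 for Qwen3.
```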

Source: Artificial Analysis, Open-Source Large Models Tracker – September 2025
Full report: https://artificialanalysis.ai/models/open-source/large
Tinker — a “mini‑GPT” builder in a few lines of code

Thinking Machines Lab has launched Tinker — a flexible API for fine‑tuning open LLMs without wrangling clusters and pipelines. You keep control of your data and algorithms; the service handles distributed training and fault tolerance.

What you get

• Support for models from compact to MoE giants (e.g., Qwen‑235B‑A22B). Switch models by changing a single line of code.

• Low‑level API primitives (forward_backward, sample) plus an open Tinker Cookbook with state‑of‑the‑art post‑training methods (see the sketch after this list).

• A fully managed service: scheduling, resource allocation, recovery, and cost‑efficient LoRA to share compute across many runs.
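
To illustrate what the low-level primitives buy you, here is a hedged sketch of a fine-tuning loop built around the two primitives the announcement names, forward_backward and sample. The stub client and every call signature below are assumptions made so the sketch runs standalone; they are not Tinker's documented API:

```python
import random

class StubTinkerClient:
    """Stand-in for the managed service; the real Tinker runs these
    primitives as distributed (LoRA-based) training on its clusters."""
    def forward_backward(self, prompt, target):
        return random.random()  # pretend training loss for the pair
    def sample(self, prompt):
        return "draft reply ..."  # pretend generation from current weights

client = StubTinkerClient()  # hypothetical: real setup would name a model
dataset = [("ticket text", "ideal reply")] * 300  # hypothetical (prompt, target) pairs

for step, (prompt, target) in enumerate(dataset):
    loss = client.forward_backward(prompt, target)  # one training step
    if step % 100 == 0:
        print(step, round(loss, 3), client.sample("Summarize: ..."))  # spot-check progress
```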

Who’s already trying it

• Teams at Princeton, Stanford (Rotskoff Chemistry), Berkeley (SkyRL), and Redwood Research — from theorem provers to RL experiments with multimodal agents.

Context

Engineers and researchers behind ChatGPT, Character.ai, and Mistral — and contributors to PyTorch, OpenAI Gym, fairseq, and Segment Anything — are involved. The focus: make AI more understandable, configurable, and useful for real‑world tasks.

Why it matters for businesses and developers

• Spin up an “internal AI” that knows your terminology, document formats, and tone — without building ML infrastructure.

• Scale experiments: switching models or post‑training methods takes just a few lines.

Announcement, details, waitlist → https://thinkingmachines.ai/blog/announcing-tinker/

💡 In one line: Tinker is like Heroku for LLM customization — fine‑tune on your data in a day and ship domain‑specific mini‑GPTs that actually fit your workflows.
FTSG Tech Trends 2025 – Future Today Strategy Group

The Future Today Strategy Group (FTSG) has released its Tech Trends Report 2025, a thousand-page analysis arguing that we are no longer standing at the threshold of a technological era — we are already living inside it.

The report identifies not just forecasts but concrete signals: where capital is shifting, which industries are reinventing themselves, and where the line between the organic and the artificial has already blurred.

10 mega-trends shaping the new economy

Living Intelligence
AI, sensors, and biotechnology merge into adaptive ecosystems that evolve on their own. Machines cease to be tools and become the environment in which humans live.

Large Action Models (LAMs)
The next stage after ChatGPT. These models execute actions — launching code, managing workflows, and controlling interfaces.

Robots Beyond Factories
Adaptive robotics moves into unstructured environments, from logistics to healthcare. Siemens projects automation costs will drop by 90% by 2030.

Agentic AI
AI agents learn to set their own goals and make autonomous decisions. The rise of self-directed corporate “brains.”

Metamaterials and Self-Healing Infrastructure
New-generation materials that manipulate light, heat, and sound will enable self-regulating cities. Roads and buildings will repair themselves.

Unlikely Alliances
Big Tech, energy, and telecom sectors converge around compute power. Cross-sector data alliances — AI + nuclear + cloud — are forming a new techno-political economy.

Climate Innovation
The climate crisis is no longer a subsection of ESG reports — it is now the core driver of investment. Clean tech is about survival, not branding.

Nuclear Renaissance
A return of atomic energy — in miniature form. Microsoft plans to restart a reactor at Three Mile Island by 2028 to power its data centers. Deployment will take three to five years instead of a decade.

Quantum Leap
Breakthroughs in quantum error correction mark a turning point. Industrial quantum simulations are moving from labs to commercial supply chains.

The Cislunar Economy
Private companies are building infrastructure between Earth and the Moon — satellite hubs, logistics, extraction, and processing. FTSG calls it “the next Silicon Valley — but without gravity.”

Context and implications

Small modular reactors cut deployment timelines from roughly 15 years to 5.

Edge computing and 5G enable instant data processing anywhere on Earth.

Investment in AI has reached dot-com-era levels, while traditional tech (SaaS, e-commerce) loses investor interest.

China is rapidly closing the AI capability gap and exporting computational capacity at scale.

Why it matters

FTSG describes the global AI infrastructure as the fourth industrial network — after electricity, oil, and the internet.

Energy and computation are fusing: whoever controls energy controls intelligence.

Regulators are unable to keep pace with integration speed, and FTSG predicts a wave of systemic crises — from data trust to energy shortages.



Source: Future Today Strategy Group, Tech Trends Report 2025
Full report: https://ftsg.com/wp-content/uploads/2025/03/FTSG_2025_TR_FINAL_LINKED.pdf
The Geopolitics of AI – JPMorganChase Center for Geopolitics

JPMorganChase’s Center for Geopolitics has released its October 2025 report, “The Geopolitics of AI: Decoding the New Global Operating System.”
The study frames artificial intelligence as a new layer of global infrastructure — a system reshaping trade, energy, defense, and the strategic balance between nations.



Key findings

A fractured AI world
China and the U.S. dominate, but along divergent paths.
Beijing advances a state-led, open-source model of self-reliance; Washington bets on private innovation, infrastructure build-out, and defense integration.

Tech sovereignty and new blocs
Governments are building digital walls and exporting their own standards. The result: a fragmented AI ecosystem that forces companies to navigate competing regulatory zones.

Energy and hardware as chokepoints
Compute capacity, semiconductors, and grid power now define who can scale AI — and who cannot. Nations with abundant energy and advanced infrastructure will dominate the decade ahead.

Capital realignment
Middle Eastern sovereign wealth funds, backed by energy surpluses, are emerging as critical investors in global AI infrastructure and semiconductor supply chains.

Defense transformation
AI is becoming the operating system of modern military power — from autonomous drones to decision-loop compression. Nations integrating AI fastest will hold decisive battlefield advantages.



Seven axes reshaping global AI geopolitics

1. An assertive China
A whole-of-nation strategy blends state funding, private innovation, and massive energy investment. China leads in AI diffusion and open-source adoption, exporting low-cost ecosystems across the Global South.

2. A repositioned United States
Post-2024, Washington pursues dominance through deregulation, AI-driven infrastructure, and public-private megaprojects such as Stargate (OpenAI + SoftBank + Oracle). The AI Action Plan centers on innovation, security, and energy independence.

3. Europe’s pursuit of tech sovereignty
The EU pushes for digital independence via the AI Act and a proposed €300 billion “EuroStack,” while transatlantic frictions rise. The UK, aligned with Washington, diverges from Brussels on regulation.

4. Middle East investment power
Saudi Arabia, the UAE, and Qatar channel hundreds of billions into AI data centers, chips, and cloud hubs — positioning the region as a global compute crossroads.
Saudi’s Vision 2030 includes $40 billion in AI funding and partnerships with NVIDIA, AMD, Amazon, and Microsoft.

5. Talent, populism, and the workforce
AI-driven automation could displace 12 million jobs by 2030. Labor unrest and populist backlash are reshaping domestic politics, even as nations compete for scarce AI talent.

6. Energy, hardware, and components
AI’s power demand could double global data-center electricity use by 2030, with the U.S. and China responsible for 80% of that growth.
Semiconductors become strategic assets; Washington now holds a 10% equity stake in Intel.

7. Defense and deterrence
AI compresses decision cycles and enables low-cost autonomous systems. The future of deterrence hinges on scaling AI-enabled military production and closing the innovation-to-adoption gap.



Why it matters

AI is emerging as a “new global operating system” — the connective tissue of economies, militaries, and energy grids.
Whoever controls compute, chips, and power grids will shape the next geopolitical order.

For business and policy leaders, JPMorganChase frames AI not as a spectator issue but a strategic variable that will define competitiveness, resilience, and national strength for decades.



Source: JPMorganChase Center for Geopolitics, The Geopolitics of AI: Decoding the New Global Operating System, October 2025
Full report: JPMorganChase PDF link