🤖 Welcome to the ChatGPT telegram channel! Here, we post the latest news, updates, and examples of using the ChatGPT large language model for generating human-like text in conversations. Subscribe to stay up-to-date and learn more about its capabilities.
OpenAI’s new o1 model is doing very well on custom IQ tests

Tracking AI Page
“This isn’t a new model”
The Bitter Lesson
We could have been talking to our desktop computers in English since the 90s!

"Somebody got one of the small versions of Llama to run on Windows 98…”

“We could've been talking to our computers in English for the last 30 years"

- Marc Andreessen

Correct.

The hardware already existed, for decades.

What stopped us?

Extreme aversion to investing money into training much larger AI models.

No one was willing to invest the many millions needed to train an AI model of this size.

In fact, even a decade later, in 2012, people were still hardly willing to spend more than TEN DOLLARS on electricity to train a state-of-the-art model, e.g. the AlexNet image model.
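For perspective, here is a rough back-of-the-envelope estimate of that electricity figure. The GPU power draw, training time, and electricity price below are assumptions (AlexNet was reported as training for roughly five to six days on two GTX 580 GPUs; exact numbers vary by source):

```python
# Back-of-the-envelope electricity cost for an AlexNet-scale training run.
# All inputs are rough assumptions, not measured values.
gpus = 2                 # AlexNet was reportedly trained on two GTX 580s
watts_per_gpu = 250      # assumed board power under load
training_days = 6        # reported training time was roughly 5-6 days
price_per_kwh = 0.12     # assumed electricity price, USD per kWh

energy_kwh = gpus * watts_per_gpu / 1000 * training_days * 24
cost_usd = energy_kwh * price_per_kwh
print(f"{energy_kwh:.0f} kWh, about ${cost_usd:.2f} of electricity")  # -> 72 kWh, about $8.64
```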

Many truly underestimate how unwilling people have been to spend money on AI training, until very recently.

And this wasn’t unrealized; many of us had been screaming it for decades.

No one cared.

Incredible testament to man’s unwillingness to invest in certain critical areas of future tech.

The same happens in AI, advanced market mechanisms, proof systems, and a few other similar areas that are unquestionably the future.

We could have been talking to our desktop computers in English since the 90s

Bitter Lesson
We could've been talking to our computers in English for the last 30 years

35.9 tok/sec on a 26-year-old Windows 98 Intel Pentium II CPU with 128MB RAM

Using a 260K-parameter LLM with the Llama architecture
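For scale, here is a quick sketch of how few weights a model in that class has. The hyperparameters below are illustrative assumptions chosen to land near 260K parameters, not the actual config of that checkpoint:

```python
# Rough parameter count for a tiny Llama-style (decoder-only, SwiGLU FFN) model.
# The config is an illustrative assumption, not the real 260K checkpoint's.
def llama_params(vocab: int, dim: int, hidden: int, n_layers: int) -> int:
    embed = vocab * dim        # token embedding (output head assumed tied)
    attn = 4 * dim * dim       # Wq, Wk, Wv, Wo
    ffn = 3 * dim * hidden     # gate, up, down projections
    norms = 2 * dim            # two RMSNorm weight vectors per layer
    return embed + n_layers * (attn + ffn + norms) + dim  # + final RMSNorm

print(llama_params(vocab=512, dim=64, hidden=152, n_layers=5))  # -> 261312 (~260K)
```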

Somebody got one of the small versions of Llama to run on Windows 98…

We could've been talking to our computers in English for the last 30 years

- Marc Andreessen
IF NO ONE COMES FROM THE FUTURE TO STOP YOU FROM DOING IT

THEN HOW BAD CAN IT BE?
GPT-5 Rumors

Does OpenAI have much more powerful models than they admit?

OpenAI is already heavily restricting access to their new o3 model.

Was OpenAI’s multi-year total pause in SOTA model training, after the completion of GPT-4, just to help kill off potentially-competing startups?
OpenAI Introduces “Operator”

Today we’re releasing Operator, an agent that can go to the web to perform tasks for you. Using its own browser, it can look at a webpage and interact with it by typing, clicking, and scrolling.

To ensure a safe and iterative rollout, we are starting small. Starting today, Operator is available to Pro users in the U.S. at operator.chatgpt.com.

Operator is powered by a new model called Computer-Using Agent (CUA). Combining GPT-4o's vision capabilities with advanced reasoning through reinforcement learning, CUA is trained to interact with graphical user interfaces (GUIs)—the buttons, menus, and text fields people see on a screen.

Operator can “see” (through screenshots) and “interact” (using all the actions a mouse and keyboard allow) with a browser, enabling it to take action on the web without requiring custom API integrations.

To get started, simply describe the task you’d like done and Operator can handle the rest. Users can choose to take over control of the remote browser at any point, and Operator is trained to proactively ask the user to take over for tasks that require login, payment details, or when solving CAPTCHAs.
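A minimal sketch of that see-and-act loop, assuming pyautogui for the mouse/keyboard side and a placeholder in place of the CUA model call; this is not OpenAI's actual Operator API:

```python
# Conceptual sketch of Operator's see -> reason -> act loop.
# decide_next_action() is a placeholder for the CUA model; pyautogui stands in
# for Operator's hosted browser. This is NOT OpenAI's actual Operator API.
import pyautogui


def decide_next_action(task, screenshot, history):
    """Placeholder for the model call: in Operator, the CUA model reasons over
    the screenshot and returns the next GUI action. Here it simply stops."""
    return {"type": "done"}


def run_agent(task, max_steps=20):
    history = []
    for _ in range(max_steps):
        screenshot = pyautogui.screenshot()              # "see" via a screenshot
        action = decide_next_action(task, screenshot, history)
        if action["type"] == "done":
            break
        if action["type"] == "handoff":                  # login, payment, CAPTCHA: user takes over
            input("Take over in the browser, then press Enter to hand control back... ")
        elif action["type"] == "click":
            pyautogui.click(action["x"], action["y"])    # "interact" with the mouse...
        elif action["type"] == "type":
            pyautogui.write(action["text"], interval=0.02)  # ...and the keyboard
        elif action["type"] == "scroll":
            pyautogui.scroll(action["amount"])
        history.append(action)


run_agent("Find the best hotel sauna in Stockholm on Tripadvisor")
```

The point of the design is that the agent only needs pixels in and mouse/keyboard events out, which is why no per-site API integration is required.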

OpenAI Announcement
As was foretold
OpenAI’s Operator reading through hotel reviews on Tripadvisor to find the best hotel sauna in Stockholm
Forwarded from DoomPosting
OpenAI releases o3 and o4-mini

Highlights,

(1) Reinforcement learning training — a DECADE after DeepMind thoroughly showed that the RL approach is the way to go, and AFTER getting thoroughly wrecked by DeepSeek, which heavily used the RL approach — OpenAI FINALLY gets off their retarded wordcel asses and starts to shift in the rotator direction, finally making much heavier use of RL training. OpenAI also pretends to be surprised at further confirming the ~unbounded RL-training scaling laws that DeepMind thoroughly confirmed a decade ago.

(2) Real thinking with images — OpenAI FINALLY making images more of a first-class medium: “For the first time, these models can integrate images directly into their chain of thought. They don’t just see an image—they think with it.”

(3) Codex CLI — apparently a new AI agent through which their latest models were trained to be good at using the terminal; huge for giving the AI real agency instead of constantly needing help from humans for mundane things

(4) Agentic tool use — OpenAI FINALLY gives their top models full access to all their internal “tools”, and heavily trains the models with RL to best use them (see the sketch after this list). Big question here is: WHY THE F&$& did they wait years before finally giving their top models full tool access? OpenAI had been weirdly restricting their top models to different arbitrary subsets of the tools until now.

^^ OpenAI has finally done the few critical things needed to enable powerful agentic AIs
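To make the agentic tool-use point concrete, here is a minimal tool-calling loop sketch using the public OpenAI Python SDK chat-completions interface. The get_weather tool, its schema, and the model name are placeholder assumptions; OpenAI's internal tool stack for o3/o4-mini is not public:

```python
# Minimal sketch of an agentic tool-use loop (assumed OpenAI Python SDK v1 interface).
# The tool, its schema, and the model name are illustrative placeholders.
import json
from openai import OpenAI

client = OpenAI()


def get_weather(city: str) -> str:           # placeholder local tool
    return f"It is 12C and raining in {city}."


tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "Do I need an umbrella in Stockholm today?"}]

while True:
    resp = client.chat.completions.create(model="o4-mini", messages=messages, tools=tools)
    msg = resp.choices[0].message
    if not msg.tool_calls:                    # model answered directly; loop ends
        print(msg.content)
        break
    messages.append(msg)                      # keep the tool-call request in the transcript
    for call in msg.tool_calls:               # run each requested tool and return its result
        args = json.loads(call.function.arguments)
        result = get_weather(**args)
        messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
```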

New huge AI wave finally inbound?

OpenAI Announcement

🄳🄾🄾🄼🄿🄾🅂🅃🄸🄽🄶
Forwarded from DoomPosting
OpenAI Releases Codex

Codex CLI is built for developers who already live in the terminal and want ChatGPT‑level reasoning plus the power to actually run code, manipulate files, and iterate – all under version control. In short, it’s chat‑driven development that understands and executes your repo.

• Zero setup — bring your OpenAI API key and it just works!

• Full auto-approval, while safe + secure by running network-disabled and directory-sandboxed

• Multimodal — pass in screenshots or diagrams to implement features

• Fully open-source

Usage examples:

Interactive REPL:
codex

Initial prompt for interactive REPL:
codex "fix lint errors”

Non-interactive "quiet mode":
codex -q --json "explain utils.ts"

OpenAI Codex

🄳🄾🄾🄼🄿🄾🅂🅃🄸🄽🄶