RT @TheAhmadOsman: the Buy a GPU website & guide is launching this week
so, what should you expect? https://t.co/e36YLjAdoo
RT @TheAhmadOsman: > today this guy axes FAIR at Meta
> so this is a quick recap of his origin story
> and why he should not be the one
> making that decision
> Alexandr Wang, born January 1997
> age 19, drop out of MIT
> co-found Scale AI
> "what if we label data, but mid?"
> convince every LLM company that this is fine
> 2016–2023
> flood the market with barely-labeled goat photos and out-of-context Reddit takes
> call it “foundational data”
> raise billions
> valuation hits $7.3B
> everyone claps
> 2025
> sell Scale AI to Meta for $14B
> not a typo.
> fourteen. billion. dollars.
> join Meta as Chief AI Officer
> rename division to Meta Superintelligence Labs
> start saying things like “AGI by 2027” in interviews
> meanwhile, researchers:
> "the data from Scale is trash"
> models hallucinate goat facts and mislabel wheelchairs as motorcycles
> AI alignment folks are malding
> i am Alexandr. unbothered. moisturized. thriving.
> ranked #1 in Time's Top Grifters of All Time
> beat out SBF, Elizabeth Holmes, and your favorite VC
> literally built an empire out of copy-pasted Amazon Mechanical Turk tasks
> mfw I labeled 4chan posts for pennies and turned it into a 14B exit
> mfw I am now leading Meta's quest for godlike AI
> mfw data quality was never part of the business model
> never bet against the grind
RT @TheAhmadOsman: all the snarky replies i get about how local models “don’t stand a chance”
make one thing clear
people are still judging based on LLaMA 2
if they touched Qwen 3 32B or 30B‑A3B for even a second,
they’d realize they’re stuck in 2023
open models have gotten SO GOOD
RT @TheAhmadOsman: i love having my own private UNRESTRICTED COMPUTE
Buy a GPU https://t.co/6r8c46owH7
RT @TheAhmadOsman: Feynman was right.
In a world of rented APIs and black-box models,
one truth remains:
> “What I cannot create, I do not understand.”
This fall, in this timeline:
> Buy a GPU
> Learn LLMs
Understand the machine.
Create with it. https://t.co/9C438y6n7a
RT @TheAhmadOsman: i built a simple tool that makes
Claude Code work with any local LLM
full demo:
> vLLM serving GLM-4.5 Air on 4x RTX 3090s
> Claude Code generating code + docs via my proxy
> 1 Python file + .env handles all requests
> nvtop showing live GPU load
> how it all works
Buy a GPU https://t.co/7nYsId4Uyu
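the proxy file itself isn't in the tweet, so what follows is a minimal hypothetical sketch of that translation layer, not the actual tool: Claude Code speaks the Anthropic Messages API, vLLM exposes an OpenAI-compatible endpoint, and one small Python file maps between them. the model id, port, and env-var names below are assumptions.

```python
# hypothetical sketch of an Anthropic -> OpenAI proxy, NOT the tool from the demo.
# run with: uvicorn proxy:app --port 8080
# then point Claude Code at it via ANTHROPIC_BASE_URL=http://localhost:8080
import os

import httpx
from fastapi import FastAPI, Request

VLLM_URL = os.getenv("VLLM_URL", "http://localhost:8000/v1/chat/completions")
MODEL = os.getenv("LOCAL_MODEL", "zai-org/GLM-4.5-Air")  # assumed model id

app = FastAPI()


def _flatten(content):
    # Anthropic content may be a plain string or a list of typed blocks
    if isinstance(content, list):
        return "".join(b.get("text", "") for b in content if isinstance(b, dict))
    return content


@app.post("/v1/messages")
async def messages(request: Request):
    body = await request.json()

    # map Anthropic-style messages into OpenAI chat format
    msgs = []
    system = _flatten(body.get("system"))
    if system:
        msgs.append({"role": "system", "content": system})
    for m in body.get("messages", []):
        msgs.append({"role": m["role"], "content": _flatten(m["content"])})

    # forward to the local vLLM OpenAI-compatible endpoint
    async with httpx.AsyncClient(timeout=600.0) as client:
        r = await client.post(VLLM_URL, json={
            "model": MODEL,
            "messages": msgs,
            "max_tokens": body.get("max_tokens", 1024),
        })
    reply = r.json()["choices"][0]["message"]["content"]

    # wrap the completion back into an Anthropic-style response envelope
    return {
        "id": "msg_local",
        "type": "message",
        "role": "assistant",
        "model": MODEL,
        "content": [{"type": "text", "text": reply}],
        "stop_reason": "end_turn",
    }
```

a real version also needs streaming and tool-call translation; this only covers the core request/response mapping. on the serving side, the 4x 3090 setup from the demo would be launched with something like `vllm serve zai-org/GLM-4.5-Air --tensor-parallel-size 4`.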
RT @TheAhmadOsman: working on getting nanochat training running with TT‑NN
the more i push my single Tenstorrent QuietBox Blackhole,
the more i see just how much headroom this thing has
counting down until my 4x TT‑QuietBox Blackhole cluster arrives
this cluster's going to be an absolute beast https://t.co/lN9VsITgDs
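for anyone unfamiliar, TT‑NN is Tenstorrent's torch-like tensor library that the nanochat port targets. a minimal smoke test of the stack, sketched against the public ttnn API (not his training code), looks roughly like this:

```python
# tiny TT-NN smoke test: one bf16 matmul on a single Tenstorrent device
# (a sketch against the public ttnn API; the nanochat port is not shown here)
import torch
import ttnn

device = ttnn.open_device(device_id=0)

# host tensors -> tiled bf16 tensors resident on the device
a = ttnn.from_torch(torch.randn(32, 32), dtype=ttnn.bfloat16,
                    layout=ttnn.TILE_LAYOUT, device=device)
b = ttnn.from_torch(torch.randn(32, 32), dtype=ttnn.bfloat16,
                    layout=ttnn.TILE_LAYOUT, device=device)

c = ttnn.matmul(a, b)          # executes on the Tensix cores
print(ttnn.to_torch(c).shape)  # back to host: torch.Size([32, 32])

ttnn.close_device(device)
```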
RT @TheAhmadOsman: the Tenstorrent QuietBox Blackhole
> is a 3.2 Tb/s Ethernet mesh
> that pools memory
> and scales almost linearly
> when you daisy‑chain more boxes
the TT-QuietBox Blackhole comes with
> ~80 lbs liquid-cooled chassis
> AMD EPYC 8124P, 16c/32t
> 512 GB DDR5 ECC
> 4 TB NVMe
> ASRock Rack SIENAD8‑2L2T w/ 2x 10 GbE + IPMI
> 4x Blackhole p150c cards, totalling:
> 560 Tensix Cores
> 64 “big” RISC-V cores
> 128 GB GDDR6
> 840 MB On‑Chip SRAM
> 3.2 Tb/s Ethernet mesh
> 16x QSFP‑DD 800G ports for card⇔card comms
> 8x passive direct‑attach copper (DAC) cables (0.6m)
> all of this is powered by a single
> 1650W Platinum PSU, passively cooled
> ready to daisy-chain to the next QuietBox
> also, open-source stack (TT‑Forge → TT‑NN → TT‑Metalium)
the interconnect is the star
> what does “4x QSFP‑DD 800G” per card actually mean?
> QSFP‑DD = Quad Small Form‑Factor Pluggable — Double Density
> 8 electrical lanes per port
> ~100 Gb/s per lane using PAM4 signalling
> total: 800 Gb/s full‑duplex per port → ~100 GB/s usable each way after Ethernet framing + FEC
each card talks directly to its siblings over QSFP‑DD 800G
> 4 ports per card x 800 Gb/s each =
> 3.2 Tb/s of aggregate bidirectional fabric per card
> 16 ports total per “quietbox” =
> 3.2 Tb/s internal mesh across all 4 cards
> this is your NVLink replacement
> no PCIe bottlenecks, no host-side relays
> just a true east-west ethernet fabric
there’s a hard rule
> the QSFP‑DD 800G ports are passive
> they only connect to other Blackhole cards via direct‑attach copper (DAC)
> max length = 2 meters; no optics, no switches, no uplinks to your Ethernet fabric
> Blackhole fabric is its own world: card⇔card, box⇔box, nothing else
daisy-chain the DACs and you're all set; add more boxes and enjoy the 3.2 Tb/s Ethernet mesh that pools memory and scales almost linearly (the port math is sketched below)
pretty sleek hardware UX, more soon
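the lane math behind those numbers is simple enough to check; a back-of-envelope sketch using only the figures from the thread:

```python
# back-of-envelope for the QSFP-DD 800G numbers in the thread above
lanes_per_port = 8    # QSFP-DD = double density: 8 electrical lanes per port
gbps_per_lane = 100   # PAM4 signalling, ~100 Gb/s per lane

port_gbps = lanes_per_port * gbps_per_lane   # 800 Gb/s full-duplex per port
port_gbytes = port_gbps / 8                  # ~100 GB/s each way, before framing + FEC

ports_per_card = 4
card_fabric_tbps = ports_per_card * port_gbps / 1000  # 3.2 Tb/s of fabric per card

print(f"{port_gbps} Gb/s per port, ~{port_gbytes:.0f} GB/s usable each way, "
      f"{card_fabric_tbps} Tb/s aggregate per card")
```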
RT @TheAhmadOsman: My house has 33 GPUs.
> 21x RTX 3090s
> 4x RTX 4090s
> 4x RTX 5090s
> 4x Tenstorrent Blackhole p150a
Before AGI arrives:
Acquire GPUs.
Go into debt if you must.
But whatever you do, secure the GPUs. https://t.co/8U89OStknt