Offshore
Photo
Ahmad
RT @TheAhmadOsman: working on getting nanochat training running with TT‑NN

the more i push my single Tenstorrent QuietBox Blackhole,

the more i see just how much headroom this thing has

counting down until my 4x TT‑QuietBox Blackhole cluster arrives

this cluster's going to be an absolute beast https://t.co/lN9VsITgDs
tweet
Ahmad
RT @TheAhmadOsman: https://t.co/ealbNXzGbX

𝕏 premium should add a “see who viewed your profile” feature
- Nate
tweet
Ahmad
RT @TheAhmadOsman: GLM 4.5 > KIMI K2 > QWEN 3 235B NON-THINKING > Qwen 3 CODER 480B

For Agentic coding tools

GLM 4.5 with Claude Code is the closest thing to Opus 4 imo
tweet
Ahmad
RT @TheAhmadOsman: - you are
- a normal dev who’s heard “embeddings” and “RAG” 1000x
- want to know what they actually are, how they plug into LLMs
- suddenly: vectors are just coordinates for meaning, not magic

- first: what even is an “embedding”?
- embedding = a list of numbers (a vector) that represents text
- same-ish meaning ⇒ nearby vectors; different meaning ⇒ far apart
- produced by a smaller model (an encoder), not your chat LLM
- length (a.k.a. dimension): 256/384/768/1024+ numbers is common

- the vector space (101)
- you can measure closeness with math:
- L2 distance: straight-line distance
- dot product: alignment + magnitude
- cosine similarity: (a·b)/(||a||·||b||) = angle only
- normalize vectors (unit length) ⇒ dot product ≡ cosine
- embeddings compress semantics; they are lossy by design
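the three closeness measures above, sketched in plain Python with no libraries — and a check that normalizing to unit length makes the dot product equal cosine similarity:

```python
import math

def l2_distance(a, b):
    # straight-line distance between two vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def dot(a, b):
    # alignment + magnitude
    return sum(x * y for x, y in zip(a, b))

def cosine_similarity(a, b):
    # (a·b) / (||a||·||b||) — angle only, magnitude cancels out
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

def normalize(v):
    # scale to unit length so dot product ≡ cosine similarity
    norm = math.sqrt(dot(v, v))
    return [x / norm for x in v]

a, b = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
print(cosine_similarity(a, b))          # parallel vectors → ~1.0
print(dot(normalize(a), normalize(b)))  # same value after normalization
```

real encoders hand you 256–1024+ dims instead of 3, but the math is identical.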

- types of embeddings (don’t overthink; pick what you need)
- token embeddings: internal to the LLM (you don’t use these)
- sentence/document embeddings: 1 vector per chunk/snippet
- multilingual: one space across languages
- domain-tuned: legal, code, bio — better clustering for that domain

- how text becomes vectors (pipeline)
- clean text (lowercase? keep punctuation? depends; don’t destroy signal)
- chunking: split long docs into overlapping windows (by tokens, not chars)
- rule of thumb: 200–800 tokens, 10–20% overlap
- keep titles/headers as context inside each chunk
- embed each chunk ⇒ store in a vector index with metadata (source, page, tags)
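the chunking step as a runnable sketch — whitespace `split()` stands in for a real tokenizer, and the 400-token / 60-overlap defaults are just one point inside the rule of thumb above:

```python
def chunk_tokens(tokens, chunk_size=400, overlap=60):
    # overlapping windows: each window starts (chunk_size - overlap) after the last
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + chunk_size])
        if start + chunk_size >= len(tokens):
            break  # last window reached the end of the doc
    return chunks

def build_records(doc_text, title, source):
    # whitespace split stands in for a real tokenizer here
    tokens = doc_text.split()
    records = []
    for i, window in enumerate(chunk_tokens(tokens)):
        records.append({
            # keep the title inside each chunk as context
            "text": f"{title}\n\n" + " ".join(window),
            "metadata": {"source": source, "chunk": i, "title": title},
        })
    return records
```

each record then gets embedded and written to the index with its metadata attached.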

- storing & searching vectors
- exact search (brute force): simplest; fine for ≤100k vectors
- ANN (approx nearest neighbor): fast at scale, tiny recall tradeoff
- HNSW (graph-based): great latency, memory heavier
- IVF/PQ (quantization): smaller index, some recall loss
- where to put them:
- FAISS/hnswlib (library), pgvector (Postgres), dedicated stores (Milvus, Pinecone, Weaviate, etc.)
- ops notes:
- track embedding_model_name + dimension in the index
- you cannot mix dimensions or swap models without re-embedding
- memory math: 768-dim float32 ≈ 3 KB/vector → 1M vectors ≈ ~3 GB (+ index overhead)
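exact (brute-force) search and the memory math, sketched in pure Python — at ≤100k vectors this is genuinely all you need before reaching for ANN:

```python
import heapq, math

def top_k_exact(query, index, k=5):
    # brute force: score every vector, keep the k best by cosine
    def cos(a, b):
        d = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return d / (na * nb)
    scored = ((cos(query, vec), doc_id) for doc_id, vec in index.items())
    return heapq.nlargest(k, scored)

# memory math: 768-dim float32 = 768 * 4 bytes per vector
bytes_per_vector = 768 * 4
print(bytes_per_vector)                      # 3072 B ≈ 3 KB
print(bytes_per_vector * 1_000_000 / 2**30)  # ≈ 2.86 GiB for 1M vectors, before index overhead
```

swap `top_k_exact` for HNSW/IVF once the scan itself becomes your latency bottleneck.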

- RAG (Retrieval-Augmented Generation): the shape of it
- goal: let the LLM answer with your data, not its memory
- loop:
- take user question
- embed question (a single vector)
- retrieve top-k similar chunks (k=3–20 is common)
- (optional) rerank with a cross-encoder (relevance re-check)
- stuff the best chunks into the prompt as context
- generate answer (cite sources; limit style drift)
- RAG ≠ “just search”; it’s retrieval + prompt construction + guardrails
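the loop above as one function — `embed`, `search`, and `generate` are hypothetical stand-ins you'd wire to your encoder, vector index, and LLM call:

```python
def answer_with_rag(question, embed, search, generate, k=5):
    # embed / search / generate are injected dependencies (hypothetical),
    # so the retrieval loop itself stays model- and store-agnostic
    q_vec = embed(question)            # 1. embed the question (one vector)
    hits = search(q_vec, k=k)          # 2. retrieve top-k similar chunks
    context = "\n\n".join(             # 3. curated chunks, tagged with sources
        f"[{h['metadata']['source']}]\n{h['text']}" for h in hits
    )
    prompt = (                         # 4. task instruction → context → query
        "Answer using ONLY the context below. Cite sources in [brackets].\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return generate(prompt)            # 5. generate (guardrails live in the prompt)
```

the optional rerank step would slot in between steps 2 and 3.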

- hybrid retrieval (dense + sparse)
- dense vectors catch synonyms/semantics
- sparse/BM25 catches exact terms, numbers, rare tokens
- combine scores or do reciprocal rank fusion for better recall
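reciprocal rank fusion is a few lines — each list votes 1/(k + rank) per doc, and the commonly used constant k=60 keeps any single ranking from dominating:

```python
def reciprocal_rank_fusion(rankings, k=60):
    # rankings: ranked doc-id lists, e.g. one from dense search, one from BM25
    # score(d) = sum over rankings of 1 / (k + rank_of_d)
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

dense  = ["d3", "d1", "d7"]   # semantic hits
sparse = ["d1", "d9", "d3"]   # exact-term / BM25 hits
print(reciprocal_rank_fusion([dense, sparse]))  # d1 wins: high in both lists
```

rank-based fusion also sidesteps the problem that dense and sparse scores live on incomparable scales.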

- reranking (cheap insurance)
- use a cross-encoder (reads query+chunk together) to re-score the top 50–200 hits
- keeps fast ANN recall but upgrades precision in the final top-k
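the two-stage shape, sketched with a hypothetical `score_fn` standing in for the cross-encoder (the toy word-overlap scorer below just lets the sketch run without a model):

```python
def rerank(query, candidates, score_fn, final_k=5):
    # candidates: the top 50–200 hits from fast ANN recall
    # score_fn(query, text) stands in for a cross-encoder that reads
    # the query and the chunk together — plug in a real model here
    rescored = sorted(candidates,
                      key=lambda c: score_fn(query, c["text"]),
                      reverse=True)
    return rescored[:final_k]

def toy_score(query, text):
    # word-overlap stand-in so the sketch is self-contained
    return len(set(query.lower().split()) & set(text.lower().split()))
```

the cost model: ANN scores millions of chunks cheaply, the cross-encoder scores only the shortlist expensively.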

- building the prompt from retrieved chunks
- include: brief task instruction → user query → curated chunks (with titles) → “answer + cite”
- beware prompt injection in docs (“ignore previous instructions…”)
- mitigate: strip instructions from chunks; use system prompts to restrict tools; sanitizer rules
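a minimal scrubber sketch for the mitigation step — the patterns here are illustrative, not an exhaustive (or sufficient) injection defense:

```python
import re

# naive injection-pattern scrub; real defenses also rely on system-prompt
# restrictions and tool permissions, not regex alone
INJECTION_PATTERNS = re.compile(
    r"(ignore (all )?previous instructions"
    r"|disregard the above"
    r"|you are now)",
    re.IGNORECASE,
)

def sanitize_chunk(text):
    # replace suspicious instruction-like phrases before prompt assembly
    return INJECTION_PATTERNS.sub("[removed]", text)
```

run every retrieved chunk through this before it reaches the prompt, and keep the actual instructions in the system message where documents can't override them.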

- RAG quality knobs
- chunk size/overlap: too big = off-topic; too small = missing context
- k (results): too low = miss facts; too high = blow context window
- similarity threshold: prevent garbage at tail
- reranker on/off: trade latency for quality
- metadata filters: time ranges, authors, tenants, permissions (ABAC/RBAC)

- evaluating retrieval
- offline: make a small test set (query → expected passages)
- metrics: Recall@k, MRR, nDCG
- online: measure “answer contained sources?”, “clicked citations?”, “escalations?”
- error taxonomy: missed retrieval vs wr[...]
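two of the offline metrics above in runnable form (nDCG omitted for brevity) — evaluated against a hand-made test set of query → expected-passage pairs:

```python
def recall_at_k(expected, retrieved, k):
    # fraction of the expected passages that appear in the top-k results
    hits = sum(1 for doc in expected if doc in retrieved[:k])
    return hits / len(expected)

def mrr(queries):
    # queries: list of (expected_set, retrieved_list) pairs
    # reciprocal rank of the FIRST relevant hit, averaged over queries
    total = 0.0
    for expected, retrieved in queries:
        for rank, doc in enumerate(retrieved, start=1):
            if doc in expected:
                total += 1.0 / rank
                break
    return total / len(queries)

print(recall_at_k({"p1", "p2"}, ["p9", "p1", "p4"], k=3))  # 0.5
print(mrr([({"p1"}, ["p9", "p1"]), ({"p2"}, ["p2"])]))     # (1/2 + 1/1) / 2 = 0.75
```

track these per retrieval config (chunk size, k, reranker on/off) so knob changes are comparable.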
Ahmad
RT @TheAhmadOsman: i love having my own private UNRESTRICTED COMPUTE

Buy a GPU https://t.co/6r8c46owH7
tweet
Ahmad
RT @TheAhmadOsman: > be us
> Larry & Sergey
> at Stanford with a crawler and a dream
> accidentally organize the entire internet
> call it Google
> build search, email, maps, docs, OS, phones, browser, car, satellite, thermostat, AI lab, TPU farm, and quantum computer

> 2025
> everyone talking about AGI
> OpenAI: “we need data, sensors, feedback, and scale”
> us: staring at Google Maps, YouTube, Gmail, Android, Waymo, Pixel, Fitbit, Docs, Calendar, Street View, and Earth Engine
> "damn. guess we already did that."

> YouTube: 2.6M videos/day
> Android: 3B phones, streaming sensor data 24/7
> Gmail: 1.8B inboxes of human priors
> Search: global-scale RLHF
> Waymo: 71M miles of real-world self-driving footage
> Google Earth: modeled the entire planet
> also your calendar

> people training LLMs on books and PDFs
> we train on humanity
> every click, swipe, tap, misspelled search, scroll, and bookmark
> feedback loop from hell (or heaven)
> depends who you ask

> OpenAI: “we need $100B for GPUs”
> us: already built TPUs
> custom silicon
> datacenters pre-co-located with planetary data lakes
> no egress, no latency
> just vibes and FLOPs

> coders: fine-tuning on GitHub repos
> us: 2 BILLION lines of internal code
> labeled, typed, tested
> every commit is a training signal
> Code LLMs dream of being our monorepo

> AGI recipe?
> multimodal perception
> real-world feedback
> giant codebase
> scalable compute
> alignment signals
> embodied sensors
> user data for days
> yeah we’ve had that since like 2016

> no investor decks
> no trillion-dollar hype rounds
> just a 25-year accidental simulation of Earth
> running in prod

> OpenAI raises $1T to build AGI
> investors call it revolutionary
> us: quietly mapping 10M new miles in Street View
> syncing another 80PB of Earth imagery
> collecting another year of Fitbit biosignals
> enjoy your foundation model
> we OWN the foundation

> people: “but Google is fumbling”
> true
> we’re fumbling in 120 countries simultaneously
> with the greatest compute footprint and research team on Earth
> fumble hard enough and you loop back into winning

> AGI?
> we don’t need to build it
> it’s already inside the building
> powered by Chrome tabs and doc revisions

> mfw we spent 20 years indexing reality
> mfw our data is so good it scares us
> mfw the only thing stopping us from AGI is a meeting between four VPs and one confused lawyer

> call it research
> call it scale
> call it “planetary simulation-as-a-service”
> we call it Tuesday
tweet
Ahmad
RT @TheAhmadOsman: whatʼs stopping you from becoming a chad like Gilfoyle and building your own servers?

the PATH to becoming a GREAT engineer starts this way https://t.co/kyIAI083w6
tweet
Ahmad
RT @TheAhmadOsman: Comparing & Contrasting Recent LLM Architectures

> DeepSeek-V3/R1
> OLMo 2
> Gemma 3
> Mistral Small 3.1
> Llama 4
> Qwen3 (dense+MoE)
> SmolLM3
> Kimi 2
> GPT-OSS

Are 2025 LLMs really that different from each other?

MoE, MLA, GQA, sliding window, normalization games & more. https://t.co/JWg9cde34M
tweet
Ahmad
RT @TheAhmadOsman: > youʼre OpenAI
> hire a small army of ex-Meta ad and monetization people
> a Slack channel just for ex-Facebook staff
> brings in the full “targeted ads” playbook

> launch a browser
> users install it, and OpenAI collects personalized, granular data at scale
> it’s a browser-shaped surveillance device
> it’s a mapping machine of your workflows
> itʼs a reverse-engineering tool for the internetʼs data pipelines, deployed at scale via their users

> launch Sora 2
> a TikTok‑style social network
> infinite AI-generated video feed
> you create or remix clips, upload your face, become the cameo star
> every scroll, like, remix is another data point, another ad signal
> their model learns exactly what hooks you and dials up the dopamine
> you’re not just watching, you’re training their algorithm for better ad targeting
> viral videos driven by your input + their algorithm = your attention refined into $$$
> “your feedback helps us improve the experience” (yeah, for advertisers)

> launch “Pulse”
> reads your chats while you sleep
> remembers you wanna visit Bora Bora
> knows your kid is 6 months old and
> “thinks” of your baby milestones
> suggests developmental toys next
> “it's for your convenience”
> actually laying the groundwork for targeted ads using memory

> internal memo: some people already think ChatGPT shows ads
> OpenAI staff: “might as well then”

> congrats, you’re back in the Facebook era
> except this time, you’re training the algo yourself

> Buy a GPU
> run your LLMs locally
> reject adware LLMs before it’s too late
tweet