Offshore
Photo
Ahmad
RT @TheAhmadOsman: My house has 33 GPUs.

> 21x RTX 3090s
> 4x RTX 4090s
> 4x RTX 5090s
> 4x Tenstorrent Blackhole p150a

Before AGI arrives:

Acquire GPUs.

Go into debt if you must.

But whatever you do, secure the GPUs. https://t.co/8U89OStknt
tweet
Offshore
Video
Ahmad
RT @TheAhmadOsman: can’t write code because Cursor and Codex are both down thanks to the aws-us-east-1 outage?

tired of Anthropic’s weekly limits and nerfed models?

with one command and a few GPUs,
you can route Claude Code to your own local LLM with ZERO downtime

Buy a GPU https://t.co/aj8r201V83

i built a simple tool that makes

Claude Code work with any local LLM

full demo:
> vLLM serving GLM-4.5 Air on 4x RTX 3090s
> Claude Code generating code + docs via my proxy
> 1 Python file + .env handles all requests
> nvtop showing live GPU load
> how it all works

Buy a GPU https://t.co/7nYsId4Uyu
- Ahmad
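
The tweet doesn't include the proxy's source, so here is a minimal, hypothetical sketch of the idea only (not Ahmad's actual tool): a single Python file that accepts Anthropic-style /v1/messages requests from Claude Code and forwards them to a local OpenAI-compatible server such as vLLM, with the endpoint and model name read from a .env file. LOCAL_LLM_URL and LOCAL_LLM_MODEL are made-up names; a real proxy would also need streaming and tool-call handling.

# Hypothetical sketch only: Anthropic-style requests in, OpenAI-compatible requests out.
import os
import httpx
from dotenv import load_dotenv
from fastapi import FastAPI, Request

load_dotenv()  # e.g. LOCAL_LLM_URL=http://localhost:8000/v1, LOCAL_LLM_MODEL=GLM-4.5-Air
LOCAL_URL = os.environ["LOCAL_LLM_URL"]
LOCAL_MODEL = os.environ["LOCAL_LLM_MODEL"]

app = FastAPI()

@app.post("/v1/messages")
async def messages(request: Request):
    body = await request.json()

    # Map the Anthropic Messages payload onto OpenAI chat-completions messages.
    msgs = []
    system = body.get("system")
    if isinstance(system, list):  # Anthropic allows a list of content blocks
        system = "".join(b.get("text", "") for b in system)
    if system:
        msgs.append({"role": "system", "content": system})
    for m in body["messages"]:
        content = m["content"]
        if isinstance(content, list):
            content = "".join(b.get("text", "") for b in content if isinstance(b, dict))
        msgs.append({"role": m["role"], "content": content})

    # Forward to the local server (vLLM, llama.cpp server, etc).
    async with httpx.AsyncClient(timeout=600) as client:
        r = await client.post(
            f"{LOCAL_URL}/chat/completions",
            json={"model": LOCAL_MODEL, "messages": msgs,
                  "max_tokens": body.get("max_tokens", 1024)},
        )
    text = r.json()["choices"][0]["message"]["content"]

    # Return a minimal Anthropic-shaped response (non-streaming, text-only, illustration only).
    return {
        "id": "msg_local",
        "type": "message",
        "role": "assistant",
        "model": LOCAL_MODEL,
        "content": [{"type": "text", "text": text}],
        "stop_reason": "end_turn",
        "usage": {"input_tokens": 0, "output_tokens": 0},
    }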
tweet
Dimitry Nakhla | Babylon Capital®
RT @DimitryNakhla: 5 High-Quality Stocks With Good CAGR Potential Assuming Reasonable Multiples 💵

📦 Amazon $AMZN
•2028E EPS: $11.01
•Multiple: 29x
•CAGR: +12%

💰 S&P Global $SPGI
•2028E EPS: $24.41
•Multiple: 28x
•CAGR: +13%

🏦 Fair Isaac $FICO
•2028E EPS: $63.60
•Multiple: 36x
•CAGR: +13%

✈️ Booking Holdings $BKNG
•2028E EPS: $344.04
•Multiple: 23x
•CAGR: +15%

🫱🏼‍🫲🏻 MercadoLibre $MELI
•2028E EPS: $123.53
•Multiple: 30x
•CAGR: +18%
_______

*Estimates can change
tweet
Offshore
Photo
Investing visuals
Presenting you my first deep dive, covering $RBRK:

• Founder led
• Mission-critical
• Growing over 50%
• Named 6x data protection leader

This is the story of a business that evolved from simple backups to an industry-leading cyber resilience platform.

Let’s dive in! (~25 min. read) 🧵👇
tweet
Offshore
Photo
Dimitry Nakhla | Babylon Capital®
This week’s key scheduled reports 🗓️

Let’s dive into the earnings expectations, valuations, & business segments for 20 quality stocks reporting this week 🧵 https://t.co/7ea4rIBaDL
tweet
Offshore
Photo
Clark Square Capital
Just shared a new write-up on a US-listed stock with a very asymmetric return profile.

Be sure to check it out.

Thanks for reading! https://t.co/HjIZ6DeXcj
tweet
Clark Square Capital
Ok, guys. It's been about a month since the last idea thread. What's a good prompt for the next one? I will pick the best one and use that.
tweet
Offshore
Photo
Dimitry Nakhla | Babylon Capital®
RT @DimitryNakhla: 10 Quality Stocks Offering 33% Higher FCF Yield Than the S&P 500 (LTM) 💵

1. $MA 3.18%
2. $DHR 3.18%
3. $INTU 3.21%
4. $V 3.30%
5. $SPGI 3.61%
6. $ADP 4.19%
7. $CSU 4.21%
8. $ICE 4.94%
9. $ABNB 5.37%
10. $BKNG 5.54%
—-

$SPY FCF Yield 2.35% (LTM)

$SPY The S&P 500 free cash flow yield currently sits at 2.35%. https://t.co/MQYaZZGvNE
- Koyfin
tweet
Offshore
Photo
Ahmad
I usually let things slide, but this isn't the first time Ahmed has criticized me, and each time his words get heavier and harder to let pass.

The problem is that there's no clear criticism in the first place. If my account is doing harm, block me or unfollow. If your goal is to help and advise me, send a DM, whether it's advice or a request for clarification; it wouldn't be the first time we've talked. But to cry "clout" and mock, without even critiquing the content itself in any depth or hearing each other out?... Brother, find seventy excuses for your brother.

My reply to him is clear, and only God knows people's intentions, so let's go easy on each other and assume good faith.

Ahmad started out well, and you genuinely wanted to hear what he had to say. Then the focus shifted to reach and clout (I know it's properly "clout", but "klout" lands better).

So all the talk has turned into populist content made for consumption, with no real benefit to whoever consumes it.

I wish Ahmad the researcher had stayed, or that there were at least some balance between the drum-beating and the real value.
- Ahmed
tweet
Ahmad
RT @TheAhmadOsman: - local llms 101

- running a model = inference (using model weights)
- inference = predicting the next token based on your input plus all tokens generated so far
- together, these make up the "sequence"

- tokens ≠ words
- they're the chunks representing the text a model sees
- they are represented by integers (token IDs) in the model
- "tokenizer" = the algorithm that splits text into tokens
- common types: BPE (byte pair encoding), SentencePiece
- token examples:
- "hello" = 1 token or maybe 2 or 3 tokens
- "internationalization" = 5–8 tokens
- context window = max tokens model can "see" at once (2K, 8K, 32K+)
- longer context = more VRAM for KV cache, slower decode
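
A quick sketch of the text → token IDs → text round trip described above, assuming the Hugging Face transformers library and GPT-2's BPE tokenizer purely as an example (token counts differ per tokenizer):

# pip install transformers
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")      # a BPE tokenizer, used here only as an example
ids = tok.encode("hello internationalization")   # text -> token IDs (integers)
print(ids)                                       # a handful of integers, not one per word
print(tok.convert_ids_to_tokens(ids))            # the subword chunks the model actually sees
print(tok.decode(ids))                           # IDs -> text round trip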

- during inference, the model predicts next token
- by running lots of math on its "weights"
- model weights = billions of learned parameters (the knowledge and patterns from training)

- model parameters: usually billions of numbers (called weights) that the model learns during training
- these weights encode all the model's "knowledge" (patterns, language, facts, reasoning)
- think of them as the knobs and dials inside the model, specifically computed to recognize what could come next
- when you run inference, the model uses these parameters to compute its predictions, one token at a time

- every prediction is just: model weights + current sequence → probabilities for what comes next
- pick a token, append it, repeat, each new token becomes part of the sequence for the next prediction
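
That loop is short enough to write out. A minimal sketch with transformers + PyTorch, using GPT-2 as a stand-in model and greedy decoding for clarity:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

seq = tok.encode("The capital of France is", return_tensors="pt")
for _ in range(10):
    with torch.no_grad():
        logits = model(seq).logits               # weights + current sequence -> scores over the vocab
    next_id = logits[0, -1].argmax()             # greedy: take the single most likely next token
    seq = torch.cat([seq, next_id.view(1, 1)], dim=1)  # append it and repeat
print(tok.decode(seq[0]))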

- models are more than weight files
- neural network architecture: transformer skeleton (layers, heads, RoPE, MQA/GQA, more below)
- weights: billions of learned numbers (parameters, not "tokens", but calculated from tokens)
- tokenizer: how text gets chunked into tokens (BPE/SentencePiece)
- config: metadata, shapes, special tokens, license, intended use, etc
- sometimes: a chat template is required for chat/instruct models, or else you get gibberish
- you give a model a prompt (your text, converted into tokens)

- models differ in parameter size:
- 7B means ~7 billion learned numbers
- common sizes: 7B, 13B, 70B
- bigger = stronger, but eats more VRAM/memory & compute
- the model computes a probability for every possible next token (softmax over vocab)
- picks one: either the highest (greedy) or
- samples from the probability distribution (temperature, top-p, etc)
- then appends that token to the sequence, then repeats the whole process
- this is generation:
- generate: predict, sample, append
- over and over, one token at a time
- rinse and repeat
- each new token depends on everything before it; the model re-reads the sequence every step

- generation is always stepwise: token by token, not all at once
- mathematically: model is a learned function, f_θ(seq) → p(next_token)
- all the "magic" is just repeating "what's likely next?" until you stop

- all conversation "tokens" live in the KV cache, or the "session memory"

- so what's actually inside the model?
- everything above (tokens, weights, config) is just setup for the real engine underneath

- the core of almost every modern llm is a transformer architecture
- this is the skeleton that moves all those numbers around
- it's what turns token sequences and weights into predictions
- designed for sequence data (like language),
- transformers can "look back" at previous tokens and
- decide which ones matter for the next prediction

- transformers work in layers, passing your sequence through the same recipe over and over
- each layer refines the representation, using attention to focus on the important parts of your input and context
- every time you generate a new token, it goes through this stack of layers, every single step

- inside each transformer layer:
- self-attention: figures out which previous tokens are important to the current prediction
- MLPs (multi-layer perceptrons): further process token representations, adding non-linearity and expressiveness
- layer norms and residuals: stabilize learning and prediction, making deep networks possible
- positional encodings (like RoPE): tell the model where each token sits in the sequence
- so "cat" and "catastrophe" aren't confused by position

- by stacking these layers (sometimes dozens or even hundreds)
- transformers build a complex understanding of your prompt, context, and conversation history

- transformer recap:
- decoder-only: model only predicts what comes next, each token looks back at all previous tokens
- self-attention picks what to focus on (MQA/GQA = efficient versions for less memory)
- feed-forward MLP after attention for every token (usually 2 layers, GELU activation)
- everything's wrapped in layer norms + linear layers (QKV projections, MLPs, outputs)
- residuals + norms = stable, trainable, no exploding/vanishing gradients
- RoPE (rotary embeddings): tells the model where each token sits in the sequence
- stack N layers of this → final logits → pick the next token
- scale up: more layers, more heads, wider MLPs = bigger brains
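
A toy, single-head causal self-attention step in plain PyTorch, only to make "each token looks back at all previous tokens" concrete (real layers add multiple heads, per-head projections, RoPE, residuals, and norms):

import torch

d = 16                                    # tiny hidden size, illustration only
seq = torch.randn(5, d)                   # 5 token representations already in the sequence
Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))   # stand-ins for learned weights

q, k, v = seq @ Wq, seq @ Wk, seq @ Wv
scores = (q @ k.T) / d ** 0.5             # how strongly each token attends to every other token
mask = torch.tril(torch.ones(5, 5)).bool()
scores = scores.masked_fill(~mask, float("-inf"))    # decoder-only: no peeking at future tokens
attn = torch.softmax(scores, dim=-1)
out = attn @ v                            # each position = weighted mix of previous tokens' values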

- VRAM: memory, the bottleneck
- VRAM must fit:
1. weights (main model, whether quantized or not)
2. KV cache (per token, per layer, per head)
- weights:
- FP16: ~2 bytes/param → 7B = ~14GB
- 8-bit: ~1 byte/param → 7B = ~7GB
- 4-bit: ~0.5 byte/param → 7B = ~3.5GB
- add 10–30% for runtime overheads
- KV cache:
- rule of thumb: 0.5MB per token (Llama-like 7B, 32 layers, 4K tokens = ~2GB)
- some runtimes support KV cache quantization (8/4-bit) = big savings
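
The memory rules of thumb above as back-of-the-envelope Python (illustrative numbers, not exact for any specific model or runtime):

params = 7e9                                   # a 7B model
for name, bytes_per_param in {"FP16": 2, "8-bit": 1, "4-bit": 0.5}.items():
    print(f"{name}: ~{params * bytes_per_param / 1e9:.1f} GB of weights (+10-30% runtime overhead)")

# KV cache, Llama-like 7B: 32 layers * 2 (K and V) * 4096 hidden dim * 2 bytes (FP16) per token
kv_mb_per_token = 32 * 2 * 4096 * 2 / 1e6      # ~0.5 MB per token
print(f"KV cache: ~{kv_mb_per_token:.2f} MB/token -> ~{kv_mb_per_token * 4096 / 1e3:.1f} GB at 4K context")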

- throughput = memory bandwidth + GPU FLOPs + attention implementation (FlashAttention/SDPA help) + quantization + batch size
- offload to CPU? expect MASSIVE slowdown

- GPU or bust: CPUs run quantized models (slow), but any real context/model needs CUDA/ROCm/Metal
- CPU spill = sadness (check device_map and memory fit)

- quantization: reduce precision for memory wins (sometimes a tiny quality hit)
- FP32/FP16/BF16 = full/half precision
- INT8/INT4/NF4 = quantized
- 4-bit (NF4/GPTQ/AWQ) = sweet spot for most consumer GPUs (big memory win, small quality hit for most tasks)
- math-heavy or finicky tasks degrade first (math, logic, coding)
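
A sketch of loading a model in 4-bit NF4 with transformers + bitsandbytes (the checkpoint name is just an example; pick anything sized for your VRAM):

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,   # compute in bf16 while weights stay 4-bit
)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct",              # example checkpoint; ~0.5 byte/param in 4-bit
    quantization_config=bnb,
    device_map="auto",                       # spills to CPU if it doesn't fit; expect a big slowdown
)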

- KV cache quantization: even more memory saved for long contexts (check runtime support)

- formats/runtimes:
- PyTorch + safetensors: flexible, standard, GPU/TPU/CPU
- GGUF (llama.cpp): CPU/GPU/portable, best for quant + edge devices
- ONNX, TensorRT-LLM, MLC: advanced flavors for special hardware/use
- protip: avoid legacy .bin (pickle risk), use safetensors for safety

- everything is a tradeoff
- smaller = fits anywhere, less power
- more context = more latency + VRAM burn
- quantization = speed/memory, but maybe less accurate
- local = more control/knobs, more work

- what happens when you "load a model"?
- download weights, tokenizer, config
- resolve license/trust (don't use trust_remote_code unless you really trust the author)
- load to VRAM/CPU (check memory fit)
- warmup: kernels/caches initialized, first pass is slowest
- inference: forward passes per token, updating KV cache each step
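
A small sketch of that sequence with transformers (GPT-2 as a stand-in; device_map="auto" needs the accelerate package), showing that the first, warmup pass is the slow one:

import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"                                        # stand-in for whatever checkpoint you run
tok = AutoTokenizer.from_pretrained(name)            # downloads tokenizer + config on first use
model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")  # weights into VRAM/CPU

ids = tok.encode("warmup", return_tensors="pt").to(model.device)
for label in ("first (warmup) pass", "steady-state pass"):
    start = time.time()
    with torch.no_grad():
        model(ids)                                   # kernels and caches get initialized on the first pass
    print(label, f"{time.time() - start:.3f}s")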

- decoding = how next token is chosen:
- greedy: always top-1 (robotic)
- temperature: softens or sharpens probabilities (higher = more random)
- top-k: pick from top k
- top-p: pick from smallest set with ≥p prob
- typical sampling, repetition penalty, no-repeat n-gram: extra controls
- deterministic = set a seed and no sampling
- tune for your use-case: chat, summarization, code
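
Those decoding knobs map directly onto transformers' generate() arguments; a sketch with GPT-2 as a stand-in (the values are arbitrary starting points, not recommendations):

from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok.encode("Once upon a time", return_tensors="pt")
out = model.generate(
    ids,
    max_new_tokens=40,
    do_sample=True,            # set False (and drop the knobs below) for greedy, deterministic output
    temperature=0.8,           # <1 sharpens, >1 flattens the distribution
    top_k=50,                  # only consider the 50 most likely tokens
    top_p=0.95,                # nucleus sampling: smallest set with >= 0.95 total probability
    repetition_penalty=1.1,
    no_repeat_ngram_size=3,
)
print(tok.decode(out[0], skip_special_tokens=True))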

- serving options?
- vLLM for high throughput, parallel serving
- llama.cpp server (OpenAI-compatible API)
- ExLlama V2/V3 w/ Tabby API (OpenAI-compatible API)
- run as a local script (CLI)
- FastAPI/Flask for local API endpoint
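
Whichever server you pick, the OpenAI-compatible ones all look the same from the client side. A sketch using the openai Python client against a local endpoint (model name and port are just examples; start the server first, e.g. with vLLM's "vllm serve" command):

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")
resp = client.chat.completions.create(
    model="Qwen/Qwen2.5-7B-Instruct",      # must match the model the local server loaded
    messages=[{"role": "user", "content": "Explain the KV cache in one sentence."}],
    temperature=0.7,
)
print(resp.choices[0].message.content)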

- local ≠ offline; run it, serve it, or build apps on top

- fine-tuning, ultra-brief:
- LoRA / QLoRA = adapter layers (efficient, minimal VRAM)
- still need a dataset and eval plan; adapters can be merged or kept separate
- most users get far with prompting + retrieval (RAG) or few-shot for niche tasks
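
A sketch of the adapter idea with the PEFT library (GPT-2 as a stand-in base; the hyperparameters are illustrative, not a recipe, and training still needs a dataset and an eval loop):

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(base, lora)           # wraps the base model, adds small trainable adapters
model.print_trainable_parameters()           # a tiny fraction of the base model's weights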

- common pitfalls
- OOM? out of memory. Model or context too big, quantize or shrink context
- gibberish? used a base model with a chat prompt, or wrong template; check temperature/top_p
- slow? offload to CPU, wrong drivers, no FlashAttention; check CUDA/ROCm/Metal, memory fit
- unsafe? don't use random .bin or trust_remote_code; prefer safetensors, verify source

- why run locally?
- control: all the knobs are yours to tweak:
- sampler, chat templates, decoding, system prompts, quantization, context
- cost: no per-token API billing, just upfront hardware
- privacy: prompts and outputs stay on your machine
- latency: no network roundtrips, instant token streaming

- challenges:
- hardware limits (VRAM/memory = max model/context)
- ecosystem variance (different runtimes, quant schemes, templates)
- ops burden (setup, drivers, updates)

- running local checklist:
- pick a model (prefer chat-tuned, sized for your VRAM)
- pick precision (4-bit saves RAM, FP16 for max quality)
- install runtime (vLLM, llama.cpp, Transformers+PyTorch, etc)
- run it, get tokens/sec, check memory fit
- use correct chat template (apply_chat_template)
- tune decoding (temp/top_p)
- benchmark on your task
- serve as local API (or go wild and fine-tune it)

- glossary:
- token: smallest unit (subword/char)
- context window: max tokens visible to model
- KV cache: session memory, per-layer attention state
- quantization: lower precision for memory/speed
- RoPE: rotary position embeddings (for order)
- GQA/MQA: efficient attention for memory bandwidth
- decoding: method for picking next token
- RAG: retrieval-augmented generation, add real info

- misc:
- common architectures: LLaMA, Falcon, Mistral, GPT-NeoX, etc
- base model: not fine-tuned for chat (LLaMA, Falcon, etc)
- chat-tuned: fine-tuned for dialogue (Alpaca, Vicuna, etc)
- instruct-tuned: fine-tuned for following instructions (LLaMA-2-Chat, Mistral-Instruct, etc)

- chat/instruct models usually need a special prompt template to work well
- chat template: system/user/assistant markup is required; wrong template = junk output
- base models can do few-shot chat prompting, but not as well as chat-tuned ones
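
A sketch of applying a chat template via transformers' apply_chat_template (the checkpoint is just an example of a chat-tuned model; each model family ships its own markup):

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
msgs = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "What is a context window?"},
]
prompt = tok.apply_chat_template(msgs, tokenize=False, add_generation_prompt=True)
print(prompt)    # the system/user/assistant markup this model was trained to expect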

- quantized: weights stored in lower precision (8-bit, 4-bit) for memory savings, at some quality loss
- quantization is a tradeoff: memory/speed vs quality
- 4-bit (NF4/GPTQ/AWQ) is the sweet spot for most consumer GPUs (huge memory win, minor quality drop for most tasks)
- math-heavy or finicky tasks degrade first (math, logic, code)
- quantization types: FP16 (full), INT8 (quantized), INT4/NF4 (more quantized), etc.
- some runtimes support quantized KV cache (8/4-bit), big savings for long contexts

- formats/runtimes:
- PyTorch + safetensors: flexible, standard, works on GPU/TPU/CPU
- GGUF (llama.cpp): CPU/GPU, portable, best for quant + edge devices
- ONNX, TensorRT-LLM, MLC: advanced options for special hardware

- avoid legacy .bin (pickle risk), use safetensors for safety

- everything is a tradeoff:
- smaller = fits anywhere, less power
- more context = more latency + VRAM burn
- quantization = faster/leaner, maybe less accurate
- local = full control/knobs, but more work

- final words:
- local LLMs = memory math + correct formatting
- fit weights and KV cache in memory
- use the right chat template and decoding strategy
- know your knobs: quantization, context, decoding, batch, hardware

- master these, and you can run (and reason about) almost any modern model locally
tweet
Offshore
Photo
Ahmad
RT @TheAhmadOsman: the Buy a GPU website & guide is launching this week

so, what should you expect? https://t.co/e36YLjAdoo
tweet