RT @InvestiAnalyst: Still debating agent vs. agentless? You’re asking the wrong question.
One of the most consistent debates in cloud security over the past five years has been around deployment models: agent vs agentless. It’s easy to treat this as a binary discussion, but the reality is far more nuanced and shaped heavily by evolving infrastructure and market dynamics.
The pivotal shift came when companies like Orca and Wiz introduced agentless scanning. By leveraging cloud APIs to snapshot disk volumes and extract workload data without installation, these solutions offered a dramatically simplified deployment path. It enabled faster time to value and minimized disruption to developer workflows, two major pain points for security teams.
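To make that snapshot flow concrete, here is a minimal sketch of what the control flow can look like against the AWS EC2 API via boto3. The scanner account ID, the function name, and the idea of sharing the snapshot into a separate analysis account are illustrative assumptions, not any particular vendor's implementation.

```python
# Hedged sketch of agentless scanning: snapshot a volume through the cloud
# API, then hand the copy to an out-of-band analysis account. Nothing is
# installed on the workload itself.
import boto3

SCANNER_ACCOUNT_ID = "111111111111"  # hypothetical account that performs the scan

ec2 = boto3.client("ec2")

def snapshot_volume_for_scan(volume_id: str) -> str:
    """Create a point-in-time copy of a volume and share it with the scanner."""
    snap = ec2.create_snapshot(VolumeId=volume_id, Description="agentless-scan")
    snapshot_id = snap["SnapshotId"]

    # Wait until the snapshot is complete before handing it off.
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot_id])

    # Share the snapshot with the scanner's account; it gets mounted read-only
    # there and inspected for packages, configs, and exposed secrets offline.
    ec2.modify_snapshot_attribute(
        SnapshotId=snapshot_id,
        Attribute="createVolumePermission",
        OperationType="add",
        UserIds=[SCANNER_ACCOUNT_ID],
    )
    return snapshot_id
```

Nothing runs on the workload itself, which is why deployment is nearly frictionless; it is also exactly why runtime activity stays invisible to this approach.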
The market responded. Agentless scanning gained traction not because it was inherently superior, but because it addressed real operational constraints. Visibility without friction.
However, this came with tradeoffs.
Agentless tools often struggle with ephemeral workloads like containers, which may only exist for seconds. They rely on static data and lack access to runtime behavior: things like process execution, file access, and in-memory activity. As cloud environments grow more dynamic and containerized, this limitation becomes more apparent.
By contrast, agent-based solutions offer deeper visibility. They allow real-time detection and response, more precise telemetry, and granular insight into application-layer behavior. While historically harder to deploy, recent improvements, like Helm charts and ArgoCD-based automation, have lowered the operational burden significantly.
In short:
Agentless = low effort, faster deployment, broader initial visibility.
Agents = higher-fidelity data, runtime insights, better suited to containers.
This isn’t about which is better universally; it’s about what fits your organization’s maturity, architecture, and goals. Many security teams begin with agentless tools to quickly gain context, then incorporate agents as they move toward runtime protection and deeper incident response capabilities.
Choosing the right deployment model isn’t just a technical decision; it’s a strategic one.
Ahmad
the madlad bought another GPU and NVLink
that setup is gonna be a treat. Buy a GPU keeps on winning
@TheAhmadOsman What have I done.. https://t.co/TBeFllcRhr - Joe Petrakovich
RT @profplum99: The ~25 year return from the S&P500 from Sep 2000 is basically EPS growth + dividend yield. What bubble? https://t.co/rxidgtXrof
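For context on the arithmetic behind that claim (my framing, not the original tweet's), an index's total return roughly factors into earnings growth, the change in the earnings multiple, and reinvested dividends; the implied point is that the multiple-change term over that ~25-year window nets out to roughly zero.

```latex
% Approximate decomposition of total return over T years:
\underbrace{\frac{P_T}{P_0}\prod_{t=1}^{T}(1+y_t)}_{\text{total return factor}}
  \;=\;
\underbrace{\frac{E_T}{E_0}}_{\text{EPS growth}}
  \times
\underbrace{\frac{(P/E)_T}{(P/E)_0}}_{\text{multiple change}}
  \times
\underbrace{\prod_{t=1}^{T}(1+y_t)}_{\text{reinvested dividends}}
% Annualized: r \approx g_{EPS} + \Delta_{P/E} + y_{div};
% "what bubble?" = the claim that \Delta_{P/E} \approx 0 over this window.
```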
RT @ClarkSquareCap: Idea thread #2!
What's your favorite special situation? (Spin/m&a/etc any market cap).
Add a sentence explaining why you like it + valuation.
I will compile the responses and share them with everyone.
Please retweet for visibility. Thx in advance! 🙏
RT @ClarkSquareCap: Idea thread #1!
What's your favorite Japanese stock? (Any market cap/style).
Add a sentence explaining why you like it + valuation.
As usual, I will compile the responses and share them with everyone.
Please retweet for visibility. Thx in advance! 🙏
RT @TheAhmadOsman: My house has 33 GPUs.
> 21x RTX 3090s
> 4x RTX 4090s
> 4x RTX 5090s
> 4x Tenstorrent Blackhole p150a
Before AGI arrives:
Acquire GPUs.
Go into debt if you must.
But whatever you do, secure the GPUs. https://t.co/8U89OStknt
RT @TheAhmadOsman: > today this guy axes FAIR at Meta
> so this is a quick recap of his origin story
> and why he should not be the one
> making that decision
> Alexandr Wang, born January 1997
> age 19, drop out of MIT
> co-found Scale AI
> "what if we label data, but mid?"
> convince every LLM company that this is fine
> 2016–2023
> flood the market with barely-labeled goat photos and out-of-context Reddit takes
> call it “foundational data”
> raise billions
> valuation hits $7.3B
> everyone claps
> 2025
> sell Scale AI to Meta for $14B
> not a typo.
> fourteen. billion. dollars.
> join Meta as Chief AI Officer
> rename division to Meta Superintelligence Labs
> start saying things like “AGI by 2027” in interviews
> meanwhile, researchers:
> "the data from Scale is trash"
> models hallucinate goat facts and mislabel wheelchairs as motorcycles
> AI alignment folks are malding
> i am Alexandr. unbothered. moisturized. thriving.
> ranked #1 in Times Top Grifters of All Time
> beat out SBF, Elizabeth Holmes, and your favorite VC
> literally built an empire out of copy-pasted Amazon Mechanical Turk tasks
> mfw I labeled 4chan posts for pennies and turned it into a 14B exit
> mfw I am now leading Meta's quest for godlike AI
> mfw data quality was never part of the business model
> never bet against the grind
RT @TheAhmadOsman: pro tip:
tell codex-cli or claude code to
generate relevant pre-commit hooks for your project
RT @TheAhmadOsman: > be you
> want to actually learn how LLMs work
> sick of “just start with linear algebra and come back in 5 years”
> decide to build my own roadmap
> no fluff. no detours. no 200-hour generic ML playlists
> just the stuff that actually gets you from “what’s a token?” to “I trained a mini-GPT with LoRA adapters and FlashAttention”
> goal: build, fine-tune, and ship LLMs
> not vibe with them. not "learn the theory" forever
> build them
> you will:
> > build an autograd engine from scratch
> > write a mini-GPT from scratch
> > implement LoRA and fine-tune a model on real data
> > hate CUDA at least once
> > cry
> > keep going
> 5 phases
> if you already know something? skip
> if you're lost? rewatch
> if you’re stuck? use DeepResearch
> this is a roadmap, not a leash
> by the end: you either built the thing or you didn’t
> phase 0: foundations
> > if matrix multiplication is scary, you’re not ready yet
> > watch 3Blue1Brown’s linear algebra series
> > MIT 18.06 with Strang, yes, he’s still the GOAT
> > code Micrograd from scratch (Karpathy)
> > train a mini-MLP on MNIST
> > no frameworks, no shortcuts, no mercy
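The phase-0 Micrograd item is the crux of this phase, so here is the shape of the thing you end up building: a compressed scalar autograd sketch supporting only + and * (my own minimal version, not Karpathy's full implementation).

```python
# Minimal scalar autograd in the spirit of Micrograd: every Value remembers
# its inputs and how to push gradients back to them.
class Value:
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Topologically order the graph, then apply the chain rule in reverse.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

# z = x*y + y  ->  dz/dx = y = 3, dz/dy = x + 1 = 3
x, y = Value(2.0), Value(3.0)
z = x * y + y
z.backward()
print(x.grad, y.grad)  # 3.0 3.0
```

PyTorch's autograd is this same idea applied to tensors instead of scalars.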
> phase 1: transformers
> > the name is scary
> > it’s just stacked matrix multiplies and attention blocks
> > Jay Alammar + 3Blue1Brown for the “aha”
> > Stanford CS224N for the theory
> > read "Attention Is All You Need" only AFTER building mental models
> > Karpathy's "Let's Build GPT" will break your brain in a good way
> > project: build a decoder-only GPT from scratch
> > bonus: swap tokenizers, try BPE/SentencePiece
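For phase 1, the core block a decoder-only GPT stacks repeatedly is causal self-attention; below is a single-head sketch in PyTorch with illustrative dimensions (a real mini-GPT adds multiple heads, an MLP, residual connections, and layer norm).

```python
# Single-head causal self-attention: each token attends only to earlier tokens.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalSelfAttention(nn.Module):
    def __init__(self, d_model: int, block_size: int):
        super().__init__()
        self.query = nn.Linear(d_model, d_model, bias=False)
        self.key = nn.Linear(d_model, d_model, bias=False)
        self.value = nn.Linear(d_model, d_model, bias=False)
        # Lower-triangular mask: token t may only attend to tokens <= t.
        self.register_buffer("mask", torch.tril(torch.ones(block_size, block_size)))

    def forward(self, x):                                  # x: (batch, time, d_model)
        B, T, C = x.shape
        q, k, v = self.query(x), self.key(x), self.value(x)
        att = (q @ k.transpose(-2, -1)) / (C ** 0.5)       # (B, T, T) scores
        att = att.masked_fill(self.mask[:T, :T] == 0, float("-inf"))
        att = F.softmax(att, dim=-1)
        return att @ v                                     # weighted sum of values

x = torch.randn(2, 8, 32)                                  # toy batch
out = CausalSelfAttention(d_model=32, block_size=16)(x)
print(out.shape)                                           # torch.Size([2, 8, 32])
```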
> phase 2: scaling
> > LLMs got good by scaling, not magic
> > Kaplan paper -> Chinchilla paper
> > learn Data, Tensor, Pipeline parallelism
> > spin up multi-GPU jobs using HuggingFace Accelerate
> > run into VRAM issues
> > fix them
> > welcome to real training hell
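For the phase-2 Accelerate item, the minimal training-loop pattern looks like the sketch below; the model, optimizer, and dataset are toy placeholders, and the same script scales from one GPU to many when started with accelerate launch.

```python
# Minimal HuggingFace Accelerate training step: prepare() handles device
# placement and distributed wrapping, accelerator.backward() replaces
# loss.backward().
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()
model = nn.Linear(128, 2)                                   # toy model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
data = DataLoader(
    TensorDataset(torch.randn(256, 128), torch.randint(0, 2, (256,))),
    batch_size=32,
)

model, optimizer, data = accelerator.prepare(model, optimizer, data)

for x, y in data:
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    accelerator.backward(loss)
    optimizer.step()
```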
> phase 3: alignment & fine-tuning
> > RLHF: OpenAI blog -> Ouyang paper
> > SFT -> reward model -> PPO (don’t get lost here)
> > Anthropic's Constitutional AI = smart constraints
> > LoRA/QLoRA: read, implement, inject into HuggingFace models
> > fine-tune on real data
> > project: fine-tune gpt2 or distilbert with your own adapters
> > not toy examples. real use cases or bust
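For the phase-3 LoRA items, the whole trick is a trainable low-rank delta added to a frozen pretrained weight; here is a from-scratch sketch (the wrapper class and its init choices are my own illustration, not the peft library's API).

```python
# LoRA in one module: freeze the pretrained Linear and learn a low-rank
# update B @ A scaled by alpha / r. Only A and B receive gradients.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # frozen pretrained weight
            p.requires_grad = False
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # Base path plus the low-rank adapter path.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(nn.Linear(768, 768))
y = layer(torch.randn(4, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(y.shape, trainable)  # torch.Size([4, 768]), ~12k trainable vs ~590k frozen
```

Only lora_A and lora_B train, which is why adapter fine-tuning is so much cheaper than full fine-tuning.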
> phase 4: production
> > this is the part people skip to, but you earned it
> > inference optimization: FlashAttention, quantization, sub-second latency
> > read the paper, test with quantized models
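For phase 4, post-training dynamic quantization in stock PyTorch is the gentlest entry point to inference optimization; the toy model below is illustrative, and FlashAttention itself is a separate kernel-level change you pick up from its paper and implementations.

```python
# Post-training dynamic quantization: Linear weights stored as int8,
# same forward interface, faster CPU matmuls and a smaller footprint.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 768)).eval()

quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 768)
print(quantized(x).shape)  # torch.Size([1, 768])
```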
> resources:
> math/coding:
> > 3Blue1Brown, MIT 18.06, Goodfellow’s book
> PyTorch:
> > Karpathy, Zero to Mastery
> transformers:
> > Alammar, Karpathy, CS224N, Vaswani et al
> scaling:
> > Kaplan, Chinchilla, HuggingFace Accelerate
> alignment:
> > OpenAI, Anthropic, LoRA, QLoRA
> inference:
> > FlashAttention
> the endgame:
> > understand how these models actually work
> > see through hype
> > ignore LinkedIn noise
> > build tooling
> > train real stuff
> > ship your own stack
> > look at a paper and think “yeah I get it”
> > build your own AI assistant, infra, whatever
> make it all the way through?
> ship something real?
> DM me.
> I wanna see what you built.
> happy hacking.