DAIR.AI
Everyone is building "data agents" but nobody agrees on what that means.
The term gets applied to everything from a simple SQL chatbot to a fully autonomous data scientist. This ambiguity makes it impossible for users and builders to reason about what a system can actually do.
But data agents face fundamentally different challenges than general-purpose LLM agents.
This new tutorial proposes the first hierarchical taxonomy of data agents, from Level 0 (no autonomy) to Level 5 (full autonomy), inspired by the SAE levels of driving automation that brought clarity to self-driving car capabilities.
The six levels define a clear progression.
* L0: humans do everything manually.
* L1: stateless assistants that suggest queries or generate code but don't execute anything.
* L2: agents that perceive and interact with environments, invoke tools, and execute within human-designed pipelines.
* L3: agents that autonomously orchestrate end-to-end data workflows under human supervision.
* L4: proactive agents that continuously monitor data ecosystems and discover issues without being asked.
* L5: fully autonomous generative data scientists that invent new solutions and paradigms.
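As a toy illustration (the class and function names here are mine, not the paper's), the ordered levels can be modeled as a comparable enum, which makes capability thresholds easy to express:

```python
from enum import IntEnum

class DataAgentLevel(IntEnum):
    """Autonomy levels for data agents (L0-L5), per the survey's taxonomy."""
    L0_MANUAL = 0        # humans do everything manually
    L1_ASSISTANT = 1     # suggests queries/code, never executes
    L2_EXECUTOR = 2      # invokes tools inside human-designed pipelines
    L3_ORCHESTRATOR = 3  # orchestrates end-to-end workflows under supervision
    L4_PROACTIVE = 4     # monitors continuously, discovers issues unprompted
    L5_GENERATIVE = 5    # invents new solutions and paradigms

def can_execute_tools(level: DataAgentLevel) -> bool:
    # Execution capability first appears at L2 in the taxonomy.
    return level >= DataAgentLevel.L2_EXECUTOR

print(can_execute_tools(DataAgentLevel.L1_ASSISTANT))    # False
print(can_execute_tools(DataAgentLevel.L3_ORCHESTRATOR)) # True
```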
What separates data agents from general LLM agents?
They operate on large-scale, heterogeneous, and noisy raw data rather than small curated inputs. They interact with specialized toolkits like SQL engines, visualization libraries, and database loaders. And critically, their errors cascade through downstream pipelines rather than being confined to a single response.
The survey maps over 80 existing systems across these levels and the full data lifecycle: management, preparation, and analysis.
Most production systems today cluster at L1 and L2. A handful of research prototypes exhibit partial L3 capabilities through LLM-based orchestrators, predefined operators, and workflow optimization.
According to the authors, no system has achieved L4 or L5.
The key bottlenecks preventing advancement to higher levels:
* limited pipeline orchestration beyond predefined operators
* inadequate causal and meta-reasoning to prevent cascading errors
* difficulty adapting to dynamic environments with changing data and workloads
* heavy reliance on human-crafted guardrails for alignment
Paper: https://t.co/M3sn1XAcwo
Learn to build effective AI agents in our academy: https://t.co/LRnpZN7L4c
Fiscal.ai
Land and Expand.
55% of Datadog's customers now use 4 or more products.
$DDOG https://t.co/aENOHe6sa3
Illiquid
FSC has approved IBKR access to Korean stocks so Fintwit can mop up the Korea Discount. https://t.co/pm5u4n119d
Hidden Value Gems
That’s quite a statement 😉
From ‘Our Dollar, Your Problem’ by Kenneth Rogoff https://t.co/FuNClEZIGv
Benjamin Hernandez😎
📉 Deep Value Recovery: $JZXN
Recommendation: $JZXN near $2.18.
Even after a 63% rally, $JZXN remains fundamentally undervalued relative to its $1B token acquisition plans.
One-line why: This is a technical "mean reversion" play to the 200-day EMA near $1.65. https://t.co/J3Mm5EADUe
Brady Long
Our QA team wrote 47 test cases yesterday. None of us can code...
Been using @testmuai's KaneAI for 2 weeks and it's actually wild how this works.
You literally just describe the test in plain English: "user logs in, adds 3 items to cart, applies promo code, checks out"
It converts that into executable code.
Selenium, Playwright, Cypress (whatever framework you use).
The part that saved us 6+ hours this week was auto-healing.
UI changes that normally break 20+ tests? It fixes them automatically based on original intent.
Also handles TOTP codes natively which is weirdly huge if you've ever dealt with auth in automation.
Not saying it replaces our test strategy.
But writing/maintaining tests went from "only senior QA can do this" to "anyone on the team can contribute"
7-day trial to play around with it: https://t.co/z3MiIxhqiS
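The TOTP handling mentioned above is just RFC 6238 under the hood. As a minimal stdlib sketch (this is not KaneAI's implementation, just an illustration of what computing those codes involves):

```python
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, then dynamic truncation."""
    if for_time is None:
        for_time = int(time.time())
    counter = for_time // step  # number of 30-second steps since the epoch
    mac = hmac.new(secret, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F  # low nibble of last byte picks the 4-byte window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector (SHA-1 mode, 8 digits, T = 59):
print(totp(b"12345678901234567890", for_time=59, digits=8))  # 94287082
```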
App Economy Insights
$SPOT Spotify Q4 FY25:
• MAU +11% to 751M (6M beat).
• Premium Subs +10% to 290M (1M beat).
• Revenue +7% Y/Y to €4.5B (€10M beat).
• Operating margin 15% (+4pp Y/Y).
Q1 FY26 Guidance:
• MAU +12% Y/Y to 759M (7M beat).
• Premium Subs +9% Y/Y to 293M (in line). https://t.co/op5r8LbZqW
Clark Square Capital
RT @ClarkSquareCap: Idea thread time!
What's your best idea right now? (Any style, any market cap, any geography).
Be sure to add why you like it + valuation.
I will compile the responses and share.
Appreciate a RT for visibility! 🙏
The Few Bets That Matter
$ALAB is my second-largest position & reports earnings tonight.
I’m structurally bullish for two main reasons:
🔹The CapEx cycle is not over - confirmed by $GOOG $AMZN & other hyperscalers.
🔹Compute optimization is priority #1.
With energy & space being the limiting factors and real-world constraints making both hard to expand quickly, the only short-term path forward for companies overwhelmed by compute demand is optimization.
That means more compute per unit of energy & space. And that requires more efficient hardware.
Maximizing this metric is priority #1. That runs through $ALAB & a few others.
Looking for confirmation of this thesis tonight.
The only tech stocks I still own are $ALAB and $BABA, and I expect both to outperform.
$ALAB generates real cash today.
As long as CapEx keeps rising - and $AMZN just confirmed it isn’t slowing, after Meta and Google did the same - semis will keep printing.
Unlike hyperscalers, semis are positive FCF. They don’t depend on future ROI; they monetize real spending happening now.
With hyperscalers guiding CapEx ~50% above expectations, that cash flows directly to semis.
The market wants free cash flow.
They have it.
$BABA is the only hyperscaler holding up while markets fall. Likely because:
1. Ownership is largely non-US → US tech pressure doesn’t apply
2. China has a different economic and financial regime and situation
3. Valuation is incomparable to US hyperscalers, with cash-generating assets outside tech
Both names sit in very specific situations. As the market gets picky, performance will come from singular assets, not sectors.
This isn’t the end of AI.
It’s the start of the stock-picking era.