NecoKronos
So... who got the trade?
Once we filled the selling imbalances, it was lights out.
An instant 2% drop to start the week. Thanks for playing. 🤝
#BTC https://t.co/Olwk8sMFx1
The dream scenario for #BTC right now?
✅ Range VAH deviation confirmed.
✅ Selling stacked imbalance cleared.
If we get both, the trade sets itself up https://t.co/pOqLSlFcrG - NecoKronos
Illiquid
This is why you are early to Seikoh Giken.
We make 1.6T from 2026 onwards
We make 3.2T from 2028 onwards https://t.co/4mCVa2gc5T - Vikram Sekar
God of Prompt
RT @free_ai_guides: I stopped using ChatGPT for daily tasks.
Switched to an open-source agent that:
→ Remembers every conversation I've had
→ Knows my preferences without being told
→ Adds new skills from a community library
→ Runs 24/7 without me touching it
→ Costs almost nothing
It's called OpenClaw.
And it's what AI assistants should have been from the start.
I wrote the setup guide. Free.
Comment "OpenClaw" and I'll DM it to you
Michael Fritzell (Asian Century Stocks)
RT @douglaskimkorea: In the next several years, 6G networks are expected to be rolled out aggressively in Korea, and Samji Electronics could be one of the beneficiaries of this 6G network expansion. https://t.co/OkxBfIwOqg
DAIR.AI
On evaluating multi-step scientific tool use in LLM agents.
SciAgentGym provides an interactive environment with 1,780 specialized tools across 4 scientific disciplines.
The core finding: even advanced models like GPT-5 see success rates drop sharply from 60.6% to 30.9% as tasks require more interaction steps.
Multi-step execution remains a fundamental bottleneck.
To address this, the researchers developed SciForge, a data synthesis method that models tool interactions as dependency graphs. Their fine-tuned SciAgent-8B outperformed much larger competing models on scientific workflows.
Scientific automation requires reliable multi-step tool use. Targeted training on graph-structured trajectories is more effective than raw model scale for these tasks.
Paper: https://t.co/Z9u1zi5K0U
Learn to build effective AI agents in our academy: https://t.co/LRnpZN7L4c
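The "tool interactions as dependency graphs" idea above can be sketched in a few lines: represent each tool call as a node whose inputs depend on earlier tools' outputs, then emit a multi-step trajectory in a valid topological order. This is a minimal illustration of the general technique, not SciForge's actual implementation; the function name and graph encoding are assumptions.

```python
from collections import defaultdict, deque

def topo_trajectory(deps):
    """deps: tool -> list of prerequisite tools whose outputs it consumes.
    Returns one valid multi-step execution order, or None if the graph
    contains a cycle (i.e. no executable trajectory exists)."""
    tools = set(deps)
    for reqs in deps.values():
        tools.update(reqs)
    # how many prerequisites each tool still waits on
    indegree = {t: len(deps.get(t, [])) for t in tools}
    # reverse edges: which tools consume this tool's output
    consumers = defaultdict(list)
    for t, reqs in deps.items():
        for r in reqs:
            consumers[r].append(t)
    ready = deque(sorted(t for t in tools if indegree[t] == 0))
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for c in consumers[t]:
            indegree[c] -= 1
            if indegree[c] == 0:
                ready.append(c)
    return order if len(order) == len(tools) else None

# a toy 3-step scientific workflow: load -> clean -> fit
print(topo_trajectory({"fit": ["clean"], "clean": ["load"], "load": []}))
```

Trajectories synthesized this way respect tool-output dependencies by construction, which is presumably what makes them useful training data for multi-step execution.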
Clark Square Capital
RT @ClarkSquareCap: Would love a RT if you found this useful! Thank you!
Here is this week's special situations digest.
281 situations in activist campaigns, M&A/divestments, management changes, and other corporate events.
Make sure to check it out https://t.co/NZBTGqVC6P - Clark Square Capital
DAIR.AI
RT @omarsar0: Interesting new work on adaptive reasoning depth for LLM agents.
Not every agent step requires the same level of thinking. Some steps need strategic planning. Others are routine execution.
This research introduces CogRouter, a framework inspired by ACT-R cognitive theory that dynamically adjusts reasoning depth at each decision step across four hierarchical cognitive levels.
The guiding principle: the appropriate cognitive depth is the one that maximizes the confidence of the resulting action. Training combines supervised fine-tuning for stable cognitive patterns with policy optimization for step-level credit assignment.
A 7B parameter model achieved 82.3% success rate on agent benchmarks, outperforming GPT-4o while consuming 62% fewer tokens.
Why does it matter?
Adaptive reasoning is a more practical path to efficient agents than simply scaling model size. Think fast when you can, slow when you must.
Paper: https://t.co/kYLqeHaY8p
Learn to build effective AI agents in our academy: https://t.co/1e8RZKs4uX
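The "think fast when you can, slow when you must" loop can be sketched as a router that tries the cheapest cognitive level first and escalates only while the resulting action's confidence stays below a threshold. The level names, threshold, and `think` stub are illustrative assumptions, not CogRouter's API.

```python
# Four hierarchical cognitive levels, cheapest first (names are assumed,
# loosely echoing ACT-R-style layers; the paper's levels may differ).
LEVELS = ["reactive", "procedural", "deliberative", "strategic"]

def route_step(think, observation, threshold=0.8):
    """think(level, observation) -> (action, confidence in [0, 1]).
    Escalates through LEVELS and returns the first (level, action) whose
    confidence clears the threshold; falls back to the deepest level."""
    for level in LEVELS:
        action, confidence = think(level, observation)
        if confidence >= threshold:
            return level, action
    # no level was confident enough: keep the deepest level's answer
    return LEVELS[-1], action
```

Token savings come from most steps resolving at the shallow levels, with the expensive strategic reasoning reserved for the steps where shallow confidence is low.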