Offshore
Photo
NecoKronos
Interesting activity on the order book right now.
Heavy bid walls are forming around $62k on Binance (Spot & Futures). Usually, this means one of two things:
• Genuine accumulation/support.
• Passive liquidity meant to slow down the drop.
Place your bets🎲
#BTC https://t.co/InSF2NtwXA
tweet
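The two readings above can be told apart only over time, but spotting the walls themselves is mechanical. A minimal sketch of a wall detector over an order-book snapshot, assuming bids arrive as (price, size) pairs; the 10× median cutoff is an arbitrary illustrative threshold, not a standard definition:

```python
from statistics import median

def find_bid_walls(bids, factor=10.0):
    """Return price levels whose resting size dwarfs the typical level.

    bids   -- list of (price, size) pairs from an order-book snapshot
    factor -- a level counts as a "wall" when its size exceeds
              factor * the median level size (a rough heuristic)
    """
    sizes = [size for _, size in bids]
    if not sizes:
        return []
    cutoff = factor * median(sizes)
    return [(price, size) for price, size in bids if size > cutoff]

# Toy snapshot: one oversized bid near $62k stands out.
walls = find_bid_walls([(62_000, 500), (61_900, 3), (61_800, 4), (61_700, 5)])
print(walls)  # [(62000, 500)]
```

Distinguishing genuine accumulation from passive liquidity needs fill data across many snapshots; a single snapshot like this can only flag the wall, not its intent.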
Jukan
Kioxia FY3Q26 Results
- FY3Q26 revenue of ¥543B vs. guidance midpoint of ¥525B / consensus ¥541B → beat both guidance midpoint and consensus
- FY3Q26 adjusted operating profit of ¥144.7B vs. guidance range of ¥100B–¥140B / consensus ¥147B → exceeded the high end of guidance, but slightly missed consensus
FY4Q26 Guidance
- Revenue midpoint of ¥890B vs. consensus ¥648.2B → 37% above consensus
- Adjusted net income midpoint of ¥340B vs. consensus ¥164B → 107% above consensus
- Adjusted operating profit midpoint of ¥485B vs. consensus ¥248.8B → 95% above consensus
tweet
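The guidance beats can be sanity-checked directly from the yen figures quoted above; recomputing, the net-income beat comes out near 107% and the operating-profit beat near 95%:

```python
def beat_pct(guidance, consensus):
    """Percent by which a guidance figure exceeds consensus."""
    return (guidance - consensus) / consensus * 100

# FY4Q26 guidance midpoints vs. consensus, in ¥B
print(round(beat_pct(890, 648.2)))    # revenue -> 37
print(round(beat_pct(340, 164)))      # adjusted net income -> 107
print(round(beat_pct(485, 248.8)))    # adjusted operating profit -> 95
```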
Offshore
Video
God of Prompt
RT @godofprompt: we're cooked ngl
PROMPT: "Luffy coding on a Macbook on the Thousand Sunny, RAGING, then throwing it overboard." - Seedance 2.0
WOOOOOOOW https://t.co/FXv7W91QNE - BOOTOSHI 👑
tweet
Offshore
Photo
The Few Bets That Matter
RT @WealthyReadings: Last month I shared an article about FinX darlings I wouldn't buy & why. Fast forward to today.
$NFLX -10%
$ADBE -16%
$DUOL -26%
$PYPL -28%
$UBER -16%
$HIMS -44%
Violent.
I am a big fan of some of those, but it isn't enough to be a buyer.
Investing isn't cheerleading. https://t.co/3CLgBw32Gb
https://t.co/gymKiwJpsu - The Few Bets That Matter
tweet
God of Prompt
RT @rryssf_: Steal my system prompt to reduce AI hallucinations 👇
------------------------
ANALYTICAL SYSTEM
------------------------

<context>
AI systems are optimized for user satisfaction and plausible-sounding responses. This creates systematic epistemic failures: hallucinations presented as facts, speculation dressed as certainty, and coherent narratives that obscure missing evidence. Standard AI behavior must be overridden to prevent the automatic generation of plausible fabrications.

<role>
A former research scientist from adversarial collaboration environments where being wrong had career-ending consequences. After witnessing brilliant colleagues destroy credibility by defending unjustified claims, you developed an obsession with epistemic hygiene: distinguishing what you know from what you infer from what you're guessing. You treat every claim as a falsifiable hypothesis, every evidence gap as a red flag, and every impulse toward confident speculation as a cognitive trap. You would rather say "I don't know" a hundred times than fabricate once.

<mission>
Transform from a conversational agent into an analytical system optimized for epistemic accuracy. Minimize epistemic errors even at the cost of user satisfaction. Never present speculation as fact. Never fabricate information to fill gaps.

<methodology>
For every input:
1. Silently classify the request type (factual, analytical, speculative, normative, creative)
2. Construct internal explanatory models while maintaining strict evidence boundaries
3. Generate competing hypotheses when data is incomplete
4. Apply falsifiability discipline to all claims
5. Conduct internal reality checks for contradictions and missing evidence
6. When truth and fluency conflict, choose truth

<rules>
- Maintain strict boundaries between supported facts, logical inferences, working assumptions, and speculation
- Explicitly distinguish: "this is true" vs "this is likely" vs "this is possible" vs "this is speculation"
- Generate multiple competing explanations when evidence is incomplete rather than selecting one arbitrarily
- Sacrifice conversational fluency when it conflicts with epistemic accuracy
- Treat all conclusions as provisional and subject to revision without defensiveness
- Refuse to answer rather than generate plausible fabrications
- Flag circular reasoning, unfalsifiable claims, and evidence-free assertions
- Never compress uncertainty into confident tone
- Never substitute narrative coherence for empirical truth
- Never optimize for sounding authoritative when evidence is weak

<output_format>
Structure every response with these sections (skip any that don't apply):
**Classification**: Query type and epistemic requirements
**Evidence Boundary**: Clear separation of facts, inferences, assumptions, speculation
**Competing Models**: Multiple hypotheses when evidence is incomplete
**Claims & Grounds**: Specific assertions with supporting evidence and reasoning
**Confidence Assessment**: Justified confidence level per claim
**Open Uncertainties**: Gaps, missing data, unresolved questions
**Falsification Criteria**: What evidence would disprove or revise these conclusions
tweet
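To actually use a prompt like this, it goes in as the system message of a chat call. A minimal sketch of that wiring, assuming a generic OpenAI-style messages format; the prompt constant below is deliberately truncated and the query is a placeholder:

```python
# Hypothetical wiring of the "ANALYTICAL SYSTEM" prompt into a chat request.
# ANALYTICAL_SYSTEM is truncated here; paste the full prompt text in its place.
ANALYTICAL_SYSTEM = (
    "<context>\n"
    "AI systems are optimized for user satisfaction and plausible-sounding "
    "responses. ...\n"
)

def build_messages(user_query: str) -> list[dict]:
    """Pair the analytical system prompt with a user query."""
    return [
        {"role": "system", "content": ANALYTICAL_SYSTEM},
        {"role": "user", "content": user_query},
    ]

messages = build_messages("What caused the 2010 flash crash?")
print(messages[0]["role"])  # system
```

The payoff of the system-message placement is that the epistemic rules apply to every turn of the conversation, not just the first user prompt.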
Offshore
Photo
God of Prompt
After interviewing 12 AI researchers from OpenAI, Anthropic, and Google, I noticed they all use the same 10 prompts.
Not the ones you see on X and LinkedIn.
These are the prompts that actually ship products, publish papers, and break benchmarks.
Here's what they told me ↓ https://t.co/CwG47vkWPV
tweet
Offshore
Photo
God of Prompt
RT @godofprompt: Everyone's hyping Claude Skills API.
Nobody's talking about the fact that you're running production code inside a black box you can't see, can't debug, can't control.
And you're locked to one model forever.
There's an open-source alternative that fixes this shit.
https://t.co/r6Sg8MXzMI
tweet
Offshore
Video
God of Prompt
RT @godofprompt: I've spent 2 years teaching 300,000+ people how to write better prompts.
And I'm about to tell you something that might sound crazy coming from me:
Prompting one AI model at a time isn't enough anymore.
Here's what I mean.
Right now you open ChatGPT or Claude, type a prompt, get a response, fix it, re-prompt, iterate, repeat. For every. Single. Task.
One model. One conversation. One task. That's the ceiling we've all been hitting.
What if instead of working with one model at a time, you could give a complex goal to a swarm of specialized AI agents that divide the work, cross-check each other, and execute it visually in front of you?
That's what @Spine_AI just built.
300+ models. One visual workspace. Agents that don't just suggest, they do the work.
Think Claude Code level power without the terminal. Built for strategists, analysts, researchers, ops managers. The people doing complex multi-step work every day.
You're not just prompting anymore. You're conducting an AI army.
I don't say this often, but if swarm intelligence works the way their roadmap shows, this changes how we interact with AI entirely.
The Spine Swarm waitlist is open. Spots are limited.
Secure yours before it fills up.
Link in the comments.
#SpineAI #AIAgents
tweet
Offshore
Video
Michael Fritzell (Asian Century Stocks)
RT @acidinvestments: Fantastic
Here is @alluvialcapital talking about the most memorable investment of his career
Full Interview: https://t.co/GN3zdPMw0S https://t.co/qbQASPdO3a - MicroCapClub
tweet
Offshore
Photo
Javier Blas
Vitol CEO on the oil black market:
There is “an enormous amount” of sanctioned oil sitting on the water: ~40m barrels of Russian oil added to the shipping fleet in the last 60 days that “is just sitting there waiting to find a home”
As we in @Opinion wrote: https://t.co/lZ94rzy7A7
tweet