Offshore
Photo
Javier Blas
RT @JavierBlas: COLUMN: British oil giant BP should suspend its $750 million quarterly share buybacks to give incoming CEO Meg O'Neill (who arrives in April) extra financial breathing room.
@Opinion $BP
https://t.co/23vNRD4Nsw
tweet
Offshore
Video
Moon Dev
The Code Equalizer: Build a Professional Market Maker Bot and Stop Trading Manually
imagine a world where the market crashing actually puts more money in your pocket than a moonshot ever could. this isn't some pipe dream but the reality of a market maker that trades algorithmically every single second of the year. if you've ever felt the sting of a liquidation then you know exactly why manual trading is a rigged game that we are finally going to beat
the secret to directionless profit isn't about guessing the next candle but about becoming the liquidity that everyone else is desperate to buy. within the next few minutes you'll see why most bots fail and how a simple kill switch is the only thing standing between you and a total account wipeout. it's time to stop gambling and start building systems that don't care about the news or the hype
i spent hundreds of thousands on developers for different apps in the past because i was convinced i could never code myself. i believed that programming was for geniuses and that i was destined to lose money through overtrading and painful liquidations. then i realized that code is the great equalizer because it removes the emotion that usually destroys a trader's bankroll
now we are here with fully automated systems trading for me while i live my life. i decided to learn to code live on youtube to show that anyone can iterate their way to success if they just stay consistent. the path from getting liquidated to building a market maker is paved with many small failures and constant adjustments
market making is the adult version of a video game where the goal is to win regardless of which way the player moves. if the price goes up we win and if the price goes down we win and if the price goes sideways we also win. we are looking for steady times to buy and sell because that is where the real money is hidden from the average retail trader
to get started we need to connect to an exchange like phemex using the ccxt library in python. this connection is the bridge between our logic and the actual capital in the market. we set our symbol to ubtc and establish our initial inputs like position size and sleep timers to keep things moving smoothly
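a minimal sketch of that connection setup might look like the following. the api key, secret, and exact symbol string are placeholders, not real values, and the numbers are just example inputs:

```python
# sketch of connecting to phemex through the ccxt library.
# api key, secret, and the symbol string are placeholders here.
import ccxt

phemex = ccxt.phemex({
    'apiKey': 'YOUR_API_KEY',   # placeholder, never commit real keys
    'secret': 'YOUR_SECRET',    # placeholder
    'enableRateLimit': True,    # let ccxt throttle requests for us
})

symbol = 'uBTCUSD'   # assumed phemex symbol name; verify with phemex.load_markets()
pos_size = 1         # contracts per order (example input)
sleep_timer = 20     # seconds between loop iterations (example input)
```

keeping credentials in environment variables instead of the script is the usual move once this goes live.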
most people get excited and jump straight into the trading logic but that is how you lose everything. before we ever place a trade we have to implement a kill switch that monitors our risk levels in real time. if our position size ever exceeds a certain limit like one thousand dollars the bot will immediately close everything out
this safety feature is what i call a size kill and it protects you from those rare moments when the bot goes crazy or the market moves too fast. i would much rather have ten thousand small trades at five hundred dollars than one massive trade that ruins the account. risk management is the boring part that actually makes you wealthy over the long term
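a minimal sketch of that size kill, assuming we already know our current position value in dollars (the function name and limit here are illustrative):

```python
# sketch of a "size kill" check: if the open position ever grows past
# a hard dollar limit, signal that everything should be closed out.
# the function name is made up for illustration.

MAX_POSITION_USD = 1000  # hard ceiling on position size in dollars

def size_kill(position_usd: float, max_usd: float = MAX_POSITION_USD) -> bool:
    """return True when the position has exceeded the limit and must be closed."""
    return abs(position_usd) > max_usd
```

a five hundred dollar position passes the check, while a fifteen hundred dollar position (long or short) trips the kill and the bot flattens everything.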
once the safety nets are in place we dive into the bid and ask data to see where the market is actually priced. we pull the highest bid and the lowest ask to determine where our orders should sit on the books. our bot is designed to be patient and wait for the market to come to us instead of chasing price like a desperate amateur
but how do we know if the market is too volatile to trade in the first place. this is where we bring in indicators like the average true range to measure the volatility of the assets we are watching. if the volatility is too high then our bot will simply sit on the sidelines and wait for things to calm down
we set up a no trade rule where if the current atr is higher than our predefined threshold we won't enter the market. this keeps us out of dangerous flash crashes where the spread becomes unpredictable and the risk of loss increases. we are looking for a steady environment where we can safely provide liquidity and collect our profits
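a minimal sketch of that atr no-trade rule over raw ohlc candles (the threshold and function names are illustrative, the true-range formula is the standard one):

```python
# sketch of an average true range (atr) volatility filter.
# true range = max(high - low, |high - prev close|, |low - prev close|);
# atr is the simple average of the last n true ranges.

def atr(candles, n=14):
    """candles: list of (open, high, low, close) tuples, oldest first."""
    trs = []
    for prev, cur in zip(candles, candles[1:]):
        _, high, low, _ = cur
        prev_close = prev[3]
        tr = max(high - low, abs(high - prev_close), abs(low - prev_close))
        trs.append(tr)
    recent = trs[-n:]
    return sum(recent) / len(recent)

def too_volatile(candles, threshold, n=14):
    """no-trade rule: stand aside when the current atr is above the threshold."""
    return atr(candles, n) > threshold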
every twenty seconds our bot loops through the logic to check our open positions and active orders. it pulls the latest candlestick data including the timestamp and open and high and low and close values. we store all of this in a pandas dataframe so we can perform complex calculations on the fly
the heart of our market maker logic relies on calculating the high and the low of our recent price data. we find the maximum high and the minimum low to establish a range for our trades. we also calculate an average price because that helps us determine where the center of the market is at any given moment
if the distance between the high and the low is too large then we trigger our no trading flag. this is a secondary safety measure that works alongside our atr logic to ensure we are only active in a specific range. if the range exceeds eight hundred units then we call the kill switch and exit the market gracefully
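a minimal sketch of that range check, using the eight hundred figure from the description (the function name is made up for illustration):

```python
# sketch of the secondary range check: find the recent max high, min low,
# and midpoint, then flag no-trading when the range is wider than a limit.

MAX_RANGE = 800  # price units; beyond this width we stand down

def range_check(candles, max_range=MAX_RANGE):
    """candles: list of (open, high, low, close). returns (mid, no_trading)."""
    highest = max(h for _, h, _, _ in candles)
    lowest = min(l for _, _, l, _ in candles)
    mid = (highest + lowest) / 2              # rough center of the market
    no_trading = (highest - lowest) > max_range
    return mid, no_trading
```

the midpoint is what the quoting logic would center its bids and asks around when the no-trading flag stays off.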
now we get to the loopity loop part of the strategy which is where we check the last seventeen bars of data. we are looking to see if we are making higher highs or lower lows which would indicate a strong trend. market makers generally prefer sideways movement so a strong trend is a signal for us to pause our activity
we convert our price data into a list and iterate through it to see if the current low is bigger than any of the past seventeen lows. if it is then we know we are not in a steady state and we set no trading to true. this level of depth is what separates a profitable algorithm from a simple script that just buys and sells blindly
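one reasonable reading of that loop, sketched as a trend filter over the last seventeen bars (the lookback matches the description, the helper name and exact comparison are my interpretation):

```python
# sketch of the trend filter: compare the newest bar against the prior 17 bars.
# a new higher high or lower low suggests a trend, so the market maker pauses.

LOOKBACK = 17

def trending(highs, lows, lookback=LOOKBACK):
    """highs/lows: lists of floats, oldest first. True means stand aside."""
    past_highs = highs[-(lookback + 1):-1]   # the 17 bars before the current one
    past_lows = lows[-(lookback + 1):-1]
    higher_high = highs[-1] > max(past_highs)  # up-trend signal
    lower_low = lows[-1] < min(past_lows)      # down-trend signal
    return higher_high or lower_low
```

in a flat tape neither comparison fires and the bot keeps quoting; a single breakout bar in either direction sets the no-trading flag.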
the bot is constantly giving us feedback in the terminal so we can see exactly why it is or isn't trading. it might tell us that no trading was triggered by the high or the low or that we hit our max profit and loss for the day. seeing these messages in real time is like watching a video game play itself to perfection
i iterate on these versions every single day because i want my systems to do better and be more efficient. treat your trading like an adult game where every day you log on you try to find a better wall to hide behind. as long as you manage your risk and don't use too much money at once you can scale your success over time
many people won't share this level of code or logic on the internet because they want to keep the secrets to themselves. i believe in transparency because the more people who learn to code the more we can all escape the trap of manual trading. success in this game is about constant iteration and never being afraid to show your losing trades
if you appreciate this deep dive into market making then you should know that part two is where things get even more interesting. we are going to finish the bot and show exactly how it handles the order placement and the exit strategy. stay focused and keep building because your fully automated future is closer than you think
tweet
Offshore
Photo
God of Prompt
RT @godofprompt: 🚨 Holy shit… Stanford just published the most uncomfortable paper on LLM reasoning I’ve read in a long time.
This isn’t a flashy new model or a leaderboard win. It’s a systematic teardown of how and why large language models keep failing at reasoning even when benchmarks say they’re doing great.
The paper does one very smart thing upfront: it introduces a clean taxonomy instead of more anecdotes. The authors split reasoning into non-embodied and embodied.
Non-embodied reasoning is what most benchmarks test and it’s further divided into informal reasoning (intuition, social judgment, commonsense heuristics) and formal reasoning (logic, math, code, symbolic manipulation).
Embodied reasoning is where models must reason about the physical world, space, causality, and action under real constraints.
Across all three, the same failure patterns keep showing up.
> First are fundamental failures baked into current architectures. Models generate answers that look coherent but collapse under light logical pressure. They shortcut, pattern-match, or hallucinate steps instead of executing a consistent reasoning process.
> Second are application-specific failures. A model that looks strong on math benchmarks can quietly fall apart in scientific reasoning, planning, or multi-step decision making. Performance does not transfer nearly as well as leaderboards imply.
> Third are robustness failures. Tiny changes in wording, ordering, or context can flip an answer entirely. The reasoning wasn’t stable to begin with; it just happened to work for that phrasing.
One of the most disturbing findings is how often models produce unfaithful reasoning. They give the correct final answer while providing explanations that are logically wrong, incomplete, or fabricated.
This is worse than being wrong, because it trains users to trust explanations that don’t correspond to the actual decision process.
Embodied reasoning is where things really fall apart. LLMs systematically fail at physical commonsense, spatial reasoning, and basic physics because they have no grounded experience.
Even in text-only settings, as soon as a task implicitly depends on real-world dynamics, failures become predictable and repeatable.
The authors don’t just criticize. They outline mitigation paths: inference-time scaling, analogical memory, external verification, and evaluations that deliberately inject known failure cases instead of optimizing for leaderboard performance.
But they’re very clear that none of these are silver bullets yet.
The takeaway isn’t that LLMs can’t reason.
It’s more uncomfortable than that.
LLMs reason just enough to sound convincing, but not enough to be reliable.
And unless we start measuring how models fail, not just how often they succeed, we'll keep deploying systems that pass benchmarks, fail silently in production, and explain themselves with total confidence while doing the wrong thing.
That’s the real warning shot in this paper.
Paper: Large Language Model Reasoning Failures
tweet
Offshore
Video
Startup Archive
Steve Jobs on his strategy for saving Apple from bankruptcy
Apple was on the verge of bankruptcy when Steve Jobs returned to the company in July of 1997. The clip below is from a CNBC interview three months later.
When asked about his strategy for turning the company around, Jobs shared the following advice:
“Somebody taught me a long time ago a very valuable lesson which is if you do the right things on the top line, the bottom line will follow. And what they meant by that was: if you get the right strategy, if you have the right people, and if you have the right culture at your company, you’ll do the right products. You’ll do the right marketing. You’ll do the right things logistically and in manufacturing and distribution. And if you do all those things right, the bottom line will follow.”
Video source: @CNBC (1997)
tweet
Offshore
Video
Brady Long
RT @thisdudelikesAI: This is revolutionary
Being able to access all of these models in the same Figma-like collab screen makes creating marketing videos as a team at least 60% easier.
@TopviewAIhq https://t.co/goKJdDCEOa
tweet
The @figma for AI Content Creation is finally here.
Meet Topview 4.0: The world’s first collaborative AI video creation board.
- Real-time Collaboration: Create, review, and iterate with your team.
- Seamless Flow: Prompt → Image → Video → Avatar in one tab.
- All Top Models: Seedance, VEO, Sora, Kling, Nano Banana & more.
Use code: LPS3TEDZ for 20% off!!! - TopviewAI
tweet
Offshore
Video
DAIR.AI
RT @omarsar0: Agentic Video Editing
This is crazy!
I just asked Claude Code to build me an entire agent-powered video editing app.
~10K lines of code.
Uses Claude Agent SDK + Claude Opus 4.6.
It's really good.
Runs locally. Highly customizable.
You can just build things. https://t.co/P8y6F0uKZK
tweet
Offshore
Photo
The Few Bets That Matter
$TMDX received unconditional FDA approval for its heart trial a few weeks after the one for lungs.
Practically, this means both next-gen OCS for heart and lungs were approved in their current form; no upgrades required to move forward.
That’s a major green light.
The next phase consists of onboarding patients for the clinical studies. Lungs have already started, under predefined trial rules (patient count, conditions, endpoints), and hearts are about to follow.
This phase is designed to clinically prove, with real cases, that outcomes using OCS are superior to cold storage, with a clean dataset - also to be compared against other technologies even if this isn't the main objective.
Management said this approval was imminent back in January. Now we’re there.
$TMDX remains one of the most overlooked healthcare plays as the market rotates out of tech and toward safer names.
tweet