Offshore
Video
Moon Dev
From $20K To $40M: The Python Strategy That Automates Social Arbitrage And Beats Bloomberg

Most people are looking for the next big trade in a terminal, but the real billion-dollar use case for AI is hiding in the brain rot of social media. If you can catch a trend six months before the suits on Wall Street even hear about it, you have an edge that no algorithm can beat.

I believe code is the great equalizer because it allows one person to do the work of a thousand analysts. I spent hundreds of thousands of dollars on developers in the past because I thought I could not code myself, but then I realized that liquidations were my only other option. Now I build these systems live and let them work for me twenty-four seven while I sleep, comfortable in the knowledge that I am not overtrading.

There is a guy named Chris who turned twenty thousand dollars into forty million by simply watching what was trending on social media before it ever hit a Bloomberg terminal. He is featured in the Market Wizards books, which are the gold standard for verifying the best traders in the world. The problem is that sitting on TikTok all day trying to find the next Abercrombie or Lululemon is exhausting for a human mind.

This is where Clawdbot comes into play to handle the heavy lifting of social arbitrage. She scans the platform every five minutes looking for specific signals that indicate a massive shift in consumer behavior. Most traders are busy drawing lines on a chart while we are busy identifying products that are literally sold out everywhere.
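The five-minute scan loop can be sketched in Python. The `fetch` and `handle` callbacks here are hypothetical placeholders, not the bot's real functions:

```python
import time

# Sketch of the five-minute scan loop described above. `fetch` and `handle`
# are placeholders for the bot's real data-fetching and alerting functions.
POLL_SECONDS = 5 * 60

def poll(fetch, handle, cycles=1, sleep=time.sleep):
    """Run the scan loop for a fixed number of cycles (pass a large number
    or wrap in `while True` for continuous operation)."""
    for _ in range(cycles):
        for post in fetch():
            handle(post)
        sleep(POLL_SECONDS)
```

Injecting `sleep` as a parameter is just a convenience so the loop can be tested without actually waiting five minutes per cycle.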

The secret sauce is in the keywords we use to filter through the noise of the internet. If you just look at your recommended feed, you are just consuming content, but if you search for terms like "restock alert" or "TikTok made me buy," you are looking at data. These are the breadcrumbs left by the next billion-dollar brand before the stock price reflects the reality.
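A minimal sketch of that keyword filter, assuming each post arrives as a dict with a `caption` field (the field name is an assumption; the first two search terms are the ones named above, and "sold out everywhere" is borrowed from the same discussion):

```python
# Sketch of the keyword filter. Post structure is an assumption; the term
# list is illustrative, taken from the terms discussed in the text.
SIGNAL_TERMS = [
    "restock alert",
    "tiktok made me buy",
    "sold out everywhere",
]

def find_signal_posts(posts):
    """Return posts whose caption mentions any demand-signal term."""
    hits = []
    for post in posts:
        caption = post.get("caption", "").lower()
        if any(term in caption for term in SIGNAL_TERMS):
            hits.append(post)
    return hits
```

Lowercasing the caption before matching keeps the filter case-insensitive, so "Restock Alert" and "restock alert" hit the same term.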

Wall Street is made up of an older generation that simply does not understand the speed of digital trends. They find out about things when the data hits their official reports, which is usually months after the product has already gone viral. By the time they are buying, we are already looking for the next exit or the next play.

I recently updated the system to version one because version zero was looking at popular videos that might have been months old already. Now the bot only focuses on videos uploaded within the last seven days to ensure the signal is as fresh as possible. This prevents us from chasing trends that have already peaked and lets us get in while the fire is still just a spark.
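The version-one freshness rule might look like this in Python; the `upload_time` field name is an assumption:

```python
from datetime import datetime, timedelta, timezone

# Version-one freshness rule: keep only videos uploaded in the last seven
# days, so stale viral hits from months ago are ignored.
MAX_AGE = timedelta(days=7)

def fresh_videos(videos, now=None):
    """Filter out videos older than MAX_AGE relative to `now`."""
    now = now or datetime.now(timezone.utc)
    return [v for v in videos if now - v["upload_time"] <= MAX_AGE]
```

A forty-day-old video with a million likes would pass version zero's popularity check but gets dropped here, which is exactly the failure mode the update was meant to fix.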

One example that popped up recently was the Walmart Birkin, which is something I would never have thought about in a million years. It had hundreds of thousands of likes and was driving massive traffic to a specific product line. Being a quant is about being a data dog and following where the numbers lead, regardless of whether you personally understand the product.

The reason I share all of this is that I want to see more people escape the trap of manual trading and liquidations. You have to iterate to success, and that means testing a hundred ideas just to find the one that actually has legs. For every hundred brands the bot finds, there might only be one that is worth a deep dive into the public company financials.

Most people will never take the time to learn how to code because they think it is too hard or too expensive. I am proof that you can start from zero and build fully automated systems that trade for you instead of against you. Once you have the bot watching the market twenty-four seven, you have successfully removed the emotional baggage that causes most traders to fail.

There is a massive shift happening where one person can run a billion-dollar company using AI agents as their staff. Sam Altman has talked about this, and I tend to agree because these bots do not get tired or bored. They will continue to scour the web for social arbitrage opportunities while you are out living your life.

If you want to follow this path, you need to adopt the RBI process, which stands for research, backtest, and implement. You find the strategy in a book like Market Wizards and then you build a tool to see if the data supports the thesis. If the backtest looks good, you put it into production and let the machine execute the vision without hesitation.
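As a toy illustration of the backtest step, here is a made-up check of whether day-over-day spikes in social mentions preceded positive forward returns. The spike rule and all the numbers are invented for the example:

```python
# Toy version of the "backtest" step in RBI: did day-over-day spikes in
# social mentions precede a positive forward return? The spike rule and the
# data are invented for illustration only.
def backtest(mentions, returns, spike_ratio=2.0):
    """mentions[i] is the day-i mention count; returns[i] is the forward
    return observed after day i. A signal fires when mentions at least
    double versus the prior day."""
    trades = wins = 0
    for i in range(1, len(mentions)):
        if mentions[i - 1] > 0 and mentions[i] / mentions[i - 1] >= spike_ratio:
            trades += 1
            if returns[i] > 0:
                wins += 1
    return trades, wins

# Two spikes (12 -> 30 and 31 -> 90), both followed by positive returns.
mentions = [10, 12, 30, 31, 90]
returns = [0.0, 0.1, 0.02, -0.01, 0.05]
print(backtest(mentions, returns))  # -> (2, 2)
```

A real backtest would need historical mention counts and price data; the point here is only the shape of the loop: define the signal, count the trades it would have fired, and check whether the thesis held often enough to be worth implementing.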

The ultimate goal is to bridge the gap between social sentiment and market reality before the rest of the world catches on. It is about finding the Starbucks limited cups or the next anti-aging skincare line that is flying off the shelves. When you have an AI agent doing the observation for you, the only thing left to do is make the decision.

Coding gave me the freedom to stop staring at charts and start thinking about the bigger picture of the markets. I do not put it past this technology to create the next generation of billionaires who never even worked on a trading floor. All it takes is the willingness to learn and the patience to iterate until the system works.

The world is changing fast, and the old ways of trading are becoming obsolete for the individual investor. You can either be the one who uses code to level the playing field, or you can continue to be the liquidity for those who do. The choice is yours, but the bots are already running and they are not waiting for anyone to catch up.
tweet
Offshore
Photo
Michael Fritzell (Asian Century Stocks)
RT @HiddenMonopoly: Nice study!

High gross margins and low volatility in gross margins may point to industries with high pricing power. https://t.co/koxTkBoLLt
tweet
Michael Fritzell (Asian Century Stocks)
After Coupang's leak, the entire company is on the verge of breaking down. In Substack's case, no one is batting an eye?

Very strange - it's as if everyone's just taking cues from others on when and why they should become outraged

What's missing:
- A press release
- How many contact details were leaked (700,000)
- What else was leaked other than emails/phone numbers, specifically: 1) full names 2) user ID 3) Stripe ID 4) profile pictures 5) account creation dates 6) social media handles
- Why it happened
- Michael Fritzell (Asian Century Stocks)
tweet
Jukan
* Korea Economic Daily: NVIDIA allocated the HBM4 volume it needs to the three memory makers last December, with the split at mid-to-high 50% for SK hynix, mid-20% for Samsung Electronics, and around 20% for Micron.

** This report runs contrary to SemiAnalysis’s estimates.
tweet
Offshore
Photo
Brady Long
RT @thisguyknowsai: 10 Powerful Gemini 3.0 prompts that will help you build a million dollar business (steal them): https://t.co/oL0BVzPIum
tweet
Offshore
Photo
Jukan
Why Did Samsung Only Receive 20% of Nvidia's HBM4 Allocation? - Hankyung

Samsung Electronics is set to begin mass production and shipment of HBM4 (6th-generation High Bandwidth Memory) to Nvidia this month — an industry first. However, Samsung's share of Nvidia's total HBM4 demand is reportedly only in the "mid-20%" range. Samsung's HBM4 boasts the industry's highest performance (operating speed of 11.7 gigabits per second) and was the first to pass the quality (qual) test. So why is Samsung's projected share of Nvidia's allocation still behind SK hynix (mid-50% range), which is still undergoing its final qual test? We sought to answer this question that has been growing in the market recently.

According to semiconductor industry sources on the 9th, Nvidia tentatively allocated HBM4 volumes to Samsung Electronics, SK hynix, and Micron in December of last year — SK hynix at mid-50%, Samsung at mid-20%, and Micron at around 20%. There is a reason Nvidia allocated volumes before qual tests were even completed: since HBM production takes "more than six months," volumes must be assigned to suppliers in advance to ensure stable production of Nvidia's new AI accelerator, Vera Rubin, equipped with HBM4, in the second half of this year. The allocations reportedly took into account years of transaction history with HBM suppliers, each company's HBM4 production capacity, and the likelihood of passing qual tests.

Satisfied with Proving HBM4 Technical Prowess… Samsung's Profitability Maximization Strategy

Samsung's 20%-range share has drawn "disappointing" assessments. But there are reasons behind it. First, there is Samsung's own strategy. Samsung poured everything into proving its competitiveness in the HBM4 market, which is ramping up this year — essentially an "all-in on technology" strategy. This is exemplified by the use of 4nm foundry technology for the base die (the "brain" of HBM4) and 10nm 6th-generation (1c) DRAM for the core die (the basic building block).

While the 4nm base die is not the most cutting-edge like 2nm, it still belongs to the advanced foundry tier. And 1c DRAM is currently the most advanced DRAM product available. Compared to a competitor using an older 12nm foundry process with 10nm 5th-generation (1b) DRAM, Samsung's HBM4 inevitably has higher production costs.

Yield is another factor. While there have been recent reports in the semiconductor industry that yields for HBM4's 1c DRAM have improved significantly, they still fall short of a competitor's 1b DRAM, which has achieved higher process maturity. Depending on contract terms with customers (wafer-level delivery vs. individual chip-level delivery), the impact varies, but generally, lower yields mean lower profitability.

Production capacity is also still insufficient. The production capacity for 1c DRAM — the fundamental building block of HBM4 — is at approximately 70,000 wafers per month, which is about 10% of Samsung's total DRAM capacity. Although the company has recently begun expanding its Pyeongtaek Plant 4, production capacity won't increase to around 190,000 wafers until a year from now. Manufacturing HBM4 also requires "cutting-edge packaging" that makes the GPU and HBM function as a single chip. This process further reduces the number of surviving chips. From Samsung's perspective, even if it wanted to produce more HBM4, it simply cannot at this point.

Commodity DRAM Prices Have Risen to HBM3E Levels… Even Higher Profitability

Demand remains strong for HBM3E 12-layer products, which will serve as the market's mainstream through the first half of this year. Samsung passed Nvidia's HBM3E 12-layer qual test around September–October of last year, but actual shipment volumes are reportedly not large. Instead, Samsung is reportedly receiving a flood of orders for "HBM3E 12-layer" from Broadcom, which co-designs Google's AI accelerator TPU, among others. For Samsung, there is little incentive to cut production capacity for HBM3E 12-layer — which uses 10nm 4th-generation (1a) DRAM — and go all-in on HBM4.

Prices for high-margin commodity DRAM used in servers, mobile devices, and PCs are surging. According to global investment bank Goldman Sachs, commodity DRAM prices this year are around $1.25 per gigabit (Gb), or about $10 per gigabyte (GB; 1 GB = 8 Gb). This is not far from HBM3E (5th-generation HBM) 12-layer products, the current mainstream in the HBM market. Since commodity DRAM does not require the cutting-edge packaging that HBM does, its profitability (margin) is reportedly incomparably higher than HBM.

For Samsung, which holds a production capacity advantage of over 1.2x compared to competitors in DRAM, a strategy of proving its technological recovery with the title of "first-ever HBM4 shipment to Nvidia" while focusing on the more profitable commodity DRAM business was deemed the rational choice.

Nvidia Also Needs SK hynix

From Nvidia's perspective, Samsung's strong showing is welcome news, but the company may have judged that allocating a larger share to SK hynix — a long-time partner it has worked closely with — is the more "stable" option. SK hynix is reportedly making progress in the qual testing process. For Nvidia, which has committed to deploying large volumes of HBM4 in its new AI accelerator Vera Rubin in the second half of the year, the stable supply from SK hynix — which has the largest HBM production capacity — is of paramount importance.

That said, future variables do exist. If a particular supplier's HBM4 fails to deliver adequate performance, Nvidia simply cannot accept the products. There is precedent: last year, when Samsung repeatedly failed Nvidia's HBM3E 12-layer qual test, the shares allocated to SK hynix and Micron increased. More recently, reports have emerged that Micron is experiencing difficulties with its HBM4 shipments to Nvidia, leading to projections that Samsung's share could rise to as high as 30%.
tweet
Offshore
Photo
Michael Fritzell (Asian Century Stocks)
What's up with 45 HK? https://t.co/Au3RWVEXtl
tweet
Offshore
Video
Brady Long
Great ad. The irony is that if GenSpark had existed back then, Ferris Bueller's Day Off might never have been made.

The Monday after the Super Bowl should be a national holiday.

We made a Super Bowl ad about it. Featuring Matthew Broderick and an AI that can actually autopilot your work.

Airing today during the game! 🏈 https://t.co/BbZfDXVx5R
- Genspark
tweet
Offshore
Photo
Jukan
RT @MarkosAAIG: Good post by @jukan05 . Not much to add here. Good to see that Micron also has its share in HBM4. Strengthens NVIDIA’s multi-sourcing strategy to the fullest.

These days there are a lot of headlines being thrown around, and we need to see final confirmation first on the exact market share and on what's essentially possible within capacity and yields as the key points.

Having SK hynix with the biggest market share is really obvious because of the capacity and the relationship they have built through the past years with NVIDIA.

Personally, I think that Samsung is the best wildcard bet here, although I own all three of them, because of their heavy capacity capex and their aggressive move on technology with the 1c DRAM. But again, that also needs to be proven first.

It’s going to be an interesting next few years to see what each player can deliver on yield, technological edge, and of course capacity. Those three things are the essence of what drives market share for the three big HBM players.
And of course, as I mentioned a few times, NVIDIA still prefers SK hynix because of their long-standing relationship and their capacity. But in an ideal end-market situation, they want that balanced so they remain in control over the platform and multi-source as much as possible to keep everything competitive and in their control.

Exciting times!

$NVDA
$SSNLF
$HXSCF
$MU

- Jukan
tweet