Offshore
Moon Dev
Code Is the Equalizer: Building a $20k/Day Pseudo-Bot Without the Dev Costs

building a trading bot in 2026 is less about being a genius coder and more about having the guts to stop letting your emotions drive your portfolio into a ditch. i spent…
…bination of these basic building blocks that i have refined over the last five years of building in public
if you are still trading by hand you are essentially bringing a knife to a gunfight in a market that is increasingly dominated by ai and high-frequency systems. my goal with the roadmap and the open-source code is to give you the same tools the big players have without you having to spend a fortune on devs. code lets you backtest your ideas against historical data so you can see if your strategy actually works before you risk a single dollar of your hard-earned capital
the journey from hand trader to pseudo-automated trader is the most important step you will ever take for your financial freedom. as you start to automate your entries and exits you will notice that your life gets better because the machine handles the boredom and the stress. eventually you will find that you are no longer chasing the market but letting your systems do the work while you live your life the way you want to
i believe the era of the manual retail trader is coming to an end but the era of the retail coder is just getting started. through the pain of my own liquidations i found the path to automation and i am never going back to the old way of doing things. keep building and keep iterating because the great equalizer is right there in the terminal waiting for you to take control of your future
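the backtesting point above can be sketched in a few lines of python. everything here is illustrative: the crossover windows and the random-walk "history" are made up for the demo, not a real strategy or real market data.

```python
import random

def sma(series, window):
    """Simple moving average; None until enough data points exist."""
    return [
        sum(series[i - window + 1 : i + 1]) / window if i >= window - 1 else None
        for i in range(len(series))
    ]

def backtest(prices, fast=10, slow=30):
    """Long when the fast SMA is above the slow SMA, flat otherwise.

    Returns total return over the series (0.05 == +5%).
    """
    f, s = sma(prices, fast), sma(prices, slow)
    equity = 1.0
    for i in range(1, len(prices)):
        in_market = (
            f[i - 1] is not None and s[i - 1] is not None and f[i - 1] > s[i - 1]
        )
        if in_market:
            equity *= prices[i] / prices[i - 1]  # hold through today's move
    return equity - 1.0

# Synthetic random-walk price history standing in for downloaded data.
random.seed(42)
prices = [100.0]
for _ in range(500):
    prices.append(prices[-1] * (1 + random.gauss(0, 0.01)))

print(f"strategy return on synthetic data: {backtest(prices):+.1%}")
```

the point is not the number it prints: it is that you can run a rule against history and see a result before risking capital, instead of finding out live.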
Jukan
I've been building a bull case for Meta lately. This is a casual memo — would love to hear your thoughts.
---------------------------------
Meta: An Underappreciated AI Platform Strategy (Memo)
## 1. Does Meta Really Need the Best LLM?

ByteDance’s Doubao offers a telling precedent. Despite not being China’s most powerful model, Doubao leads the Chinese AI chatbot market with 170M MAU — far ahead of DeepSeek (72M) and Tencent’s Yuanbao (73M). The edge isn’t model performance but platform integration — ByteDance embedded AI across its entire ecosystem: Douyin, Jianying, Ola Friend, and even a ZTE Nubia AI-native phone.
Meta could become the Western ByteDance. Facebook, Instagram, WhatsApp, and Messenger combined reach ~4B MAU. Layer even a “good enough” AI on top of that, and Meta holds an overwhelming advantage in mass-market agent AI deployment.
To be clear, Meta hasn’t abandoned the frontier model race. It hired Scale AI’s Alexandr Wang for ~$14.3B and is developing “Avocado,” a proprietary next-gen model targeting H1 2026 release (reportedly achieving 10x+ compute efficiency vs. Llama 4). But even if Avocado falls short of the latest models from OpenAI or Anthropic, Meta’s AI strategy doesn’t fail. Platform-level utilization may matter more than benchmarks.
## 2. AI Hardware: Narrative vs. Results
Market attention is fixated on OpenAI’s hardware play (Jony Ive’s io acquisition for $6.4B, Foxconn partnership), but the AI hardware that’s actually selling is Meta’s Ray-Ban smart glasses.
- Ray-Ban Meta: 2M+ cumulative units sold; Q2 2025 sales tripled QoQ
- Ray-Ban Display ($799): US demand exceeded supply → UK, France, Italy, Canada launches postponed; waitlists extend through end of 2026
- EssilorLuxottica reviewing capacity expansion from 10M to 20–30M units/year (Bloomberg)
- All this demand before Avocado (next-gen LLM) has even been integrated
Meanwhile, OpenAI hardware remains at prototype stage; court filings indicate customer shipments delayed to post-February 2027.

The key differentiator: Meta’s partnership with EssilorLuxottica — the world’s largest eyewear company — secured a form factor people actually wear in public. Compare this to Google Glass, which failed largely on social acceptability. The Ray-Ban brand itself is a moat.
## 3. Ad CPM Upside Potential
Per The Information: OpenAI is pricing ChatGPT ads at ~$60 CPM — roughly 3x Meta’s average of $10–20. OpenAI’s rationale: a “high-intent” environment commands a premium.
But OpenAI still lacks basic ad infrastructure (only high-level metrics like total impressions/clicks; no conversion tracking; $200K minimum commitment). Meta’s decades of precision targeting and conversion attribution remain assets OpenAI simply doesn’t have.
Add AI-driven hyper-personalization → contextual awareness, real-time interest signals, purchase-journey stage optimization → meaningful room to push current $10–20 CPMs higher.
Even without reaching $60, a move to the $30–40 range would be transformative against Meta’s ~$160B annual ad revenue base — potentially unlocking tens of billions in incremental revenue. Ray-Ban glasses could also emerge as an entirely new ad channel via location-based AI ads and gaze-tracking engagement metrics.
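A back-of-envelope check on the "tens of billions" claim: ad revenue is roughly impressions × CPM / 1000, so with impressions held constant, revenue scales linearly with average CPM. The $15 baseline below is just the midpoint of the $10–20 range cited; note the larger figures assume the new CPM applies across all inventory, whereas in practice only a slice would reprice, which is why "tens of billions" is the conservative read.

```python
# Revenue scales linearly with average CPM at constant impression volume.
base_revenue = 160e9  # Meta's ~$160B annual ad revenue (from the memo)
old_cpm = 15.0        # midpoint of the $10-20 average CPM range cited

for new_cpm in (20.0, 30.0, 35.0):
    incremental = base_revenue * (new_cpm / old_cpm - 1)
    print(f"avg CPM ${old_cpm:.0f} -> ${new_cpm:.0f}: "
          f"+${incremental / 1e9:.0f}B incremental ad revenue")
```

Even the mildest case (a $15 → $20 average) is a ~$53B uplift, so a partial repricing of high-intent inventory comfortably lands in the tens of billions.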
$META
The Transcript
Moody's CEO: "Our 2025 results demonstrate the tremendous demand for Moody’s solutions..."
$MCO: +3% Pre-Market https://t.co/5Fe4Kr0QSI
Jukan
[Exclusive] Jensen Huang: "Show Me Greatness" … SK Hynix HBM4 Supply to NVIDIA Imminent
The guests of honor at a dinner held on the evening of February 14 at '99 Chicken,' a Korean fried chicken restaurant near NVIDIA's headquarters in Santa Clara, were SK Hynix's High Bandwidth Memory (HBM) engineers. NVIDIA CEO Jensen Huang arrived around 5:20 PM and spent roughly two hours going table to table, mixing "somaek" (soju-beer cocktails) for the 30-plus SK Hynix and NVIDIA engineers in attendance. Huang repeatedly told them "we are one team" and "I'm proud of you," expressing his gratitude. He also urged them to "deliver extraordinary results through relentless challenge and effort" in connection with the HBM4 (6th generation) that SK Hynix has committed to supply.
◇ The HBM4 Jensen Huang Is Waiting For
The semiconductor industry views it as highly unusual for Huang to personally host a dinner for a partner's engineers. The dinner was arranged on short notice after Huang instructed NVIDIA employees last week to "organize a dinner to encourage SK Hynix's HBM engineers." This underscores just how critical SK Hynix's HBM4 is to NVIDIA's future.
SK Hynix formally joined NVIDIA's supply chain in July 2020 with HBM2E (3rd generation) shipments. It went on to serve as the de facto sole supplier for HBM3 (4th gen) and HBM3E (5th gen), forging a relationship so tight alongside TSMC that the trio became known as the "AI semiconductor triple alliance."
HBM4 is regarded as the key component that will determine the performance of NVIDIA's next-generation AI accelerator "Vera Rubin," slated for the second half of this year. HBM4 is a high-performance DRAM module made by stacking 12 advanced DRAM dies, responsible for feeding massive volumes of data in a timely manner to the GPU that handles computation inside AI accelerators like Vera Rubin. NVIDIA has demanded HBM4 suppliers deliver operating speeds of "11 gigabits per second (Gbps) or higher" and "bandwidth of 3.0 terabytes per second (TB/s) or more" — specifications exceeding by over 30% what competitors like AMD require for their HBM4. In other words, NVIDIA has positioned HBM4 as Vera Rubin's key differentiator.
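The two spec numbers NVIDIA demands are directly related: peak stack bandwidth is pin speed times interface width. Assuming the JEDEC HBM4 per-stack interface width of 2048 bits (an assumption on my part; the article does not state it), the quoted figures check out:

```python
def stack_bandwidth_tb_s(pin_speed_gbps, bus_width_bits=2048):
    """Peak bandwidth of one HBM stack.

    pin speed (Gbit/s per pin) x bus width (bits), converted bits -> bytes
    and GB -> TB.
    """
    return pin_speed_gbps * bus_width_bits / 8 / 1000

print(stack_bandwidth_tb_s(11.0))   # NVIDIA's 11 Gbps floor  -> ~2.8 TB/s
print(stack_bandwidth_tb_s(11.7))   # SK Hynix / Samsung rate -> ~3.0 TB/s
print(stack_bandwidth_tb_s(13.0))   # Samsung's upper bound   -> ~3.3 TB/s
```

So the 11.7 Gbps parts clear NVIDIA's 3.0 TB/s bar with little margin, and Samsung's 3.3 TB/s headline figure corresponds to the top of its 13 Gbps range rather than its base 11.7 Gbps speed.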
◇ "The Timeline Is Tight, but Show Me Greatness"
Unlike HBM3E, which was essentially SK Hynix's one-man show, HBM4 — whose market opens in earnest in the second half — presents a different competitive landscape. Samsung Electronics shipped its first official HBM4 product to NVIDIA on February 12, an industry first, featuring operating speeds of 11.7 Gbps (up to 13 Gbps) and bandwidth of 3.3 TB/s.
SK Hynix has secured HBM4 performance of 11.7 Gbps or above and is currently mass-supplying paid samples to NVIDIA while carrying out performance optimization work. The industry expects SK Hynix to receive NVIDIA's official "go for mass supply" sign-off in the near term.
At the dinner, Huang urged SK Hynix engineers to "deliver top-performing HBM4 without any delays." In a closing toast just before the dinner ended, he said: "AI accelerators and HBM4 represent remarkable, extraordinary, and the world's most challenging technology. I'm proud of all of you working around the clock, and I'm confident you will deliver exceptional results." He added: "I know the development timeline for HBM4 and Vera Rubin is tight, but I believe in you. Now is the time for SK Hynix and NVIDIA to show the world greatness together."
◇ Green Light for SK Hynix HBM4
With Huang personally appearing at the dinner and calling for swift HBM4 delivery, analysts now see a higher likelihood that SK Hynix will maintain its position as the No. 1 supplier for HBM4 as well. NVIDIA tentatively allocated this year's volumes (HBM3E and HBM4) back in December, with SK Hynix reportedly securing over 55%, Samsung in the mid-to-high 20% range, and Micron at roughly 20%.
While Samsung's push forward in the HBM4 technology race has raised the possibility of shifts in supply share, the prevailing view is that SK Hynix stands a strong chance of securing the largest allocation once it completes quality optimization in Q1. A semiconductor industry source explained: "Final HBM4 optimization across all three memory makers won't be complete until around March. Recently, there has also been a trend of prioritizing increased commodity DRAM production to boost profitability over HBM market share."
Jukan
A new meme just dropped: Jensen Huang looking happy after getting a birthday cake. https://t.co/N2argPHqHY
The Transcript
Jones Lang LaSalle CEO: "We are pleased with our...performance, achieving new highs at year-end across key top- & bottom-line performance metrics as well as FCF...Looking ahead, we see significant runway for healthy growth with continued margin expansion."
$JLL: +4% PM https://t.co/MbDNnUN1Ks
The Transcript
Wingstop CEO: "In a year marked by uncertainty, the structural advantages of our operating model are reflected in our 15% Adjusted EBITDA growth in 2025."
$WING: +19% Pre-Market 🚀 https://t.co/Hm4PlhfiRJ
Bourbon Capital
$MSCI Projected FCF for the next few years https://t.co/NSUMAoTToY
Fernandez, $MSCI CEO, keeps buying more and more shares: $3.5M today at $523 https://t.co/cQHqujkI6Q
The Transcript
RT @TheTranscript_: Wednesday's earnings:
Before Open: $ADI $GRMN $SEDG $MCO $FVRR $WING $CSTM $GPN $GLBE $LBTYA $PERI
After Close: $CVNA $KGC $FIG $DASH $PAAS $EBAY $BKNG $BTG $RELY $EQX $OXY $RGLD https://t.co/ycFoqlkQ1m