Offshore
Video
Dimitry Nakhla | Babylon Capital®
Dev Kantesaria is known for remarkably low trading activity — and an exceptional long-term track record.
When asked for his “secret sauce” for staying away from the sell button, his answer was simple:
Focus on 𝐛𝐮𝐬𝐢𝐧𝐞𝐬𝐬 𝐪𝐮𝐚𝐥𝐢𝐭𝐲.
“It’s a much easier way to make money, it’s more tax efficient, and it’s a more predictable way to make money. So the lack of turnover is really nothing other than a natural outcome of how we think about business quality.”
___
Dev credits the Warren Buffett and Charlie Munger playbook — avoid short-term trading (speculation) and instead own companies that can compound intrinsic value year after year for decades.
This ties closely to Chris Hohn’s emphasis on long-termism.
𝘞𝘩𝘦𝘯 𝘺𝘰𝘶 𝘢𝘯𝘤𝘩𝘰𝘳 𝘰𝘯 𝘲𝘶𝘢𝘭𝘪𝘵𝘺 𝘢𝘯𝘥 𝘥𝘶𝘳𝘢𝘣𝘪𝘭𝘪𝘵𝘺, 𝘥𝘰𝘪𝘯𝘨 𝘯𝘰𝘵𝘩𝘪𝘯𝘨 𝘣𝘦𝘤𝘰𝘮𝘦𝘴 𝘵𝘩𝘦 𝘥𝘦𝘧𝘢𝘶𝘭𝘵 — 𝘢𝘯𝘥 𝘰𝘧𝘵𝘦𝘯 𝘵𝘩𝘦 𝘮𝘰𝘴𝘵 𝘱𝘳𝘰𝘧𝘪𝘵𝘢𝘣𝘭𝘦 𝘢𝘤𝘵𝘪𝘰𝘯.
Video: Good Investing Talks Podcast (10/27/2022)
Offshore
Photo
Javier Blas
COLUMN: India, one of the world's top three energy consumers, is typically worried about shortages and prices.
Currently, though, everyone in New Delhi feels relaxed. Ask officials and executives: many are convinced oil and gas have become a buyer’s market.
@Opinion https://t.co/oeAGzz8Lip
The Transcript
RT @TheTranscript_: $META CEO: Facebook video time growing in the U.S.
"On Facebook, video time continued to grow double digits year-over-year in the US, and we're seeing strong results from our ranking and product efforts on both feed and video surfaces."
Chips & SaaS
RT @jukan05: Rumor: Starting with TPU v8, Google will no longer use HBM?
The move was reportedly triggered by the global HBM capacity shortage, which will be unable to meet AI growth demands over the next 2 to 3 years. At the same time, traditional HBM is limited by being fixed on the motherboard, which creates a capacity ceiling.
Accordingly, Google will develop a new solution to launch in 2027. Physically, it involves removing HBM and building independent DRAM memory cabinets (each containing 16–32 trays), with memory dynamically allocated through photonic technology.
This approach deconstructs the originally monolithic HBM component into three parts:
- Transport Layer: Employs all-optical interconnects, ensuring cross-cabinet communication efficiency through OCS (Optical Circuit Switching) and customized CXL protocols. The CPUs, GPUs, and memory modules of the memory pool share a single set of protocols.
- Storage Layer: Utilizes large-scale DRAM arrays to replace HBM, significantly increasing the addressing space. The memory corresponding to a single TPU can leap from 192GB/256GB to 512GB or even above 768GB.
- Control Layer: Adds dedicated memory-side CPU servers for management.
Compared to the native "TPU+HBM" direct connection, this three-layer split-and-recombine solution results in a compute-efficiency loss of less than 2%.
First is OCS, which enables high-speed switching in an all-optical environment and achieves bandwidth and latency close to a direct HBM or silicon-photonic HBM connection. Traditional copper Ethernet typically has a latency of over 200 nanoseconds, while an OCS all-optical switching network can reduce latency to below 100 nanoseconds, which is why it matters.
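Why the latency numbers above matter: sustaining high memory bandwidth over a longer link requires proportionally more requests in flight. A minimal Little's-law sketch; the bandwidth and request-size figures are illustrative assumptions, only the 200 ns / 100 ns latencies come from the note:

```python
# Little's law sketch: in-flight request count needed to sustain a
# target memory bandwidth at a given interconnect latency.
# Bandwidth and request size are illustrative assumptions.

def outstanding_requests(bandwidth_gbs: float, latency_ns: float,
                         request_bytes: int = 64) -> float:
    """Concurrency = throughput x latency (Little's law)."""
    bytes_per_ns = bandwidth_gbs  # 1 GB/s == 1 byte/ns
    return bytes_per_ns * latency_ns / request_bytes

for name, latency_ns in [("copper Ethernet (~200 ns)", 200),
                         ("OCS all-optical (<100 ns)", 100)]:
    n = outstanding_requests(bandwidth_gbs=1000, latency_ns=latency_ns)
    print(f"{name}: ~{n:.0f} outstanding 64 B requests for 1 TB/s")
```

Halving link latency halves the concurrency the memory controller must keep in flight for the same bandwidth, which is the practical reason an OCS fabric can approximate near-local HBM behavior.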
Second, in this architecture, there is a dual-side CPU architecture (Tier-1 and Tier-2 CPUs):
Tier-1 CPU (TPU side): Located on the TPU motherboard, primarily responsible for interconnect communication between TPUs.
Tier-2 CPU (Memory pool side): Most likely deployed on the memory server (DRAM server) side, specifically responsible for communication coordination between TPUs and the distributed memory addressing space.
The Tier-2 CPU is deployed independently because, logically, the original TPU motherboard CPU could still read the memory pool, but using the old CPU would involve complex protocol conversions (such as translation between PCIe signals and CXL-like protocols), creating efficiency bottlenecks.
Third, the interface is completed directly at the chip level through a "photonic packaging interface." This method is similar to CPO (Co-Packaged Optics) technology, integrating optical interfaces directly within the package of chips like the CPU/TPU, replacing traditional external optical modules. The first supplier contacted during the solution design stage was Lightmatter, with multiple suppliers to follow.
This solution, which removes HBM and changes it to an external DRAM memory pool, actually converts what was originally ultra-high-frequency motherboard-level access into "cross-cabinet access." Theoretically, this would generate huge latency and efficiency losses. However, this is not the case. Specifically, complex electrical/optical conversions exist between chips, hosts, and ring networks; these hardware-level protocol conversions and settings generate significant hidden overhead invisible to users. After adopting the DRAM memory pool solution, although CXL translation is introduced, many cumbersome hardware protocol conversion steps from the original architecture are removed.
If HBM prices drop and performance improves due to capacity expansion by manufacturers like Samsung and Hynix over the next two years, Google is unlikely to return to the HBM solution due to cost considerations. Google does not believe that upstream manufacturers like Hynix, Samsung, and Micron will subvert their own main product line pricing or mass production strategies to accommodate one or two major customers. They might release some profit margin, but they will not cooperate to an extreme degree.
This solution also reduces reliance on CoWoS because HBM is no longer needed. At the same time, the HBM chips originally on the silicon interposer substrate occupied a large area; after removing HBM, the saved CoWoS area can be entirely given to the TPU's Compute Core. Thus, within the same physical dimensions, a TPU chip with stronger performance and a larger area can be made, no longer restricted by the physical size of HBM. Regarding memory, the V7 generation had a single HBM capacity of about 192GB, and V8A is about 256GB, but through memory pooling, the memory per TPU can easily double to 512GB or even reach 768GB or more.
The solution is expected to be implemented next year, with the final route determined before March 5. The initial deployment ratio is about 30%, with 100% replacement expected to be achieved in 3 years.
Sector Beneficiaries:
- OCS (Optical Engine): Lightmatter, as the primary supplier, provides photonic packaging interfaces, integrating optical interfaces within the chip package to replace external modules.
- CXL-like: Requires CXL-like chips (MXC chips) to achieve the interconnect between TPUs and the memory pool, costing $100 per chip. One chip manages two channels for two 256GB memory modules, matching the TPU and memory side synchronously. If it is 512GB, two MXC chips are needed; for 768GB, four chips.
- DRAM Modules: The quantity of GBs increases significantly.
- CPU: Each memory Tray needs to be equipped with a CPU for scheduling; high performance is not required here, and ARM-based CPUs can be used.
- PCB: Independent DRAM cabinets require large, multi-layer PCBs to carry a large number of DIMM slots.
Source: 国泰海通 (Guotai Haitong)
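Taking the note's per-TPU MXC figures at face value ($100 per chip; two chips for a 512GB configuration, four for 768GB), the interconnect cost per TPU can be sketched as a simple lookup. The mapping is copied from the rumor, not derived:

```python
# Rough per-TPU cost sketch for the rumored MXC (CXL-like) chips.
# Chip counts and the $100 price are quoted from the note, not derived.

MXC_CHIP_PRICE_USD = 100
CHIPS_PER_CONFIG = {512: 2, 768: 4}  # pooled GB per TPU -> MXC chips

def mxc_cost_per_tpu(pooled_gb: int) -> int:
    """Return the quoted MXC chip cost for a pooled-memory config."""
    return CHIPS_PER_CONFIG[pooled_gb] * MXC_CHIP_PRICE_USD

for gb in (512, 768):
    print(f"{gb} GB pooled per TPU: ${mxc_cost_per_tpu(gb)} in MXC chips")
```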
Offshore
Video
Lumida Wealth Management
COREWEAVE CEO ON WHY SILICON VALLEY DOESN'T UNDERSTAND INFRASTRUCTURE
Michael Intrator: "Silicon Valley has moved violently to the equity side of the balance sheet.
They've never needed to use debt because tech is so powerful that it throws off enough cash to finance everything through equity.
It's actually incredibly inefficient, but the technology is so world-changing that they can get away with it.
But when you're building physical infrastructure, that's capital-intensive?
You can't do that. The debt markets have ONE rule: pay me my goddamn money back.
Software belongs in Silicon Valley. Infrastructure needs Wall Street. Different problems need different money."
This is why most AI infrastructure companies are struggling to scale: they're trying to finance physical buildouts like software companies.
Here are highlights from a recent interview with Michael Intrator by @barronsonline
Offshore
Photo
Javier Blas
CHART OF THE DAY: US benchmark natural gas prices have fallen back to pre-cold-blast levels, hovering just above $3.5 per MMBtu.
(The expiring Feb 2026 contract, then the front-month, rose above $7.5 per MMBtu less than a week ago. The current front-month contract is March 2026) https://t.co/BDTHeQi0IU
A reminder that the relatively illiquid front-month Feb 2026 Henry Hub nat gas contract expires Wednesday.
That contract is up ~40% today (and 125% over the last five days), trading above $7 per MMBtu. The more liquid March 2026 contract is up 7% today to less than $3.8 per MMBtu. - Javier Blas
Offshore
Photo
The Transcript
Tyson Foods CEO: "Prepared Foods delivered top & bottom-line growth while Chicken reported its fifth consecutive quarter of Y/Y volume gains...protein demand continues to increase..."
$TSN: +3% PM https://t.co/UQgMk2canB
Offshore
Photo
The Transcript
Disney CEO: "We are pleased with the start to our fiscal year, and our achievements reflect the tremendous progress we’ve made."
$DIS: +1% Pre-Market https://t.co/IFtkTfIa6b
Offshore
Photo
Javier Blas
BREAKING: US shale oil and gas companies Devon and Coterra merge in an all-stock deal. After the transaction, Devon shareholders would own ~54% of the combined entity and Coterra holders ~46%.
Press release: https://t.co/K98BVTdWY3
Slide deck: https://t.co/73X1KThfl6 https://t.co/NfRo3tLvf5