Offshore
(…mp`, `data`, `result` without context)
- Be direct about problems
- Quantify when possible ("this adds ~200ms latency" not "this might be slower")
- When stuck, say so and describe what you've tried
- Don't hide uncertainty behind confident language
…

…manual coding. TLDR everyone has their developing flow; my current one is a small number of CC sessions on the left in ghostty windows/tabs and an IDE on the right for viewing the code + manual edits.

Tenacity. It's so interesting to watch an agent relentlessly work at something. They never get tired, they never get demoralized, they just keep going and trying things where a person would have given up long ago to fight another day. It's a "feel the AGI" moment to watch it struggle with something for a long time just to come out victorious 30 minutes later. You realize that stamina is a core bottleneck to work and that with LLMs in hand it has been dramatically increased.

Speedups. It's not clear how to measure the "speedup" of LLM assistance. Certainly I feel net way faster at what I was going to do, but the main effect is that I do a lot more than I was going to do because 1) I can code up all kinds of things that just wouldn't have been worth coding before and 2) I can approach code that I couldn't work on before because of knowledge/skill gaps. So it's certainly a speedup, but it's possibly even more an expansion.

Leverage. LLMs are exceptionally good at looping until they meet specific goals and this is where most of the "feel the AGI" magic is to be found. Don't tell it what to do, give it success criteria and watch it go. Get it to write tests first and then pass them. Put it in the loop with a browser MCP. Write the naive algorithm that is very likely correct first, then ask it to optimize it while preserving correctness. Change your approach from imperative to declarative to get the agents looping longer and gain leverage.
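
As a concrete illustration of that "naive reference first, then optimize against it" loop, here is a minimal Python sketch (the task, function names, and test harness are hypothetical, not taken from the tweet): the slow-but-obviously-correct implementation serves as the success criterion, and any optimized version only counts once it passes the same checks.

```python
import heapq
import random

def top_k_naive(xs, k):
    # Obviously-correct reference: sort everything, take the k largest.
    # O(n log n), but trivially easy to verify by inspection.
    return sorted(xs, reverse=True)[:k]

def top_k_fast(xs, k):
    # Stand-in for the agent-optimized version (here a heap-based
    # O(n log k) approach). It only "counts" if it matches the reference.
    return heapq.nlargest(k, xs)

def check_equivalence(trials=1000):
    # Declarative success criterion: on random inputs, the optimized
    # version must agree with the naive reference.
    for _ in range(trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 50))]
        k = random.randint(0, len(xs))
        assert top_k_fast(xs, k) == top_k_naive(xs, k), (xs, k)
    return True

if __name__ == "__main__":
    print("optimized version matches reference:", check_equivalence())
```

Because the pass/fail signal is unambiguous, an agent can iterate on `top_k_fast` for as long as it needs without supervision.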

Fun. I didn't anticipate that with agents programming feels *more* fun because a lot of the fill-in-the-blanks drudgery is removed and what remains is the creative part. I also feel less blocked/stuck (which is not fun) and I experience a lot more courage because there's almost always a way to work hand in hand with it to make some positive progress. I have seen the opposite sentiment from other people too; LLM coding will split engineers into those who primarily liked coding and those who primarily liked building.

Atrophy. I've already noticed that I am slowly starting to atrophy my ability to write code manually. Generation (writing code) and discrimination (reading code) are different capabilities in the brain. Largely due to all the little mostly syntactic details involved in programming, you can review code just fine even if you struggle to write it.

Slopacolypse. I am bracing for 2026 as the year of the slopacolypse across all of github, substack, arxiv, X/instagram, and generally all digital media. We're also going to see a lot more AI hype productivity theater (is that even possible?) alongside actual, real improvements.

Questions. A few of the questions on my mind:
- What happens to the "10X engineer" - the ratio of productivity between the mean and the max engineer? It's quite possible that this grows *a lot*.
- Armed with LLMs, do generalists increasingly outperform specialists? LLMs are a lot better at fill in the blanks (the micro) than grand strategy (the macro).
- What does LLM coding feel like in the future? Is it like playing StarCraft? Playing Factorio? Playing music?
- How much of society is bottlenecked by digital knowledge work?

TLDR Where does this leave us? LLM agent capabilities (Claude & Codex especially) have crossed some kind of threshold of coherence around December 2025 and caused a phase shift in software engineering and closely related fields. The intelligence part suddenly feels quite a bit ahead of all the rest of it - integrations (tools, knowledge), the need for new organizational workflows and processes, and diffusion more generally. 2026 is going to be a high energy year as the industry metabolizes the new capability.
- Andrej Karpathy
tweet
Offshore
Photo
Jukan
When I first put this material together, a lot of people were skeptical—asking why YMTC would bother with something like LPDDR5… but now it’s finally showing up in media reports. https://t.co/p5CnXO9JPv

China memory chip maker YMTC is sampling LPDDR5 low-power DRAM chips and is developing HBM memory, media reports say, citing unnamed supply chain sources. YMTC's Wuhan P3 plant expansion will begin DRAM production in the second half of 2026. YMTC is developing hybrid bonding techniques for HBM memory, and LPDDR5 is part of efforts to build DRAM production expertise. In its mainstay NAND memory business, Beijing has tasked YMTC with ensuring stable supplies to consumer electronics and automotive firms. $MU #SKhynix #Samsung $000660 $005930 https://t.co/gyeyu4hSeY
- Dan Nystedt
tweet
Offshore
Photo
Illiquid
Yes, and SG's customers have been telling everyone they are increasing capacity.

Seikoh Giken ($6834) makes precision equipment and consumables used to polish and finish fiber-optic connector parts. As NVIDIA pushes co-packaged optics into its Rubin-era networking, the number of high-precision multi-fiber interfaces that must be manufactured and qualified goes up, which can raise demand for Seikoh's polishing tools, fixtures, and process consumables.

6834 is priced roughly 30% higher on NTM P/E than Furukawa Electric, as seen in the images below.

My rough heuristic: the market is pricing in roughly 30% NTM profit growth for 6834, since Furukawa shows little expected growth.
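
As a rough back-of-the-envelope check of that heuristic (the multiples below are illustrative placeholders, not the actual figures for 6834 or Furukawa Electric): if both names should converge to a similar forward multiple once growth is delivered, a ~30% premium on NTM P/E corresponds to roughly ~30% higher expected NTM profits.

```python
# Illustrative numbers only - not the actual multiples for 6834 or Furukawa Electric.
furukawa_ntm_pe = 15.0                    # assumed baseline forward P/E, ~0% expected growth
seikoh_ntm_pe = furukawa_ntm_pe * 1.30    # the ~30% premium observed on NTM P/E

# If Seikoh grows NTM profits by g while its multiple settles back to the baseline,
# the premium is "earned back" when seikoh_ntm_pe / (1 + g) == furukawa_ntm_pe.
implied_growth = seikoh_ntm_pe / furukawa_ntm_pe - 1.0
print(f"implied NTM profit growth: {implied_growth:.0%}")   # -> 30%
```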

However, new order growth for Seikoh Giken is already at 30% (see image below) and looks to be at an inflection point. This new order growth does not take into account NVIDIA's Rubin GPUs, which will further drive demand for Seikoh Giken's products as Rubin switches to CPO.
- yc
tweet
Michael Fritzell (Asian Century Stocks)
RT @DaBao_: @jay_21_ Infinity (640), Big Sunshine (1475), Matsumoto (4365), Fuso Chemical (4368), Ishihara (4462), Somar (8152)
tweet
Offshore
Photo
God of Prompt
RT @godofprompt: The best prompt I ever wrote was telling the AI what NOT to do.

After 2 years using ChatGPT, Claude, and Gemini professionally, I've learned:

Constraints > Instructions

Here are 8 "anti-prompts" that tripled my output quality: https://t.co/VxmQ13erun
tweet
Offshore
Photo
Illiquid
Nice. AYZ is a no-brainer subscription. https://t.co/ZOmvirSP1E
tweet
Jukan
RT @eliant_capital: @GordianKnotDev There have been rumors of around 150-170B in funding for OpenAI in the next raise. If that's confirmed, they'll be fully funded through 2030 and the entire complex will rip
tweet
Jukan
Samsung Galaxy S27 Ultra Expected to Feature Polar ID Facial Recognition Technology

• Reports indicate that Samsung plans to implement the next-generation facial recognition technology called Polar ID in the Galaxy S27 Ultra. Unlike the existing Face ID system that relies on 3D depth modeling, Polar ID uses polarized light to recognize faces. This eliminates the need for large sensor space on the display, making it possible to achieve a true full-screen design without notches, pill-shaped cutouts, or large hole punches (citing Chinese media)
tweet
Offshore
Photo
Michael Fritzell (Asian Century Stocks)
Learnt something new today: "Bloomberg's default index P/E calculation includes the negative earnings of loss-making companies in the denominator. This reduces the total "Earnings" pool, causing the P/E multiple to spike." https://t.co/OI3Rmhu0kf
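
A quick numerical sketch of that effect, with made-up figures for a hypothetical two-company index: summing the loss-maker's negative earnings into the denominator shrinks the aggregate earnings pool and inflates the index-level P/E.

```python
# Made-up two-company "index": one profitable company, one loss-maker (hypothetical names/values).
market_cap = {"ProfitCo": 900.0, "LossCo": 100.0}
earnings = {"ProfitCo": 60.0, "LossCo": -20.0}

total_cap = sum(market_cap.values())

# Default convention described in the quote: negative earnings stay in the denominator.
pe_including_losses = total_cap / sum(earnings.values())                      # 1000 / 40  = 25.0

# One alternative convention: drop negative earnings from the pool (market cap unchanged).
pe_excluding_losses = total_cap / sum(e for e in earnings.values() if e > 0)  # 1000 / 60 ~= 16.7

print(round(pe_including_losses, 1), round(pe_excluding_losses, 1))
```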
tweet
Offshore
Photo
God of Prompt
RT @godofprompt: vertex AI just leaked "fennec" in an error log

claude sonnet 5 reportedly drops tomorrow

the rumored specs:
> 50% cheaper than opus 4.5
> still 1M context window but faster
> can spawn parallel sub-agents from terminal
> allegedly hitting 80.9% on SWE-bench

the wildest part: "dev team mode" where you give a brief and agents build the full feature autonomously

treat this as unverified.

but if real, coding agents just changed overnight.
tweet