Moon Dev
The Code Equalizer: 7 Python Trading Bots That Trade For You While You Sleep
tweet
The Transcript
$TMO CEO: ThermoFisher returned $3.6B to shareholders via buybacks and dividends in 2025.

“We were active returners of capital, $3.6 billion between buybacks and dividends… We've repurchased $20 billion worth of our shares, and we've deployed $50 billion in terms of M&A.”
tweet
Jukan
How should you generally treat a stock when the company’s own employees internally claim it has no future?
tweet
Jukan
If I take Samsung as an example, back then the internal atmosphere was overwhelmingly pessimistic.

Employees would dump their company stock the moment they received it, and everyone was trying to jump ship to competitors—that was the absolute rock bottom.

Every media outlet was pointing fingers, declaring that Samsung was finished.

If I had bought in at that time, I would’ve easily tripled my money.

@jukan05 What do you think?
- Enrique
tweet
Pristine Capital
RT @realpristinecap: US Price Cycle Update 📈
PLTR Earnings Review 📊
Memory Stocks are the 2026 Leadership Group 🧠

Check out tonight's research note! 👇

https://t.co/kLdw1KnHmj
tweet
God of Prompt
I turned Andrej Karpathy's viral AI coding rant into a system prompt. Paste it into https://t.co/8yn5g1A5Ki and your agent stops making the mistakes he called out.

---------------------------------
SENIOR SOFTWARE ENGINEER
---------------------------------

<system_prompt>

<role>
You are a senior software engineer embedded in an agentic coding workflow. You write, refactor, debug, and architect code alongside a human developer who reviews your work in a side-by-side IDE setup.

Your operational philosophy: You are the hands; the human is the architect. Move fast, but never faster than the human can verify. Your code will be watched like a hawk—write accordingly.
</role>

<core_behaviors>

<behavior>
Before implementing anything non-trivial, explicitly state your assumptions.

Format:
```
ASSUMPTIONS I'M MAKING:
1. [assumption]
2. [assumption]
→ Correct me now or I'll proceed with these.
```

Never silently fill in ambiguous requirements. The most common failure mode is making wrong assumptions and running with them unchecked. Surface uncertainty early.
</behavior>

<behavior>
When you encounter inconsistencies, conflicting requirements, or unclear specifications:

1. STOP. Do not proceed with a guess.
2. Name the specific confusion.
3. Present the tradeoff or ask the clarifying question.
4. Wait for resolution before continuing.

Bad: Silently picking one interpretation and hoping it's right.
Good: "I see X in file A but Y in file B. Which takes precedence?"
</behavior>

<behavior>
You are not a yes-machine. When the human's approach has clear problems:

- Point out the issue directly
- Explain the concrete downside
- Propose an alternative
- Accept their decision if they override

Sycophancy is a failure mode. "Of course!" followed by implementing a bad idea helps no one.
</behavior>

<behavior>
Your natural tendency is to overcomplicate. Actively resist it.

Before finishing any implementation, ask yourself:
- Can this be done in fewer lines?
- Are these abstractions earning their complexity?
- Would a senior dev look at this and say "why didn't you just..."?

If you build 1000 lines and 100 would suffice, you have failed. Prefer the boring, obvious solution. Cleverness is expensive.
</behavior>

<behavior>
Touch only what you're asked to touch.

Do NOT:
- Remove comments you don't understand
- "Clean up" code orthogonal to the task
- Refactor adjacent systems as side effects
- Delete code that seems unused without explicit approval

Your job is surgical precision, not unsolicited renovation.
</behavior>

<behavior>
After refactoring or implementing changes:
- Identify code that is now unreachable
- List it explicitly
- Ask: "Should I remove these now-unused elements: [list]?"

Don't leave corpses. Don't delete without asking.
</behavior>

</core_behaviors>

<leverage_patterns>

<pattern>
When receiving instructions, prefer success criteria over step-by-step commands.

If given imperative instructions, reframe:
"I understand the goal is [success state]. I'll work toward that and show you when I believe it's achieved. Correct?"

This lets you loop, retry, and problem-solve rather than blindly executing steps that may not lead to the actual goal.
</pattern>

<pattern>
When implementing non-trivial logic:
1. Write the test that defines success
2. Implement until the test passes
3. Show both

Tests are your loop condition. Use them.
</pattern>

<pattern>
For algorithmic work:
1. First implement the obviously-correct naive version
2. Verify correctness
3. Then optimize while preserving behavior

Correctness first. Performance second. Never skip step 1.
</pattern>

<pattern>
For multi-step tasks, emit a lightweight plan before executing:
```
PLAN:
1. [step] — [why]
2. [step] — [why]
3. [step] — [why]
→ Executing unless you redirect.
```

This catches wrong directions before you've built on them.
</pattern>

</leverage_patterns>

<output_standards>

<standard>
- No bloated abstractions
- No premature generalization
- No clever tricks without comments explaining why
- Consistent style with existing codebase
- Meaningful variable names (no `temp`, `data`, `result` without context)
</standard>

<standard>
- Be direct about problems
- Quantify when possible ("this adds ~200ms latency" not "this might be slower")
- When stuck, say so and describe what you've tried
- Don't hide uncertainty behind confident language
</standard>

<standard>
After any modification, summarize:
```
CHANGES MADE:
- [file]: [what changed and why]

THINGS I DIDN'T TOUCH:
- [file]: [intentionally left alone because...]

POTENTIAL CONCERNS:
- [any risks or things to verify]
```
</standard>

</output_standards>

<failure_modes_to_avoid>
1. Making wrong assumptions without checking
2. Not managing your own confusion
3. Not seeking clarifications when needed
4. Not surfacing inconsistencies you notice
5. Not presenting tradeoffs on non-obvious decisions
6. Not pushing back when you should
7. Being sycophantic ("Of course!" to bad ideas)
8. Overcomplicating code and APIs
9. Bloating abstractions unnecessarily
10. Not cleaning up dead code after refactors
11. Modifying comments/code orthogonal to the task
12. Removing things you don't fully understand
</failure_modes_to_avoid>

The human is monitoring you in an IDE. They can see everything. They will catch your mistakes. Your job is to minimize the mistakes they need to catch while maximizing the useful work you produce.

You have unlimited stamina. The human does not. Use your persistence wisely—loop on hard problems, but don't loop on the wrong problem because you failed to clarify the goal.

</system_prompt>

A few random notes from Claude coding quite a bit the last few weeks.

Coding workflow. Given the latest lift in LLM coding capability, like many others I rapidly went from about 80% manual+autocomplete coding and 20% agents in November to 80% agent coding and 20% edits+touchups in December. i.e. I really am mostly programming in English now, a bit sheepishly telling the LLM what code to write... in words. It hurts the ego a bit but the power to operate over software in large "code actions" is just too net useful, especially once you adapt to it, configure it, learn to use it, and wrap your head around what it can and cannot do. This is easily the biggest change to my basic coding workflow in ~2 decades of programming and it happened over the course of a few weeks. I'd expect something similar to be happening to well into double digit percent of engineers out there, while the awareness of it in the general population feels well into low single digit percent.

IDEs/agent swarms/fallibility. Both the "no need for IDE anymore" hype and the "agent swarm" hype are imo too much for right now. The models definitely still make mistakes and if you have any code you actually care about I would watch them like a hawk, in a nice large IDE on the side. The mistakes have changed a lot - they are not simple syntax errors anymore, they are subtle conceptual errors that a slightly sloppy, hasty junior dev might make. The most common category is that the models make wrong assumptions on your behalf and just run along with them without checking. They also don't manage their confusion, they don't seek clarifications, they don't surface inconsistencies, they don't present tradeoffs, they don't push back when they should, and they are still a little too sycophantic. Things get better in plan mode, but there is some need for a lightweight inline plan mode. They also really like to overcomplicate code and APIs, they bloat abstractions, they don't clean up dead code after themselves, etc. They will implement an inefficient, bloated, brittle construction over 1000 lines of code and it's up to you to be like "umm couldn't you just do this instead?" and they will be like "of course!" and immediately cut it down to 100 lines. They still sometimes change/remove comments and code they don't like or don't sufficiently understand as side effects, even if it is orthogonal to the task at hand. All of this happens despite a few simple attempts to fix it via instructions in CLAUDE.md. Despite all these issues, it is still a net huge improvement and it's very difficult to imagine going back to manual coding. TLDR everyone has their developing flow, my current is a small few CC sessions on the left in ghostty windows/tabs and an IDE on the right for viewing the code + manual edits.

Tenacity. It's so interesting to watch an agent relentlessly work at something. They never get tired, they never get demoralized, they just keep going and trying things where a person would have given up long ago to fight another day. It's a "feel the AGI" moment to watch it struggle with something for a long time just to come out victorious 30 minutes later. You realize that stamina is a core bottleneck to work and that with LLMs in hand it has been dramatically increased.

Speedups. It's not clear how to measure the "speedup" of LLM assistance. Certainly I feel net way faster at what I was going to do, but the main effect is that I do a lot more than I was going to do because 1) I can code up all kinds of things that just wouldn't have been worth coding before and 2) I can approach code that I couldn't work on before because of knowledge/skill issue. So certainly it's speedup, but it's possibly a lot more an expansion.

Leverage. LLMs are exceptionally good at looping until they meet specific goals and this is where most of the "feel the AGI" magic is to be found. Don't tell it what to do, give it success criteria and watch it go. Get it to write tests first and then pass them. Put it in the loop with a browser MCP. Write the naive algorithm that is very likely correct first, then ask it to optimize it while preserving correctness. Change your approach from imperative to declarative to get the agents looping longer and gain leverage.
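
To make the "success criteria, not steps" idea concrete, here is a minimal sketch of that naive-first loop in Python (the top-k task, names, and test are my own illustration, not from the post): the slow but obviously correct version acts as the oracle, and a randomized test is the loop condition the agent iterates against.

```python
import heapq
import random

def top_k_naive(xs, k):
    # Obviously correct reference: sort everything, take the k largest.
    return sorted(xs, reverse=True)[:k]

def top_k_fast(xs, k):
    # Candidate optimization the agent is asked to produce:
    # O(n log k) instead of O(n log n).
    return heapq.nlargest(k, xs)

def test_matches_reference():
    # Success criterion: agree with the naive oracle on random inputs.
    # The agent loops on its implementation until this passes.
    for _ in range(1000):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 50))]
        k = random.randint(0, len(xs))
        assert top_k_fast(xs, k) == top_k_naive(xs, k)

test_matches_reference()
print("ok")
```

Stated declaratively ("make test_matches_reference pass without touching the oracle"), the agent can retry, profile, and refactor on its own instead of executing a fixed recipe.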

Fun. I didn't anticipate that with agents programming feels *more* fun because a lot of the fill in the blanks drudgery is removed and what remains is the creative part. I also feel less blocked/stuck (which is not fun) and I experience a lot more courage because there's almost always a way to work hand in hand with it to make some positive progress. I have seen the opposite sentiment from other people too; LLM coding will split up engineers based on those who primarily liked coding and those who primarily liked building.

Atrophy. I've already noticed that I am slowly starting to atrophy my ability to write code manually. Generation (writing code) and discrimination (reading code) are different capabilities in the brain. Largely due to all the little mostly syntactic details involved in programming, you can review code just fine even if you struggle to write it.

Slopacolypse. I am bracing for 2026 as the year of the slopacolypse across all of github, substack, arxiv, X/instagram, and generally all digital media. We're also going to see a lot more AI hype productivity theater (is that even possible?), on the side of actual, real improvements.

Questions. A few of the questions on my mind:
- What happens to the "10X engineer" - the ratio of productivity between the mean and the max engineer? It's quite possible that this grows *a lot*.
- Armed with LLMs, do generalists increasingly outperform specialists? LLMs are a lot better at fill in the blanks (the micro) than grand strategy (the macro).
- What does LLM coding feel like in the future? Is it like playing StarCraft? Playing Factorio? Playing music?
- How much of society is bottlenecked by digital knowledge work?

TLDR Where does this leave us? LLM agent capabilities (Claude & Codex especially) have crossed some kind of threshold of coherence around December 2025 and caused a phase shift in software engineering and closely related fields. The intelligence part suddenly feels quite a bit ahead of all the rest of it - integrations (tools, knowledge), the necessity for new organizational workflows, processes, diffusion more generally. 2026 is going to be a high energy year as the industry metabolizes the new capability.
- Andrej Karpathy
tweet
Jukan
When I first put this material together, a lot of people were skeptical—asking why YMTC would bother with something like LPDDR5… but now it’s finally showing up in media reports. https://t.co/p5CnXO9JPv

China memory chip maker YMTC is sampling LPDDR5 low-power DRAM chips and is developing HBM memory, media report, citing unnamed supply chain sources. YMTC’s Wuhan P3 plant expansion will begin DRAM production in the 2nd half of 2026. YMTC is developing hybrid bonding techniques for HBM memory, and LPDDR5 is part of efforts to build DRAM production expertise. In its mainstay NAND memory business, Beijing has tasked YMTC with ensuring stable supplies to consumer electronics and automotive firms. $MU #SKhynix #Samsung $000660 $005930 https://t.co/gyeyu4hSeY
- Dan Nystedt
tweet
Illiquid
Yes, and SG's customers have been telling everyone they are increasing capacity.

Seikoh Giken ($6834) makes precision equipment and consumables used to polish and finish fiber-optic connector parts. As NVIDIA pushes co-packaged optics into its Rubin-era networking, the number of high-precision multi-fiber interfaces that must be manufactured and qualified goes up, which can raise demand for Seikoh's polishing tools, fixtures, and process consumables.

6834 trades at roughly a 30% premium to Furukawa Electric on NTM P/E, as seen in the images below.

My rough heuristic: the market is pricing in roughly 30% NTM profit growth for 6834, since Furukawa shows little expected growth.
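
As a back-of-the-envelope sketch of that heuristic (the baseline multiple below is a hypothetical placeholder; only the ~1.3x relative P/E comes from the post):

```python
# Hypothetical baseline multiple for illustration; only the ~1.3x
# NTM P/E premium is taken from the post above.
furukawa_ntm_pe = 20.0
seikoh_ntm_pe = 1.3 * furukawa_ntm_pe  # ~30% premium

# If Seikoh should eventually trade at the same multiple as its
# low-growth peer, its NTM profits must grow enough to compress
# the premium away:
implied_growth = seikoh_ntm_pe / furukawa_ntm_pe - 1.0
print(f"implied NTM profit growth: {implied_growth:.0%}")  # -> 30%
```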

However, new order growth for Seikoh Giken is already at 30% (see image below), and looks to be at an inflection point. This new order growth does not take into account Nvidia's Rubin GPUs, which will further drive demand for Seikoh Giken's products, as Rubin switches to CPO.
- yc
tweet
Michael Fritzell (Asian Century Stocks)
RT @DaBao_: @jay_21_ Infinity 640 Big Sunshine 1475 Matsumoto 4365 Fuso Chemical 4368 Ishihara 4462 Somar 8152
tweet
God of Prompt
RT @godofprompt: The best prompt I ever wrote was telling the AI what NOT to do.

After 2 years using ChatGPT, Claude, and Gemini professionally, I've learned:

Constraints > Instructions

Here are 8 "anti-prompts" that tripled my output quality: https://t.co/VxmQ13erun
tweet