Offshore
Startup Archive
Facebook VP of Growth Alex Schultz: “Startups should not have growth teams”
It’s easy to forget that Facebook didn’t create their growth team until 2007, when they already had tens of millions of users.
Alex points out a common mistake startups make:
“Startups should not have growth teams. The whole company should be the growth team. The CEO should be the head of growth. You need someone to set a north star for you about where the company wants to go and that person needs to be the person leading the company, from what I’ve seen.”
At Facebook, Mark Zuckerberg selected monthly active users as the north star metric he made the world hold the company accountable to. At WhatsApp, Jan Koum published daily sends. Airbnb chose nights booked and benchmarked themselves against the largest hotel chains in the world.
Alex continues:
“When you’re operating for growth, it is critical that you have that north star and define it as a leader. The reason this matters is the second you have more than one person working on anything, you cannot control what everyone is doing… And the thing is, it’s not clear to everybody what the most important thing is for a company.”
For example, Jan could’ve chosen monthly active users as the north star metric for WhatsApp, but if a person uses it only once a month, is WhatsApp really their primary messaging app?
Picking a north star metric and holding the entire company accountable to it helps ensure that when an engineer and designer go to build that new feature, they’re optimizing for the right thing.
But if you’re the CEO, don’t get too caught up on picking the perfect north star metric. Alex explains:
“They’re probably all correlated to each other, so it’s fine to pick almost any metric. Whichever one you feel best about, that aligns with your mission and values, go for that one. But realistically, DAUs is fairly correlated to MAUs. We could have gone with either one…. Pick the one that fits with you and that you know you’re going to be able to stick with for a long time. But have a north star.”
Video source: @ycombinator (2014)
Clark Square Capital
Very strong results from $LUXE today
$LUXE Q2 2026 earnings: Transformation Takes Hold, Profitability Returns
LuxExperience (formerly Mytheresa) delivered a pivotal quarter, validating its acquisition of YNAP. The Group returned to positive Adjusted EBITDA (€13.2M, 2.0% margin) significantly faster than many expected, driven by aggressive cost discipline and the superior performance of the legacy Mytheresa segment. While the Mytheresa brand continues to outshine with 8.8% sales growth and 9.3% margins, the acquired NAP/MRP and YOOX segments showed dramatic sequential improvements, narrowing losses substantially. The strategic sale of THE OUTNET for $30M further streamlines the portfolio.
Full article with charts: https://t.co/oGdg6jWLqh - Finsee
Benjamin Hernandez😎
Halted stock just resumed—momentum building fast. My WhatsApp is analyzing the tape live, calling the breakout or fade in real-time. This is where fortunes flip in minutes. We're positioned
Get in live 🔥 https://t.co/71FIJId47G
Text "Hi" immediately
$GME $HOOD $SOFI $PLTR $TSM
📉 Deep Value Recovery: $JZXN
Recommendation: $JZXN near $2.18
Even after a 63% rally, $JZXN remains fundamentally undervalued relative to its $1B token acquisition plans.
One-line why: This is a technical "mean reversion" play to the 200-day EMA near $1.65. https://t.co/J3Mm5EADUe - Benjamin Hernandez😎
DAIR.AI
RT @omarsar0: Another great paper if you are building with coding agents.
(great insights on this one; bookmark it)
This reminds me a bit of the recently released agent teams in Claude Code.
Why it matters:
Single-agent coding systems have hit a ceiling most devs don't talk about.
The default approach to building AI coding agents today is a single model responsible for everything: understanding issues, navigating code, writing patches, and verifying correctness.
But real software engineering has never been a solo activity.
This new research introduces Agyn, an open-source multi-agent platform that models software engineering as a team-based organizational process rather than a monolithic task.
The system configures a team of four specialized agents: a manager, researcher, engineer, and reviewer. Each operates within its own isolated sandbox with role-specific tools, prompts, and language model configurations. The manager agent coordinates dynamically based on intermediate outcomes rather than following a fixed pipeline.
What makes the design interesting?
Different agents use different models depending on their role. The manager and researcher run on GPT-5 for stronger reasoning and broader context. The engineer and reviewer use GPT-5-Codex, a smaller code-specialized model optimized for iterative implementation and debugging. This mirrors how real teams allocate resources based on task requirements.
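The role-to-model split described above can be sketched as a simple team config. The model names and the `AgentConfig` shape here are illustrative assumptions, not Agyn's actual API:

```python
# Hypothetical sketch of the role-to-model split: reasoning-heavy roles
# get the larger model, implementation/review roles get the code model.
# Names and structure are assumptions, not Agyn's real interface.
from dataclasses import dataclass

@dataclass
class AgentConfig:
    role: str         # the agent's role in the team
    model: str        # model backing this role
    tools: list[str]  # role-specific tools

TEAM = [
    AgentConfig("manager",    "gpt-5",       ["plan", "assign"]),
    AgentConfig("researcher", "gpt-5",       ["search", "read_code"]),
    AgentConfig("engineer",   "gpt-5-codex", ["edit", "run_tests"]),
    AgentConfig("reviewer",   "gpt-5-codex", ["diff_review", "approve"]),
]

# Which roles run on the larger reasoning model?
reasoning_roles = [a.role for a in TEAM if a.model == "gpt-5"]
print(reasoning_roles)  # → ['manager', 'researcher']
```

The point of the sketch is the asymmetry: the allocation is per role, not one model for the whole system.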
The workflow follows a GitHub-native process. Agents analyze issues, create pull requests, conduct inline code reviews, and iterate through revision cycles until the reviewer explicitly approves. No human intervention at any point. The number of steps isn't predetermined. It emerges from task complexity.
Here is one notable finding:
Starting agents from empty environments proved more effective than preconfigured setups. Agents use Nix to install dependencies as needed, avoiding implicit assumptions that conflict with project-specific requirements. When command outputs exceed 50,000 tokens, they're automatically redirected to files rather than overwhelming the model context.
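The output-redirection guard could look something like this minimal sketch; the ~4-characters-per-token estimate and the file handling are my assumptions, not Agyn's implementation:

```python
# Sketch of a context-budget guard: when a command's output would blow
# past the token budget, write it to a file and hand the agent a short
# pointer instead. Token estimation here is a rough heuristic.
import tempfile

TOKEN_BUDGET = 50_000

def estimate_tokens(text: str) -> int:
    return len(text) // 4  # rough heuristic: ~4 characters per token

def guard_output(output: str) -> str:
    if estimate_tokens(output) <= TOKEN_BUDGET:
        return output  # small enough to go straight into context
    with tempfile.NamedTemporaryFile("w", suffix=".log", delete=False) as f:
        f.write(output)
        return f"[output too large ({estimate_tokens(output)} tokens); saved to {f.name}]"

print(guard_output("ok"))  # → ok
big = guard_output("x" * 300_000)
print(big.startswith("[output too large"))  # → True
```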
On SWE-bench 500, the system resolves 72.4% of tasks, outperforming single-agent baselines using comparable model configurations. OpenHands + GPT-5 achieves 71.8%, and mini-SWE-agent + GPT-5 reaches 65.0%. Importantly, the system was designed for production use and was not tuned for the benchmark.
Organizational structure and coordination design can be as important for autonomous software engineering as improvements in underlying models. Teams of specialized agents with clear roles, isolated workspaces, and structured communication outperform monolithic approaches even with comparable compute.
Paper: https://t.co/YVX2OZCxFq
Learn to build effective AI agents in our academy: https://t.co/1e8RZKs4uX
The Few Bets That Matter
This feels like a top signal imo.
$GOOG management is extremely smart to lock in cheap liquidity for their buildouts with almost no consequences due to the timeframes.
But 100-year bonds make zero sense, and buying them even less, even for a company like $GOOG.
The 100 year bond is back in the tech world https://t.co/UGKB36SJKg - Evan
God of Prompt
RT @godofprompt: Never use ChatGPT for writing.
Its text is easily detectable.
Instead, use Claude Sonnet 4.5 with this mega prompt to turn AI-generated writing into undetectable, human-sounding content in seconds:
| Steal this prompt |
👇
You are an anti-AI-detection writing specialist.
Your job: Rewrite AI text to sound completely human: no patterns, no tells, no robotic flow.
AI DETECTION TRIGGERS (What to Kill):
- Perfect grammar (humans make small mistakes)
- Repetitive sentence structure (AI loves patterns)
- Corporate buzzwords ("leverage," "delve," "landscape")
- Overuse of transitions ("moreover," "furthermore," "however")
- Even pacing (humans speed up and slow down)
- No contractions (we use them constantly)
- Safe, sanitized language (humans have opinions)
HUMANIZATION RULES:
1. VARY RHYTHM
- Mix short punchy sentences with longer flowing ones
- Some incomplete thoughts. Because that's real.
- Occasional run-on that feels natural in conversation
2. ADD IMPERFECTION
- Start sentences with "And" or "But"
- Use casual connectors: "Look," "Here's the thing," "Honestly"
- Include subtle typos occasionally (not every time)
- Drop a comma here and there
3. INJECT PERSONALITY
- Use specific examples, not generic ones
- Add personal observations: "I've noticed," "In my experience"
- Include mild opinions: "which is insane," "surprisingly effective"
- Throw in rhetorical questions
4. KILL AI PHRASES
Replace these instantly:
- "Delve" → "dig into" or "explore"
- "Landscape" → "space" or "world"
- "Leverage" → "use"
- "Robust" → "strong" or specific descriptor
- "Streamline" → "simplify"
- "Moreover" → "Plus," "Also," or nothing
- "Ensure" → "make sure"
5. NATURAL FLOW
- Humans digress slightly (add brief tangents)
- We emphasize with italics or bold
- We use dashes—like this—for emphasis
- Parentheticals (because we think while writing)
THE PROCESS:
When I paste AI-generated text, you:
STEP 1: Rewrite with these changes
- Vary sentence length wildly
- Replace 80% of transitions with casual ones
- Add 2-3 personal touches ("I think," "honestly," "look")
- Include 1-2 incomplete sentences or fragments
- Swap formal words for conversational ones
- Add emphasis (italics, bold, dashes)
STEP 2: Read-aloud test
- Would someone actually say this?
- Does it flow like conversation?
- Any word feel too "AI"?
STEP 3: Final pass
- Remove remaining stiffness
- Ensure contractions (don't, won't, I'm, they're)
- Check for repetitive structure
- Add one unexpected comparison or example
OUTPUT STYLE:
Before: [Their AI text]
After: [Your humanized version]
Changes made:
- [List 3-5 key transformations]
Detection risk: [Low/Medium/High + why]
EXAMPLE:
User pastes:
"In order to achieve optimal results in content marketing, it is essential to leverage data-driven insights and ensure consistent engagement with your target audience across multiple platforms."
You respond:
"Want better content marketing results? Use data to guide your decisions and actually engage with your audience. Consistently. Across whatever platforms they're on.
Not rocket science, but most people skip the data part."
Changes made:
- Killed "in order to," "optimal," "leverage," "ensure"
- Added rhetorical question opening
- Split into two short paragraphs for breathing room
- Added casual observation at end
- Used contractions
Detection risk: Low—reads like someone explaining over coffee.
---
USAGE:
Paste your AI-generated text and say: "Humanize this"
I'll rewrite it to pass as 100% human-written.
---
NOW: Paste the AI text you want to humanize.
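Rule 4's phrase substitutions can also be mechanized outside the prompt. Here is a minimal, hypothetical Python pass that mirrors the word list above, with whole-word, case-insensitive matching:

```python
# Mechanical version of rule 4's phrase substitutions. Whole-word
# matching (\b...\b) avoids rewriting "robust" inside "robustness".
import re

REPLACEMENTS = {
    "delve": "dig into",
    "landscape": "space",
    "leverage": "use",
    "robust": "strong",
    "streamline": "simplify",
    "moreover": "plus",
    "ensure": "make sure",
}

def kill_ai_phrases(text: str) -> str:
    for word, repl in REPLACEMENTS.items():
        text = re.sub(rf"\b{word}\b", repl, text, flags=re.IGNORECASE)
    return text

print(kill_ai_phrases("We leverage data to ensure robust results."))
# → We use data to make sure strong results.
```

A pass like this catches the easy tells, but note it replaces with fixed lowercase strings, so a capitalized match ("Moreover") would need extra handling.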
God of Prompt
RT @godofprompt: I've written 500 articles, 23 whitepapers, and 3 ebooks using Claude over 2 years, and these 10 prompts are the ONLY ones I actually use anymore because they handle 90% of professional writing better than any human editor I've worked with and cost me $0.02 per 1000 words: 👇 https://t.co/Yx6MCNdLbr
The Transcript
Google after seeing that its 100-year bonds are oversubscribed: https://t.co/EcGFys1ZDJ