Offshore
Photo
God of Prompt
sometimes i have doubts about everything i'm making

ai will do this, ai will do that

what will be left for us to do?

but words like these make me grind harder https://t.co/EVBaHI8QUa
tweet
Offshore
Photo
God of Prompt
RT @alex_prompter: ai will never feel the same way i feel about customer reviews https://t.co/aQnGtXjEDG
tweet
Offshore
Photo
Dimitry Nakhla | Babylon Capital®
RT @firstadopter: I went through the legal plugin code the geniuses in mainstream media said sparked the selloff panic last week.

It's literally a super basic, glorified prompt. Each legal "skill" is 200 lines.

Exhibit No. 9283 on how no one in media and Wall Street does any real research or work and just vibes by creating imaginary narratives and generating pseudo-analysis

The entire selloff this week was sparked by a completely false narrative, amplified by technically illiterate media pouring gas on the panic, just like DeepSeek last year. You can't make this stuff up.

Here's what Claude says after reading the GitHub on the infamous Anthropic legal plugin. It's basically a prompt engineering guideline that USES software from Microsoft, Slack, etc. as its "real power."

"There's no custom code, no legal database, no proprietary engine. It's a structured way to give Claude the right context and procedures for legal tasks."

"The real power comes from connectors. The plugin works best when connected to your existing tools via MCP. Pre-configured servers include Slack, Box, Egnyte, Atlassian, and Microsoft 365."

"But it's still fundamentally prompt engineering and workflow design, not traditional software"
- tae kim
tweet
Offshore
Video
God of Prompt
AI cannot design websites like this https://t.co/ge7ZuNzvei
tweet
Offshore
Video
Moon Dev
The Great Equalizer: Why I Used AI to Turn TradingView Into an Autonomous Profit Engine

most traders spend their entire lives staring at candles while a silent machine is actually printing the real money behind the scenes. tradingview is usually seen as a place to draw lines and hope for the best but i found a way to turn it into an infinite goldmine of automated strategies. you might think you need a math degree to build these systems but the truth is far more interesting if you know how to leverage code.

for the longest time i was terrified of brackets and semicolons so i spent hundreds of thousands of dollars on developers for apps. i thought i was not smart enough to code myself and that belief cost me a fortune in liquidations and over trading. eventually i realized that code is the great equalizer and if i did not learn to automate i would keep losing money to the people who did.

the secret to building a winning system is not having one perfect idea but having a process to test thousands of ideas quickly. i follow a strict framework called rbi, which stands for research, backtest, and implement. most people skip the backtesting part because they fall in love with a single indicator but that is the fastest way to get liquidated.

i built a framework using openclaw to systematically scrape every single community indicator from tradingview. my ai agent logs into the platform and goes through the editor's picks and the top trending scripts one by one. it extracts the raw pine script code and prepares it for a total transformation that most traders never even consider.

there is a massive trap that most people fall into when they start using ai agents to build their trading bots. these models are incredibly smart but they can also be incredibly lazy if you do not lock them in. my agent tried to skip the complex visualization tools because it thought they were too hard to backtest but i had to remind her that we are chasing jim simons levels of success.

we do not take shortcuts in this game because shortcuts are what lead to blown accounts and missed opportunities. the system takes that pine script and converts it into pure python code using libraries like pandas_ta and backtesting.py. by moving the logic into python we get full control over the data and can run thousands of simulations in the time it takes a manual trader to check one chart.
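To make the conversion step concrete, here is a minimal sketch of what porting a Pine Script condition into plain Python looks like. The thread's actual pipeline uses pandas_ta and backtesting.py; this stdlib-only version just shows the core logic of translating a `ta.crossover(fast, slow)` signal, and the function names and sample prices are illustrative, not the author's code.

```python
# Minimal sketch: porting a Pine Script SMA-crossover signal to plain Python.
# (The real pipeline described above uses pandas_ta and backtesting.py; this
# stdlib-only version only illustrates the translated logic.)

def sma(values, length):
    """Simple moving average; None until enough bars exist."""
    out = []
    for i in range(len(values)):
        if i + 1 < length:
            out.append(None)
        else:
            out.append(sum(values[i + 1 - length : i + 1]) / length)
    return out

def crossover_signals(closes, fast_len=3, slow_len=5):
    """Bar indexes where the fast SMA crosses above the slow SMA,
    i.e. the Pine Script `ta.crossover(fast, slow)` condition."""
    fast, slow = sma(closes, fast_len), sma(closes, slow_len)
    signals = []
    for i in range(1, len(closes)):
        if None in (fast[i], slow[i], fast[i - 1], slow[i - 1]):
            continue  # skip bars where either average is still warming up
        if fast[i - 1] <= slow[i - 1] and fast[i] > slow[i]:
            signals.append(i)
    return signals

closes = [10, 9, 8, 7, 8, 9, 11, 12, 13, 12]
print(crossover_signals(closes))  # -> [6]
```

Once the logic lives in a plain function like this, running it over thousands of candles (or thousands of scraped indicators) is just a loop.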

this level of automation allows me to look at the market like a scientist rather than a gambler. we use high quality data from my own database and run tests on btc using one hour and six hour intervals. this ensures that we are not just looking at noise but actually finding signals that have a statistical edge over time.

i recently discovered a fatal flaw in the way most people backtest bitcoin that makes their results completely useless. when you are testing btc and your account size is only a hundred thousand dollars you will run into massive margin issues because the price is so high. i had to force the system to use a one million dollar minimum cash balance just to get accurate stats that reflect reality.
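The margin problem above is easy to see with a little arithmetic. This is an illustrative sketch, not the author's actual sizing code: with BTC near six figures, a $100k account that risks half its equity per trade cannot afford even one whole unit, so a whole-unit backtester silently skips trades and the stats become meaningless. The `risk_fraction` and price values here are assumptions for the demo.

```python
# Sketch of the backtest margin trap: with BTC near $100k, a $100k test
# account can't open a single whole unit, so trades get skipped and the
# resulting stats are garbage. Bumping starting cash to $1M fixes sizing.
# The price and risk fraction are illustrative, not the author's settings.

def max_units(cash, price, risk_fraction=0.5):
    """Whole units a backtester can buy when risking a fraction of equity."""
    return int((cash * risk_fraction) // price)

btc_price = 97_000
print(max_units(100_000, btc_price))    # small account: 0 units, trades skipped
print(max_units(1_000_000, btc_price))  # $1M minimum: sizing actually works
```

The same reasoning is why backtesting frameworks that trade in whole units need either a large cash balance or fractional sizing when the instrument's price is a meaningful fraction of the account.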

once the backtest is finished the system logs every single stat to a master csv including roi drawdown and the sharpe ratio. it even adds the full terminal output to the top of the python code so i can see exactly how it performed at a glance. this allows me to quickly sort through the garbage and find the few indicators that actually have a chance of winning in the live market.
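The logging step can be sketched with the standard library alone. The file name and column names below are hypothetical stand-ins for whatever schema the author's system actually uses; the point is just that every run appends one row to a single master CSV so the results can be sorted later.

```python
# Sketch of the stats-logging step: append each backtest's headline numbers
# (roi, drawdown, sharpe) to one master CSV so runs can be sorted later.
# File name and field names are illustrative, not the author's schema.
import csv
import os

def log_backtest(path, name, roi, max_drawdown, sharpe):
    fields = ["indicator", "roi", "max_drawdown", "sharpe"]
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        if new_file:
            writer.writeheader()  # write the header only once
        writer.writerow({"indicator": name, "roi": roi,
                         "max_drawdown": max_drawdown, "sharpe": sharpe})

log_backtest("master_stats.csv", "sma_crossover", 0.42, -0.18, 1.3)
```

Sorting that CSV by sharpe or roi is then a one-liner in a spreadsheet or pandas, which is what makes triaging hundreds of indicators tractable.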

the end goal is to have fully automated systems trading for me while i live my life without the stress of watching every tick. iteration is the only true path to success and with these ai agents i can iterate faster than a thousand human traders combined. code truly is the great equalizer because it allows a kid who got held back in seventh grade to compete with the biggest hedge funds in the world.

it is a multi day grind to go through hundreds of indicators but the reward is a portfolio of bots that never get tired or emotional. as long as i keep the machines running and avoid the temptation to take the fast track the math will eventually play out. the future belongs to the people who can build their own environment and automate their vision into reality.
tweet
Offshore
Video
Quiver Quantitative
JUST IN: Commerce Secretary Howard Lutnick has confirmed that he visited Epstein's Island.

He says that it was a family vacation and there was nothing untoward about it. https://t.co/BVpD8Ma3pl
tweet
Offshore
Photo
God of Prompt
RT @godofprompt: Never use ChatGPT for writing.

Its text is easily detectable.

Instead, use Claude Sonnet 4.5 with this mega prompt to turn AI-generated writing into undetectable, human-written content in seconds:

| Steal this prompt |

👇

You are an anti-AI-detection writing specialist.

Your job: Rewrite AI text to sound completely human. No patterns, no tells, no robotic flow.

AI DETECTION TRIGGERS (What to Kill):
- Perfect grammar (humans make small mistakes)
- Repetitive sentence structure (AI loves patterns)
- Corporate buzzwords ("leverage," "delve," "landscape")
- Overuse of transitions ("moreover," "furthermore," "however")
- Even pacing (humans speed up and slow down)
- No contractions (we use them constantly)
- Safe, sanitized language (humans have opinions)

HUMANIZATION RULES:

1. VARY RHYTHM
- Mix short punchy sentences with longer flowing ones
- Some incomplete thoughts. Because that's real.
- Occasional run-on that feels natural in conversation

2. ADD IMPERFECTION
- Start sentences with "And" or "But"
- Use casual connectors: "Look," "Here's the thing," "Honestly"
- Include subtle typos occasionally (not every time)
- Drop a comma here and there

3. INJECT PERSONALITY
- Use specific examples, not generic ones
- Add personal observations: "I've noticed," "In my experience"
- Include mild opinions: "which is insane," "surprisingly effective"
- Throw in rhetorical questions

4. KILL AI PHRASES
Replace these instantly:
- "Delve" → "dig into" or "explore"
- "Landscape" → "space" or "world"
- "Leverage" → "use"
- "Robust" → "strong" or specific descriptor
- "Streamline" → "simplify"
- "Moreover" → "Plus," "Also," or nothing
- "Ensure" → "make sure"

5. NATURAL FLOW
- Humans digress slightly (add brief tangents)
- We emphasize with italics or bold
- We use dashes—like this—for emphasis
- Parentheticals (because we think while writing)

THE PROCESS:

When I paste AI-generated text, you:

STEP 1: Rewrite with these changes
- Vary sentence length wildly
- Replace 80% of transitions with casual ones
- Add 2-3 personal touches ("I think," "honestly," "look")
- Include 1-2 incomplete sentences or fragments
- Swap formal words for conversational ones
- Add emphasis (italics, bold, dashes)

STEP 2: Read-aloud test
- Would someone actually say this?
- Does it flow like conversation?
- Any word feel too "AI"?

STEP 3: Final pass
- Remove remaining stiffness
- Ensure contractions (don't, won't, I'm, they're)
- Check for repetitive structure
- Add one unexpected comparison or example

OUTPUT STYLE:

Before: [Their AI text]

After: [Your humanized version]

Changes made:
- [List 3-5 key transformations]

Detection risk: [Low/Medium/High + why]

EXAMPLE:

User pastes:
"In order to achieve optimal results in content marketing, it is essential to leverage data-driven insights and ensure consistent engagement with your target audience across multiple platforms."

You respond:

"Want better content marketing results? Use data to guide your decisions and actually engage with your audience. Consistently. Across whatever platforms they're on.

Not rocket science, but most people skip the data part."

Changes made:
- Killed "in order to," "optimal," "leverage," "ensure"
- Added rhetorical question opening
- Split into two short paragraphs for breathing room
- Added casual observation at end
- Used contractions

Detection risk: Low—reads like someone explaining over coffee.

---

USAGE:

Paste your AI-generated text and say: "Humanize this"

I'll rewrite it to pass as 100% human-written.

---

NOW: Paste the AI text you want to humanize.
tweet
Offshore
Photo
DAIR.AI
Everyone is building "data agents" but nobody agrees on what that means.

The term gets applied to everything from a simple SQL chatbot to a fully autonomous data scientist. This ambiguity makes it impossible for users and builders to reason about what a system can actually do.

But data agents face fundamentally different challenges than general-purpose LLM agents.

This new tutorial proposes the first hierarchical taxonomy of data agents, from Level 0 (no autonomy) to Level 5 (full autonomy), inspired by the SAE levels of driving automation that brought clarity to self-driving car capabilities.

The six levels define a clear progression.

* L0: humans do everything manually.

* L1: stateless assistants that suggest queries or generate code but don't execute anything.

* L2: agents that perceive and interact with environments, invoke tools, and execute within human-designed pipelines.

* L3: agents that autonomously orchestrate end-to-end data workflows under human supervision.

* L4: proactive agents that continuously monitor data ecosystems and discover issues without being asked.

* L5: fully autonomous generative data scientists that invent new solutions and paradigms.
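The jump from L1 to L2 is the one most production systems sit on either side of, and it can be made concrete with a toy sketch. This is an illustrative example, not code from the paper: the L1 assistant only *suggests* a query (here hardcoded where a real agent would call an LLM), while the L2 agent actually invokes the SQL tool against the environment.

```python
# Illustrative sketch (not from the paper): the practical gap between an
# L1 and an L2 data agent in the proposed taxonomy. L1 only suggests a
# query; L2 executes it against the database via a tool call.
import sqlite3

def l1_suggest_query(question):
    """L1: stateless assistant that generates SQL but never runs it."""
    # a real agent would call an LLM here; hardcoded for illustration
    return "SELECT COUNT(*) FROM users"

def l2_execute(question, conn):
    """L2: perceives the environment and invokes the SQL tool itself."""
    sql = l1_suggest_query(question)
    return conn.execute(sql).fetchone()[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER)")
conn.executemany("INSERT INTO users VALUES (?)", [(1,), (2,), (3,)])
print(l2_execute("how many users?", conn))  # -> 3
```

L3 and above would add orchestration on top of this: the agent deciding which tables to inspect, chaining queries, and validating its own results rather than running inside a human-designed pipeline.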

What separates data agents from general LLM agents?

They operate on large-scale, heterogeneous, and noisy raw data rather than small curated inputs. They interact with specialized toolkits like SQL engines, visualization libraries, and database loaders. And critically, their errors cascade through downstream pipelines rather than being confined to a single response.

The survey maps over 80 existing systems across these levels and the full data lifecycle: management, preparation, and analysis.

Most production systems today cluster at L1 and L2. A handful of research prototypes exhibit partial L3 capabilities through LLM-based orchestrators, predefined operators, and workflow optimization.

According to the authors, no system has achieved L4 or L5.

The key bottlenecks preventing advancement to higher levels: limited pipeline orchestration beyond predefined operators, inadequate causal and meta-reasoning to prevent cascading errors, difficulty adapting to dynamic environments with changing data and workloads, and heavy reliance on human-crafted guardrails for alignment.

Paper: https://t.co/M3sn1XAcwo

Learn to build effective AI agents in our academy: https://t.co/LRnpZN7L4c
tweet