SATOSHI ° NOSTR ° AI LLM ML RL ° LINUX ° MESH IoT ° BUSINESS ° OFFGRID ° LIFESTYLE | HODLER TUTORIAL
#Article #LLM #Artificial_Intelligence #Cybersecurity #DataLeakage #OpenSource #Prompt #Injection
source
source
Towards Data Science
The Hidden Security Risks of LLMs
And why self-hosting might be the safer bet
SATOSHI ° NOSTR ° AI LLM ML RL ° LINUX ° MESH IoT ° BUSINESS ° OFFGRID ° LIFESTYLE | HODLER TUTORIAL
Embrace The Red
How Prompt Injection Exposed Manus' VS Code Server to the Internet
This post shows how an indirect prompt injection can trick Manus to expose the VS code server and at the same time leak its connection password, allowing an adversary to connect over the internet and gain full access to Manus's development machine