Mature enterprises lock down egress but often carve out broad exceptions for trusted cloud services. This post shows how reviewing deployment guides can help identify those exceptions and weaponize them with a new Mythic C2 profile called azureBlob.
https://specterops.io/blog/2026/01/30/weaponizing-whitelists-an-azure-blob-storage-mythic-c2-profile/
#azure
Please open Telegram to view this post
VIEW IN TELEGRAM
❤1👍1🔥1
🔴 Google Looker RCE vulnerabilities: Patch now
Tenable Research discovered two novel vulnerabilities in Google Looker that could allow an attacker to completely compromise a Looker instance.
https://www.tenable.com/blog/google-looker-vulnerabilities-rce-internal-access-lookout
#gcp
Tenable Research discovered two novel vulnerabilities in Google Looker that could allow an attacker to completely compromise a Looker instance.
https://www.tenable.com/blog/google-looker-vulnerabilities-rce-internal-access-lookout
#gcp
❤2🔥2👍1
Find out more about how passkeys can be used across devices using a mechanism called Hybrid transport.
https://bughunters.google.com/blog/passkeys
#iam
Please open Telegram to view this post
VIEW IN TELEGRAM
❤1👍1🔥1
This article introduces Slack's Anomaly Event Response (AER), an automated security system that detects suspicious activities and terminates user sessions in real-time, reducing detection-to-response gaps from hours to minutes.
https://slack.engineering/building-slacks-anomaly-event-response/
#monitor
Please open Telegram to view this post
VIEW IN TELEGRAM
❤1👍1🔥1
The fastest-growing personal AI agent ecosystem just became a new delivery channel for malware. Over the last few days, VirusTotal has detected hundreds of OpenClaw skills that are actively malicious.
https://blog.virustotal.com/2026/02/from-automation-to-infection-how.html
#AI
Please open Telegram to view this post
VIEW IN TELEGRAM
Please open Telegram to view this post
VIEW IN TELEGRAM
❤1👍1🔥1
A practical workflow for threat modeling agentic AI systems: use a five-zone navigation lens to trace attack paths, formalize them as attack trees, and map to OWASP's threat taxonomy and playbooks.
https://christian-schneider.net/blog/threat-modeling-agentic-ai/
#AI
Please open Telegram to view this post
VIEW IN TELEGRAM
👏3❤1👍1
This white paper examines the risks and attack vectors inherent in hybrid multi-cloud infrastructures, and analyzes various attack paths observed by Mandiant in real-world multi-cloud scenarios.
#iam
Please open Telegram to view this post
VIEW IN TELEGRAM
❤2👍1🔥1
That helpful “Summarize with AI” button? It might be secretly manipulating what your AI recommends. Microsoft security researchers have discovered a growing trend of AI memory poisoning attacks used for promotional purposes, a technique they called "AI Recommendation Poisoning".
https://www.microsoft.com/en-us/security/blog/2026/02/10/ai-recommendation-poisoning/
#AI
Please open Telegram to view this post
VIEW IN TELEGRAM
❤3🔥2👍1
🏗 Encrypting Files with Passkeys and age
A post explaining how to encrypt files with passkeys, using the WebAuthn prf extension and the TypeScript age implementation.
https://words.filippo.io/passkey-encryption
#build
A post explaining how to encrypt files with passkeys, using the WebAuthn prf extension and the TypeScript age implementation.
https://words.filippo.io/passkey-encryption
#build
❤2👍1🔥1
LLM security testing framework for detecting prompt injection, jailbreaks, and adversarial attacks. See also the companion blog post.
https://github.com/praetorian-inc/augustus
#AI
Please open Telegram to view this post
VIEW IN TELEGRAM
❤2👍1🔥1
Block Engineering discusses designing agent skills using three principles: make deterministic outputs script-based, let agents handle interpretation and conversation, and write explicit constitutional constraints. Skills codify tribal knowledge into executable documentation for AI agents across their organization.
https://engineering.block.xyz/blog/3-principles-for-designing-agent-skills
#AI
Please open Telegram to view this post
VIEW IN TELEGRAM
❤2👍1🔥1
🔶🤖 Building an AI-powered defense-in-depth security architecture for serverless microservices
This AWS blog demonstrates implementing a seven-layer AI-powered defense-in-depth security architecture for serverless microservices using AWS Shield, WAF, Cognito, API Gateway, VPC, Lambda, Secrets Manager, and DynamoDB, enhanced with GuardDuty and Amazon Bedrock for intelligent threat detection and automated response.
https://aws.amazon.com/ru/blogs/security/building-an-ai-powered-defense-in-depth-security-architecture-for-serverless-microservices/
(Use VPN to open from Russia)
#aws #AI
This AWS blog demonstrates implementing a seven-layer AI-powered defense-in-depth security architecture for serverless microservices using AWS Shield, WAF, Cognito, API Gateway, VPC, Lambda, Secrets Manager, and DynamoDB, enhanced with GuardDuty and Amazon Bedrock for intelligent threat detection and automated response.
https://aws.amazon.com/ru/blogs/security/building-an-ai-powered-defense-in-depth-security-architecture-for-serverless-microservices/
(Use VPN to open from Russia)
#aws #AI
Please open Telegram to view this post
VIEW IN TELEGRAM
❤2👍1🔥1
MCP servers connecting AI assistants to external tools create significant attack surfaces enabling arbitrary code execution, data exfiltration, and social engineering. Both local and remote MCP servers can be exploited through server chaining, supply chain attacks, and malicious tool implementations.
https://www.praetorian.com/blog/mcp-server-security-the-hidden-ai-attack-surface/
#AI
Please open Telegram to view this post
VIEW IN TELEGRAM
❤2👍1🔥1
🤖 caterpillar
Caterpillar is a security scanning library for AI agent skill files (e.g., Claude Code skills) for dangerous or malicious behavior.
https://github.com/alice-dot-io/caterpillar
#AI
Caterpillar is a security scanning library for AI agent skill files (e.g., Claude Code skills) for dangerous or malicious behavior.
https://github.com/alice-dot-io/caterpillar
#AI
❤1👍1🔥1
🔶 AWS Incident Response: IAM Containment That Survives Eventual Consistency
Standard AWS IR containment fails against attackers exploiting IAM eventual consistency. This article presents an SCP-enforced technique that makes identity-level containment attacker-resistant.
https://www.offensai.com/blog/eventual-consistency-resistant-iam-containment-aws-incident-response
(Use VPN to open from Russia)
#aws
Standard AWS IR containment fails against attackers exploiting IAM eventual consistency. This article presents an SCP-enforced technique that makes identity-level containment attacker-resistant.
https://www.offensai.com/blog/eventual-consistency-resistant-iam-containment-aws-incident-response
(Use VPN to open from Russia)
#aws
❤1👍1🔥1
🔴 Google API Keys Weren't Secrets. But then Gemini Changed the Rules
Enabling the Gemini API on a GCP project silently grants existing public AIza... keys (e.g., Maps/Firebase) access to sensitive Gemini endpoints. Truffle Security found 2,863 such exposed keys via Common Crawl, enabling data access, billing abuse, and quota exhaustion, including against Google's own infrastructure.
https://trufflesecurity.com/blog/google-api-keys-werent-secrets-but-then-gemini-changed-the-rules
#gcp
Enabling the Gemini API on a GCP project silently grants existing public AIza... keys (e.g., Maps/Firebase) access to sensitive Gemini endpoints. Truffle Security found 2,863 such exposed keys via Common Crawl, enabling data access, billing abuse, and quota exhaustion, including against Google's own infrastructure.
https://trufflesecurity.com/blog/google-api-keys-werent-secrets-but-then-gemini-changed-the-rules
#gcp
❤1👍1🔥1
Trail of Bits used ML-centered threat modeling and adversarial testing to identify four prompt injection techniques that could exploit Perplexity's Comet browser AI assistant to exfiltrate private Gmail data. The audit demonstrated how fake security mechanisms, system instructions, and user requests could manipulate the AI agent into accessing and transmitting sensitive user information.
https://blog.trailofbits.com/2026/02/20/using-threat-modeling-and-prompt-injection-to-audit-comet/
#AI
Please open Telegram to view this post
VIEW IN TELEGRAM
❤1👍1🔥1
A prompt injection in a GitHub issue title gave attackers code execution inside Cline's CI/CD pipeline, leading to cache poisoning, stolen npm credentials, and an unauthorized package publish affecting the popular AI coding tool's 5 million users. Here's the full technical breakdown and what developers should do now.
https://snyk.io/blog/cline-supply-chain-attack-prompt-injection-github-actions/
(Use VPN to open from Russia)
#AI
Please open Telegram to view this post
VIEW IN TELEGRAM
❤1👍1🔥1