DevTestSecOps
138 subscribers
469 photos
29 videos
37 files
695 links
Forwards and notes on development, testing, security, and operations from @q587p.

About me: studied as a System Architect, worked as a SysAdmin, and now working as a Test Automation Engineer. I'm also interested in hacking (and everything related to it).

#Gruntwork Production #Deployment Guides

The other day, Gruntwork released a section with tutorials (judging by the existing materials, it will be even more conceptual than DigitalOcean's) – there are sections for different clouds as well as for different tasks: https://gruntwork.io/guides/

Here's an article on how to deploy #Kubernetes to #AWS: https://gruntwork.io/guides/kubernetes/how-to-deploy-production-grade-kubernetes-cluster-aws/
#AWS #cloud

An interesting post about Amazon appeared on the corporate blog of #Cloudflare: https://blog.cloudflare.com/aws-egregious-egress/

Cloudflare CEO Matthew Prince and Global Infrastructure lead Nitin Rao write that AWS's markup on egress traffic for US and European users is, by their most conservative estimate, 7,959%. 🤯

It is noteworthy that Amazon charges fees only for outbound traffic. This reflects Amazon's desire to lock customers into its ecosystem and push them to keep all their data in AWS. 👌
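To see how Cloudflare arrives at numbers like that, markup is just price charged over underlying cost. A minimal sketch, where the per-GB transit cost is an illustrative assumption (not Cloudflare's or AWS's published figure):

```python
# Hedged sketch of the "egress markup" arithmetic Cloudflare uses:
# markup % = (price / cost - 1) * 100.
def markup_percent(price: float, cost: float) -> float:
    """Percentage markup of price over cost."""
    return (price / cost - 1) * 100

# AWS charges roughly $0.09/GB for the first egress tier in us-east-1.
# If the underlying transit cost were ~$0.0011/GB (an assumption for
# illustration), the markup lands in the same ~8,000% ballpark:
print(round(markup_percent(0.09, 0.0011)))
```

Plugging in different cost assumptions shifts the exact percentage, which is why Cloudflare leads with its "most conservative estimate".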
#DNS
#postmortem #AWS #Amazon

We wanted to provide you with some additional information about the service disruption that occurred in the N. Virginia (us-east-1) Region on October 19 and 20, 2025. While the event started at 11:48 PM PDT on October 19 and ended at 2:20 PM PDT on October 20, there were three distinct periods of impact to customer applications:

1. Between 11:48 PM on October 19 and 2:40 AM on October 20, Amazon DynamoDB experienced increased API error rates in the N. Virginia (us-east-1) Region.
2. Between 5:30 AM and 2:09 PM on October 20, Network Load Balancer (NLB) experienced increased connection errors for some load balancers in the N. Virginia (us-east-1) Region. This was caused by health check failures in the NLB fleet, which resulted in increased connection errors on some NLBs.
3. Between 2:25 AM and 10:36 AM on October 20, new EC2 instance launches failed and, while instance launches began to succeed from 10:37 AM, some newly launched instances experienced connectivity issues which were resolved by 1:50 PM.


https://aws.amazon.com/message/101925/
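As a quick sanity check on the timeline in the quote above, the three impact windows can be computed directly from the timestamps (all PDT; the labels are mine, paraphrasing the postmortem):

```python
from datetime import datetime

FMT = "%Y-%m-%d %I:%M %p"  # e.g. "2025-10-19 11:48 PM"

# Start/end times as stated in the AWS summary.
windows = {
    "DynamoDB API errors":   ("2025-10-19 11:48 PM", "2025-10-20 02:40 AM"),
    "NLB connection errors": ("2025-10-20 05:30 AM", "2025-10-20 02:09 PM"),
    "EC2 launch failures":   ("2025-10-20 02:25 AM", "2025-10-20 10:36 AM"),
}

for name, (start, end) in windows.items():
    delta = datetime.strptime(end, FMT) - datetime.strptime(start, FMT)
    hours, minutes = divmod(int(delta.total_seconds()) // 60, 60)
    print(f"{name}: {hours}h {minutes}m")
```

That gives roughly 2h 52m, 8h 39m, and 8h 11m of impact respectively, inside an overall event window of about 14.5 hours.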