CatOps
DevOps and other issues by Yurii Rochniak (@grem1in) - SRE @ Preply && Maksym Vlasov (@MaxymVlasov) - Engineer @ Star. Opinions are our own.

We do not post ads, including event announcements. Please don’t bother us with such requests!
I don’t work with databases much lately. Moreover, I haven’t worked with MySQL/MariaDB in a long time. Thus, I’m not 100% sure how useful this tool is, but I found it in a reliable source.

mariabak is a CLI wrapper for mysqldump that eases certain operations, so you don’t have to chain multiple mysqldump invocations for common jobs.
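I haven’t used it myself, but to give you an idea of the kind of scripting such a wrapper saves you from, here’s a minimal sketch of “dump every database into its own file” with plain mysqldump (host and user are made up; this is my illustration, not mariabak’s code):

```python
#!/usr/bin/env python3
"""The kind of boilerplate a mysqldump wrapper saves you from:
dump each database into its own .sql file."""
import subprocess

# Hypothetical host/user; assumes credentials live in ~/.my.cnf,
# which both mysql and mysqldump read automatically.
HOST, USER = "db.example.com", "backup"

# List databases; -N drops the header row, -e runs the statement.
out = subprocess.run(
    ["mysql", "-h", HOST, "-u", USER, "-N", "-e", "SHOW DATABASES"],
    capture_output=True, text=True, check=True,
).stdout

SKIP = {"information_schema", "performance_schema", "mysql", "sys"}

for db in out.split():
    if db in SKIP:
        continue
    # One mysqldump invocation per database.
    with open(f"{db}.sql", "w") as f:
        subprocess.run(
            ["mysqldump", "-h", HOST, "-u", USER, "--single-transaction", db],
            stdout=f, check=True,
        )
```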

#toolz #databases #mysql
pgdump-aws-lambda is a ready-to-use Lambda function that creates a dump of your PostgreSQL database and streams it to S3.

There is already a native way to back up RDS databases. However, I can see a couple of use cases for this tool. For example:
- Backing up databases that run on plain EC2 instances. I’m not sure if anyone does that today, but I’ve worked at a company that did.
- Backing up databases located outside AWS in hybrid setups. Obviously, it’s going to be challenging to configure such interconnection in a secure and reliable way, but if you’re running a hybrid setup, you already know what I’m talking about.
- Using this Lambda function as a blueprint and extending its functionality. For example, obfuscating certain fields to create a non-production DB for tests, etc.

TBH, I’m not sure how it’s going to work with the hard 15-minute execution time limit for Lambdas, but you won’t find out unless you try, I guess.

So overall, it’s an interesting project that I likely won’t use myself, but it might be fun to play with.
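If you’re curious, such a function essentially boils down to the sketch below. This is my own illustration of the idea, not the project’s actual code: the environment variables and bucket are made up, and it assumes a pg_dump binary is bundled in the deployment package or a layer.

```python
"""Minimal sketch of a pg_dump-to-S3 Lambda (not the project's actual code)."""
import os
import subprocess
from datetime import datetime, timezone

import boto3  # preinstalled in the AWS Lambda Python runtime

s3 = boto3.client("s3")

def handler(event, context):
    # Hypothetical configuration via environment variables.
    dsn = os.environ["PGDUMP_DSN"]      # e.g. postgresql://user:pass@host/db
    bucket = os.environ["BACKUP_BUCKET"]

    key = datetime.now(timezone.utc).strftime("backups/%Y-%m-%d-%H%M.dump")

    # Custom-format dump; pg_dump accepts a connection URI directly.
    dump = subprocess.run(
        ["pg_dump", "--format=custom", dsn],
        capture_output=True, check=True,
    )

    # Buffers the whole dump in memory; for anything near Lambda's
    # memory limits you'd switch to a streaming multipart upload.
    s3.put_object(Bucket=bucket, Key=key, Body=dump.stdout)
    return {"uploaded": f"s3://{bucket}/{key}"}
```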

#databases #aws #serverless
Long time no posts about databases! So, here’s a short story of how Retool migrated their 4TB Postgres database from version 9.6 to 13.

There are a couple of interesting points in this story:
- “Lift and shift” migrations are still a thing. Sometimes it’s better to take a brief period of downtime than to risk a migration failing mid-way.
- A cloud solution might not suit you, or may even fail you. Running things in the cloud doesn’t mean you don’t need to take care of operations at all (especially when it comes to DBs).
- Test using a representative workload, be it the number of requests or the size of the DB.
- Even if there’s a tool for the job, it may require some tweaking. Also, sometimes you need to be creative (as in the article: they wrote a script to migrate a pair of particularly large tables; see the sketch after this list).
- Write runbooks :)
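To illustrate that creative bit: I don’t know what Retool’s script actually looked like, but batch-copying a huge table between two Postgres instances usually boils down to a keyset-paginated loop like this (table, columns, and connection strings are all made up):

```python
"""Hypothetical sketch of batch-copying one large table between two
Postgres instances, paginated by primary key (not Retool's actual script)."""
import psycopg2

BATCH = 10_000
src = psycopg2.connect("postgresql://old-db.example.com/app")  # made-up DSNs
dst = psycopg2.connect("postgresql://new-db.example.com/app")

last_id = 0
with src.cursor() as read, dst.cursor() as write:
    while True:
        # Keyset pagination: stays cheap on large tables, unlike OFFSET.
        read.execute(
            "SELECT id, payload FROM big_table"
            " WHERE id > %s ORDER BY id LIMIT %s",
            (last_id, BATCH),
        )
        rows = read.fetchall()
        if not rows:
            break
        write.executemany(
            "INSERT INTO big_table (id, payload) VALUES (%s, %s)",
            rows,
        )
        dst.commit()          # commit per batch so a failed run can resume
        last_id = rows[-1][0]
```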

I don’t know how many of you manage databases, but I must say these points apply to more than just DB migrations.

#databases
Database trends spotted by Redis at KubeCon.

In a nutshell:
- Running databases is hard.
- Running databases in Kubernetes = all the complexity of running databases + all the complexity of running Kubernetes.
- Yet, Data on Kubernetes community exists and has quite a few success stories.
- One of the problems is that there’s no standard. Frequently, there are at least a couple of different operators and charts to run %dbname%, so it might be hard for users to decide which tool to use in which case.
- Another problem is the lack of people who are experts in both running databases and running Kubernetes.

So, if you want to be in demand on the market, get familiar with data operations. That field is gaining momentum right now.

#databases #kubernetes
Learning from memes is a working strategy!

Therefore, I’d like to share with you this article (quite a long one) that breaks down a Postgres meme.

So, you can learn its concepts, broken down by “levels of depth”.

#databases #postgres
The Guardian tells the story of their migration to AWS Aurora Serverless.

This article doesn’t go too deep into technical aspects, but provides a nice overview of the issues one may encounter when trying to move to Aurora.

A couple of things that I found interesting:

- Whatever cloud migration tools are out there, pg_dump and pg_restore are your trusted friends (see the sketch after the quote below).

- This paragraph:

> We’re spending roughly $220/month for storage and compute for the database. For the same price we could have rented a db.m7g.xlarge (16GB RAM, 4 vCPUs) Postgres instance along with 100GB of EBS storage or a db.r7g.large (16GB RAM, 2 vCPUs) Aurora instance. I suspect both of these options would have done the job for us, and maybe not have suffered from the same cold start problems as our serverless database, but after 3 migrations, it’s probably time to get back to doing some feature work!
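And since those two keep coming up: the “trusted friends” workflow is essentially just this pair of commands (wrapped in Python for consistency with the other sketches here; hosts and DB names are made up):

```python
"""The pg_dump / pg_restore workhorse pair, wrapped in Python.
Source/target DSNs are made up; real runs need credentials and a downtime plan."""
import subprocess

SRC = "postgresql://user@old-host.example.com/app"
DST = "postgresql://user@aurora.example.com/app"

# Custom-format dump: compressed and restorable with pg_restore.
subprocess.run(["pg_dump", "--format=custom", "--file=app.dump", SRC], check=True)

# Restore into the new instance; --no-owner avoids role-mismatch errors
# when the target (e.g. Aurora) doesn't have the same roles as the source.
subprocess.run(["pg_restore", "--no-owner", "--dbname=" + DST, "app.dump"], check=True)
```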


#databases #postgres #aws
If you want to learn SQL, or you know somebody who wants to (or should, lol), or you want to refresh your SQL skills, you can use the interactive lessons on SQL Bolt.

They're simple, but good enough to get up to speed with the basics.

#databases
**Database Fundamentals.**

Even though it’s “just” fundamentals, it can take a few hours to read and understand, plus mandatory breaks :)

It's one of the best articles I've seen on general DB topics, with a huge number of links and notes for going deeper down the rabbit hole. I definitely recommend reading it.

#databases
Do you run databases in Kubernetes?

Even if you don't, I bet you still run database migrations there. How do you do that?

This article on "The New Stack" makes a case for a GitOps approach to database migrations in Kubernetes.

*tl;dr*: It's Atlas Operator, there's no alternative.

#kubernetes #databases
Resend had a 12-hour outage on the 21st of February.

tl;dr:
> The database migration accidentally deleted data from production servers…
> … we performed a database migration command locally, but it incorrectly pointed to the production environment instead…

You can read the details in the article, but here are some of the action items from this postmortem:

- No accessible user role should have write privileges on the production database.
- Improve local development to reduce risks related to database migrations.
- Create redundancy to preserve sending function even during a database outage.
- Increase cadence for disaster recovery tests.
- Implement incident banner on Resend dashboard to inform users quickly.

So, I dunno, check your database. Maybe you have such a risk as well.

Also, it’s kinda strange that people rarely talk about network isolation, not only between their production and non-production environments, but also between their local environments and production. Make production access a conscious act: put it behind a separate role/VPN and add some friction to accessing it.
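One cheap form of friction, as a purely hypothetical sketch (the host patterns, env var, and guard are all made up, not Resend’s tooling): make your migration wrapper refuse to touch anything that looks like production unless you explicitly say so.

```python
"""Hypothetical guard: refuse to run migrations against production
unless explicitly confirmed. Heuristics and env vars are made up."""
import os
import sys
from urllib.parse import urlparse

PROD_HINTS = ("prod", "rds.amazonaws.com")  # made-up heuristics

def assert_not_production(dsn: str) -> None:
    host = urlparse(dsn).hostname or ""
    if any(hint in host for hint in PROD_HINTS) \
            and os.environ.get("I_REALLY_MEAN_PROD") != "yes":
        sys.exit(f"Refusing to run a migration against {host}. "
                 "Set I_REALLY_MEAN_PROD=yes if this is intentional.")

# Call this before any migration command runs:
assert_not_production(os.environ.get("DATABASE_URL", ""))
```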

Moreover, for the love of god, validate your DB backups.

#postmortem #databases