Backup and Restore is the most basic level of disaster recovery readiness. It involves taking regular backups of data and systems, often stored offsite or in the cloud, so they can be restored in case of data loss or system failure.
Pilot Light refers to a disaster recovery setup where only the essential systems are kept running in a minimal operational state.
In the event of a disaster, additional resources can be quickly provisioned to bring the system to full operational capacity. It is a step up from basic backup and restore, offering a faster recovery time.
A warm standby site is a disaster recovery setup where duplicate hardware and infrastructure are maintained, but they are not actively processing data or serving users.
The infrastructure is configured and ready to take over in case the primary site fails. This setup typically involves periodic synchronization of data and configurations to reduce recovery time.
A hot site is a fully operational secondary data center or environment that mirrors the primary production environment. It is continuously updated and synchronized with the primary site in real-time or near real-time.
In the event of a disaster, operations can seamlessly switch to the hot site with minimal disruption, offering the shortest recovery time objective (RTO) and recovery point objective (RPO).
Continuous Delivery:
- Automates the release process.
- Ensures readiness for deployment at any time.
- Allows manual deployment when needed.
Continuous Deployment:
- Automates deployment of every successful code change.
- Deploys directly to production without human intervention.
- Requires high confidence in automated testing.
Jenkins is a popular automation server that can be used to automate the CI/CD pipeline. In this post we will learn how to use Jenkins to automate the stages of that pipeline.
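As a rough sketch, the core build-test-push steps a Jenkins job typically runs can be expressed as plain shell. The image name, registry, and test entrypoint below are hypothetical placeholders, not a specific project's setup:

```shell
# Sketch of the shell steps a Jenkins pipeline stage might execute.
# Registry, image name, and test script are hypothetical placeholders.
set -e

APP_IMAGE="registry.example.com/myapp:${BUILD_NUMBER:-dev}"

# 1. Build: package the application into a Docker image
docker build -t "$APP_IMAGE" .

# 2. Test: run the test suite inside the freshly built image
docker run --rm "$APP_IMAGE" ./run-tests.sh

# 3. Push: publish the image so a deployment stage can pull it
docker push "$APP_IMAGE"
```

In a real Jenkinsfile these would be separate stages, so a failed test stops the pipeline before anything is pushed.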
Docker has revolutionized the world of containerization, enabling scalable and efficient application deployment.
To make the most of this powerful tool, here are 10 essential Docker best practices:
We face this issue (ImagePullBackOff) when the image is not present in the registry or the given image tag is wrong.
Make sure you provide the correct registry URL, image name, and image tag.
We might also face authentication failures when the image is stored in a private registry. Make sure to create a secret with the private registry credentials and reference that secret in the Kubernetes Deployment file so the cluster can pull the Docker image.
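A minimal sketch of that private-registry fix; the registry URL, username, and secret/deployment names here are hypothetical placeholders:

```shell
# Create a secret holding the private registry credentials
# (server, username, and password source are hypothetical placeholders).
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=myuser \
  --docker-password="$REGISTRY_PASSWORD"

# Then reference the secret from the Pod spec so the kubelet can pull:
#   spec:
#     imagePullSecrets:
#       - name: regcred
#     containers:
#       - name: myapp
#         image: registry.example.com/myapp:1.0
```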
We face this issue when the process deployed inside the container is not running; the Pod is then moved to CrashLoopBackOff.
The Pod might also be running out of CPU or memory. A Pod should have enough CPU and memory allocated for the application to be up and running; to fix that, check the resource requests and resource limits.
We face this issue (OOMKilled) when Pods try to utilise more memory than the limits we have set.
We can resolve it by setting appropriate resource requests and resource limits.
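A quick sketch of setting those values on an existing workload; the deployment name and the numbers are hypothetical and should be sized from observed usage:

```shell
# Set requests and limits on a (hypothetical) deployment named myapp.
# Requests affect scheduling; limits are the ceiling before OOMKill/throttling.
kubectl set resources deployment myapp \
  --requests=cpu=100m,memory=128Mi \
  --limits=cpu=500m,memory=256Mi
```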
A Pod can be stuck in Pending when nodes are not ready, or when required resources like CPU and memory are not available on any node for the Pod to run.
The Pod may also be scheduled to a node but fail to start running on that node.
We can fix image-related failures by providing the correct image name, image tag, and authentication to the registry.
If the application is not reachable, we can fix this by creating an appropriate Service.
If the Service is already created and the application is still not accessible, make sure the application and the Service are deployed in the same namespace.
We can resolve scheduling issues by setting appropriate resource requests and resource limits for the Pods and by having enough resources on the worker nodes.
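Whichever of these issues appears, a few standard kubectl commands usually reveal the cause; the pod and namespace names below are placeholders:

```shell
# Describe the Pod: the Events section usually names the exact failure
# (ImagePullBackOff, OOMKilled, FailedScheduling, ...)
kubectl describe pod mypod -n mynamespace

# Container logs, including the previous crashed instance for CrashLoopBackOff
kubectl logs mypod -n mynamespace --previous

# Namespace events, most recent last
kubectl get events -n mynamespace --sort-by=.metadata.creationTimestamp

# Check node capacity when Pods stay Pending
kubectl describe nodes | grep -A 5 "Allocated resources"
```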
1-10 years of experience in DevOps (AWS/Azure/GCP).
Hands-on experience deploying Kubernetes clusters in an EKS/GKE environment.
Creating CI/CD pipelines using Jenkins.
Using monitoring tools like Prometheus/Grafana/Stackdriver.
Docker.
Infrastructure automation scripting.
A production-ready Kubernetes cluster is vastly complex. There are many non-negotiables, such as high availability, fault tolerance, data backups, and durability requirements.
Its architecture is divided into the Control Plane and Data Plane.
This is what they do
➡️ DATA PLANE
The part of the cluster where all compute resources reside. This is where ultimately all your container applications run.
1️⃣ Nodes
The worker machines that actually run container workloads. These could be EC2 servers (or other cloud provider equivalents), bare-metal servers or even just your personal computer.
2️⃣ Pods
The smallest unit of compute that you can deploy in K8s. A Pod contains 1 or more containers running your application(s) and helper processes. A Pod runs inside a Node.
3️⃣ Kubelet
An agent that runs on every Node. It takes Pod specifications provided by the user and ensures that the Containers described in them are running and healthy.
4️⃣ Kube-proxy
Runs on every Node and manages network rules on the system to ensure network communication works smoothly between Pods and the outside world.
5️⃣ Container Runtime
Runs on all nodes and manages the lifecycle of container(s) deployed on them. Eg- Docker, CRI-O, etc.
➡️ CONTROL PLANE
Does the administrative tasks of managing worker nodes, Pods and the cluster in general.
It is basically the “brains” of the cluster that makes all decisions like scheduling, always steering the cluster towards the desired state (eg- spin up new pods in response to some pods going down to maintain the desired number of them).
🔢 Kube-apiserver
Exposes the Kubernetes API to the user. When you make an API request to Kubernetes or use a client like kubectl, your request is handled by kube-apiserver and passed on for further processing.
🔢 Etcd
A consistent and highly available Key-value store used by Kubernetes for storing all cluster data. You should have a strong backup strategy for this datastore as it tracks all state of the cluster.
🔢 Scheduler
Responsible for placing Pods on to Nodes in the most optimal way possible. When a new Pod is requested, the scheduler looks for a suitable Node to run it in.
Takes many different factors into consideration while scheduling, such as resource requirements, priority, user-specified criteria, etc.
🔢 Controller Manager
Runs Controllers. A Controller is a process that always steers the system toward a desired state. Eg- A Node controller monitors and responds when nodes go down.
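On a live cluster you can see these components directly; the exact output depends on the distribution, and managed services (EKS, GKE, AKS) hide some control-plane Pods:

```shell
# Worker machines (data plane)
kubectl get nodes -o wide

# Control-plane components typically run as Pods in kube-system
# on self-managed clusters (apiserver, etcd, scheduler, controller-manager)
kubectl get pods -n kube-system

# Watch the scheduler's decision for a specific (hypothetical) Pod
kubectl get events --field-selector involvedObject.name=mypod
```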
Top Used Docker Commands 🐋

🐳 Docker Basics:
• docker run: Run a container from an image.
• docker build: Build an image from a Dockerfile.
• docker images: List all images on the system.
• docker rmi: Remove one or more images.
• docker ps: List running containers.
• docker stop: Stop a running container.
• docker rm: Remove one or more containers.

🐋 Docker Networking:
• docker network create: Create a network.
• docker network connect: Connect a container to a network.
• docker network inspect: Inspect a network.
• docker network disconnect: Disconnect a container from a network.

📁 Docker Volumes:
• docker volume create: Create a volume.
• docker volume ls: List volumes.
• docker volume inspect: Inspect a volume.
• docker volume rm: Remove one or more volumes.

⚙️ Docker Compose:
• docker-compose up: Start services defined in a Compose file.
• docker-compose down: Stop and remove services defined in a Compose file.
• docker-compose build: Build or rebuild services.
• docker-compose logs: View output logs from services.
• docker-compose restart: Restart services.
• docker-compose scale: Scale services to a specified number of containers.

🔵 Follow @prodevopsguy for more such content around cloud & DevOps!!!
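The docker-compose commands all operate on a Compose file. A minimal sketch of one, written from shell for illustration; the service names, image, and port are hypothetical:

```shell
# Write a minimal (hypothetical) docker-compose.yml that the commands
# above would act on: one web service built locally, one redis dependency.
cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  web:
    build: .
    ports:
      - "8080:80"
    depends_on:
      - cache
  cache:
    image: redis:7-alpine
EOF

# With this file in place, `docker-compose up` starts both services.
```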
DevOps 👾 Life Cycle Overview 🔥
1️⃣ Code: Developers create the software code, using tools like Git to collaborate and manage changes.
2️⃣ Build: Compiles and packages the code into runnable artifacts, with tools like Jenkins for efficiency.
3️⃣ Test: Ensures software quality with tools like JUnit for bug-free performance.
4️⃣ Release: Deploys tested software via CI/CD for user access.
5️⃣ Monitor: Maintains software performance post-release with tools like Prometheus.
6️⃣ Operate: Manages real-time software functioning with automation.
7️⃣ Plan: DevOps planning with tools like Jira for agile adaptability.
8️⃣ Deploy: Scales software for more users with Infrastructure as Code.
9️⃣ Scale: Expands software capabilities for growing needs.
🔟 Feedback Loop: Continuous improvement through user and ops feedback.
But the most critical element?
Making security a habit, not just a step.
#devopsengineer Job Alert 🗣 !!
📣 Attention DevOps Engineers! New job opportunity available.
Apply now for a chance to join our dynamic team!⚡️
✔️ HR Bitcot
📧 sonalimoyal@bitcot.com
The below illustration shows some common container commands and their syntax 👇
1. docker run -it --name nginx nginx: Create and start an interactive container named nginx from the nginx image.
2. docker start nginx: Start a stopped container.
3. docker restart nginx: Restart a container.
4. docker pause nginx: Suspend all processes in the container.
5. docker unpause nginx: Resume a paused container.
6. docker stop nginx: Gracefully stop a running container.
7. docker kill nginx: Forcibly stop a container.
8. docker ps: List running containers.
9. docker exec -it nginx /bin/bash: Open an interactive shell inside the container.
10. docker attach nginx: Attach to the container's main process.
11. docker logs nginx: View the container's logs.
12. docker rename old-name new-name: Rename a container.
13. docker inspect nginx: Show detailed information about the container.
14. docker cp nginx:/container-path/file.txt /local-path: Copy a file from the container to the host.
15. docker rm nginx: Remove a stopped container.
These container commands are essential for managing containerized applications, whether for development, testing, or production deployment, as they enable efficient control and manipulation of container instances.
Kubernetes addons are optional components and features that extend the functionality of your Kubernetes cluster beyond its core capabilities. These addons provide additional functionalities such as monitoring, logging, networking, and security, allowing users to tailor their Kubernetes deployments to their specific needs and preferences.
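In practice, addons run as ordinary workloads inside the cluster and are usually installed by applying their manifests. A short hedged sketch; the manifest filename below is a hypothetical placeholder, check each addon project's docs for the real install source:

```shell
# Many addons live in the kube-system namespace; listing it shows
# which ones a cluster already has (CoreDNS, kube-proxy, a CNI plugin, ...)
kubectl get pods -n kube-system

# Most addons install by applying their manifests
# (the file path here is a hypothetical placeholder):
kubectl apply -f ./metrics-server-components.yaml
```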
Key Features of Kubernetes Addons:
From 'It works on my machine!' to 'It works on my container!' but hey at least it works!
Are you ready to elevate your development process to new heights?