DevOps & Cloud (AWS, AZURE, GCP) Tech Free Learning
15.9K subscribers
1.33K photos
14 videos
501 files
1.28K links
https://projects.prodevopsguytech.com // https://blog.prodevopsguytech.com

• We post Daily Trending DevOps/Cloud content
• All DevOps related Code & Scripts uploaded
• DevOps/Cloud Job Related Posts
• Real-time Interview questions & preparation guides
📢 Sidecar containers in Kubernetes:

A sidecar container is an additional container deployed alongside the main container within the same Pod, following the sidecar design pattern. The sidecar runs in the same execution environment and shares resources (network namespace, storage volumes, etc.) with the main container. Sidecar containers are often used to extend or enhance the functionality of the main application container without modifying its codebase directly.
Here are some common use cases for sidecar containers:


📢 Logging: A sidecar container can be used to collect, format, and forward logs generated by the main application container to a centralized logging system.

📢 Monitoring: Sidecar containers can be used to collect metrics, health checks, or other telemetry data from the main application container and expose it to monitoring systems like Prometheus.

📢 Security: A sidecar container can handle tasks such as managing SSL certificates, providing authentication, or enforcing security policies independently of the main application.

📢 Data Processing: Sidecar containers can be used for tasks like data transformation, caching, or pre-processing data before it's consumed by the main application.
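The logging use case above can be sketched as a Pod manifest. This is a minimal illustration (the images and the log-shipper name are arbitrary choices, not a standard): both containers mount the same emptyDir volume, so the sidecar can tail what the main container writes.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
    - name: logs              # shared between the two containers
      emptyDir: {}
  containers:
    - name: app               # main application container
      image: nginx:1.25
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-shipper       # sidecar: forwards logs written by the main container
      image: busybox:1.36
      command: ["sh", "-c", "tail -n+1 -F /var/log/nginx/access.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
```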


✈️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!! // Join for DevOps DOCs: @devopsdocs
🎙 Kubernetes is an open-source 𝗰𝗼𝗻𝘁𝗮𝗶𝗻𝗲𝗿 𝗼𝗿𝗰𝗵𝗲𝘀𝘁𝗿𝗮𝘁𝗶𝗼𝗻 system for automating software deployment, scaling, and management.

➡️ Features:
Load balancing
Self-healing
High availability / Ensure no downtime / Maintain fault tolerance
Performance enhancement
Auto-scaling

Several key components of Kubernetes are important to understand:

𝗣𝗼𝗱 ➡️ The smallest deployable unit: one or more containers running together in a cluster.
𝗦𝗲𝗿𝘃𝗶𝗰𝗲 ➡️ An abstract way to expose an application running on a set of Pods.
𝗡𝗮𝗺𝗲𝘀𝗽𝗮𝗰𝗲 ➡️ Avoids name collisions within a cluster and supports multiple virtual clusters on the same physical cluster.
𝗡𝗼𝗱𝗲 ➡️ A Kubernetes worker machine.
𝗖𝗹𝘂𝘀𝘁𝗲𝗿 ➡️ A group of nodes running containerized applications on Kubernetes.
𝗥𝗲𝗽𝗹𝗶𝗰𝗮𝗦𝗲𝘁 ➡️ Maintains a specified number of identical Pod replicas, helping achieve high availability and scalability.
𝗟𝗮𝗯𝗲𝗹 ➡️ A key/value pair attached to Kubernetes objects so they can be identified and selected across the system.
𝗞𝘂𝗯𝗲𝗹𝗲𝘁 ➡️ An agent that runs on each node and ensures the containers in its Pods are running and healthy.
𝗞𝘂𝗯𝗲𝗰𝘁𝗹 ➡️ The command-line utility for interacting with the Kubernetes API server.
𝗞𝘂𝗯𝗲-𝗽𝗿𝗼𝘅𝘆 ➡️ A network proxy that maintains the network rules on each node in the cluster.
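A tiny manifest shows how several of these pieces fit together: a Pod carrying a label, and a Service that selects Pods by that label (the names web/web-svc and the image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web            # label used by the Service selector below
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web            # matches Pods carrying this label
  ports:
    - port: 80
      targetPort: 80
```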


✈️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!!
Apache Kafka has become increasingly popular in recent years.

It's used by companies like Netflix, LinkedIn, and Uber to handle high-volume data streams.
🔥 I have created this handy diagram that breaks down the key concepts of Kafka in a simple and easy-to-understand way.

🔴 𝗣𝗿𝗼𝗱𝘂𝗰𝗲𝗿:
A Kafka producer is an entity that publishes data to topics within the Kafka cluster. In essence, producers are the sources of data streams, which might originate from various applications, systems, or sensors. They push records into Kafka topics, and each record consists of a key, a value, and a timestamp.


🔴 𝗖𝗼𝗻𝘀𝘂𝗺𝗲𝗿:
A Kafka consumer pulls data from Kafka topics to which it subscribes. Consumers process the data and often are part of a consumer group. In a group, multiple consumers can read from a topic in parallel, with each consumer responsible for reading from certain partitions, ensuring efficient data processing.


🔴 𝗧𝗼𝗽𝗶𝗰:
A topic is a category or feed name to which records are published. Topics in Kafka are multi-subscriber; they can be consumed by multiple consumers and consumer groups. Topics are divided into partitions to allow for data scalability and parallel processing.


🔴 𝗣𝗮𝗿𝘁𝗶𝘁𝗶𝗼𝗻:
A topic can be divided into partitions, which are essentially subsets of a topic's data. Each partition is an ordered, immutable sequence of records that is continually appended to. Partitions allow topics to be parallelized by splitting the data across multiple brokers.


🔴 𝗕𝗿𝗼𝗸𝗲𝗿:
A broker is a single Kafka server that forms part of the Kafka cluster. Brokers are responsible for maintaining the published data. Each broker may have zero or more partitions per topic and can handle data for multiple topics.


🔴 𝗖𝗹𝘂𝘀𝘁𝗲𝗿:
A Kafka cluster comprises one or more brokers. The cluster is the physical grouping of one or more brokers that work together to provide scalability, fault tolerance, and load balancing. The Kafka cluster manages the persistence and replication of message data.


🔴 𝗥𝗲𝗽𝗹𝗶𝗰𝗮:
A replica is a copy of a partition. Kafka replicates partitions across multiple brokers to ensure data is not lost if a broker fails. Replicas are classified as either leader replicas or follower replicas.


🔴 𝗟𝗲𝗮𝗱𝗲𝗿 𝗥𝗲𝗽𝗹𝗶𝗰𝗮:
For each partition, one broker is designated as the leader. The leader replica handles all read and write requests for the partition. Other replicas simply copy the data from the leader.


🔴 𝗙𝗼𝗹𝗹𝗼𝘄𝗲𝗿 𝗥𝗲𝗽𝗹𝗶𝗰𝗮:
Follower replicas are copies of the leader replica for a partition. They replicate the leader's log and do not serve client requests. Instead, their purpose is to provide redundancy and to take over as the leader if the current leader fails.
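To make the partitioning idea concrete, here is a small Python sketch of how a producer-side partitioner maps record keys to partitions. It is only an approximation: Kafka's default partitioner hashes keys with murmur2, while this sketch uses crc32 to stay deterministic and dependency-free, and the topic, keys, and partition count are made up.

```python
import zlib
from collections import defaultdict

NUM_PARTITIONS = 3  # partitions of a hypothetical "orders" topic

def partition_for(key: str) -> int:
    """Map a record key to a partition.

    Real Kafka hashes the key with murmur2; crc32 is used here only to
    keep the sketch deterministic without extra dependencies.
    """
    return zlib.crc32(key.encode()) % NUM_PARTITIONS

# Model a topic as partition -> append-only log of (key, value) records.
topic = defaultdict(list)

for key, value in [("user-1", "a"), ("user-2", "b"),
                   ("user-1", "c"), ("user-3", "d")]:
    topic[partition_for(key)].append((key, value))

# Every record with the same key lands in the same partition, so the
# relative order of that key's records is preserved.
same_partition = topic[partition_for("user-1")]
print([v for k, v in same_partition if k == "user-1"])  # ['a', 'c']
```

This is why choosing a good key matters: per-key ordering is guaranteed only within a partition.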



✈️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!!
😀😀 10 DevOps Real-time Scenarios 😀😀
🚀 Issues and their resolutions: 🚀

1️⃣. Continuous Integration Pipeline Failure and its resolution.
🔗 https://lnkd.in/g9nBb79u

2️⃣. Application experiences performance degradation and becomes slow during high-traffic periods, and its resolution.
🔗 https://lnkd.in/g9nBb79u

3️⃣. Deployments are error-prone and inconsistent across different environments, and its resolution.
🔗 https://lnkd.in/gE6FYcBz

4️⃣. The application goes down in production due to an unforeseen issue, and its resolution.
🔗 https://lnkd.in/gE6FYcBz

5️⃣. A security vulnerability is discovered in a component of the application stack, and its resolution.
🔗 https://lnkd.in/gPtZ9_Ge

6️⃣. Production environments start to deviate from their desired configurations over time, and its resolution.
🔗 https://lnkd.in/gPtZ9_Ge

7️⃣. A critical service experiences an outage, impacting users and business operations, and its resolution.
🔗 https://lnkd.in/gvTtGYC7

8️⃣. Communication breakdowns between development and operations teams lead to misunderstandings and delays, and its resolution.
🔗 https://lnkd.in/gvTtGYC7

9️⃣. A major release causes unexpected issues in the production environment.
🔗 https://lnkd.in/gYbFKPrv

🔟. Cloud resource costs are increasing beyond budgeted limits.
🔗 https://lnkd.in/gYbFKPrv


🎄 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!!
☄️ One picture is worth a thousand words - Typical AWS Network Architecture in one diagram.

Amazon Web Services (AWS) offers a comprehensive suite of networking services designed to provide businesses with secure, scalable, and highly available network infrastructure. AWS's network architecture components enable seamless connectivity between the internet, remote workers, corporate data centers, and within the AWS ecosystem itself.


✈️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!! // Join for DevOps DOCs: @devopsdocs
https://harshhaa.hashnode.dev/deployment-of-super-mario-on-kubernetes-using-terraform

Follow 🍩 Like 👍 Share 👍 Comment Your thoughts 💬

🌟 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!!
https://prodevopsguy.github.io/2024/Ultimate-DevOps-Bootcamp-2024-Pack/

⚠️ Note: Anyone interested can open the blog 🌐 and share it with friends and colleagues.


🔵 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!!
🔥When becoming a DevOps Engineer, prioritize learning fundamental concepts before technologies.

Just learning technologies and adding them to your resume won't cut it.

✔️ You need to understand the basic concepts.

➡️ For example, before learning docker, learn about Linux Kernel cgroups and namespaces.

✔️ Learn the basics, then learn the technology.
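To see these primitives first-hand on a Linux machine, inspect the namespaces and cgroup membership of your own shell (output varies by distribution and cgroup version):

```shell
# Each entry here is a namespace (net, pid, mnt, uts, ipc, ...) that
# container runtimes like Docker create fresh per container.
ls /proc/self/ns

# cgroup membership of the current process (v1 vs v2 layout differs)
cat /proc/self/cgroup
```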


❤️ Follow for more: @prodevopsguy
⚙️ Most Common Kubernetes Basic Errors ⚙️

1️⃣. ImagePullBackOff
This occurs when the image is not present in the registry or the given image tag is wrong.
Make sure you provide the correct registry URL, image name, and image tag.

You might also face authentication failures when the image is stored in a private registry. Create a Secret with the registry credentials and reference it (via imagePullSecrets) in the Deployment manifest so Kubernetes can pull the image.


2️⃣. CrashLoopBackOff
This occurs when the process inside the container keeps exiting, so Kubernetes restarts it repeatedly and moves the Pod to CrashLoopBackOff.
The Pod might also be running out of CPU or memory; make sure enough CPU and memory are allocated for the application to stay up by checking the resource requests and limits.


3️⃣. OOMKilled - Out Of Memory
This occurs when a Pod tries to use more memory than the limit we have set.
Resolve it by setting appropriate resource requests and limits.
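A hedged sketch of the relevant container spec fragment (the numbers are placeholders to tune per application): the scheduler places the Pod based on requests, and the container is OOMKilled only when it exceeds its memory limit.

```yaml
# Fragment of a container spec inside a Pod/Deployment manifest
resources:
  requests:              # guaranteed minimum, used for scheduling
    cpu: "250m"
    memory: "128Mi"
  limits:                # hard ceiling; exceeding memory => OOMKilled
    cpu: "500m"
    memory: "256Mi"
```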


4️⃣. POD Status - Pending
The Pod stays Pending when no node is ready, or when the required resources (CPU and memory) are not available on any node for the Pod to be scheduled and run.


5️⃣. POD Status - Waiting
The Pod has been scheduled to a node, but its containers are not yet running there.
Fix this by providing the correct image name and image tag, and valid authentication to the registry.


6️⃣. POD is up and running, but the application is not accessible.
Fix this by creating an appropriate Service.
If a Service is already created and the application is still not accessible, make sure the application and the Service are deployed in the same namespace.


7️⃣. POD Status - Evicted
Pods get evicted when a node runs short of resources. Resolve this by setting appropriate resource requests and limits for the Pods and ensuring the worker nodes have enough capacity.



✈️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!! // Join for DevOps DOCs: @devopsdocs
➡️ Prometheus Architecture Explained ~ 🔬

Prometheus is an open-source monitoring and alerting toolkit designed for reliability and scalability. Below is an overview of the Prometheus architecture:

🏮 Components:
📚 Prometheus Server:
- Core for collecting, storing, and querying time-series data.
- It’s pull-based and scrapes metrics from targets at regular intervals.
- Stores data in a local time-series database.

📚 Metrics (Targets/Exporters):
- Apps or services expose metrics.
- Prometheus scrapes metrics from these targets.

📚 Data Model:
- Time-series data with metric names and labels.
- Example: `http_requests_total{method="GET", status="200"}`.

📚 PromQL:
- Query language for time-series data.
- Allows filtering, grouping, and math operations on metrics.
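A couple of example queries over the http_requests_total metric shown earlier:

```promql
# Per-second rate of successful GET requests over the last 5 minutes
rate(http_requests_total{method="GET", status="200"}[5m])

# Total request rate across all instances, grouped by status code
sum by (status) (rate(http_requests_total[5m]))
```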

📚 Alertmanager:
- Handles alerts from Prometheus.
- Manages notifications and integrates with third-party channels.

📚 Storage:
- Uses local on-disk storage.
- Data retention policies.
- Data is organized in blocks and compacted over time.


🏮 Workflow:
📚 Configuration:
- Targets and scrape intervals defined in Prometheus config files.
- Relabeling allows modifying or filtering metrics before storage.

📚 Scraping:
- Prometheus Server scrapes metrics from configured targets.
- Targets expose metrics typically at /metrics endpoint.
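A minimal scrape configuration sketch (the job name and target are illustrative; node_exporter conventionally listens on port 9100):

```yaml
# prometheus.yml (minimal sketch)
global:
  scrape_interval: 15s        # how often targets are scraped
scrape_configs:
  - job_name: "node"
    metrics_path: /metrics    # the default; shown for clarity
    static_configs:
      - targets: ["localhost:9100"]   # e.g. node_exporter
```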

📚 Storage:
- Scraped metrics stored in the local time-series database.
- Data organized by metric name and labels.

📚 Querying:
- Users utilize PromQL to query and analyze stored metrics.
- Grafana or Prometheus's UI visualizes query results.

📚 Alerting:
- Prometheus evaluates alerting rules based on queries.
- Alerts sent to Alertmanager if conditions are met.

📚 Alertmanager Handling:
- Alertmanager receives alerts and manages their lifecycle.
- Handles deduplication, grouping, and sends notifications to configured channels.


🏮 Advantages:
- Simple configuration for monitoring targets.
- Powerful query language (PromQL).
- Effective alerting and notification handling.
- Seamless integration with visualization tools.



✈️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!! // Join for DevOps DOCs: @devopsdocs
Top Programming Languages for Streamlining DevOps Workflows 🚀 :

Python: Python is often considered the Swiss army knife of programming languages for DevOps. It's known for its readability and simplicity, making it an ideal choice for various automation tasks. Python is frequently used for scripting and for infrastructure as code (IaC) with tools like Ansible.

Bash/Shell Scripting: Shell scripting is a fundamental skill for DevOps engineers. Bash and other shell scripts are used for automating system tasks, managing server configurations, and orchestrating complex processes in Unix-based environments.

PowerShell: In Windows-centric DevOps environments, PowerShell is the go-to scripting language. It's used for tasks like Windows server management, automation, and configuration.

Ruby: Ruby is known for its elegance and developer-friendliness. DevOps engineers often use Ruby in tools like Capistrano for automating deployment and configuration management tasks.

Go (Golang): Go is a statically typed language developed by Google, designed for simplicity, efficiency, and speed. It's commonly used in DevOps for building microservices and lightweight tooling; Docker, Kubernetes, and Terraform are themselves written in Go.


❤️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!! // Join for DevOps DOCs: @devopsdocs
Version control with 🧑‍💻 GIT has become an essential skill for developers.

In this post, I'll provide a quick overview of some core GIT concepts and commands.

Key concepts:
➡️ Repository - Where your project files and commit history are stored
➡️ Commit - A snapshot of changes, like a version checkpoint
➡️ Branch - A timeline of commits that lets you work on parallel versions
➡️ Merge - To combine changes from separate branches
➡️ Pull request - Propose & review changes before merging branches

Key commands:
➡️ git init - Initialize a new repo
➡️ git status - Show the working tree status (staged, unstaged, and untracked changes)
➡️ git add - Stage files for commit
➡️ git commit - Commit staged snapshot
➡️ git branch - List, create, or delete branches
➡️ git checkout - Switch between branches
➡️ git merge - Join two development histories (branches)
➡️ git push/pull - Send/receive commits to remote repo
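The commands above can be strung into a tiny end-to-end session in a throwaway repository (paths, identity, and messages are arbitrary):

```shell
# Scratch repo so nothing real is touched
repo=$(mktemp -d)
cd "$repo"
git init -q                               # initialize a new repo
git config user.email dev@example.com     # local identity for commits
git config user.name dev

echo "hello" > app.txt
git add app.txt                           # stage the file
git commit -q -m "initial commit"         # first snapshot

git checkout -q -b feature                # create + switch to a branch
echo "feature work" >> app.txt
git commit -q -am "feature change"        # stage tracked changes + commit

git checkout -q -                         # back to the previous branch
git merge -q feature                      # fast-forward merge
git log --oneline                         # two commits in the history
```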


✈️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!!
🆘 Here is a list of some 𝗥𝗲𝘀𝘂𝗺𝗲-𝗥𝗲𝗮𝗱𝘆 DevOps projects.

➡️ 𝗗𝗲𝘃𝗦𝗲𝗰𝗢𝗽𝘀 𝗣𝗶𝗽𝗲𝗹𝗶𝗻𝗲 𝗣𝗿𝗼𝗷𝗲𝗰𝘁: 𝗗𝗲𝗽𝗹𝗼𝘆 𝗡𝗲𝘁𝗳𝗹𝗶𝘅 𝗖𝗹𝗼𝗻𝗲 𝗼𝗻 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀
🔗 https://lnkd.in/gUpEqDuG

➡️ 𝗗𝗲𝗽𝗹𝗼𝘆𝗶𝗻𝗴 𝗮 𝗬𝗼𝘂𝗧𝘂𝗯𝗲 𝗖𝗹𝗼𝗻𝗲 𝗔𝗽𝗽 𝘄𝗶𝘁𝗵 𝗗𝗲𝘃𝗦𝗲𝗰𝗢𝗽𝘀
🔗 https://lnkd.in/gvvzwW2A

➡️ 𝗚𝗶𝘁𝗟𝗮𝗯 𝗖𝗜/𝗖𝗗 𝘂𝘀𝗶𝗻𝗴 𝗧𝗲𝗿𝗿𝗮𝗳𝗼𝗿𝗺 - 𝗠𝗮𝗻𝗮𝗴𝗶𝗻𝗴 𝗜𝗻𝗳𝗿𝗮𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲
🔗 https://lnkd.in/gJUfnjHx

➡️ 𝗧𝗛𝗘 𝗨𝗟𝗧𝗜𝗠𝗔𝗧𝗘 𝗖𝗜/𝗖𝗗 𝗣𝗜𝗣𝗘𝗟𝗜𝗡𝗘
🔗 https://lnkd.in/gVUDtZBF

➡️ 𝗖𝗼𝗺𝗽𝗹𝗲𝘁𝗲 𝗥𝗲𝗮𝗹-𝘁𝗶𝗺𝗲 𝗗𝗲𝘃𝗢𝗽𝘀 𝗖𝗜/𝗖𝗗 𝗣𝗿𝗼𝗷𝗲𝗰𝘁
🔗 https://lnkd.in/gn_tMBfi

➡️ 𝗝𝗲𝗻𝗸𝗶𝗻𝘀 𝗖𝗜/𝗖𝗗 𝗣𝗶𝗽𝗲𝗹𝗶𝗻𝗲 - 𝗦𝗼𝗻𝗮𝗿𝗤𝘂𝗯𝗲, 𝗗𝗼𝗰𝗸𝗲𝗿, 𝗚𝗶𝘁𝗵𝘂𝗯 𝗪𝗲𝗯𝗵𝗼𝗼𝗸𝘀 𝗼𝗻 𝗔𝗪𝗦
🔗 https://lnkd.in/gzwdXM3y

➡️ 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀 𝗖𝗜/𝗖𝗗 𝗣𝗶𝗽𝗲𝗹𝗶𝗻𝗲 𝗨𝘀𝗶𝗻𝗴 𝗝𝗲𝗻𝗸𝗶𝗻𝘀
🔗 https://lnkd.in/gC4Zs_H9

➡️ 𝗠𝗮𝘀𝘁𝗲𝗿𝗶𝗻𝗴 𝗗𝗲𝘃𝗦𝗲𝗰𝗢𝗽𝘀 𝗖𝗜/𝗖𝗗: 𝟯-𝗧𝗶𝗲𝗿 𝗔𝗽𝗽𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻 𝗣𝗶𝗽𝗲𝗹𝗶𝗻𝗲
🔗 https://lnkd.in/grVg76Dw


🔵 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!!
🔥 Basic kubectl commands that are essential for a DevOps engineer managing resources within a Kubernetes cluster.

💠 Pods:
Create a Pod: kubectl create -f pod.yaml
Get Pods: kubectl get pods
Describe Pod: kubectl describe pod <pod_name>
Logs: kubectl logs <pod_name>
Exec into Pod: kubectl exec -it <pod_name> -- <command>
Delete Pod: kubectl delete pod <pod_name>

💠 Deployments:
Create a Deployment: kubectl create -f deployment.yaml
Get Deployments: kubectl get deployments
Describe Deployment: kubectl describe deployment <deployment_name>
Scale Deployment: kubectl scale --replicas=3 deployment/<deployment_name>
Rollout Status: kubectl rollout status deployment/<deployment_name>
Rollout History: kubectl rollout history deployment/<deployment_name>

💠 Services:
Create a Service: kubectl create -f service.yaml
Get Services: kubectl get services
Describe Service: kubectl describe service <service_name>
Delete Service: kubectl delete service <service_name>

💠 ConfigMaps:
Create a ConfigMap: kubectl create configmap <configmap_name> --from-file=<file_path>
Get ConfigMaps: kubectl get configmaps
Describe ConfigMap: kubectl describe configmap <configmap_name>
Delete ConfigMap: kubectl delete configmap <configmap_name>

💠 Secrets:
Create a Secret: kubectl create secret generic <secret_name> --from-literal=<key>=<value>
Get Secrets: kubectl get secrets
Describe Secret: kubectl describe secret <secret_name>
Delete Secret: kubectl delete secret <secret_name>

💠 Nodes:
Get Nodes: kubectl get nodes
Describe Node: kubectl describe node <node_name>

💠 Namespaces:
Get Namespaces: kubectl get namespaces
Describe Namespace: kubectl describe namespace <namespace_name>

💠 PersistentVolumes (PV) and PersistentVolumeClaims (PVC):
Get PVs/PVCs: kubectl get pv / kubectl get pvc
Describe PV/PVC: kubectl describe pv <pv_name> / kubectl describe pvc <pvc_name>
Delete PV/PVC: kubectl delete pv <pv_name> / kubectl delete pvc <pvc_name>
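For reference, a minimal deployment.yaml that commands like kubectl create -f deployment.yaml and kubectl scale could act on (the name web and the image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # adjusted by: kubectl scale --replicas=N
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web              # must match the selector above
    spec:
      containers:
        - name: web
          image: nginx:1.25
```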


😎 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!! // Join for DevOps DOCs: @devopsdocs
🐳 𝗗𝗼𝗰𝗸𝗲𝗿 𝗕𝗲𝘀𝘁 𝗣𝗿𝗮𝗰𝘁𝗶𝗰𝗲𝘀! 🐳

Docker has revolutionized the world of containerization, enabling scalable and efficient application deployment.

To make the most of this powerful tool, here are 10 essential Docker best practices:

✔️ 𝗦𝘁𝗮𝗿𝘁 𝘄𝗶𝘁𝗵 𝗮 𝗟𝗶𝗴𝗵𝘁𝘄𝗲𝗶𝗴𝗵𝘁 𝗕𝗮𝘀𝗲 𝗜𝗺𝗮𝗴𝗲: Use minimalist base images to reduce container size and vulnerabilities.

✔️ 𝗦𝗶𝗻𝗴𝗹𝗲 𝗣𝗿𝗼𝗰𝗲𝘀𝘀 𝗽𝗲𝗿 𝗖𝗼𝗻𝘁𝗮𝗶𝗻𝗲𝗿: Keep it simple - one process per container for better isolation and maintainability.

✔️ 𝗨𝘀𝗲 𝗗𝗼𝗰𝗸𝗲𝗿 𝗖𝗼𝗺𝗽𝗼𝘀𝗲: Define multi-container applications in a YAML file for easy management.

✔️ 𝗩𝗼𝗹𝘂𝗺𝗲 𝗠𝗼𝘂𝗻𝘁𝗶𝗻𝗴: Store data outside the container to preserve it, even if the container is removed.

✔️ 𝗖𝗼𝗻𝘁𝗮𝗶𝗻𝗲𝗿 𝗢𝗿𝗰𝗵𝗲𝘀𝘁𝗿𝗮𝘁𝗶𝗼𝗻: Consider Kubernetes or Docker Swarm for managing containers at scale.

✔️ 𝗩𝗲𝗿𝘀𝗶𝗼𝗻𝗶𝗻𝗴 𝗮𝗻𝗱 𝗧𝗮𝗴𝗴𝗶𝗻𝗴: Always tag images with version numbers to ensure reproducibility.

✔️ 𝗛𝗲𝗮𝗹𝘁𝗵 𝗖𝗵𝗲𝗰𝗸𝘀: Implement health checks to monitor container status and reliability.

✔️ 𝗥𝗲𝘀𝗼𝘂𝗿𝗰𝗲 𝗟𝗶𝗺𝗶𝘁𝘀: Set resource constraints to prevent one container from hogging resources.

✔️ 𝗗𝗼𝗰𝗸𝗲𝗿𝗳𝗶𝗹𝗲 𝗕𝗲𝘀𝘁 𝗣𝗿𝗮𝗰𝘁𝗶𝗰𝗲𝘀: Optimize Dockerfiles by minimizing layers and using caching effectively.

✔️ 𝗦𝗲𝗰𝘂𝗿𝗶𝘁𝘆: Regularly update images, scan for vulnerabilities, and follow security best practices.
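Several of these practices appear in one short Dockerfile sketch (the app layout, port, and /health endpoint are assumptions for illustration, not a standard):

```dockerfile
# Lightweight, pinned base image
FROM python:3.12-slim

WORKDIR /app
# Copy dependency list first so this layer is cached across code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Health check: container is marked unhealthy if the probe fails
HEALTHCHECK --interval=30s --timeout=3s \
  CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000/health')" || exit 1

# Single foreground process per container
CMD ["python", "app.py"]
```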


🌐𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!! // Join for DevOps DOCs: @devopsdocs
➡️ Here are some common Ansible commands that DevOps Engineers use:

🐠 Inventory Management:
♦️ ansible-inventory: To view the current inventory.
♦️ ansible-inventory --graph: To visualize inventory as a graph.
♦️ ansible-inventory --list: To list all hosts in the inventory.

🐠 Ad-Hoc Commands:
♦️ ansible: Run a single command on one or more managed nodes.
Example: ansible all -m ping (ping all hosts).
♦️ ansible <group_name> -m <module_name> -a "<module_arguments>": Execute a module on a specific group of hosts. Example: ansible web_servers -m shell -a "uptime"

🐠 Playbook Execution:
♦️ ansible-playbook: Run a playbook. Example: ansible-playbook deploy.yml.
♦️ ansible-playbook --syntax-check: Check syntax of playbook.
♦️ ansible-playbook --list-tasks: List tasks in a playbook without executing them.

🐠 Roles:
♦️ ansible-galaxy init <role_name>: Initialize a new role.
♦️ ansible-galaxy install <role_name>: Install a role from Ansible Galaxy.
♦️ ansible-galaxy remove <role_name>: Remove a role.
♦️ ansible-galaxy list: List installed roles.

🐠 Vault:
♦️ ansible-vault create <filename>: Create a new encrypted file.
♦️ ansible-vault edit <filename>: Edit an encrypted file.
♦️ ansible-vault encrypt <filename>: Encrypt an existing file.
♦️ ansible-vault decrypt <filename>: Decrypt an encrypted file.

🐠 Dynamic Inventory Management:
♦️ ansible-inventory -i <inventory_source> --list: Query a dynamic inventory source and list the resolved hosts.
♦️ ansible-inventory -i <inventory_source> --graph: Visualize dynamic inventory as a graph.

🐠 Tags:
Use tags in playbooks to execute specific tasks. Example: ansible-playbook deploy.yml --tags "nginx,php"
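An illustrative deploy.yml fragment showing tagged tasks (the host group and package names are made up):

```yaml
# Run only the nginx tasks with:
#   ansible-playbook deploy.yml --tags "nginx"
- hosts: web_servers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.apt:
        name: nginx
        state: present
      tags: [nginx]

    - name: Install php-fpm
      ansible.builtin.apt:
        name: php-fpm
        state: present
      tags: [php]
```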


✈️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!! // Join for DevOps DOCs: @devopsdocs
🐳 𝐀 𝐪𝐮𝐢𝐜𝐤 𝐃𝐨𝐜𝐤𝐞𝐫 𝐦𝐚𝐠𝐢𝐜 𝐭𝐢𝐩! 🚀

Ever struggled with deploying multi-container applications? Enter 𝗱𝗼𝗰𝗸𝗲𝗿-𝗰𝗼𝗺𝗽𝗼𝘀𝗲 𝘂𝗽! ⬆️

One command to rule them all - orchestrating your containers seamlessly.

Spin up your dev environment with ease, define services, and voila! But wait, there's more - when it's time to call it a day, simply do a graceful exit with 𝗱𝗼𝗰𝗸𝗲𝗿-𝗰𝗼𝗺𝗽𝗼𝘀𝗲 𝗱𝗼𝘄𝗻. ⬇️

Clean, efficient, and a game-changer for simplifying your development workflow.🚀
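A minimal docker-compose.yml sketch of the idea (service names, images, and the port mapping are illustrative): docker-compose up -d starts both services on a shared network, and docker-compose down tears them back down.

```yaml
services:
  web:
    image: nginx:1.25
    ports:
      - "8080:80"        # host:container
    depends_on:
      - db               # start db before web
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # demo only; use secrets in practice
```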


❤️ Follow for more: @prodevopsguy
🚨 𝗚𝘂𝗶𝗱𝗲 𝘁𝗼 𝗗𝗲𝘃𝗢𝗽𝘀 𝗠𝗮𝘀𝘁𝗲𝗿𝘆 ❤️

𝑱𝒐𝒊𝒏 𝑶𝒖𝒓 𝑻𝒆𝒄𝒉 𝑪𝒐𝒎𝒎𝒖𝒏𝒊𝒕𝒚 -> 𝑮𝒖𝒊𝒅𝒆 𝑶𝒕𝒉𝒆𝒓𝒔


➡️ 𝗟𝗲𝘁'𝘀 𝗯𝗿𝗲𝗮𝗸 𝗶𝘁 𝗱𝗼𝘄𝗻, 𝘀𝘁𝗲𝗽 𝗯𝘆 𝘀𝘁𝗲𝗽!

➡️ 𝗕𝗮𝘀𝗶𝗰𝘀

➡️ 𝗚𝗶𝘁:
- Control your code with Git. It keeps track of changes and helps you work together on projects.

➡️ 𝗟𝗶𝗻𝘂𝘅:
- Get comfy with Linux basics. It's like the home for your code, and knowing your way around is a big plus.

➡️ 𝗣𝗿𝗼𝗴𝗿𝗮𝗺𝗺𝗶𝗻𝗴 (𝗣𝘆𝘁𝗵𝗼𝗻, 𝗚𝗢):
- Learn to talk to computers! Python and GO are like your special languages for making things happen in the digital world.

➡️ 𝗗𝗮𝘁𝗮𝗯𝗮𝘀𝗲𝘀:
- Understand databases - they're where you store and fetch data. Knowing how they work is super important.

➡️ 𝗡𝗲𝘁𝘄𝗼𝗿𝗸𝗶𝗻𝗴:
- Imagine the internet as a giant highway. Networking helps you build and navigate the roads for your digital traffic.

➡️ 𝗖𝗜/𝗖𝗗

➡️ 𝗝𝗲𝗻𝗸𝗶𝗻𝘀:
- Meet Jenkins, your automation buddy. It helps you put code together, test it, and deliver it smoothly.

➡️ 𝗚𝗶𝘁𝗵𝘂𝗯 𝗔𝗰𝘁𝗶𝗼𝗻𝘀:
- Workflows made easy! GitHub Actions automates tasks like testing and deploying, right from your GitHub space.

➡️ 𝗚𝗶𝘁𝗹𝗮𝗯 𝗖𝗜:
- GitLab CI is another cool friend. It makes sure your code is always in tip-top shape with continuous integration and delivery.

➡️ 𝗖𝗶𝗿𝗰𝗹𝗲 𝗖𝗜:
- Think of Circle CI as your helper in the cloud. It makes sure your code gets where it needs to go without a hitch.

➡️ 𝗖𝗼𝗻𝘁𝗮𝗶𝗻𝗲𝗿𝗶𝘇𝗮𝘁𝗶𝗼𝗻

➡️ 𝗗𝗼𝗰𝗸𝗲𝗿:
- Docker is like a magic box. It helps you pack your software in a way that it runs the same everywhere.

➡️ 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀:
- Imagine having a tiny helper organizing all your software containers. That's Kubernetes – making sure everything runs smoothly.

➡️ 𝗛𝗘𝗟𝗠:
- HELM is like your toolkit for managing and releasing your software on Kubernetes. It makes your job way easier.

➡️ 𝗖𝗹𝗼𝘂𝗱 + 𝗜𝗔𝗖 + 𝗦𝗖𝗠

➡️ 𝗔𝗪𝗦, 𝗚𝗼𝗼𝗴𝗹𝗲 𝗖𝗹𝗼𝘂𝗱, 𝗔𝘇𝘂𝗿𝗲:
- These are like three big playgrounds for your digital creations. Pick one (or all) and learn how to play!

➡️ 𝗧𝗲𝗿𝗿𝗮𝗳𝗼𝗿𝗺:
- Terraform is your digital construction worker. It builds and manages your online world without breaking a sweat.

➡️ 𝗔𝗻𝘀𝗶𝗯𝗹𝗲:
- Meet Ansible, your automation genie. It makes sure everything in your digital kingdom is in order.

➡️ 𝗠𝗼𝗻𝗶𝘁𝗼𝗿𝗶𝗻𝗴 & 𝗟𝗼𝗴𝗴𝗶𝗻𝗴

➡️ 𝗚𝗿𝗮𝗳𝗮𝗻𝗮:
- Grafana is like your digital eyes. It helps you see and understand what's happening in your digital world with cool dashboards.

➡️ 𝗘𝗹𝗮𝘀𝘁𝗶𝗰 𝗦𝘁𝗮𝗰𝗸:
- Elastic Stack is your superhero trio – Elasticsearch, Logstash, and Kibana. They work together to manage and analyze your digital logs.

➡️ 𝗣𝗿𝗼𝗺𝗲𝘁𝗵𝗲𝘂𝘀:
- Prometheus is your guard dog. It keeps watch and warns you if anything is going wrong in your digital space.


🔵 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!!