DevOps & Cloud (AWS, AZURE, GCP) Tech Free Learning
15.9K subscribers
1.33K photos
14 videos
501 files
1.28K links
https://projects.prodevopsguytech.com // https://blog.prodevopsguytech.com

• We post Daily Trending DevOps/Cloud content
• All DevOps related Code & Scripts uploaded
• DevOps/Cloud Job Related Posts
• Real-time Interview questions & preparation guides
➡️ Docker 🐳 and Kubernetes Free Videos 🟩:

Link: https://drive.google.com/drive/folders/162YOHhybk_pYemCfKmKSGbdSjJDeuAYR?usp=sharing


❤️ Follow for more: @prodevopsguy
🟩 Ansible 🆓 Videos 🔴

🔗Link : https://drive.google.com/drive/folders/1p35HHSamOyL1Rta8hK5--4k1mPWYAXaV?usp=sharing


❤️ Follow for more: @prodevopsguy
🟩 🌐 Git/GitHub Free Videos:- 🟩

🔥 ➡️ https://drive.google.com/drive/folders/1vhSsxz9oAtSh136JVo3gryaDPJAYWteF?usp=sharing

❤️ Follow for more: @prodevopsguy
📢 Sidecar container in kubernetes:

A sidecar is a design pattern in which an additional container is deployed alongside the main container within the same Pod. The sidecar runs in the same execution environment and shares resources (network namespace, IPC namespace, volumes, etc.) with the main container. Sidecar containers are often used to extend or enhance the functionality of the main application container without modifying its codebase directly.
Here are some common use cases for sidecar containers:


📢 Logging: A sidecar container can be used to collect, format, and forward logs generated by the main application container to a centralized logging system.

📢 Monitoring: Sidecar containers can be used to collect metrics, health checks, or other telemetry data from the main application container and expose it to monitoring systems like Prometheus.

📢 Security: A sidecar container can handle tasks such as managing SSL certificates, providing authentication, or enforcing security policies independently of the main application.

📢 Data Processing: Sidecar containers can be used for tasks like data transformation, caching, or pre-processing data before it's consumed by the main application.
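The logging use case above can be sketched as a Pod manifest. A minimal illustration, not a production setup: the names (app-with-sidecar, log-shipper) and the busybox image standing in for a real app and log forwarder are hypothetical.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
    - name: app-logs
      emptyDir: {}              # shared by both containers in the Pod
  containers:
    - name: main-app            # the main application container
      image: busybox
      command: ["sh", "-c", "while true; do date >> /var/log/app/app.log; sleep 5; done"]
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
    - name: log-shipper         # sidecar: reads the shared log and would forward it
      image: busybox
      command: ["sh", "-c", "tail -F /var/log/app/app.log"]
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
```

The key idea: both containers mount the same emptyDir volume, so the sidecar can process the main app's output without any change to the app itself.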


✈️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!! // Join for DevOps DOCs: @devopsdocs
🎙 Kubernetes is an open-source 𝗰𝗼𝗻𝘁𝗮𝗶𝗻𝗲𝗿 𝗼𝗿𝗰𝗵𝗲𝘀𝘁𝗿𝗮𝘁𝗶𝗼𝗻 system for automating software deployment, scaling, and management.

➡️ Features:
Load balancing
Self-healing
High availability / Ensure no downtime / Maintain fault tolerance
Performance enhancement
Auto-scaling

Several key components of Kubernetes are important to understand:

𝗣𝗼𝗱 ➡️ The smallest deployable unit: one or more containers running together in the cluster.
𝗦𝗲𝗿𝘃𝗶𝗰𝗲 ➡️ An abstract, stable way to access an application running on a set of Pods.
𝗡𝗮𝗺𝗲𝘀𝗽𝗮𝗰𝗲 ➡️ Avoids name collisions within a cluster; supports multiple virtual clusters on the same physical cluster.
𝗡𝗼𝗱𝗲 ➡️ A Kubernetes worker machine.
𝗖𝗹𝘂𝘀𝘁𝗲𝗿 ➡️ A group of nodes running containerized applications on Kubernetes.
𝗥𝗲𝗽𝗹𝗶𝗰𝗮𝗦𝗲𝘁 ➡️ Maintains a specified number of identical Pod replicas; helps achieve high availability and scalability.
𝗟𝗮𝗯𝗲𝗹 ➡️ A key/value name attached to Kubernetes objects so they can be identified and selected across the system.
𝗞𝘂𝗯𝗲𝗹𝗲𝘁 ➡️ The agent that runs on each node and ensures the containers in its Pods are running and healthy.
𝗞𝘂𝗯𝗲𝗰𝘁𝗹 ➡️ The command-line utility for interacting with the Kubernetes API server.
𝗞𝘂𝗯𝗲-𝗽𝗿𝗼𝘅𝘆 ➡️ The network proxy that maintains the network rules on each node in the cluster.
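Several of these objects come together in one typical manifest. A hypothetical sketch (the names web/demo and the nginx image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment              # manages a ReplicaSet, which manages Pods
metadata:
  name: web
  namespace: demo             # Namespace scoping the objects
spec:
  replicas: 3                 # ReplicaSet keeps 3 Pods running
  selector:
    matchLabels:
      app: web                # Label used to identify the Pods
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service                 # stable access point for the Pods
metadata:
  name: web
  namespace: demo
spec:
  selector:
    app: web                  # matches the Pod labels above
  ports:
    - port: 80
      targetPort: 80
```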


✈️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!!
(attached: 1705241983880.gif, 1.8 MB)
Apache Kafka has become increasingly popular in recent years.

It's used by companies like Netflix, LinkedIn, and Uber to handle high-volume data streams.
🔥 I have created this handy diagram that breaks down the key concepts of Kafka in a simple and easy-to-understand way.

🔴 𝗣𝗿𝗼𝗱𝘂𝗰𝗲𝗿:
A Kafka producer is an entity that publishes data to topics within the Kafka cluster. In essence, producers are the sources of data streams, which might originate from various applications, systems, or sensors. They push records into Kafka topics, and each record consists of a key, a value, and a timestamp.


🔴 𝗖𝗼𝗻𝘀𝘂𝗺𝗲𝗿:
A Kafka consumer pulls data from Kafka topics to which it subscribes. Consumers process the data and often are part of a consumer group. In a group, multiple consumers can read from a topic in parallel, with each consumer responsible for reading from certain partitions, ensuring efficient data processing.


🔴 𝗧𝗼𝗽𝗶𝗰:
A topic is a category or feed name to which records are published. Topics in Kafka are multi-subscriber; they can be consumed by multiple consumers and consumer groups. Topics are divided into partitions to allow for data scalability and parallel processing.


🔴 𝗣𝗮𝗿𝘁𝗶𝘁𝗶𝗼𝗻:
A topic can be divided into partitions, which are essentially subsets of a topic's data. Each partition is an ordered, immutable sequence of records that is continually appended to. Partitions allow topics to be parallelized by splitting the data across multiple brokers.


🔴 𝗕𝗿𝗼𝗸𝗲𝗿:
A broker is a single Kafka server that forms part of the Kafka cluster. Brokers are responsible for maintaining the published data. Each broker may have zero or more partitions per topic and can handle data for multiple topics.


🔴 𝗖𝗹𝘂𝘀𝘁𝗲𝗿:
A Kafka cluster comprises one or more brokers. The cluster is the physical grouping of one or more brokers that work together to provide scalability, fault tolerance, and load balancing. The Kafka cluster manages the persistence and replication of message data.


🔴 𝗥𝗲𝗽𝗹𝗶𝗰𝗮:
A replica is a copy of a partition. Kafka replicates partitions across multiple brokers to ensure data is not lost if a broker fails. Replicas are classified as either leader replicas or follower replicas.


🔴 𝗟𝗲𝗮𝗱𝗲𝗿 𝗥𝗲𝗽𝗹𝗶𝗰𝗮:
For each partition, one broker is designated as the leader. The leader replica handles all read and write requests for the partition. Other replicas simply copy the data from the leader.


🔴 𝗙𝗼𝗹𝗹𝗼𝘄𝗲𝗿 𝗥𝗲𝗽𝗹𝗶𝗰𝗮:
Follower replicas are copies of the leader replica for a partition. They replicate the leader's log and do not serve client requests. Instead, their purpose is to provide redundancy and to take over as the leader if the current leader fails.
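How a producer maps keyed records to partitions can be sketched in a few lines of shell. This is a toy illustration only: Kafka's real default partitioner hashes the key with murmur2, not md5, and the function name partition_for is made up here.

```shell
# Toy partitioner: hash the record key, take it modulo the partition count.
partition_for() {
  key=$1
  num_partitions=$2
  hash=$(printf '%s' "$key" | md5sum | cut -c1-8)  # first 4 bytes of the hash, as hex
  echo $(( 0x$hash % num_partitions ))
}

# The same key always maps to the same partition, which preserves
# per-key ordering while spreading load across partitions.
partition_for user-42 6
partition_for user-42 6   # same partition as the call above
partition_for user-99 6   # likely a different partition
```

This is why choosing a good record key matters: all records sharing a key land on one partition, and only records on the same partition have a guaranteed order.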



✈️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!!
😀 10 DevOps Real-time Scenarios 😀
🚀 Issues and their resolutions: 🚀

1️⃣ Continuous Integration pipeline failure and its resolution.
🔗 https://lnkd.in/g9nBb79u

2️⃣ Application experiences performance degradation and becomes slow during high-traffic periods, and its resolution.
🔗 https://lnkd.in/g9nBb79u

3️⃣ Deployments are error-prone and inconsistent across different environments, and its resolution.
🔗 https://lnkd.in/gE6FYcBz

4️⃣ The application goes down in production due to an unforeseen issue, and its resolution.
🔗 https://lnkd.in/gE6FYcBz

5️⃣ A security vulnerability is discovered in a component of the application stack, and its resolution.
🔗 https://lnkd.in/gPtZ9_Ge

6️⃣ Production environments start to deviate from their desired configurations over time, and its resolution.
🔗 https://lnkd.in/gPtZ9_Ge

7️⃣ A critical service experiences an outage, impacting users and business operations, and its resolution.
🔗 https://lnkd.in/gvTtGYC7

8️⃣ Communication breakdowns between development and operations teams lead to misunderstandings and delays, and its resolution.
🔗 https://lnkd.in/gvTtGYC7

9️⃣ A major release causes unexpected issues in the production environment.
🔗 https://lnkd.in/gYbFKPrv

🔟 Cloud resource costs are increasing beyond budgeted limits.
🔗 https://lnkd.in/gYbFKPrv


🎄 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!!
(attached: 1710377319227.gif, 453.9 KB)
☄️ One picture is worth a thousand words - Typical AWS Network Architecture in one diagram.

Amazon Web Services (AWS) offers a comprehensive suite of networking services designed to provide businesses with secure, scalable, and highly available network infrastructure. AWS's network architecture components enable seamless connectivity between the internet, remote workers, corporate data centers, and within the AWS ecosystem itself.


✈️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!! // Join for DevOps DOCs: @devopsdocs
https://harshhaa.hashnode.dev/deployment-of-super-mario-on-kubernetes-using-terraform

Follow 🍩 Like 👍 Share 👍 Comment Your thoughts 💬

🌟 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!!
https://prodevopsguy.github.io/2024/Ultimate-DevOps-Bootcamp-2024-Pack/

⚠️ Note: Anyone interested can open the blog 🌐 and share it with friends and colleagues.


🔵 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!!
🔥When becoming a DevOps Engineer, prioritize learning fundamental concepts before technologies.

Just learning technologies and adding them to your resume won't cut it.

✔️ You need to understand the basic concepts.

➡️ For example, before learning Docker, learn about Linux kernel cgroups and namespaces.

✔️ Learn the basics, then learn the technology.
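On any Linux machine you can inspect these primitives directly; they are standard /proc entries, and they are the same mechanisms Docker builds on. A quick look:

```shell
# Namespaces: each entry here is an isolation boundary the kernel
# gives the current process (pid, net, mnt, ipc, ...).
ls /proc/self/ns

# cgroups: the kernel feature behind container CPU and memory limits;
# this shows which cgroup(s) the current process belongs to.
cat /proc/self/cgroup
```

Running the same two commands inside a container shows different namespace IDs and a different cgroup path, which makes the isolation concrete.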


❤️ Follow for more: @prodevopsguy
⚙️ Most Common Kubernetes Basic Errors ⚙️

1️⃣. ImagePullBackOff
We face this issue when the image is not present in the registry or the given image tag is wrong.
Make sure you provide the correct registry URL, image name, and image tag.

We might also face authentication failures when the image is stored in a private registry; create a Secret with the private-registry credentials and reference it in the Deployment (imagePullSecrets) so Kubernetes can pull the image.


2️⃣. CrashLoopBackOff
We face this issue when the process inside the container keeps crashing or exiting, so the Pod moves to CrashLoopBackOff.
The Pod might also be running out of CPU or memory; it needs enough resources allocated for the application to be up and running. To fix that, check the resource requests and resource limits.


3️⃣. OOMKilled - Out Of Memory
We face this issue when a Pod tries to use more memory than the limit we have set.
We can resolve it by setting appropriate resource requests and resource limits.


4️⃣. POD Status - Pending
The Pod stays Pending when nodes are not ready, or when the required resources (CPU and memory) are not available on any node for the Pod to be scheduled and run.


5️⃣. POD Status - Waiting
The Pod is scheduled to a node, but its containers are not running there, typically because of an image problem.
We can fix this by providing the correct image name, image tag, and registry authentication.


6️⃣. POD is up and running, but the application is not accessible.
We can fix this by creating an appropriate Service.
If a Service is already created and the application is still not accessible, make sure the application and the Service are deployed in the same namespace (and that the Service selector matches the Pod labels).


7️⃣. POD Status - Evicted
Pods are evicted when a node runs short of resources. We can resolve this by setting appropriate resource requests and resource limits for the Pods and ensuring the worker nodes have enough capacity.
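Several of the errors above trace back to two stanzas of the Pod spec. A hypothetical fragment (the secret name, registry, image, and numbers are illustrative, not prescriptive):

```yaml
spec:
  imagePullSecrets:
    - name: private-registry-creds             # addresses ImagePullBackOff on private registries
  containers:
    - name: app
      image: registry.example.com/team/app:1.4.2   # correct registry, name, and tag
      resources:
        requests:             # what the scheduler reserves; Pod stays Pending if no node fits
          cpu: "250m"
          memory: "256Mi"
        limits:               # exceeding the memory limit gets the container OOMKilled
          cpu: "500m"
          memory: "512Mi"
```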



✈️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!! // Join for DevOps DOCs: @devopsdocs
➡️ Prometheus Architecture Explained ~ 🔬

Prometheus is an open-source monitoring and alerting toolkit designed for reliability and scalability. Below is an overview of the Prometheus architecture:

🏮 Components:
📚 Prometheus Server:
- Core for collecting, storing, and querying time-series data.
- It’s pull-based and scrapes metrics from targets at regular intervals.
- Stores data in a local time-series database.

📚 Metrics (Targets/Exporters):
- Apps or services expose metrics.
- Prometheus scrapes metrics from these targets.

📚 Data Model:
- Time-series data with metric names and labels.
- Example: `http_requests_total{method="GET", status="200"}`.

📚 PromQL:
- Query language for time-series data.
- Allows filtering, grouping, and math operations on metrics.

📚 Alertmanager:
- Handles alerts from Prometheus.
- Manages notifications and integrates with third-party channels.

📚 Storage:
- Uses local on-disk storage.
- Data retention policies.
- Data is organized in blocks and compacted over time.


🏮 Workflow:
📚 Configuration:
- Targets and scrape intervals defined in Prometheus config files.
- Relabeling allows modifying or filtering metrics before storage.
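A minimal scrape configuration looks like this (the job name and target address are illustrative; localhost:9100 assumes something like a node_exporter):

```yaml
# prometheus.yml
global:
  scrape_interval: 15s                # how often targets are scraped

scrape_configs:
  - job_name: "node"
    metrics_path: /metrics            # this is the default; shown for clarity
    static_configs:
      - targets: ["localhost:9100"]   # e.g. a node_exporter instance
```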

📚 Scraping:
- Prometheus Server scrapes metrics from configured targets.
- Targets expose metrics typically at /metrics endpoint.

📚 Storage:
- Scraped metrics stored in the local time-series database.
- Data organized by metric name and labels.

📚 Querying:
- Users utilize PromQL to query and analyze stored metrics.
- Grafana or Prometheus's UI visualizes query results.

📚 Alerting:
- Prometheus evaluates alerting rules based on queries.
- Alerts sent to Alertmanager if conditions are met.
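An alerting rule is essentially a PromQL expression plus a hold duration, kept in a rules file referenced from prometheus.yml via rule_files. A sketch (the alert name, threshold, and labels are illustrative):

```yaml
groups:
  - name: example-alerts
    rules:
      - alert: HighErrorRate
        expr: rate(http_requests_total{status="500"}[5m]) > 0.1
        for: 10m                      # condition must hold for 10m before firing
        labels:
          severity: page
        annotations:
          summary: "Sustained HTTP 500 responses"
```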

📚 Alertmanager Handling:
- Alertmanager receives alerts and manages their lifecycle.
- Handles deduplication, grouping, and sends notifications to configured channels.


🏮 Advantages:
- Simple configuration for monitoring targets.
- Powerful query language (PromQL).
- Effective alerting and notification handling.
- Seamless integration with visualization tools.



✈️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!! // Join for DevOps DOCs: @devopsdocs
(attached: 1699280774061.pdf, 236.7 KB)
Top Programming Languages for Streamlining DevOps Workflows 🚀 :

Python: Python is often considered the Swiss army knife of programming languages for DevOps. It's known for its readability and simplicity, making it an ideal choice for various automation tasks. Python is frequently used for scripting, infrastructure as code (IaC) with tools like Ansible.

Bash/Shell Scripting: Shell scripting is a fundamental skill for DevOps engineers. Bash and other shell scripts are used for automating system tasks, managing server configurations, and orchestrating complex processes in Unix-based environments.

PowerShell: In Windows-centric DevOps environments, PowerShell is the go-to scripting language. It's used for tasks like Windows server management, automation, and configuration.

Ruby: Ruby is known for its elegance and developer-friendliness. DevOps engineers often use Ruby in tools like Capistrano for automating deployment and configuration management tasks.

Go (Golang): Go is a statically typed language developed by Google, designed for simplicity, efficiency, and speed. It's commonly used in DevOps for building microservices and creating lightweight, efficient tooling.


❤️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!! // Join for DevOps DOCs: @devopsdocs
(attached: 1705900428288.gif, 1 MB)
Version control with 🧑‍💻 Git has become an essential skill for developers.

In this post, I'll provide a quick overview of some core GIT concepts and commands.

Key concepts:
➡️ Repository - Where your project files and commit history are stored
➡️ Commit - A snapshot of changes, like a version checkpoint
➡️ Branch - A timeline of commits that lets you work on parallel versions
➡️ Merge - To combine changes from separate branches
➡️ Pull request - Propose & review changes before merging branches

Key commands:
➡️ git init - Initialize a new repo
➡️ git status - View the state of the working tree (staged and unstaged changes)
➡️ git add - Stage files for commit
➡️ git commit - Commit staged snapshot
➡️ git branch - List, create, or delete branches
➡️ git checkout - Switch between branches
➡️ git merge - Join two development histories (branches)
➡️ git push/pull - Send commits to / fetch commits from a remote repo
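The commands above combine into a typical branch-and-merge session. A sketch run in a throwaway directory (file contents and commit messages are made up):

```shell
# create a scratch repo
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name "Demo User"

echo "hello" > README.md
git add README.md                   # stage the file
git commit -q -m "initial commit"   # first snapshot

git checkout -q -b feature          # new branch for parallel work
echo "feature work" >> README.md
git add README.md
git commit -q -m "add feature"

git checkout -q -                   # back to the original branch
git merge -q feature                # combine the two histories
cat README.md                       # now contains both lines
```

A pull request is the hosted-platform (GitHub/GitLab) layer on top of this: the same merge, but with review before it happens.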


✈️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!!
🆘 Here is a list of some 𝗥𝗲𝘀𝘂𝗺𝗲-𝗥𝗲𝗮𝗱𝘆 DevOps projects.

➡️ 𝗗𝗲𝘃𝗦𝗲𝗰𝗢𝗽𝘀 𝗣𝗶𝗽𝗲𝗹𝗶𝗻𝗲 𝗣𝗿𝗼𝗷𝗲𝗰𝘁: 𝗗𝗲𝗽𝗹𝗼𝘆 𝗡𝗲𝘁𝗳𝗹𝗶𝘅 𝗖𝗹𝗼𝗻𝗲 𝗼𝗻 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀
🔗 https://lnkd.in/gUpEqDuG

➡️ 𝗗𝗲𝗽𝗹𝗼𝘆𝗶𝗻𝗴 𝗮 𝗬𝗼𝘂𝗧𝘂𝗯𝗲 𝗖𝗹𝗼𝗻𝗲 𝗔𝗽𝗽 𝘄𝗶𝘁𝗵 𝗗𝗲𝘃𝗦𝗲𝗰𝗢𝗽𝘀
🔗 https://lnkd.in/gvvzwW2A

➡️ 𝗚𝗶𝘁𝗟𝗮𝗯 𝗖𝗜/𝗖𝗗 𝘂𝘀𝗶𝗻𝗴 𝗧𝗲𝗿𝗿𝗮𝗳𝗼𝗿𝗺 - 𝗠𝗮𝗻𝗮𝗴𝗶𝗻𝗴 𝗜𝗻𝗳𝗿𝗮𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲
🔗 https://lnkd.in/gJUfnjHx

➡️ 𝗧𝗛𝗘 𝗨𝗟𝗧𝗜𝗠𝗔𝗧𝗘 𝗖𝗜/𝗖𝗗 𝗣𝗜𝗣𝗘𝗟𝗜𝗡𝗘
🔗 https://lnkd.in/gVUDtZBF

➡️ 𝗖𝗼𝗺𝗽𝗹𝗲𝘁𝗲 𝗥𝗲𝗮𝗹-𝘁𝗶𝗺𝗲 𝗗𝗲𝘃𝗢𝗽𝘀 𝗖𝗜/𝗖𝗗 𝗣𝗿𝗼𝗷𝗲𝗰𝘁
🔗 https://lnkd.in/gn_tMBfi

➡️ 𝗝𝗲𝗻𝗸𝗶𝗻𝘀 𝗖𝗜/𝗖𝗗 𝗣𝗶𝗽𝗲𝗹𝗶𝗻𝗲 - 𝗦𝗼𝗻𝗮𝗿𝗤𝘂𝗯𝗲, 𝗗𝗼𝗰𝗸𝗲𝗿, 𝗚𝗶𝘁𝗵𝘂𝗯 𝗪𝗲𝗯𝗵𝗼𝗼𝗸𝘀 𝗼𝗻 𝗔𝗪𝗦
🔗 https://lnkd.in/gzwdXM3y

➡️ 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀 𝗖𝗜/𝗖𝗗 𝗣𝗶𝗽𝗲𝗹𝗶𝗻𝗲 𝗨𝘀𝗶𝗻𝗴 𝗝𝗲𝗻𝗸𝗶𝗻𝘀
🔗 https://lnkd.in/gC4Zs_H9

➡️ 𝗠𝗮𝘀𝘁𝗲𝗿𝗶𝗻𝗴 𝗗𝗲𝘃𝗦𝗲𝗰𝗢𝗽𝘀 𝗖𝗜/𝗖𝗗: 𝟯-𝗧𝗶𝗲𝗿 𝗔𝗽𝗽𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻 𝗣𝗶𝗽𝗲𝗹𝗶𝗻𝗲
🔗 https://lnkd.in/grVg76Dw


🔵 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!!
🔥 Basic Kubectl commands which are essential for a DevOps engineer to manage within a Kubernetes cluster.

💠 Pods:
Create a Pod: kubectl create -f pod.yaml
Get Pods: kubectl get pods
Describe Pod: kubectl describe pod <pod_name>
Logs: kubectl logs <pod_name>
Exec into Pod: kubectl exec -it <pod_name> -- <command>
Delete Pod: kubectl delete pod <pod_name>
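The pod.yaml referenced by the create command could look like this minimal manifest (the name demo-pod and the nginx image are illustrative):

```yaml
# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    app: demo
spec:
  containers:
    - name: demo
      image: nginx:1.25
      ports:
        - containerPort: 80
```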

💠 Deployments:
Create a Deployment: kubectl create -f deployment.yaml
Get Deployments: kubectl get deployments
Describe Deployment: kubectl describe deployment <deployment_name>
Scale Deployment: kubectl scale --replicas=3 deployment/<deployment_name>
Rollout Status: kubectl rollout status deployment/<deployment_name>
Rollout History: kubectl rollout history deployment/<deployment_name>

💠 Services:
Create a Service: kubectl create -f service.yaml
Get Services: kubectl get services
Describe Service: kubectl describe service <service_name>
Delete Service: kubectl delete service <service_name>

💠 ConfigMaps:
Create a ConfigMap: kubectl create configmap <configmap_name> --from-file=<file_path>
Get ConfigMaps: kubectl get configmaps
Describe ConfigMap: kubectl describe configmap <configmap_name>
Delete ConfigMap: kubectl delete configmap <configmap_name>

💠 Secrets:
Create a Secret: kubectl create secret generic <secret_name> --from-literal=<key>=<value>
Get Secrets: kubectl get secrets
Describe Secret: kubectl describe secret <secret_name>
Delete Secret: kubectl delete secret <secret_name>

💠 Nodes:
Get Nodes: kubectl get nodes
Describe Node: kubectl describe node <node_name>

💠 Namespaces:
Get Namespaces: kubectl get namespaces
Describe Namespace: kubectl describe namespace <namespace_name>

💠 PersistentVolumes (PV) and PersistentVolumeClaims (PVC):
Get PVs/PVCs: kubectl get pv / kubectl get pvc
Describe PV/PVC: kubectl describe pv <pv_name> / kubectl describe pvc <pvc_name>
Delete PV/PVC: kubectl delete pv <pv_name> / kubectl delete pvc <pvc_name>


😎 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!! // Join for DevOps DOCs: @devopsdocs