If you want to become a Certified Kubernetes Administrator, or simply want to learn Kubernetes from scratch and understand it in depth, this repo is a good choice. It covers:
1. Kubernetes
2. Helm
3. Operator
4. Prometheus
5. EKS
When an instance in an Amazon Web Services (AWS) environment reaches the internet through a NAT (Network Address Translation) Gateway, it's important to understand the roles and configurations involved, because they determine how network traffic is managed. A NAT Gateway in AWS primarily allows instances within a private subnet to connect to the internet or other AWS services while preventing the internet from initiating a connection to those instances. Here's how it works:
A NAT Gateway enables instances in a private subnet to send outbound traffic to the internet, allowing for updates, downloads, and other internet-dependent activities. It also allows the instances to receive the responses from this outbound traffic.
However, the NAT Gateway does not enable inbound connections from the internet to the instances behind it. This is a security feature designed to protect instances in private subnets from unwanted external access.
Instances in the private subnet do not have public IP addresses. Instead, they are assigned private IP addresses that are not routable on the internet.
When an instance in a private subnet communicates with the internet, the NAT Gateway translates the private IP address of the instance to the public IP address of the NAT Gateway. This translation is part of why the process is called Network Address Translation.
The NAT Gateway's translation table only maintains state for active connections initiated from the private subnet. Since the NAT Gateway maps multiple private IPs to a single public IP, it uses the combination of source IP and port number to distinguish between connections.
When a connection is initiated from outside (the internet) without a prior corresponding internal request, the NAT Gateway has no rules or states to match this incoming connection to an internal private IP; thus, it blocks/drops such requests.
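As a rough sketch of how this is wired up with the AWS CLI (all resource IDs below are placeholders, not values from this post): an Elastic IP is allocated, the NAT Gateway is created in a public subnet, and the private subnet's route table sends internet-bound traffic through it.

ALLOC_ID=$(aws ec2 allocate-address --domain vpc --query AllocationId --output text)
# The NAT Gateway itself lives in a *public* subnet.
NAT_ID=$(aws ec2 create-nat-gateway \
  --subnet-id subnet-0aaa111public \
  --allocation-id "$ALLOC_ID" \
  --query NatGateway.NatGatewayId --output text)
aws ec2 wait nat-gateway-available --nat-gateway-ids "$NAT_ID"
# The *private* subnet's route table sends all internet-bound traffic to the NAT Gateway;
# return traffic is matched against the gateway's connection state, so nothing from the
# internet can initiate a connection to the private instances.
aws ec2 create-route \
  --route-table-id rtb-0bbb222private \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id "$NAT_ID"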
www.prodevopsguy.site
A simple CI/CD pipeline integrating Jenkins with Maven and GitHub to build a job and deploy it to a Tomcat server.
A company would use a CI/CD pipeline integrated with Jenkins, Maven, GitHub, and Apache Tomcat to streamline and automate the software development and deployment processes.
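As a rough sketch of what the build-and-deploy step of such a pipeline boils down to (the repository URL, Tomcat host, and credentials below are placeholders): the Jenkins job checks out the code from GitHub, builds a WAR with Maven, and pushes it to Tomcat's manager API.

git clone https://github.com/example/sample-webapp.git
cd sample-webapp
mvn -B clean package   # produces target/sample-webapp.war
# Deploy the WAR via the Tomcat manager text API
# (requires a Tomcat user with the "manager-script" role).
curl --fail -u deployer:changeme \
  -T target/sample-webapp.war \
  "http://tomcat.example.com:8080/manager/text/deploy?path=/sample-webapp&update=true"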
Some of the Best Practices for Dockerfile 🐬 :-
✅ Use only official and verified images as the base image.
✅ Use lightweight base images, such as an Alpine Linux distribution.
✅ Pin a specific image version instead of using the latest tag.
✅ Do not spread package installation across multiple RUN instructions; chain everything into a single RUN instruction using the && operator.
✅ When a Dockerfile still contains too many layers, try a multi-stage build.
✅ Use a .dockerignore file to exclude unnecessary files and directories from the build context and reduce the image size.
✅ Do not start the container as the root user; use a non-root user with least privileges.
✅ Once the image is built, make sure it is scanned before pushing it to a Docker registry (a sketch of these practices follows below).
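A minimal sketch of these practices in action; the image name, the Go application, and the trivy scanner are assumptions for illustration only.

# Exclude unnecessary files from the build context.
cat > .dockerignore <<'EOF'
.git
*.log
node_modules
EOF
# Multi-stage build: pinned, lightweight base images; chained RUN; non-root user.
cat > Dockerfile <<'EOF'
FROM golang:1.22-alpine AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .
FROM alpine:3.19
RUN addgroup -S app && adduser -S app -G app
COPY --from=build /out/app /usr/local/bin/app
USER app
ENTRYPOINT ["/usr/local/bin/app"]
EOF
docker build -t myorg/myapp:1.0.0 .
trivy image myorg/myapp:1.0.0   # scan before pushing (assumes trivy is installed)
docker push myorg/myapp:1.0.0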
1. Short-lived Containers :-
These containers are designed to perform a specific task or job and then exit.
2. Long-running Containers :-
These containers are designed to run continuously for extended periods, hosting services or your applications. Examples include web servers, databases, or other services that need to remain operational as long as the application is running.
3. Interactive Containers :-
Containers can also be used interactively for debugging or testing purposes. In this case, the container may run as long as the user keeps the interactive session open.
4. Orchestrated Containers :-
Kubernetes-orchestrated containers may be continuously monitored and automatically restarted if they fail or crash.
In these cases, the containers can run for a long time, as they are automatically managed by the orchestrator unless interrupted externally.
5. Daemon Containers :-
The last on this list are daemon containers. Docker containers can be run as background daemons, serving a specific purpose and running as long as the system is active or until manually stopped.
For example: htop running as a daemon.
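A quick sketch of these patterns with plain docker and kubectl commands (image and resource names are just examples; the kubectl line assumes access to a cluster):

docker run --rm alpine:3.19 sh -c 'echo "one-off job"'   # 1. short-lived: does its task and exits
docker run -d --name web nginx:1.25                      # 2. long-running service in the background
docker run -it --rm ubuntu:22.04 bash                    # 3. interactive: lives as long as your session
kubectl run web --image=nginx:1.25 --restart=Always      # 4. orchestrated: restarted by Kubernetes on failure
docker run -d --restart=always --name bg alpine:3.19 \
  tail -f /dev/null                                      # 5. daemon-style: runs in the background until stopped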
The DevOps 👾 Engineer role is not for freshers.
🔖 Here are some of the tasks that you will be required to do as a DevOps engineer:
⏩ You won't be asked to create pipelines on the day you join. Most companies already have everything in place.
⏩ You will be asked to perform POCs (Proofs of Concept) on a variety of tools, from CI/CD to secret management; for instance, comparing the build time of Jenkins versus GitLab.
⏩ Build custom monitoring metrics for better business growth and process efficiency.
⏩ Work closely with the cloud and development teams to improve latency, performance, and SLAs.
⏩ Spend a lot of time asking "What else can we automate?".
⏩ Write custom components, such as controllers for Kubernetes.
⏩ Help shift security left by writing security policies and rules for tools like OPA and Falco.
⏩ Play the most active role during rollouts, builds, and updates of components.
⏩ Work with APIs, and optimize and troubleshoot them.
⏩ Contribute heavily to the org's documentation.
In today's fast-paced world of software development and deployment, Docker has emerged as a game-changer, revolutionizing the way we build, ship, and run applications.
Docker is an open-source platform that simplifies the process of building, shipping, and running applications within containers. It provides a lightweight, portable, and scalable environment for deploying applications across different computing environments, from development to production. With Docker, developers can package their applications and all their dependencies into a single unit called a container, ensuring consistency and reproducibility across various deployment targets.
A container is a lightweight, standalone, and executable package that contains everything needed to run a piece of software, including the application code, runtime, system tools, libraries, and settings. Containers leverage operating system-level virtualization to isolate the application environment from the underlying infrastructure, making them highly portable and efficient. They provide a consistent runtime environment across different platforms, enabling developers to build once and run anywhere.
An image is a read-only template used to create containers. It serves as a blueprint for defining the filesystem and configuration of a containerized application. Docker images encapsulate all the necessary components, including the operating system, runtime, libraries, dependencies, and application code, in a standardized format. Images can be shared, versioned, and distributed via Docker registries, making it easy to collaborate and deploy applications across diverse environments.
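To make the image-versus-container distinction concrete, here is a minimal sketch (the image and container names are placeholders, and a Dockerfile is assumed to exist in the current directory):

docker build -t demo/app:1.0 .           # bake code and dependencies into a read-only image
docker images demo/app                   # the image: a versioned, shareable template
docker run -d --name app1 demo/app:1.0   # a container: a running instance of that image
docker ps --filter name=app1             # the same image can back many such containers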
Crying in YAML 🥲
➡️ 11 ways to effectively debug Kubernetes issues:
1. 🛠 Utilize kubectl commands for quick diagnostics.
2. 🖥 Leverage the Kubernetes Dashboard for visual debugging.
3. 🚀 Use ephemeral containers for troubleshooting without modifying pod state.
4. 📜 Explore logs with stern for efficient log monitoring.
5. 🚪 Use kubectl port-forward for direct access to services.
6. ⚙️ Implement probes for automated health checks.
7. 🗓 Analyze cluster events with kubectl get events.
8. 🌐 Network troubleshooting with netshoot.
9. 📊 Performance monitoring with Prometheus and Grafana.
10. 💻 Inspect container filesystems with kubectl exec.
11. 📈 Analyze resource usage with Metrics Server.
A few of these are sketched below.
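A sketch of several of these as commands (pod, namespace, and service names are placeholders):

kubectl get pods -n prod -o wide                                    # 1. quick diagnostics
kubectl debug -it my-pod --image=busybox --target=app               # 3. ephemeral debug container
stern my-app -n prod                                                # 4. tail logs across pods (requires stern)
kubectl port-forward svc/my-service 8080:80 -n prod                 # 5. direct access to a service
kubectl get events -n prod --sort-by=.metadata.creationTimestamp    # 7. cluster events
kubectl run tmp --rm -it --image=nicolaka/netshoot -- bash          # 8. network troubleshooting
kubectl exec -it my-pod -- ls /app                                  # 10. inspect a container's filesystem
kubectl top pods -n prod                                            # 11. resource usage via Metrics Server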
CI/CD 👾 with the Jenkins Multibranch Pipeline ⚙️
➡️ What is a Jenkins Multibranch Pipeline ❓
According to the official documentation, the multibranch pipeline job type lets you define a job where, from a single Git repository, Jenkins detects multiple branches and creates nested jobs for each branch in which it finds a Jenkinsfile.
For more info, you can check this link:
🖥 https://prodevopsguy.site/cicd-jenkins-multibranch-pipeline
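As a minimal sketch of what triggers branch discovery (the Maven stages below are placeholders, not the article's pipeline): committing a Jenkinsfile like this to a branch is what makes the multibranch job create a nested job for it.

cat > Jenkinsfile <<'EOF'
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'mvn -B clean package' }
        }
        stage('Test') {
            steps { sh 'mvn -B test' }
        }
    }
}
EOF
git add Jenkinsfile
git commit -m "Add Jenkinsfile so the multibranch job picks up this branch"
git push origin HEAD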
As a DevOps Engineer, you must be aware of Dockerfile Best Practices:
🔍 Use only official and verified images as the base image.
🔍 Use lightweight base images, such as an Alpine Linux distribution.
🔍 Pin a specific image version instead of using the latest tag.
🔍 Install only the required packages and software into the Docker image.
🔍 Do not spread package installation across multiple RUN instructions; chain everything into a single RUN instruction using the && operator.
🔍 When a Dockerfile still contains too many layers, try a multi-stage build.
🔍 Use a .dockerignore file to exclude unnecessary files and directories from the build context and reduce the image size.
🔍 Do not start the container as the root user; use a non-root user with least privileges.
🔍 Once the image is built, make sure it is scanned before pushing it to a Docker registry.