DevOps & Cloud (AWS, AZURE, GCP) Tech Free Learning
15.9K subscribers
1.33K photos
14 videos
501 files
1.28K links
https://projects.prodevopsguytech.com // https://blog.prodevopsguytech.com

• We post Daily Trending DevOps/Cloud content
• All DevOps related Code & Scripts uploaded
• DevOps/Cloud Job Related Posts
• Real-time Interview questions & preparation guides
🏔 Understanding Kubernetes Primitives

Hello engineers, I hope you're having a great day! Here is part two of our Kubernetes recipes. Grab your coffee ☕️ and enjoy:

Resources:

⚛️ Ingress: Manages external access to services within the cluster, enabling routing based on hostnames and paths.

⚛️ NetworkPolicy: Defines communication rules between groups of pods and network endpoints.

⚛️ HorizontalPodAutoscaler: Automatically adjusts the number of pod replicas based on resource utilization metrics.

⚛️ Job: Executes tasks until completion, often used for batch processing.

⚛️ CronJob: Schedules jobs to run at specified intervals using cron notation.

⚛️ ResourceQuota: Enforces constraints on resource consumption within a namespace.

⚛️ LimitRange: Defines resource limits and ranges for compute resources.
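Several of the objects above are ordinary YAML manifests. As a hedged illustration, here is a minimal CronJob that runs a nightly task (the name, schedule, and command are made up for the example):

```yaml
# Hypothetical example: run a cleanup task every night at 2 AM.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-cleanup        # illustrative name
spec:
  schedule: "0 2 * * *"        # cron notation: minute hour day-of-month month day-of-week
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: cleanup
              image: busybox:1.36
              command: ["sh", "-c", "echo cleaning up"]
          restartPolicy: OnFailure
```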

Infrastructure Components

⚛️ Kubelet: The primary node agent, responsible for the execution and management of containers on each node.

⚛️ Kube-proxy: The network proxy that facilitates the exposure of Kubernetes networking services on individual nodes.

⚛️ Container Runtime: The underlying software responsible for executing containers, such as Docker, containerd, or another compatible runtime.

⚛️ CNI Plugins: Container Network Interface plugins that configure network interfaces within pods to enable network communication.

⚛️ Node: A worker machine in a Kubernetes cluster, responsible for running containerized applications within pods. Think of nodes as the engines that keep the cluster moving. 🚂


✉️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!!
⚙️ Explaining 8 Popular Network Protocols in 1 Diagram.


✈️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!!
🌐 𝗔𝗱𝘃𝗮𝗻𝗰𝗲𝗱 𝗸𝘂𝗯𝗲𝗰𝘁𝗹 𝗜𝗻𝘀𝗶𝗴𝗵𝘁 𝗼𝗳 𝘁𝗵𝗲 𝗗𝗮𝘆! 🌐

Dive deep into your Kubernetes pods with this nifty command! It elegantly extracts and displays the name and status of each pod in your current namespace. Perfect for a quick status check or for integrating into your monitoring scripts.


Embrace the power of JSONPath with kubectl to tailor your Kubernetes data exactly how you need it. The possibilities are endless!
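The command itself was in the attached image and is not part of this text export; a kubectl one-liner matching the description (pod name and status via JSONPath) might look like:

```shell
# Print each pod's name and phase in the current namespace using JSONPath.
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'
```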


⭐️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!!
Here's your typical Docker Workflow 🐳

If you understand this, you understand enough to accomplish 80% of your Docker tasks.

1️⃣ After developing your application, create a Dockerfile that captures all of its assets: code, executables, and dependencies.

2️⃣ Use “docker build” to build an image from your Dockerfile. You’d normally also use the “--tag” option to give your image a name and tag (e.g., “hello_world:latest”).

3️⃣ At this point, Docker pulls the base image (e.g., Alpine, Ubuntu) from a registry (Docker Hub by default). If you’re using a private registry instead, this step may perform authentication as well.

4️⃣ Run a container from your newly baked image using “docker run”. A container goes through various states throughout its lifecycle, depending on the processes running inside it and what you do with it from outside.

5️⃣ Your image is now ready to be distributed to other users, so “docker push” it to the registry.

6️⃣ Continuously monitor the performance of your container(s) using “docker stats”. Debug a live container using “docker exec” and “docker inspect”.

7️⃣ Get back to building 🚀
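The workflow above maps onto a handful of commands. A hedged sketch (the image name, container name, and registry username are placeholders):

```shell
# Build an image from the Dockerfile in the current directory and tag it.
docker build --tag hello_world:latest .

# Run a container from the freshly built image, mapping host port 8080 to port 80.
docker run --detach --publish 8080:80 --name hello hello_world:latest

# Tag and push the image to a registry ("myuser" is a placeholder).
docker tag hello_world:latest myuser/hello_world:latest
docker push myuser/hello_world:latest

# Monitor and debug the live container.
docker stats --no-stream hello
docker exec -it hello sh
docker inspect hello
```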


✈️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!!
Continuous Integration vs Continuous Delivery vs Continuous Deployment


Developers today face increasing demands to deliver software updates and new features at a rapid pace.

Adopting modern development practices like continuous integration (CI), continuous delivery (CD), and continuous deployment can help teams meet these demands and ship software more frequently.

➡️ But what's the difference between these three approaches?

➡️ 𝗖𝗼𝗻𝘁𝗶𝗻𝘂𝗼𝘂𝘀 𝗜𝗻𝘁𝗲𝗴𝗿𝗮𝘁𝗶𝗼𝗻👇
Continuous integration is the practice of merging developer working copies to shared repositories multiple times per day.

With CI, developers frequently commit their code changes to a shared version control repository.

Each commit triggers an automated build and test process to catch integration errors as early as possible.

CI helps teams avoid "integration hell" that can happen when developers work in isolation for too long before merging their changes.


➡️ 𝗖𝗼𝗻𝘁𝗶𝗻𝘂𝗼𝘂𝘀 𝗗𝗲𝗹𝗶𝘃𝗲𝗿𝘆 👇
Continuous delivery takes CI a step further with automated releases.

CD means that at any point, you can push a button to release the latest app version to users.

The CD pipeline deploys each code change to a testing/staging environment and runs automated tests to confirm the app is production ready.

This ensures developers always have a releasable artifact that has passed tests.

While CD enables releasing often, someone still needs to manually push the button to promote changes to production.


➡️ 𝗖𝗼𝗻𝘁𝗶𝗻𝘂𝗼𝘂𝘀 𝗗𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁👇
Continuous deployment fully automates the release process.

Every code commit that passes the automated tests triggers an immediate production deployment.

This enables teams to ship features as fast as developers write code.

However, the business may not want to release daily since this could overwhelm users with constant changes.

Many teams use feature flags so developers can deploy new features, but limit their exposure until the business is ready for the public launch.

Adopting continuous integration, continuous delivery, and continuous deployment practices can accelerate a team's ability to safely deliver innovation.

The key is automating repetitive processes to limit manual errors, provide rapid feedback, and reduce risk.

This frees up developers to focus their energy on writing great code rather than building and deploying it.
The outcome is faster time-to-market and more frequent delivery of customer value.
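The core gating logic behind all three approaches can be sketched in a few lines of shell. This is a toy illustration only, not a real pipeline; `run_tests` and `deploy` are stand-in functions for what a CI system would actually execute:

```shell
#!/bin/sh
# Toy sketch of the continuous-deployment gate: a change is released
# automatically if and only if the automated tests pass.
# run_tests and deploy are illustrative stand-ins, not real pipeline steps.

run_tests() {
  # Pretend test suite; a real pipeline would run the project's tests here.
  echo "tests passed"
  return 0
}

deploy() {
  echo "deployed to production"
}

# Every commit that passes tests triggers an immediate deployment.
if run_tests >/dev/null; then
  deploy
else
  echo "deployment blocked: tests failed"
fi
```

In continuous delivery, the final `deploy` call would instead wait for a human to push the button; in continuous deployment, it runs unattended.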



✈️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!!
🔥 Most Useful DevOps/Cloud GitHub Repositories for Learning and Becoming a DevOps Engineer


1️⃣. DevOps Real-Time Projects (Beginner to Experienced): Link

2️⃣. Into The DevOps (covering every tool): Link

3️⃣. DevOps Setup & Installation Guides: Link

4️⃣. Roadmap to Learn Kubernetes the Easy Way: Link

5️⃣. Detailed List of the Best DevOps Tools: Link

6️⃣. End-to-End CI/CD Pipeline Deployment on AWS EKS: Link

7️⃣. Becoming a Kubernetes Administrator: Learning Path: Link

8️⃣. Azure All-in-One Guide: Link

9️⃣. Terraform: Deploy an EKS Cluster Like a Boss!: Link

1️⃣0️⃣. All-in-One Bundle of Kubernetes: Link

1️⃣1️⃣. Kubernetes Dashboard with Integrated Health Checks: Link

1️⃣2️⃣. AWS Billing Alert Terraform Module: Link


♥️Credits: @NotHarshhaa

❤️ Follow for more: @prodevopsguy
☄️ Project Title: Deploy a 3 Tier Architecture On AWS - End to End Project

Project Overview:
Tier 1: Presentation Layer
Create a web application using a framework like React, Angular, or Vue.js.
Host the frontend on Amazon S3 or use AWS Amplify for a serverless frontend deployment.

Tier 2: Application Layer
Develop a server-side application using a technology like Node.js, Python, or Java.
Deploy the application on AWS Elastic Beanstalk or AWS Lambda for serverless applications.
Use Amazon API Gateway for creating RESTful APIs or AWS App Runner for containerized applications.

Tier 3: Data Layer
Choose a database solution like Amazon RDS (Relational Database Service), Amazon DynamoDB (NoSQL), or Amazon Aurora (MySQL/PostgreSQL).
Configure database security groups and access controls.
Ensure data backup and redundancy as per your application's needs.
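As a hedged sketch of Tier 1, hosting a built frontend on S3 with the AWS CLI might look like this (the bucket name and build directory are placeholders; the project link below has the full walkthrough):

```shell
# Create a bucket and enable static website hosting (bucket name is illustrative).
aws s3 mb s3://my-3tier-frontend
aws s3 website s3://my-3tier-frontend --index-document index.html

# Upload the built frontend assets from the local build directory.
aws s3 sync ./build s3://my-3tier-frontend
```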

Check for full details 👇

https://github.com/NotHarshhaa/DevOps-Projects/tree/master/DevOps%20Project-01


Connect for more learning 👍
@prodevopsguy
🧑‍💻 Git/GitHub 🆓 Videos :-

〰️ https://drive.google.com/drive/folders/1vhSsxz9oAtSh136JVo3gryaDPJAYWteF?usp=sharing


✈️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!!
🪙 Openshift 🆓 Videos :-

➡️ https://drive.google.com/drive/folders/1jBbTglBbFOp4bEO08HEuhUjcE18qXuZo?usp=sharing


✈️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!!
💳 Ansible 🆓 Videos :-

➡️ https://drive.google.com/drive/folders/1p35HHSamOyL1Rta8hK5--4k1mPWYAXaV?usp=sharing


✈️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!!
Kubernetes: You need to know this 👇

When you port-forward to an nginx service,

you happily create a tunnel to a single pod 😚

kubectl port-forward svc/nginx 8080:80

Now, here's a problem:

1. What happens if the pod serving the traffic is terminated?
2. The browser returns a "refused to connect" error.

Why?

Because the tunnel is broken.

✔️ To re-establish the connection:

"You need to run the port-forward command again."
"Port forwarding is useful for testing only."
"For production use cases, always use deployments."
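A sketch of that recovery step (service name and ports taken from the post above):

```shell
# The tunnel targets a single pod; if that pod terminates,
# kubectl reports the lost connection and the tunnel dies with it.
kubectl port-forward svc/nginx 8080:80

# Re-establish the tunnel by simply running the same command again:
kubectl port-forward svc/nginx 8080:80
```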

Hope you happily learned something 😎


✈️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!!
Wishing you a blessed Makar Sankranti 🪴. May the bright colours of kites paint this day with smiles and joy for you and your loved ones.

Celebrating the festival of kites with a heart full of joy! 🪁🪁🪁


✈️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!!
Apache Kafka has become increasingly popular in recent years.

It's used by companies like Netflix, LinkedIn, and Uber to handle high-volume data streams.
🔥 I have created this handy diagram that breaks down the key concepts of Kafka in a simple and easy-to-understand way.

🔴 𝗣𝗿𝗼𝗱𝘂𝗰𝗲𝗿:
A Kafka producer is an entity that publishes data to topics within the Kafka cluster. In essence, producers are the sources of data streams, which might originate from various applications, systems, or sensors. They push records into Kafka topics, and each record consists of a key, a value, and a timestamp.


🔴 𝗖𝗼𝗻𝘀𝘂𝗺𝗲𝗿:
A Kafka consumer pulls data from Kafka topics to which it subscribes. Consumers process the data and often are part of a consumer group. In a group, multiple consumers can read from a topic in parallel, with each consumer responsible for reading from certain partitions, ensuring efficient data processing.


🔴 𝗧𝗼𝗽𝗶𝗰:
A topic is a category or feed name to which records are published. Topics in Kafka are multi-subscriber; they can be consumed by multiple consumers and consumer groups. Topics are divided into partitions to allow for data scalability and parallel processing.


🔴 𝗣𝗮𝗿𝘁𝗶𝘁𝗶𝗼𝗻:
A topic can be divided into partitions, which are essentially subsets of a topic's data. Each partition is an ordered, immutable sequence of records that is continually appended to. Partitions allow topics to be parallelized by splitting the data across multiple brokers.


🔴 𝗕𝗿𝗼𝗸𝗲𝗿:
A broker is a single Kafka server that forms part of the Kafka cluster. Brokers are responsible for maintaining the published data. Each broker may have zero or more partitions per topic and can handle data for multiple topics.


🔴 𝗖𝗹𝘂𝘀𝘁𝗲𝗿:
A Kafka cluster comprises one or more brokers. The cluster is the physical grouping of one or more brokers that work together to provide scalability, fault tolerance, and load balancing. The Kafka cluster manages the persistence and replication of message data.


🔴 𝗥𝗲𝗽𝗹𝗶𝗰𝗮:
A replica is a copy of a partition. Kafka replicates partitions across multiple brokers to ensure data is not lost if a broker fails. Replicas are classified as either leader replicas or follower replicas.


🔴 𝗟𝗲𝗮𝗱𝗲𝗿 𝗥𝗲𝗽𝗹𝗶𝗰𝗮:
For each partition, one broker is designated as the leader. The leader replica handles all read and write requests for the partition. Other replicas simply copy the data from the leader.


🔴 𝗙𝗼𝗹𝗹𝗼𝘄𝗲𝗿 𝗥𝗲𝗽𝗹𝗶𝗰𝗮:
Follower replicas are copies of the leader replica for a partition. They replicate the leader's log and do not serve client requests. Instead, their purpose is to provide redundancy and to take over as the leader if the current leader fails.
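These concepts show up directly in the Kafka CLI. For example, creating a topic with three partitions, each replicated across two brokers (the topic name and broker address are placeholders):

```shell
# Create a topic with 3 partitions and 2 replicas per partition.
kafka-topics.sh --create \
  --topic orders \
  --partitions 3 \
  --replication-factor 2 \
  --bootstrap-server localhost:9092

# Inspect which broker leads each partition and where the followers live.
kafka-topics.sh --describe --topic orders --bootstrap-server localhost:9092
```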



✈️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!!
🚀 𝗡𝗼𝗱𝗲𝗣𝗼𝗿𝘁 𝘃𝘀 𝗟𝗼𝗮𝗱𝗕𝗮𝗹𝗮𝗻𝗰𝗲𝗿 - 𝗠𝗮𝗸𝗶𝗻𝗴 𝘁𝗵𝗲 𝗥𝗶𝗴𝗵𝘁 𝗖𝗵𝗼𝗶𝗰𝗲 🚀

Navigating Kubernetes services? Understanding when to use NodePort 🆚 LoadBalancer is crucial!

🔖 NodePort is your go-to for development, testing, or smaller-scale environments. It's simple and universal, exposing services on each node's IP at a specific port. It is ideal when external load balancers are overkill.
🔖 LoadBalancer steps in for production-grade needs, especially in cloud environments. It leverages cloud-provider capabilities for robust load balancing, offering advanced features like SSL termination and consistent external IPs.

💡 Choose wisely:
- NodePort for simplicity and cost-effectiveness.
- LoadBalancer for scalability and advanced features.

🌐 Whether you're a DevOps pro or a Kubernetes newcomer, making the right choice between NodePort and LoadBalancer can streamline your deployments and optimize resource usage.
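In manifest terms, the choice is a single field. Illustrative Service definitions (names, ports, and selector are made up for the example):

```yaml
# NodePort: exposes the service on every node's IP at a fixed port.
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport          # hypothetical name
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080         # must fall in the cluster's NodePort range (default 30000-32767)
---
# LoadBalancer: the cloud provider provisions an external load balancer.
apiVersion: v1
kind: Service
metadata:
  name: web-lb                # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```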


✈️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!!
🟥 75+ DevOps & Cloud Documents 📇 Uploaded

Here to Here


✉️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!!