Hello engineers, I hope you're having a great day! Here is part two of our Kubernetes recipes. Grab your coffee!
Resources:
Infrastructure Components
Dive deep into your Kubernetes pods with this nifty command! It elegantly extracts and displays the name and status of each pod in your current namespace. Perfect for a quick status check or for integrating into your monitoring scripts.
Embrace the power of JSONPath with kubectl to tailor your Kubernetes data exactly how you need it. The possibilities are endless!
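The original one-liner was shared as an image, so here is a reconstruction of the kind of command the post describes: it prints each pod's name and status (phase) in the current namespace.

```shell
# List the name and status (phase) of every pod in the current namespace
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'

# An equivalent using custom columns, often easier to read:
kubectl get pods -o custom-columns=NAME:.metadata.name,STATUS:.status.phase
```

Both forms need a live cluster; they're handy on their own or piped into monitoring scripts.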
Here's your typical Docker Workflow 🐳
If you understand this, you understand enough to accomplish 80% of your Docker tasks.
1⃣ After developing your application, create a Dockerfile to capture all the assets: code, executables & dependencies.
2⃣ Use “docker build” to build an Image from your Dockerfile. You’d normally also use the “--tag” option to give your Image a name & tag (e.g. “hello_world:latest”).
3⃣ At this point, Docker pulls the Base Image (e.g. Alpine, Ubuntu) from a Registry (Docker Hub by default). If you’re using a private registry instead, this step might perform authentication as well.
4⃣ Run a Container from your newly baked Image using “docker run”. A container goes through various states throughout its lifecycle, depending on the processes running inside it and what you do with it from outside.
5⃣ Your image is now ready to be distributed to other users, so “docker push” it to the registry.
6⃣ Continuously monitor the performance of your container(s) using “docker stats”. Debug a live container using “docker exec” and “docker inspect”.
7⃣ Get back to building 🚀
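The whole workflow above as a command sketch — the image name “hello_world” comes from the post, while the registry path and port mapping are placeholders:

```shell
# 1. Build an image from the Dockerfile in the current directory
docker build --tag hello_world:latest .

# 2. Run a container from the freshly built image, mapping host port 8080
docker run --detach --publish 8080:80 --name hello hello_world:latest

# 3. Tag with your registry/namespace, then push so others can pull it
docker tag hello_world:latest docker.io/youruser/hello_world:latest
docker push docker.io/youruser/hello_world:latest

# 4. Monitor and debug live containers
docker stats --no-stream
docker exec -it hello sh
docker inspect hello
```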
✈️ 𝗙𝗼𝗹𝗹𝗼𝘄 @prodevopsguy 𝗳𝗼𝗿 𝗺𝗼𝗿𝗲 𝘀𝘂𝗰𝗵 𝗰𝗼𝗻𝘁𝗲𝗻𝘁 𝗮𝗿𝗼𝘂𝗻𝗱 𝗰𝗹𝗼𝘂𝗱 & 𝗗𝗲𝘃𝗢𝗽𝘀!!!
Continuous Integration vs Continuous Delivery vs Continuous Deployment
✅ Developers today face increasing demands to deliver software updates and new features at a rapid pace.
Adopting modern development practices like continuous integration (CI), continuous delivery (CD), and continuous deployment can help teams meet these demands and ship software more frequently.
➡️ But what's the difference between these three approaches?
➡️ 𝗖𝗼𝗻𝘁𝗶𝗻𝘂𝗼𝘂𝘀 𝗜𝗻𝘁𝗲𝗴𝗿𝗮𝘁𝗶𝗼𝗻👇
➡️ 𝗖𝗼𝗻𝘁𝗶𝗻𝘂𝗼𝘂𝘀 𝗗𝗲𝗹𝗶𝘃𝗲𝗿𝘆 👇
➡️ 𝗖𝗼𝗻𝘁𝗶𝗻𝘂𝗼𝘂𝘀 𝗗𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁👇
Continuous integration is the practice of merging developer working copies to shared repositories multiple times per day.
With CI, developers frequently commit their code changes to a shared version control repository.
Each commit triggers an automated build and test process to catch integration errors as early as possible.
CI helps teams avoid "integration hell" that can happen when developers work in isolation for too long before merging their changes.
Continuous delivery takes CI a step further with automated releases.
CD means that at any point, you can push a button to release the latest app version to users.
The CD pipeline deploys each code change to a testing/staging environment and runs automated tests to confirm the app is production-ready.
This ensures developers always have a releasable artifact that has passed tests.
While CD enables releasing often, someone still needs to manually push the button to promote changes to production.
Continuous deployment fully automates the release process.
Every code commit that passes the automated tests triggers an immediate production deployment.
This enables teams to ship features as fast as developers write code.
However, the business may not want to release daily since this could overwhelm users with constant changes.
Many teams use feature flags so developers can deploy new features, but limit their exposure until the business is ready for the public launch.
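A feature flag can be as small as an environment-variable check. A minimal sketch — the flag name ENABLE_NEW_CHECKOUT and the flows are made up for illustration:

```shell
#!/usr/bin/env sh
# The new code is deployed everywhere, but an env-var flag gates exposure.
checkout_flow() {
  if [ "${ENABLE_NEW_CHECKOUT:-false}" = "true" ]; then
    echo "new checkout flow"
  else
    echo "legacy checkout flow"
  fi
}

checkout_flow                                 # flag off by default: old path
( ENABLE_NEW_CHECKOUT=true; checkout_flow )   # flag on: new path, no redeploy
```

Flipping the flag changes behavior without shipping a new build, which is exactly how deploys get decoupled from launches.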
Adopting continuous integration, continuous delivery, and continuous deployment practices can accelerate a team's ability to safely deliver innovation.
The key is automating repetitive processes to limit manual errors, provide rapid feedback, and reduce risk.
This frees up developers to focus their energy on writing great code rather than building and deploying it.
The outcome is faster time-to-market and more frequent delivery of customer value.
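The three practices differ only in how far automation goes. A toy pipeline sketch — the stage functions are stand-ins for real build, test, and deploy commands:

```shell
#!/usr/bin/env sh
set -e  # any failing stage stops the pipeline

run_tests()      { echo "tests passed"; }      # CI: build + test every commit
build_artifact() { echo "artifact built"; }    # CI: produce a releasable artifact
deploy()         { echo "deployed to $1"; }    # e.g. kubectl apply / docker push

run_tests
build_artifact
deploy staging        # continuous delivery: staging deploy is automatic
# deploy production   # continuous deployment: uncomment to automate this too;
                      # with plain continuous delivery a human triggers it
```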
Project Overview:
Check the link below for full details:
https://github.com/NotHarshhaa/DevOps-Projects/tree/master/DevOps%20Project-01
@prodevopsguy
Kubernetes: You need to know this 👇
When you port-forward to an nginx service,
you happily create a tunnel to a single pod😚
kubectl port-forward svc/nginx 8080:80
❌ Now, here's a problem:
1. What happens if the traffic-serving pod is terminated?
2. The browser returns a "refused to connect" error.
Why?
Because the tunnel is broken.
✔️ To re-establish the connection:
"You need to run the port-forward command again."
"Port forwarding is useful for testing only."
"For production use cases, always use deployments"
Hope you happily learned something😎
Wishing you a blessed Makar Sankranti 🪴 . May the bright colours of kites paint this day with smiles and joy for you and your loved ones.
Celebrating the festival of kites with a heart full of joy!🪁 🪁 🪁
Apache Kafka has become increasingly popular in recent years.
It's used by companies like Netflix, LinkedIn, and Uber to handle high-volume data streams.
A Kafka producer is an entity that publishes data to topics within the Kafka cluster. In essence, producers are the sources of data streams, which might originate from various applications, systems, or sensors. They push records into Kafka topics, and each record consists of a key, a value, and a timestamp.
A Kafka consumer pulls data from Kafka topics to which it subscribes. Consumers process the data and often are part of a consumer group. In a group, multiple consumers can read from a topic in parallel, with each consumer responsible for reading from certain partitions, ensuring efficient data processing.
A topic is a category or feed name to which records are published. Topics in Kafka are multi-subscriber; they can be consumed by multiple consumers and consumer groups. Topics are divided into partitions to allow for data scalability and parallel processing.
A topic can be divided into partitions, which are essentially subsets of a topic's data. Each partition is an ordered, immutable sequence of records that is continually appended to. Partitions allow topics to be parallelized by splitting the data across multiple brokers.
A broker is a single Kafka server that forms part of the Kafka cluster. Brokers are responsible for maintaining the published data. Each broker may have zero or more partitions per topic and can handle data for multiple topics.
A Kafka cluster comprises one or more brokers. The cluster is the physical grouping of one or more brokers that work together to provide scalability, fault tolerance, and load balancing. The Kafka cluster manages the persistence and replication of message data.
A replica is a copy of a partition. Kafka replicates partitions across multiple brokers to ensure data is not lost if a broker fails. Replicas are classified as either leader replicas or follower replicas.
For each partition, one broker is designated as the leader. The leader replica handles all read and write requests for the partition. Other replicas simply copy the data from the leader.
Follower replicas are copies of the leader replica for a partition. They replicate the leader's log and do not serve client requests. Instead, their purpose is to provide redundancy and to take over as the leader if the current leader fails.
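The pieces above map directly onto the stock Kafka CLI tools. A sketch against a broker on localhost — the topic name "orders" and group "billing" are just examples:

```shell
# Create a topic with 3 partitions, each replicated to 2 brokers
kafka-topics.sh --create --topic orders \
  --partitions 3 --replication-factor 2 \
  --bootstrap-server localhost:9092

# Inspect partitions, their leaders, and follower replicas
kafka-topics.sh --describe --topic orders --bootstrap-server localhost:9092

# A producer publishes records; a consumer in a group reads them in parallel
kafka-console-producer.sh --topic orders --bootstrap-server localhost:9092
kafka-console-consumer.sh --topic orders --group billing \
  --from-beginning --bootstrap-server localhost:9092
```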
Navigating Kubernetes services? Here's when to use NodePort and when to use LoadBalancer:
- NodePort for simplicity and cost-effectiveness.
- LoadBalancer for scalability and advanced features.
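Exposing the same workload both ways — a sketch assuming a deployment named "web" already exists:

```shell
# NodePort: opens a port (default range 30000-32767) on every node
# -- simple, works anywhere, no cloud load-balancer cost
kubectl expose deployment web --port=80 --type=NodePort --name=web-nodeport

# LoadBalancer: asks the cloud provider for an external load balancer
kubectl expose deployment web --port=80 --type=LoadBalancer --name=web-lb

# Compare the assigned node port vs. the external IP
kubectl get svc web-nodeport web-lb
```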