In the world of cloud computing, infrastructure as code (IaC) plays a pivotal role in automating the deployment and management of resources. This blog post provides a step-by-step guide to creating a two-tier architecture on AWS using Terraform. We'll explore the essential services involved, ensuring high availability, security, and scalability for hosting a static website.
We also adopt a modular approach with enhanced security measures: the infrastructure is organized into dedicated modules, ensuring a scalable, maintainable, and secure deployment.
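As a rough sketch of what that modular layout can look like, here is a hypothetical root module wiring together network, web, and database modules. Module paths, variable names, and outputs are illustrative assumptions, not a definitive implementation:

# main.tf: hypothetical root module for a two-tier architecture
module "network" {
  source             = "./modules/network" # VPC, public/private subnets, routing
  vpc_cidr           = "10.0.0.0/16"
  availability_zones = ["us-east-1a", "us-east-1b"] # two AZs for high availability
}

module "web" {
  source            = "./modules/web" # web tier: EC2/Auto Scaling behind a load balancer
  subnet_ids        = module.network.public_subnet_ids
  security_group_id = module.network.web_sg_id
}

module "database" {
  source     = "./modules/database" # data tier: RDS in private subnets
  subnet_ids = module.network.private_subnet_ids
}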
Tools and frameworks will change. Fundamentals remain intact.
Without fundamentals, you are incomplete.
What is Ansible →
➡️ Ansible is a DevOps tool similar to Chef: a configuration management tool. Let's begin with a story → suppose you have a big organisation with hundreds of servers, and a task comes in to install Git on all of them. The person responsible is the system administrator, who does it manually, one server at a time, which takes a lot of time…
Fortunately, there is a tool for exactly this, and that tool is Ansible → a configuration management tool.
➡️ But first you need to connect all the nodes to the Ansible server, which is done manually; after that, you can automate everything.
➡️ Configuration management → a method through which we automate admin tasks.
➡️ It automates the tasks that the system administrator would otherwise do manually.
Configuration management tools come in two types →
➡️ Pull based → the nodes periodically check the main server for updates; if an update is available, it is installed automatically on the nodes connected to the server. Chef and Puppet are pull-based tools.
➡️ Push based → the nodes do not go to the main server for updates; instead, the server pushes updates out to the nodes, much like your phone's Play Store pushes app updates and leaves it to you whether to install. Ansible is push based: when you need control in your hands, you decide when your servers get updated.
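To make the Git story above concrete, here is a minimal sketch of an Ansible setup, assuming an inventory file that lists the servers (hostnames and file names are hypothetical):

# inventory: hypothetical list of managed nodes
[webservers]
server01.example.com
server02.example.com

# install_git.yml: playbook that installs Git on every host in "webservers"
- name: Install Git across all servers
  hosts: webservers
  become: true # run tasks with sudo
  tasks:
    - name: Ensure git is present
      ansible.builtin.package:
        name: git
        state: present

Run it with: ansible-playbook -i inventory install_git.yml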
History of Ansible →
➡️ Michael DeHaan developed Ansible in February 2012.
➡️ Red Hat acquired Ansible in 2015.
➡️ Ansible is available for RHEL, Debian, CentOS, and Oracle Linux.
➡️ It is developed in Python; Windows hosts are managed through PowerShell.
➡️ You can use this tool whether your servers are on-premises or in the cloud.
➡️ It converts your code into infrastructure, so you could call it a bit of an infrastructure-building tool.
List of free courses to transform your skills: dive into the world of DevOps & backend with these must-try free courses.
1. Python:
A scripting language used for automation in DevOps.
🔗 https://lnkd.in/gTEsX2VC
2. Git:
Distributed version control system that handles everything from small to very large projects with speed and efficiency.
🔗 https://lnkd.in/gFTyTWCC
3. Cloud:
It's fair to say the rapid rise of startups has been revolutionized by cloud technology.
🔗 https://lnkd.in/gf6_8RNG
4. Microservices:
An approach to building loosely coupled applications.
🔗 https://lnkd.in/gYqdHCdF
5. Serverless:
Allows developers to build and run applications without worrying about servers.
🔗 https://lnkd.in/g8knM8uE
6. Linux:
Probably the most famous minimal yet secure OS in use.
🔗 https://lnkd.in/ghmZybpz
7. DevOps:
An exploding domain to learn (an ecosystem that takes care of continuous integration, delivery, deployment, and monitoring).
🔗 https://lnkd.in/g6ryYv8N
8. Docker:
Packages an application along with the dependencies and libraries required to run it.
🔗 https://lnkd.in/ggaqmu8p
9. Kubernetes, OpenShift:
Manage the deployment of an application and have autoscaling and autohealing capabilities.
🔗 https://lnkd.in/gsKYTciW
10. MySQL:
Relational database management system.
🔗 https://lnkd.in/gbmjQcsD
Linux's file system is tree-like. The base is "/", with everything else branching off.
/bin # Essential user command binaries
/boot # Boot loader files and the kernel
/dev # Device files
/etc # System-wide configuration files
/home # Users' home directories
/lib # Shared libraries needed by /bin and /sbin
/media # Mount points for removable media
/mnt # Temporary mount points
/opt # Optional add-on application software
/proc # Virtual filesystem exposing process and kernel information
/root # Home directory of the root user
/sbin # System administration binaries
/srv # Data served by the system (web, FTP, etc.)
/tmp # Temporary files
/usr # User programs, libraries, and documentation
/var # Variable data such as logs, caches, and mail spools
Basic commands for navigating and managing this tree:
cd # Change the current directory
ls # List directory contents
mkdir # Create a directory
rmdir # Remove an empty directory
cp # Copy files and directories
mv # Move or rename files and directories
rm # Remove files
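A short hypothetical session tying these commands together:

mkdir demo # create a directory
cd demo # move into it
cp /etc/hostname . # copy a file into the current directory
mv hostname host.txt # rename (move) the file
ls # list the contents
rm host.txt # delete the file
cd .. # go back up one level
rmdir demo # remove the now-empty directory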
While "CI/CD" gets thrown around a lot, it actually refers to two separate practices that work together in the software development lifecycle: Continuous Integration (CI) and Continuous Delivery/Deployment (CD).
Here's a quick breakdown:
- Continuous Integration (CI): developers merge changes frequently, and every merge triggers an automated build and test run to catch problems early.
- Continuous Delivery (CD): every change that passes CI is packaged and kept in a releasable state; pushing to production remains a manual decision.
- Continuous Deployment (CD): goes one step further, releasing every change that passes the pipeline to production automatically.
Here's the key difference: Continuous Delivery keeps a human approval step before production, while Continuous Deployment removes that gate entirely.
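As a sketch of how the two halves fit together in practice, here is a hypothetical pipeline in GitHub Actions syntax (job names, build targets, and the deploy script are illustrative assumptions):

# .github/workflows/ci-cd.yml: hypothetical pipeline
name: ci-cd
on:
  push:
    branches: [main]

jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and test # CI: every merge is built and tested automatically
        run: make build && make test # hypothetical make targets

  cd:
    needs: ci # CD: only changes that pass CI move on
    runs-on: ubuntu-latest
    environment: production # approval gate = Continuous Delivery; drop it for Continuous Deployment
    steps:
      - uses: actions/checkout@v4
      - name: Deploy
        run: ./deploy.sh # hypothetical deploy script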
May this Holi be filled with fun, joy and love. Happy Holi 2024!
We are seeking a highly motivated and enthusiastic individual to join our team as an AWS DevOps Engineer.
Skills: AWS, Docker, Kubernetes, Ansible, Linux, Shell Scripting, Python, Jenkins, CI/CD, Git/GitHub.
Most important AWS services you should learn to be a DevOps Engineer.
Terraform best practices at a glance:

Version control:
- Store all your Terraform code in a version-controlled repository.
- Enables collaboration and tracking changes over time.

Configuration files:
- Written in HashiCorp Configuration Language (HCL).
- Define the desired state of your infrastructure.

State file:
- Tracks the current state of your infrastructure.
- Stores information about resources managed by Terraform.

Providers:
- Extend Terraform's capabilities to manage different cloud providers.
- Allow seamless integration with AWS, Azure, Google Cloud, etc.

Modules:
- Encapsulate reusable infrastructure components.
- Promote modularity and maintainability in Terraform code.

Remote backends:
- Store Terraform state remotely for better collaboration (see the sketch after this list).
- Provide locking mechanisms to prevent concurrent modifications.

CI/CD integration:
- Automate the process of building, testing, and deploying infrastructure changes.
- Ensures consistency and reliability in infrastructure deployments.

Plan and validate:
- Validate infrastructure changes before applying them.
- Helps prevent misconfigurations and downtime.

Secrets management:
- Securely manage sensitive information like API keys and passwords.
- Integrates with tools like HashiCorp Vault or AWS Secrets Manager.

Monitoring and logging:
- Monitor the health and performance of your infrastructure.
- Capture and analyze logs to troubleshoot issues efficiently.

Policy and compliance:
- Enforce compliance policies and regulatory requirements.
- Automate governance tasks to maintain security and compliance standards.

Collaboration and documentation:
- Facilitate communication and knowledge sharing among team members.
- Document infrastructure changes, decisions, and best practices.
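As one concrete illustration of the remote-backend point above, here is a minimal sketch of an S3 backend with DynamoDB state locking (bucket, key, and table names are placeholders):

terraform {
  backend "s3" {
    bucket         = "my-terraform-state" # placeholder bucket name
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks" # DynamoDB table used for state locking
    encrypt        = true
  }
}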
Kubernetes networking is a critical aspect of managing containerized applications in a distributed environment. It ensures that containers within a Kubernetes cluster can communicate with each other, with external users, and with other services smoothly.
Let's explore the key concepts and components of Kubernetes networking:
Pod networking:
- Containers in a Pod share the same network namespace and can communicate via localhost.
- Kubernetes assigns each Pod a unique IP address for inter-node communication.

Services:
- Services provide stable endpoints for accessing Pods.
- ClusterIP, NodePort, and LoadBalancer are common Service types for internal and external access (a minimal example follows below).

Ingress:
- Ingress manages external access to Services based on HTTP/HTTPS rules.
- Ingress controllers handle traffic routing to Services within the cluster.

Network Policies:
- A NetworkPolicy defines rules for Pod-to-Pod communication and access to external resources.
- It enables fine-grained control over network traffic within the cluster.

Container Network Interface (CNI):
- A standard for defining plugins that handle networking in container runtimes.
- Used by Kubernetes to manage network interfaces and IP addresses.

Kube-Proxy and DNS:
- Kube-Proxy manages network rules for routing traffic to Services.
- CoreDNS resolves DNS queries for Kubernetes Services and Pods.
Understanding Kubernetes networking is essential for deploying and managing containerized applications effectively within a Kubernetes cluster.
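To ground a couple of these pieces, here is a minimal sketch of a ClusterIP Service fronting hypothetical Pods labeled app=web, plus an Ingress routing external HTTP traffic to it (names and the hostname are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web # matches Pods labeled app=web
  ports:
    - port: 80 # Service port
      targetPort: 8080 # container port on the Pods
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
    - host: example.com # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80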
To ace your interviews for a DevOps Engineer role, you should be aware of the following Linux commands covering hardware information, data manipulation, process management, file management, search patterns, and package management:
1. HARDWARE INFORMATION:
cat /proc/cpuinfo # Display CPU information
cat /proc/meminfo # Display memory information
free -h # Display free and used memory (-h for human-readable, -m for MB, -g for GB)
lspci -tv # Display PCI devices
lsusb -tv # Display USB devices
dmidecode # Display DMI/SMBIOS hardware info from the BIOS
hdparm -i /dev/sda # Show info about disk sda
hdparm -tT /dev/sda # Perform a read speed test on disk sda
2. MANIPULATING DATA:
awk # Pattern scanning and processing language
perl # Data manipulation language
cmp # Compare the contents of two files
paste # Merge file data
sed # Stream text editor
cut # Cut out selected fields of each line of a file
sort # Sort file data
diff # Differential file comparator
split # Split a file into smaller files
expand, unexpand # Expand tabs to spaces, and vice versa
3. PROCESS MANAGEMENT:
ps # Display your currently running processes
ps -ef # Display all currently running processes on the system
ps -ef | grep processname # Display process information for processname
top # Display and manage the top processes
htop # Interactive process viewer (top alternative)
kill pid # Kill the process with process ID pid
killall processname # Kill all processes named processname
program & # Start program in the background
bg # Display stopped or background jobs
fg # Bring the most recent background job to the foreground
fg n # Bring job n to the foreground
4. ARCHIVES (TAR FILES):
tar cf archive.tar directory # Create a tar file named archive.tar containing directory
tar xf archive.tar # Extract the contents of archive.tar
tar czf archive.tar.gz directory # Create a gzip-compressed tar file named archive.tar.gz
tar xzf archive.tar.gz # Extract a gzip-compressed tar file
tar cjf archive.tar.bz2 directory # Create a tar file with bzip2 compression
tar xjf archive.tar.bz2 # Extract a bzip2-compressed tar file
5. INSTALLING PACKAGES:
yum search keyword # Search for a package by keyword
yum install package # Install package
yum info package # Display description and summary information about package
rpm -i package.rpm # Install a package from the local file package.rpm
yum remove package # Remove/uninstall package
tar zxvf sourcecode.tar.gz # Extract source code in order to build and install it
6. SEARCH:
grep pattern file # Search for pattern in file
grep -r pattern directory # Search recursively for pattern in directory
locate name # Find files and directories by name
find /home/john -name 'prefix*' # Find files in /home/john that start with "prefix"
find /home -size +100M # Find files larger than 100 MB in /home
Benefits of Docker multi-stage builds:
- Optimize the overall size of the Docker image.
- Remove the burden of maintaining multiple Dockerfiles for different stages.
- Make it easy to debug a particular build stage.
- Let a previous stage be used as the base of a new stage.
- Allow cached stages to be reused, making the overall build quicker.
- Reduce the risk of vulnerabilities, since the final image is smaller.
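A minimal sketch of what such a multi-stage Dockerfile can look like, assuming a Go application (image tags and paths are illustrative):

# Stage 1: build the binary with the full toolchain
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app . # static binary so it runs on the minimal base image

# Stage 2: copy only the binary into a small runtime image
FROM alpine:3.19
COPY --from=build /app /app
ENTRYPOINT ["/app"]

Only the final stage ends up in the shipped image, which is where the size and vulnerability-surface savings come from.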
In Kubernetes, readiness and liveness probes are mechanisms used to ensure that applications running inside containers are healthy and able to handle traffic properly. They are essential for maintaining the reliability and availability of applications in a Kubernetes cluster.
📢 Readiness Probe:
🎤 The readiness probe is used to determine when a container is ready to start accepting traffic. It is crucial for ensuring that services don't send requests to a container until it's ready to handle them effectively.
🎤 If the readiness probe fails (returns a non-successful HTTP status code or times out), Kubernetes marks the container as not ready, and it won't receive any traffic until the probe succeeds.
🎤 The readiness probe can be configured to use HTTP endpoints, TCP sockets, or custom scripts to determine the readiness of the container.
📢 Liveness Probe:
🎤 The liveness probe is used to check if a container is still running properly. It helps Kubernetes determine whether a container should be restarted if it's unresponsive or in a failed state.
🎤 Unlike the readiness probe, which determines if the container is ready to serve traffic, the liveness probe checks if the container is still functioning correctly after it has started.
🎤 If the liveness probe fails (returns a non-successful HTTP status code or times out), Kubernetes restarts the container to try to recover it.
⚠️ Both the readiness and liveness probes are configured to perform an HTTP GET request to the /healthz endpoint on port 8080 of the container.
➡️ The readiness probe will start 5 seconds after the container starts and will be performed every 10 seconds thereafter.
➡️ The liveness probe will start 10 seconds after the container starts and will be performed every 15 seconds thereafter.
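Putting the described settings together, the probe configuration would look roughly like this (Pod name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: example/app:latest # placeholder image
      ports:
        - containerPort: 8080
      readinessProbe:
        httpGet:
          path: /healthz # endpoint described above
          port: 8080
        initialDelaySeconds: 5 # start 5s after the container starts
        periodSeconds: 10 # then check every 10s
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 10 # start 10s after the container starts
        periodSeconds: 15 # then check every 15s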
Python is a versatile skill that can take you far, and it's known for being one of the easiest programming languages to learn and understand.
Here's a roadmap to help you master Python:
A container runtime in Kubernetes is the software component responsible for managing the lifecycle of individual containers within a Pod. It's the engine that executes the commands and manages the processes within the container environment. Common container runtimes include:
- containerd
- CRI-O
- Docker Engine
- Mirantis Container Runtime