Today we look at Kubernetes from a bird's-eye view.
1. Kubernetes is a container orchestrator: it handles the scheduling, running, and recovery of your containerised applications in a horizontally scalable, self-healing way.
2. Control plane - where the K8s system processes live that schedule the workloads you define and keep the system healthy.
3. Worker nodes - where your containers are scheduled and run.
4. A cluster can have thousands of nodes (usually tens are enough), each hosting multiple containers. Nodes can be added to or removed from the cluster as needed, which enables unrivalled horizontal scalability.
5. Kubernetes provides an easy-to-understand declarative interface for deploying applications: describe your application deployment in YAML, submit it to the cluster, and the system takes care that the actual state always matches the desired state.
6. Users are empowered to create and own their application architecture within boundaries pre-defined by cluster administrators.
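As an illustration of point 5, a minimal deployment definition might look like this (names and image are placeholders, not from any specific setup):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # illustrative name
spec:
  replicas: 3             # desired state: three identical pods
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.27   # placeholder image
```

Submitted with kubectl apply -f deployment.yaml, the control plane then continuously reconciles the cluster to keep three replicas running.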
Are endless manual deployments and sluggish release cycles holding your team back? You're not alone. But fear not, there's a solution that can turn your development process into a well-oiled machine: CI/CD (Continuous Integration/Continuous Delivery).
The benefits: faster and more reliable releases, earlier bug detection, and far less manual toil.
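As an illustration, a minimal CI pipeline might look like this (GitHub Actions syntax; the workflow name and test command are placeholders):

```yaml
# .github/workflows/ci.yml - hypothetical minimal pipeline
name: ci
on: [push]            # run on every push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: make test   # replace with your own build/test command
```

Every push triggers the same automated build and test, which is exactly the repeatability manual deployments lack.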
1. Check Pod Status: To assess the health and status of your pods within a specific namespace, use the kubectl get pods command.
->
kubectl get pods -n <namespace>

2. Review Pod Logs: To review the logs of a specific pod, which can be invaluable for troubleshooting, use the kubectl logs command.
->
kubectl logs <pod-name> -n <namespace>

3. Use kubectl describe: For a comprehensive overview of a pod's configuration and events, kubectl describe is invaluable.
->
kubectl describe pod <pod-name> -n <namespace>

4. Check for Resource Constraints: Resource constraints can cause pods to fail to start or run properly. Use kubectl describe nodes to identify resource allocation and availability.
->
kubectl describe nodes

5. Examine Liveness and Readiness Probes: Liveness and readiness probes determine the health of a pod. Misconfigurations can cause pods to be killed or to receive no traffic. Define probes in your pod or deployment YAML.

6. Debugging with kubectl exec: kubectl exec lets you execute commands inside a container, which can be useful for debugging.
->
kubectl exec -it <pod-name> -- /bin/sh

7. Inspect Kubernetes Events: Kubernetes events provide insight into what's happening in the cluster. Use kubectl get events to retrieve them.
->
kubectl get events --sort-by='.metadata.creationTimestamp' -n <namespace>

8. Verify Service and Ingress Configurations: Services and Ingresses are key to exposing Kubernetes applications; misconfigurations can leave services unreachable. Use kubectl get to inspect these resources.
->
kubectl get svc,ingress -n <namespace>

9. Analyze Network Policies: Network Policies define how pods communicate with each other and the outside world. Use kubectl get to list active network policies.
->
kubectl get networkpolicy -n <namespace>

10. Check for ImagePullBackOff Errors: ImagePullBackOff indicates Kubernetes is unable to pull a container image. Describe the pod to see the error details.
->
kubectl describe pod <pod-name> -n <namespace>

11. Utilize the Kubernetes Dashboard: The Kubernetes Dashboard provides a web-based UI for managing cluster resources. Install or access it to visually inspect resources, view logs, and manage workloads.
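For step 5, a minimal probe sketch inside a container spec might look like this (endpoint paths and the port are illustrative, not from any specific application):

```yaml
# fragment of a container spec - /healthz, /ready and 8080 are placeholders
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10   # give the app time to start before probing
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  periodSeconds: 5          # traffic is routed only while this succeeds
```

A failing liveness probe restarts the container; a failing readiness probe only removes the pod from Service endpoints.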
A StatefulSet in Kubernetes is a workload API object used to manage stateful applications. Unlike a Deployment, which is suitable for stateless workloads, a StatefulSet is designed for applications that require stable, unique identifiers and persistent storage.
Key features of StatefulSets include stable pod names (pod-0, pod-1, ...), ordered deployment and scaling, and per-pod persistent volumes that survive rescheduling.
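A minimal StatefulSet sketch might look like this (names, image, and storage size are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db               # illustrative name
spec:
  serviceName: db        # headless Service that gives each pod a stable DNS name
  replicas: 3            # pods are created in order: db-0, db-1, db-2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:16   # placeholder image
  volumeClaimTemplates:        # one PersistentVolumeClaim per pod
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

Unlike a Deployment, each replica keeps its own volume and identity across restarts and rescheduling.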
Securely accessing private resources hosted on AWS is paramount. In this post, we'll explore how to securely access multiple private RDS instances, EKS, EC2, etc. on AWS using SSH tunneling.
To streamline the process, we can leverage SSH config files. Below is an example SSH config file tailored for accessing multiple private instances through a bastion host:
cat ~/.ssh/config
Host rds_tunnel_combined
    User ubuntu
    Hostname <instance id of bastion host>
    IdentityFile ~/.ssh/bastion-host.pem
    RequestTTY no
    ProxyCommand sh -c "aws ssm start-session --profile aws-profile --region us-east-1 --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
    LocalForward 5431 dev-db-hostname:5432
    LocalForward 5432 qa-db-hostname:5432
    LocalForward 5433 eks-endpoint:443
Each LocalForward rule maps a local port on your machine to a specific remote endpoint:
Port 5431 forwards to the development database.
Port 5432 forwards to the QA database.
Port 5433 forwards to the EKS cluster.
Using the provided SSH config is straightforward: save the configuration to your ~/.ssh/config file, replace ~/.ssh/bastion-host.pem with the path to your SSH private key, and fill in the bastion host's instance ID. Once configured, you can open all the tunnels with a single command: ssh rds_tunnel_combined. The benefits:
Enhanced security: All connections are encrypted, minimizing the risk of data interception.
Simplified access: Users can easily connect to multiple RDS instances with a single command.
Flexible configuration: The SSH config file allows for easy customization of port forwarding rules to suit different use cases.
1. ls: List directory contents
2. cd: Change directory
3. pwd: Print working directory
4. mkdir: Create a directory
5. touch: Create a file
6. cp: Copy files and directories
7. mv: Move or rename files and directories
8. rm: Remove files and directories
9. find: Search for files and directories
10. grep: Search for patterns in files
11. cat: Concatenate and display files
12. less: View file contents page by page
13. head: Display the first lines of a file
14. tail: Display the last lines of a file
15. vi/vim: Text editor
16. nano: Text editor
17. tar: Archive and compress files
18. gzip: Compress files
19. gunzip: Decompress files
20. wget: Download files from the web
21. curl: Transfer data to or from a server
22. ssh: Secure shell remote login
23. scp: Securely copy files between hosts
24. chmod: Change file permissions
25. chown: Change file ownership
26. chgrp: Change group ownership
27. ps: Display running processes
28. top: Monitor system resources and processes
29. kill: Terminate processes
30. df: Display disk space usage
31. du: Estimate file and directory space usage
32. free: Display memory usage
33. uname: Print system information
34. ifconfig: Configure network interfaces
35. ping: Test network connectivity
36. netstat: Network statistics
37. iptables: Firewall administration
38. systemctl: Manage system services
39. journalctl: Query the system journal
40. crontab: Schedule cron jobs
41. useradd: Create a user account
42. passwd: Change user password
43. su: Switch user
44. sudo: Execute a command as another user
45. usermod: Modify user account
46. groupadd: Create a group
47. groupmod: Modify a group
48. id: Print user and group information
49. ssh-keygen: Generate SSH key pairs
50. rsync: Synchronize files and directories
51. diff: Compare files line by line
52. patch: Apply a patch to files
53. tar: Extract files from an archive
54. curl: Perform HTTP requests
55. nc: Netcat - networking utility
56. wget: Download files from the web
57. whois: Lookup domain registration details
58. dig: DNS lookup utility
59. sed: Stream editor for text manipulation
60. awk: Pattern scanning and processing language
61. sort: Sort lines in a text file
62. cut: Extract sections from lines of files
63. wc: Word, line, character, and byte count
64. tee: Redirect output to multiple files or commands
65. history: Command history
66. source: Execute commands from a file in the current shell
67. alias: Create command aliases
68. ln: Create links between files
69. uname: Print system information
70. lsof: List open files and processes
71. mkfs: Create a file system
72. mount: Mount a file system
73. umount: Unmount a file system
74. ssh-agent: Manage SSH keys in memory
75. grep: Search for patterns in files
76. tr: Translate characters
77. cut: Select portions of lines from files
78. paste: Merge lines of files
79. uniq: Report or omit repeated lines
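The real power of these commands is composing them in pipelines. A small self-contained sketch combining printf, wc, sort, uniq, and grep on a sample file (the file path and contents are made up for the example):

```shell
# Write a small sample file, then combine commands from the list above.
printf 'error\nok\nerror\nwarn\nerror\n' > /tmp/log.txt

# wc: count the lines
wc -l < /tmp/log.txt            # prints 5

# sort + uniq: count how often each distinct line appears, most frequent first
sort /tmp/log.txt | uniq -c | sort -rn
# the first line printed is "3 error" (with leading padding added by uniq -c)

# grep: count only the lines containing "error"
grep -c error /tmp/log.txt      # prints 3
```

Note that uniq only collapses adjacent duplicates, which is why the sort comes first.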
1. ansible-playbook: Executes Ansible playbooks.
ex: ansible-playbook -i <inventory_file> <playbook.yml>

2. ansible: Runs ad-hoc commands or tasks.
ex: ansible all -m copy -a "src=/path/to/local/file dest=/path/to/remote/file"
    ansible all -m yum -a "name=httpd state=latest"

3. ansible-galaxy: Manages Ansible roles.
ex: ansible-galaxy install <role_name>

4. ansible-vault: Manages encrypted data within Ansible.
ex: ansible-vault encrypt <file>

5. ansible-galaxy init: Initializes a new Ansible role scaffold.
ex: ansible-galaxy init <role_name>

6. ansible-inventory: Shows Ansible's inventory.
ex: ansible-inventory --list -i /path/to/inventory/hosts

7. ansible-config: Manages Ansible configuration.
ex: ansible-config list, ansible-config view

8. ansible-pull: Pulls playbooks from a version control system and executes them locally.
ex: ansible-pull -U <repository_url> <playbook.yml>

9. ansible-playbook --syntax-check: Checks playbook syntax without executing.
ex: ansible-playbook --syntax-check <playbook.yml>

10. ansible-playbook --list-hosts: Lists hosts defined in a playbook.
ex: ansible-playbook --list-hosts playbook.yml

11. ansible-playbook --tags: Runs specific tagged tasks within a playbook.
ex: ansible-playbook --tags=tag1,tag2 playbook.yml

12. ansible-playbook --limit: Limits playbook execution to specific hosts or groups.
ex: ansible-playbook --limit=<host_pattern> <playbook.yml>

13. ansible-vault edit: Edits an encrypted file.
ex: ansible-vault edit secrets.yml

14. ansible-doc: Displays documentation for Ansible modules.
ex: ansible-doc <module_name>

15. ansible-config view: Displays the current Ansible configuration.
ex: ansible-config view

16. ansible-config dump: Dumps the current Ansible configuration variables.
ex: ansible-config dump

17. ansible-config list: Lists configuration settings.
ex: ansible-config list

18. ansible-console: Starts an interactive console for executing Ansible tasks.
ex: ansible-console

19. ansible-lint: Lints Ansible playbooks for best practices and potential errors.
ex: ansible-lint <playbook.yml>

20. ansible-vault encrypt_string: Encrypts a string for use in a playbook.
ex: ansible-vault encrypt_string <string>

21. ansible-vault rekey: Rekeys an encrypted file with a new password.
ex: ansible-vault rekey <file>
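As a reference point for command 1, a minimal playbook might look like this (the host group and package name are illustrative):

```yaml
# playbook.yml - hypothetical example
- name: Ensure nginx is installed and running
  hosts: webservers        # group defined in your inventory
  become: true             # escalate privileges for package/service tasks
  tasks:
    - name: Install nginx
      ansible.builtin.yum:
        name: nginx
        state: present
    - name: Start and enable nginx
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Run it with: ansible-playbook -i <inventory_file> playbook.yml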
1. Short-lived Containers:
These containers are designed to perform a specific task or job and then exit.

2. Long-running Containers:
These containers are designed to run continuously for extended periods, hosting services or applications. Examples include web servers, databases, or other services that need to remain operational as long as the application is running.

3. Interactive Containers:
Containers can also be used interactively for debugging or testing. In this case, the container runs as long as the user keeps the interactive session open.

4. Orchestrated Containers:
Containers orchestrated by Kubernetes are continuously monitored and automatically restarted if they fail or crash. In these cases, containers can run for a long time, managed by the orchestrator, unless interrupted externally.

5. Daemon Containers:
Last on this list are daemon containers. Docker containers can be run as background daemons, serving a specific purpose and running as long as the system is active or until manually stopped.
Managing releases, updating configurations, and integrating deployments with CI/CD takes a lot of time and effort. Helm, the Kubernetes package manager, helps with:
- Versioning of application releases, with rollbacks when necessary.
- Customizing configurations and charts for different environments.
- Centralized package management.
- Deploying and consuming applications in a simple, standardized manner.
In the world of cloud computing, infrastructure as code (IaC) plays a pivotal role in automating the deployment and management of resources. This blog post provides a step-by-step guide on creating a Two-Tier architecture on AWS using Terraform. We’ll explore the essential services involved, ensuring high availability, security, and scalability for hosting a static website.
Also, we are adopting a modular approach with enhanced security measures. The infrastructure is organized into dedicated modules, ensuring a scalable, maintainable, and secure deployment.
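The modular layout might be wired together like this (module names and outputs are illustrative, not the post's actual code):

```hcl
# main.tf - hypothetical root module for the two tiers
module "network" {
  source = "./modules/network"     # VPC, subnets, security groups
}

module "web" {
  source     = "./modules/web"     # EC2 / ALB for the presentation tier
  subnet_ids = module.network.public_subnet_ids
}

module "db" {
  source     = "./modules/db"      # RDS for the data tier
  subnet_ids = module.network.private_subnet_ids
}
```

Keeping each tier in its own module is what makes the design maintainable: the network can change without touching the web or database code.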
Tools and frameworks will change. Fundamentals remain intact.
Without fundamentals, you are incomplete.
What is Ansible →
➡️ Ansible is a DevOps tool, similar to Chef: a configuration management tool. Let's begin with a story → suppose you have a big organisation with hundreds of servers, and a task comes in to install git on every one of them. The person responsible is the system administrator, who would do it manually, and that takes a lot of time…
Wouldn't it be better to automate it? We have exactly such a tool, and it is Ansible → a configuration management tool.
➡️ But first you need to connect all the nodes to the Ansible server, a one-time manual step; after that you can automate everything.
➡️ Configuration management → a method through which we automate admin tasks.
➡️ It automates the tasks the system administrator used to do by hand.
Configuration management tools come in 2 types →
➡️ Pull based → each node periodically checks the main server for updates and, if one is available, installs it automatically. Chef and Puppet are pull-based tools.
➡️ Push based → nodes do not poll the main server; updates are pushed to them, and it is your choice when to apply them, much like app updates pushed to your phone's Play Store. Ansible is push-based: use it when you want control to stay in your own hands.
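The story above (installing git on hundreds of servers) can be sketched as a single ad-hoc command run against an inventory file (group and hostnames are illustrative):

```ini
# hosts.ini - hypothetical inventory listing the managed servers
[webservers]
web01.example.com
web02.example.com
```

Then one command replaces logging in to every server by hand: ansible webservers -i hosts.ini -m ansible.builtin.yum -a "name=git state=present" --become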
History of Ansible →
➡️ Michael DeHaan developed Ansible in February 2012.
➡️ Red Hat acquired Ansible in 2015.
➡️ Ansible is available for RHEL, Debian, CentOS, and Oracle Linux.
➡️ It is written in Python, and it can also manage Windows hosts through PowerShell.
➡️ You can use it whether your servers are on premises or in the cloud.
➡️ It turns your code into infrastructure, so you could even call it a bit of an infrastructure-building tool.
✈️ Follow @prodevopsguy for more such content around cloud & DevOps!!! // Join for DevOps DOCs: @devopsdocs