- 40% Scripting automation
- 30% Cloud deployments
- 20% Monitoring and optimizing
- 10% Team collaboration
- 20% Scripting automation
- 25% Cloud deployments
- 15% Monitoring and optimizing
- 40% Team collaboration
- 65.73% Debating infra/tool choices
- On-demand support
- Many alignment meetings
- Managing system incidents
- Balancing cost-efficiency
- Technical review sessions
- Cross-department collaboration
- Defending infrastructure choices
- Implementing stakeholder feedback
In this way, the process that starts with a developer 'pushing' code to GitHub goes through stages of automated webhook triggering, continuous delivery, Docker image creation, and container deployment.
All these steps are automated to minimize manual errors and speed up the process.
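As a sketch, the flow above might look like the following GitHub Actions workflow; the registry, image name, and the `scripts/deploy.sh` step are placeholders for your own setup, not part of the original project.

```yaml
# .github/workflows/deploy.yml -- minimal sketch; names are placeholders
name: build-and-deploy
on:
  push:
    branches: [main]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Build the Docker image from the repo's Dockerfile
      - run: docker build -t myregistry/myapp:${{ github.sha }} .
      # Push it to a registry (credentials assumed to be configured)
      - run: docker push myregistry/myapp:${{ github.sha }}
      # Deploy the new image (placeholder for your orchestrator of choice)
      - run: ./scripts/deploy.sh myregistry/myapp:${{ github.sha }}
```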
Hit the Star!
If you are planning to use this repo for learning, please hit the star.
DEV Community
End-to-End AWS DevOps Project: Automating Build and Deployment of a Node.js Application to Amazon ECS using GitLab CI/CD
Table of Contents Introduction Project Overview Technology Stack Architecture...
I just published a detailed article on an End-to-End AWS DevOps Project.
If you're looking to level up your DevOps skills or explore AWS automation, this one's for you!🙌
DevOps & Cloud (AWS, AZURE, GCP) Tech Free Learning
Here’s a handy list of essential Kubernetes commands to streamline your workflow and boost your productivity. Save this post for quick reference!
# Check cluster info
kubectl cluster-info
# Get all nodes
kubectl get nodes
# Describe a node
kubectl describe node <node-name>
# Check cluster health (componentstatuses is deprecated in recent Kubernetes versions)
kubectl get componentstatuses
# List all namespaces
kubectl get namespaces
# Create a namespace
kubectl create namespace <namespace-name>
# Delete a namespace
kubectl delete namespace <namespace-name>
# List all pods in the default namespace
kubectl get pods
# List pods in a specific namespace
kubectl get pods -n <namespace>
# Describe a pod
kubectl describe pod <pod-name>
# Delete a pod
kubectl delete pod <pod-name>
# List all deployments
kubectl get deployments
# Create a deployment
kubectl create deployment <deployment-name> --image=<image-name>
# Update a deployment
kubectl set image deployment/<deployment-name> <container-name>=<new-image>
# Scale a deployment
kubectl scale deployment <deployment-name> --replicas=<number>
# Delete a deployment
kubectl delete deployment <deployment-name>
# List all services
kubectl get services
# Create a service
kubectl expose deployment <deployment-name> --type=<type> --port=<port>
# Describe a service
kubectl describe service <service-name>
# Delete a service
kubectl delete service <service-name>
# List all ConfigMaps
kubectl get configmaps
# Create a ConfigMap
kubectl create configmap <configmap-name> --from-literal=<key>=<value>
# List all Secrets
kubectl get secrets
# Create a Secret
kubectl create secret generic <secret-name> --from-literal=<key>=<value>
# List all persistent volumes
kubectl get pv
# List all persistent volume claims
kubectl get pvc
# Create a persistent volume
kubectl apply -f <persistent-volume-definition>.yaml
# Create a persistent volume claim
kubectl apply -f <persistent-volume-claim-definition>.yaml
# View logs of a pod
kubectl logs <pod-name>
# View logs of a specific container in a pod
kubectl logs <pod-name> -c <container-name>
# Stream logs of a pod
kubectl logs -f <pod-name>
# Get events
kubectl get events
# Describe a resource
kubectl describe <resource-type> <resource-name>
# Exec into a pod
kubectl exec -it <pod-name> -- /bin/bash
# List custom resource definitions
kubectl get crd
# Describe a custom resource definition
kubectl describe crd <crd-name>
Here’s a comprehensive list of essential Docker commands to make your container management smooth and efficient. Save this post for quick reference!
# Check Docker version
docker --version
# Display Docker system information
docker info
# List all Docker commands
docker --help
# List all images
docker images
# Search for an image on Docker Hub
docker search <image-name>
# Pull an image from Docker Hub
docker pull <image-name>
# Build an image from a Dockerfile
docker build -t <image-name>:<tag> .
# Remove an image
docker rmi <image-id>
# List all running containers
docker ps
# List all containers (including stopped ones)
docker ps -a
# Start a container
docker start <container-id>
# Stop a container
docker stop <container-id>
# Restart a container
docker restart <container-id>
# Remove a container
docker rm <container-id>
# Run a container
docker run -d --name <container-name> <image-name>
# Run a container with a specific port mapping
docker run -d -p <host-port>:<container-port> <image-name>
# Run a container with a volume
docker run -d -v <host-dir>:<container-dir> <image-name>
# Attach to a running container
docker attach <container-id>
# View logs of a container
docker logs <container-id>
# Follow logs of a container
docker logs -f <container-id>
# Inspect a container
docker inspect <container-id>
# View resource usage statistics of a container
docker stats <container-id>
# List all networks
docker network ls
# Create a network
docker network create <network-name>
# Connect a container to a network
docker network connect <network-name> <container-id>
# Disconnect a container from a network
docker network disconnect <network-name> <container-id>
# Inspect a network
docker network inspect <network-name>
# Remove a network
docker network rm <network-name>
# List all volumes
docker volume ls
# Create a volume
docker volume create <volume-name>
# Inspect a volume
docker volume inspect <volume-name>
# Remove a volume
docker volume rm <volume-name>
# Start services defined in docker-compose.yml
docker-compose up
# Start services in detached mode
docker-compose up -d
# Stop services
docker-compose down
# View running services
docker-compose ps
# Build or rebuild services
docker-compose build
# View logs of services
docker-compose logs
# Remove all stopped containers
docker container prune
# Remove all unused images
docker image prune
# Remove all unused volumes
docker volume prune
# Remove all unused networks
docker network prune
Keep this list handy and make container management a breeze! Happy Dockering!
1. What is a CI/CD pipeline?
A CI/CD pipeline is an automated workflow that integrates and delivers code changes continuously. It consists of processes like code integration, building, testing, deployment, and delivery. The goal of CI/CD pipelines is to deliver software updates quickly, reliably, and consistently, reducing the risk of errors and improving collaboration.
2. How do you implement a CI/CD pipeline from scratch?
Version Control: Start by ensuring the code is managed in a version control system (e.g., Git).
Build Automation: Set up build automation tools (e.g., Jenkins, GitLab CI/CD, GitHub Actions) that will compile and package your code.
Testing: Integrate automated testing for unit, integration, and acceptance tests.
Artifact Repository: Use an artifact repository (e.g., Nexus, Artifactory) for storing build artifacts.
Deployment Automation: Automate the deployment process using tools like Ansible, Docker, or Kubernetes.
Monitoring and Alerts: Set up monitoring tools to alert about issues post-deployment.
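The steps above can be sketched as a minimal GitLab CI configuration; the registry hostname, deployment name, and test commands are placeholders, not part of the original post.

```yaml
# .gitlab-ci.yml -- minimal sketch covering build, test, and deploy
stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - docker build -t registry.example.com/myapp:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/myapp:$CI_COMMIT_SHORT_SHA

test:
  stage: test
  script:
    - npm ci
    - npm test

deploy:
  stage: deploy
  script:
    # Roll the new image out to an existing Kubernetes Deployment
    - kubectl set image deployment/myapp myapp=registry.example.com/myapp:$CI_COMMIT_SHORT_SHA
  environment: production
  only:
    - main
```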
3. What are the common stages of a CI/CD pipeline?
Source Code Control: Code commits trigger the pipeline.
Build: The code is compiled and packaged.
Test: Automated tests, including unit, integration, and functional tests, are run.
Release/Deploy: The code is deployed to staging or production environments.
4. How do you manage secrets in a CI/CD pipeline?
Using secret management tools (e.g., HashiCorp Vault, AWS Secrets Manager, Azure Key Vault).
Storing secrets in environment variables or vaults outside the codebase.
Using pipeline tools’ native secret management features (e.g., Jenkins credentials store).
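As a sketch of the pattern: the secret value lives in the CI tool's secret store (here, a hypothetical masked GitLab CI/CD variable named DB_PASSWORD), never in the repository; the job references it only as an environment variable. The `deploy.sh` script is a placeholder.

```yaml
deploy:
  stage: deploy
  script:
    # DB_PASSWORD is injected at runtime from the CI secret store,
    # so it never appears in the codebase or the pipeline definition
    - ./deploy.sh --db-password "$DB_PASSWORD"
```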
5. Explain the importance of automated testing in CI/CD?
Automated testing ensures code quality by catching issues early in the pipeline, preventing faulty code from reaching production. It helps:
Maintain code consistency.
Reduce human error.
Accelerate feedback loops, allowing developers to fix issues faster.
Ensure that changes don’t introduce regressions.
6. How do you ensure that deployments are zero-downtime?
Use blue-green deployments or canary releases to gradually roll out new versions while keeping the old version live.
Leverage container orchestration platforms like Kubernetes, which can manage rolling updates.
Ensure that the database schema and application logic are backward-compatible during updates.
Implement load balancers to route traffic between old and new versions.
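In Kubernetes, the rolling-update part can be sketched as a Deployment fragment (all names are illustrative); the readiness probe is what keeps traffic off a new pod until it is actually ready, which is essential for zero downtime.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # keep all old pods serving until replacements are Ready
      maxSurge: 1         # bring up one new pod at a time
  selector:
    matchLabels: {app: myapp}
  template:
    metadata:
      labels: {app: myapp}
    spec:
      containers:
        - name: myapp
          image: myapp:v2
          readinessProbe:   # gates traffic until the pod passes its health check
            httpGet: {path: /healthz, port: 8080}
```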
8. How do you handle rollbacks in CI/CD?
Versioning Artifacts: Store previous builds and redeploy an older version in case of failure.
Blue-Green Deployments: Switch back to the old version if the new version fails.
Database Migrations: Use reversible migrations to ensure that changes can be rolled back easily.
Monitoring and Alerts: Integrate automated rollback triggers based on predefined metrics or errors.
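For Kubernetes specifically, a rollback can be sketched using the Deployment's built-in revision history (this requires a live cluster; the deployment name is illustrative):

```shell
kubectl rollout history deployment/myapp              # list stored revisions
kubectl rollout undo deployment/myapp                 # roll back to the previous revision
kubectl rollout undo deployment/myapp --to-revision=3 # or roll back to a specific one
kubectl rollout status deployment/myapp               # watch the rollback complete
```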
In today’s fast-paced software world, delivering high-quality software efficiently is a must. This project demonstrates how to set up a real-time CI/CD pipeline for a Java application, enabling seamless deployment to an Apache server.
If you're looking to enhance your CI/CD skills and streamline your Java application deployments, this tutorial has you covered!
🛠 Dive in now and take your DevOps journey to the next level!
📣 Note: Fork this repository 🧑‍💻 for upcoming projects; a new project is released every week.
1. terraform init: Initializes a working directory containing Terraform configuration files.
2. terraform plan: Generates an execution plan, outlining actions Terraform will take.
3. terraform apply: Applies the changes described in the Terraform configuration.
4. terraform destroy: Destroys all resources described in the Terraform configuration.
5. terraform validate: Checks the syntax and validity of Terraform configuration files.
6. terraform refresh: Updates the state file against real resources in the provider (deprecated in newer versions in favor of terraform apply -refresh-only).
7. terraform output: Displays the output values from the Terraform state.
8. terraform state list: Lists resources within the Terraform state.
9. terraform show: Displays a human-readable output of the current state or a specific resource's state.
10. terraform import: Imports existing infrastructure into Terraform state.
11. terraform fmt: Rewrites Terraform configuration files to a canonical format.
12. terraform graph: Generates a visual representation of the Terraform dependency graph.
13. terraform providers: Prints a tree of the providers used in the configuration.
14. terraform workspace list: Lists available workspaces.
15. terraform workspace select: Switches to another existing workspace.
16. terraform workspace new: Creates a new workspace.
17. terraform workspace delete: Deletes an existing workspace.
18. terraform state mv: Moves an item in the state.
19. terraform state pull: Pulls the state from a remote backend.
20. terraform state push: Pushes the state to a remote backend.
21. terraform state rm: Removes items from the state.
22. terraform taint: Manually marks a resource for recreation (deprecated in newer versions in favor of terraform apply -replace).
23. terraform untaint: Removes the 'tainted' state from a resource.
24. terraform login: Saves credentials for Terraform Cloud.
25. terraform logout: Removes credentials for Terraform Cloud.
26. terraform force-unlock: Releases a locked state.
27. terraform plan -out: Saves the generated plan to a file.
28. terraform apply -auto-approve: Automatically applies changes without requiring approval.
29. terraform apply -target=resource: Applies changes only to a specific resource.
30. terraform destroy -target=resource: Destroys a specific resource.
31. terraform apply -var="key=value": Sets a variable's value directly on the command line.
32. terraform apply -var-file=filename.tfvars: Specifies a file containing variable definitions.
33. terraform apply -var-file=filename.auto.tfvars: Automatically loads variables from a file.
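Tying several of these together, a typical day-to-day sequence might look like this (a sketch that assumes Terraform and provider credentials are already configured):

```shell
terraform init               # download providers, set up the backend
terraform fmt -check         # verify canonical formatting
terraform validate           # catch syntax/config errors early
terraform plan -out=tfplan   # save the reviewed plan to a file
terraform apply tfplan       # apply exactly the plan that was reviewed
```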
- Present career gaps as freelance work in your resume
- Create multiple Naukri profiles based on location
- Update your job profile every morning
- Add hot keywords related to the job in your resume
- Apply to the maximum number of job openings every day
- Check job descriptions to pick up those keywords
- For example, for Data Engineering: PySpark, ADF, Databricks
- Find HR contacts and send DMs/emails personally
- Make job profiles on multiple job portals
- Try all job-searching platforms
- Like LinkedIn, referrals, and your friend network
Try some of these hacks and you will very likely get better calls than before.
Medium
THE ULTIMATE CICD DEVOPS PIPELINE PROJECT
PHASE-1 | Setup Infra
In DevOps and CI/CD (Continuous Integration/Continuous Deployment) projects, different environments play crucial roles in the software development lifecycle. Let's explore the main types of deployment environments:
1️⃣ Development Environment:
- In the development environment, each programmer has an isolated workspace to write and tweak code without affecting others.
- Developers use this environment to build, test, and experiment with new features or changes.
- It's a stepping stone from local development to broader testing.
- Typically, it's less stable and more dynamic than other environments.
2️⃣ Staging Environment:
- The staging environment is where code goes before it gets shipped to production.
- It closely resembles the production environment but is separate from it.
- QA (Quality Assurance) teams and stakeholders thoroughly test the application here.
- Any issues discovered are addressed before moving to production.
3️⃣ Quality Assurance (QA) Environment:
- QA environments come in various forms, such as QA testing servers or dedicated QA clusters.
- QA teams perform comprehensive testing, including functional, performance, security, and regression testing.
- It's essential for identifying and fixing defects before deploying to production.
4️⃣ Production Environment:
- The production environment is the final destination for your code.
- It hosts the live application that end-users interact with.
- Stability, reliability, and performance are critical in this environment.
- Changes are carefully managed through CI/CD pipelines to minimize disruptions.
Remember that these environments serve specific purposes, and their configurations should align with the needs of your application and organization. Properly managing and maintaining these environments ensures a smooth software delivery process!🚀
🌟 Sources:
1. The Ultimate CI/CD DevOps Pipeline Project
2. How to Manage Multiple Environments with DevOps
3. Deployment Environments: Everything You Need To Know As A DevOps Engineer
4. Tutorial: Deploy environments in CI/CD by using GitHub - Azure DevOps
5. Building Your First Azure DevOps CI/CD Pipeline: A Step-by-Step Guide
✈️ Follow @prodevopsguy for more such content around cloud & DevOps!!! // Join for DevOps DOCs: @devopsdocs
Shrink Your Docker Images by 50% - The Power of Multi-Stage Builds ⚠️
Large Docker images slow deployments, waste storage, and increase vulnerabilities. Multi-Stage Builds optimize images by splitting the process into stages, keeping only essentials in the final lightweight image, improving speed, security, and maintainability.
🚨 What Are Multi-Stage Builds?
Multi-Stage Builds let you use multiple FROM instructions in a single Dockerfile, each representing a different stage. This allows you to compile or build your application in one stage and copy only the necessary output into the final, lightweight image.
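A minimal sketch of the idea for a Node.js app follows; the base images, paths, and build commands are illustrative and assume the project has standard `build` and production dependency setups.

```dockerfile
# --- Stage 1: build (compilers and dev dependencies live only here) ---
FROM node:20 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# --- Stage 2: runtime (only the built output and prod deps survive) ---
FROM node:20-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY package*.json ./
RUN npm ci --omit=dev
CMD ["node", "dist/index.js"]
```

Everything in the first stage is discarded; only what `COPY --from=build` pulls across ends up in the final image.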
🤔 Why Use Multi-Stage Builds?
✅ Drastically Reduce Image Size: By excluding unnecessary build dependencies, multi-stage builds keep only the essentials in your final image, shrinking its size by up to 50% or more.
✅ Enhanced Security: A smaller image has fewer layers and dependencies, reducing the attack surface and the risk of vulnerabilities.
✅ Faster Deployments: Smaller images mean quicker downloads and deployments, speeding up your CI/CD pipelines.
✅ Simplified Maintenance: With separate stages for building and production, your Dockerfile becomes cleaner and easier to manage.
📍 Wondering Why It's a Game Changer?
With Multi-Stage Builds, you’re not just reducing image size—you’re also improving security, boosting deployment speeds, and making your Dockerfiles more maintainable. It’s a win-win for developers and operations teams alike.
1. git init: Initializes a new Git repository in the current directory.
2. git clone [url]: Clones a repository into a new directory.
3. git add [file]: Adds a file or changes in a file to the staging area.
4. git commit -m "[message]": Records changes to the repository with a descriptive message.
5. git push: Uploads local repository content to a remote repository.
6. git pull: Fetches changes from the remote repository and merges them into the local branch.
7. git status: Displays the status of the working directory and staging area.
8. git branch: Lists all local branches in the current repository.
9. git checkout [branch]: Switches to the specified branch.
10. git merge [branch]: Merges the specified branch's history into the current branch.
11. git remote -v: Lists the remote repositories along with their URLs.
12. git log: Displays commit logs.
13. git reset [file]: Unstages the file, but preserves its contents.
14. git rm [file]: Deletes the file from the working directory and stages the deletion.
15. git stash: Temporarily shelves (or stashes) changes that haven't been committed.
16. git tag [tagname]: Creates a lightweight tag pointing to the current commit.
17. git fetch [remote]: Downloads objects and refs from another repository.
18. git merge --abort: Aborts the current conflict resolution process, and tries to reconstruct the pre-merge state.
19. git rebase [branch]: Reapplies commits on top of another base tip, often used to integrate changes from one branch onto another cleanly.
20. git config --global user.name "[name]" and git config --global user.email "[email]": Sets the name and email to be used with your commits.
21. git diff: Shows changes between commits, commit and working tree, etc.
22. git remote add [name] [url]: Adds a new remote repository.
23. git remote remove [name]: Removes a remote repository.
24. git checkout -b [branch]: Creates a new branch and switches to it.
25. git branch -d [branch]: Deletes the specified branch.
26. git push --tags: Pushes all tags to the remote repository.
27. git cherry-pick [commit]: Picks a commit from another branch and applies it to the current branch.
28. git fetch --prune: Prunes remote-tracking branches no longer on the remote.
29. git clean -df: Removes untracked files and directories from the working directory.
30. git submodule update --init --recursive: Initializes and updates submodules recursively.
1. Load Balancer
2. Reverse Proxy
3. Forward Proxy
4. API Gateway
In summary, use load balancers for distributing traffic, reverse proxies for security and load balancing, forward proxies for controlling internet access, and API gateways for managing and securing APIs. These components can be combined to create robust and scalable network architectures tailored to your specific needs.
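As a minimal sketch of the first two roles combined, here is an nginx reverse proxy that also load-balances across two backends; the hostnames and ports are illustrative.

```nginx
# Requests hitting nginx on port 80 are proxied to one of two
# backend app servers, round-robin by default
upstream app_servers {
    server app1.internal:8080;
    server app2.internal:8080;
}
server {
    listen 80;
    location / {
        proxy_pass http://app_servers;        # forward client requests upstream
        proxy_set_header Host $host;          # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```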
Core SQL Command Categories
➥ Data Definition Language (DDL)
- Manages database structure and objects
- Used for defining and modifying database schemas
- Essential for database architecture management
➥ Transaction Control Language (TCL)
- Handles database transaction management
- Critical for maintaining data integrity
- Controls transaction boundaries and states
➥ Data Query Language (DQL)
- Retrieves data from databases
- Focused on data retrieval operations
- Primary tool for data analysis and reporting
➥ Data Control Language (DCL)
- Manages database access permissions
- Controls user privileges and security
- Essential for database security management
➥ Data Manipulation Language (DML)
- Modifies stored data
- Handles data insertion, updating, and deletion
- Core component for data operations
SQL Functions
➥ Aggregate Functions
- Perform calculations on data sets
- Return single consolidated values
👉 Common examples: COUNT, SUM, AVG, MIN, MAX
➥ Window Functions
- Process data across related rows
- Maintain individual row values
👉 Key functions include: ROW_NUMBER, RANK, DENSE_RANK, LAG, LEAD
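As an illustration of the difference, assuming a hypothetical employees table with name, department, and salary columns:

```sql
-- Aggregate: collapses rows into one value per group
SELECT department, AVG(salary) AS avg_salary
FROM employees
GROUP BY department;

-- Window: same calculation, but every row is kept and
-- annotated with its group's average
SELECT name, department, salary,
       AVG(salary) OVER (PARTITION BY department) AS dept_avg
FROM employees;
```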