DevOps & Cloud (AWS, AZURE, GCP) Tech Free Learning
Ansible is a powerful tool for automation and configuration management. Here's a handy list of essential Ansible commands that will boost your productivity:
1. Check Ansible Version
ansible --version
2. Ping All Hosts
ansible all -m ping
3. Run a Command on All Hosts
ansible all -a "uptime"
4. Use a Specific Inventory File
ansible all -i /path/to/inventory -m ping
5. Run a Playbook
ansible-playbook playbook.yml
6. Check Syntax of a Playbook
ansible-playbook playbook.yml --syntax-check
7. List Hosts in Inventory
ansible-inventory --list -i /path/to/inventory
8. Test a Playbook with Dry Run
ansible-playbook playbook.yml --check
9. Encrypt a File with Ansible Vault
ansible-vault encrypt filename.yml
10. Decrypt a File with Ansible Vault
ansible-vault decrypt filename.yml
11. View Encrypted File with Ansible Vault
ansible-vault view filename.yml
12. Edit an Encrypted File with Ansible Vault
ansible-vault edit filename.yml
13. Create a New Encrypted File with Ansible Vault
ansible-vault create filename.yml
14. Run a Playbook with a Vault Password File
ansible-playbook playbook.yml --vault-password-file /path/to/vault-password-file
15. Gather Facts About Hosts
ansible all -m setup
16. Display All Modules
ansible-doc -l
17. Get Documentation for a Specific Module
ansible-doc <module_name>
18. Ensure a Service Is Started
ansible all -m service -a "name=httpd state=started"
19. Copy a File to Hosts
ansible all -m copy -a "src=/path/to/source dest=/path/to/destination"
20. Run a Command as a Different Remote User
ansible all -m command -a "ls -alh /home/user" -u username
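The ad-hoc commands above are great for one-off tasks; recurring work usually goes into a playbook. A minimal sketch (the `webservers` inventory group and the `httpd` service name are assumptions, not part of the commands above):

```yaml
---
- name: Ensure web servers are installed and running
  hosts: webservers          # assumed inventory group
  become: true
  tasks:
    - name: Install Apache
      ansible.builtin.package:
        name: httpd
        state: present
    - name: Ensure Apache is running and enabled at boot
      ansible.builtin.service:
        name: httpd
        state: started
        enabled: true
```

Run it with command 5 above: ansible-playbook playbook.yml (optionally with -i /path/to/inventory).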
Stay efficient and keep automating!
DEV Community: "Kubernetes: Advanced Concepts and Best Practices" - Kubernetes is a powerful container orchestration platform that automates many aspects of deploying,...
For more info, you can check: www.prodevopsguy.site (ProDevOpsGuy - Free DevOps/Cloud World)
Common Git errors and quick fixes:

- "fatal: not a git repository": Check that you are in the correct directory with a Git repository, or initialize a new repository using git init.
- Local branch out of sync with the remote: Use git pull to update your local branch from the remote branch, or git push to publish your changes to the remote branch.
- Merge conflicts: Resolve conflicts manually in the conflicting files, then use git add to stage the changes, and commit them.
- Push rejected (non-fast-forward): Use git pull to get the latest changes from the remote branch, then commit and push your changes.
- "Permission denied (publickey)": Ensure your SSH key is added to your SSH agent and associated with your Git account.
- Remote repository URL changed: Update the remote's URL using git remote set-url origin <new_url>.
- "pathspec did not match any files": Check the spelling and case of the file name and ensure it's part of the repository.
- Commit aborted due to an empty message: Provide a commit message using git commit -m "Your message here".
- CRLF/LF line-ending warnings: Configure line endings using .gitattributes or the global Git configuration.
- "Your local changes would be overwritten by merge": Stash your local changes with git stash, then perform the merge, and finally apply your changes back with git stash apply.
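The stash-merge-apply workflow from the last point can be tried safely in a throwaway repo (assumes git is installed; branch names and file contents here are made up for the demo):

```shell
# Sketch of: stash local edits, merge a branch, then re-apply the edits.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -qb main
git config user.email "demo@example.com"
git config user.name "Demo"
echo base > file.txt
git add file.txt
git commit -qm "base"
git checkout -qb feature
echo feature-work > other.txt
git add other.txt
git commit -qm "feature work"
git checkout -q main
echo wip >> file.txt        # an uncommitted local edit
git stash -q                # set it aside so the working tree is clean
git merge -q feature        # bring in the feature branch
git stash apply -q          # restore the local edit on top of the merge
tail -n 1 file.txt          # prints "wip": the local edit survived the merge
```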
Remember that these are just brief solutions. The specific actions needed may vary based on the context of the error and the state of your Git repository.
How does Docker 🐬 Work? Is Docker still relevant?
Docker's architecture comprises three main components:
🔹 Docker Client
This is the interface through which users interact. It communicates with the Docker daemon.
🔹 Docker Host
Here, the Docker daemon listens for Docker API requests and manages various Docker objects, including images, containers, networks, and volumes.
🔹 Docker Registry
This is where Docker images are stored. Docker Hub, for instance, is a widely-used public registry.
Follow @prodevopsguy for more such content around cloud & DevOps! // Join for DevOps DOCs: @devopsdocs
Zero downtime deployments are crucial for modern applications, ensuring that users experience uninterrupted service even during updates. Kubernetes, a powerful container orchestration platform, provides several strategies to achieve zero downtime. This article will delve into the various techniques and best practices for implementing zero downtime deployments in Kubernetes.
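One such technique, a rolling update gated by a readiness probe, can be sketched in a Deployment manifest like this (the app name and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # hypothetical app name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0      # never drop below the desired replica count
      maxSurge: 1            # add one extra pod at a time during rollout
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
        - name: web
          image: nginx:1.27          # example image
          readinessProbe:            # old pods keep serving until new ones pass this
            httpGet: {path: /, port: 80}
```

With maxUnavailable: 0, Kubernetes only removes an old pod after a new one reports ready, which is what keeps the rollout downtime-free.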
Linux:
- The Linux Foundation: https://lnkd.in/epkP5dYQ
- Linux Documentation: https://lnkd.in/eWNYW246
- Fedora Project: fedoraproject.org

Programming:
- Python: learnpython.org
- Go: go.dev/tour
- Automate with Python: automatetheboringstuff.com
- Golang Bootcamp: https://lnkd.in/eSsK7KUG
- GenAI: https://brij.guru/ai

Networking:
- Cisco Networking Academy: netacad.com
- Networking Fundamentals: https://lnkd.in/eQ62Bfza
- Networking: A Top-Down Approach: kurose.cslash.net
- FreeCodeCamp's Course: https://lnkd.in/ecAsMH2w

Git:
- Git SCM: git-scm.com
- Try Git: github.com/Try
- Git Tutorials: https://lnkd.in/eDbQBQfD
- Git Interactive Tutorial: https://lnkd.in/eqfE2ZC4

Containers:
- Docker Documentation: docs.docker.com
- Docker Hub: hub.docker.com
- Docker Labs: dockerlabs.collabnix.com
- Kubernetes Fundamentals: https://lnkd.in/eurRUTSt

Cloud Platforms:
- AWS Free Tier: aws.amazon.com/free
- Microsoft Azure Free Account: https://lnkd.in/ehxD777x
- Google Cloud Platform Free Tier: cloud.google.com/free
- Cloud Academy: cloudacademy.com

CI/CD:
- Jenkins: jenkins.io
- Travis CI: https://lnkd.in/eDTJtRjB
- CircleCI: circleci.com
- GitLab CI/CD: docs.gitlab.com/ee/ci

Kubernetes:
- Kubernetes Documentation: kubernetes.io/docs/home
- Kubernetes the Hard Way: https://lnkd.in/edWs7_FW
- CNCF Curriculum: cncf.io
- Kubernetes Fundamentals: https://lnkd.in/e55BRxGy

Monitoring & Observability:
- Prometheus: prometheus.io
- Grafana: grafana.com
- Elasticsearch: elastic.co
- Jaeger: https://lnkd.in/eiFkzXwD

Infrastructure as Code:
- Terraform: terraform.io
- AWS CloudFormation: https://lnkd.in/e4wGb2eT
- Azure Resource Manager: https://lnkd.in/eWzjg94i
- Deployment Manager: https://lnkd.in/ekAQpT3n

Policy as Code:
- Open Policy Agent: https://lnkd.in/eG4jMZSU
- Kyverno: kyverno.io/docs
- Rego: https://lnkd.in/eD75meCB

Service Mesh:
- Istio: https://lnkd.in/eaxdAMZC
- Linkerd: linkerd.io
- Consul Service Mesh: https://lnkd.in/eEn3eacn
Netflix's database infrastructure is a true marvel! They use a combination of several cutting-edge technologies to ensure content is available 24/7, without buffering or interruptions.
Netflix's engineering team leverages a diverse array of databases to deliver top-notch service. Here's a glimpse into their database selection:
1. Continuous Integration
2. Continuous Deployment
Route 53 Routing Policies 🚀
Route 53 is a powerful DNS service by AWS, offering various routing policies to manage traffic.
1. Simple Routing
- Most straightforward approach, good for single resources.
- Routes traffic to a single endpoint, like a web server or an elastic load balancer.
- Easy to set up and manage.
2. Weighted Routing
- Distributes traffic across multiple resources.
- Controls traffic distribution based on predefined weights.
- Great for load balancing and testing new deployments.
3. Failover Routing
- Routes traffic to a primary resource, with a secondary resource on standby.
- Automatically routes the traffic to the secondary resource if the primary resource goes into an unhealthy state or fails.
- Ensures high availability.
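As a concrete sketch of weighted routing, a 90/10 split between two endpoints can be applied with the AWS CLI via a change batch like this (the domain, IPs, and weights are made-up examples):

```json
{
  "Comment": "Weighted routing: 90/10 split between two servers",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "A",
        "SetIdentifier": "primary",
        "Weight": 90,
        "TTL": 60,
        "ResourceRecords": [{ "Value": "192.0.2.10" }]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "A",
        "SetIdentifier": "canary",
        "Weight": 10,
        "TTL": 60,
        "ResourceRecords": [{ "Value": "192.0.2.20" }]
      }
    }
  ]
}
```

Apply it with: aws route53 change-resource-record-sets --hosted-zone-id <zone-id> --change-batch file://weighted.json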
As a DevOps engineer, mastering the Linux 🐧 command line is crucial for efficient system administration and management. Here are some essential Linux commands you should know:
1. File and Directory Management:
2. User and Permission Management:
3. Process and Service Management:
4. Networking and System Monitoring:
- ls: List directory contents.
- cd: Change directory.
- pwd: Print working directory.
- mkdir: Create a new directory.
- rm: Remove files or directories.
- cp: Copy files or directories.
- mv: Move or rename files or directories.

- useradd: Add a new user.
- passwd: Set or change user passwords.
- chown: Change file ownership.
- chmod: Modify file permissions.
- su: Switch user.
- sudo: Execute commands with superuser privileges.

- ps: Display running processes.
- top: Monitor system processes.
- kill: Terminate processes.
- systemctl: Manage system services (systemd-based systems).
- service: Manage services (init-based systems).

- ifconfig or ip: Configure network interfaces.
- netstat: Display network statistics.
- ping: Test network connectivity.
- df: Show disk space usage.
- free: Display memory usage.
- uptime: Show system uptime.

Remember that this is just a starting point, and there are many more Linux commands and utilities. Feel free to explore and deepen your knowledge as you work with Linux in your DevOps journey! 🐧 🚀
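A few of the monitoring commands above can be chained into a quick health snapshot (assumes a typical Linux shell; some flags differ on macOS/BSD):

```shell
# One-screen system summary built from pwd, df, and uptime.
echo "cwd:  $(pwd)"
echo "disk: $(df -h / | tail -n 1)"   # usage of the root filesystem
echo "load: $(uptime)"                # uptime plus load averages
```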
Here are some common GitHub-related issues that DevOps engineers encounter, along with their solutions:
1. Merge Conflicts:
Issue: When multiple contributors modify the same file simultaneously, merge conflicts occur during pull requests.
Solution: Resolve conflicts by carefully reviewing conflicting changes and manually merging them.
2. Authentication Issues:
Issue: Improper authentication (SSH keys or personal access tokens) can lead to problems when pushing or pulling from repositories.
Solution: Ensure the correct authentication method is configured: a registered SSH key or a personal access token with the right scopes.
3. Git Submodules:
Issue: Managing Git submodules can be challenging.
Solution: Clone with --recurse-submodules and keep them in sync with git submodule update --init --recursive.
4. Large Files and LFS:
Issue: GitHub has a file size limit. Large binary files can cause issues.
Solution: Use Git LFS (Large File Storage) for managing large files.
5. Branch Protection Rules:
Issue: Accidental force pushes or direct commits to protected branches.
Solution: Set up branch protection rules to prevent such actions.
6. Rate Limiting:
Issue: GitHub API requests are rate-limited.
Solution: Authenticate API requests (authenticated calls get a much higher limit) and cache or batch calls to avoid excessive requests.
7. Repository Permissions:
Issue: Incorrect permissions for collaborators.
Solution: Ensure proper permissions to avoid unauthorized access.
8. Webhooks and CI/CD Failures:
Issue: Debugging webhook and CI/CD failures.
Solution: Inspect the webhook delivery logs and pipeline logs to identify and fix the failing step.
Remember, addressing these challenges will enhance your DevOps skills!😊 🚀
Navigating AWS costs can sometimes be tricky. To aid users in proactive cost management, I've developed a Terraform module that automates the setup of billing alerts. With this tool, you'll receive timely notifications if your AWS charges cross predefined thresholds.
For those keen on ensuring their AWS expenses stay within predictable boundaries, this tool is a valuable asset for every AWS Engineer.
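As a sketch of what such a module typically wires together (the resource names, threshold, and email below are illustrative examples, not this module's actual interface): a CloudWatch billing alarm plus an SNS email notification.

```hcl
resource "aws_sns_topic" "billing_alerts" {
  name = "billing-alerts"
}

resource "aws_sns_topic_subscription" "email" {
  topic_arn = aws_sns_topic.billing_alerts.arn
  protocol  = "email"
  endpoint  = "you@example.com" # example address
}

# Billing metrics are published in us-east-1 and require "Receive Billing
# Alerts" to be enabled in the account's billing preferences.
resource "aws_cloudwatch_metric_alarm" "billing" {
  alarm_name          = "estimated-charges-over-50-usd"
  namespace           = "AWS/Billing"
  metric_name         = "EstimatedCharges"
  dimensions          = { Currency = "USD" }
  statistic           = "Maximum"
  period              = 21600 # 6 hours; billing data updates a few times a day
  evaluation_periods  = 1
  comparison_operator = "GreaterThanThreshold"
  threshold           = 50 # example threshold in USD
  alarm_actions       = [aws_sns_topic.billing_alerts.arn]
}
```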
GitHub ☁️ - 30 GitHub commands used by every DevOps Engineer
🖥 https://dev.to/prodevopsguytech/github-30-github-commands-used-by-every-devops-engineer-4llj
Kubernetes Cluster Selection
💡 Choosing the Right K8s Environment for Your Needs
The K8s ecosystem offers various tools for standing up clusters, each with its own characteristics and advantages.
Some popular options:
1️⃣ Minikube (https://lnkd.in/ePQKyEZ7)
> Compatible with Linux, Windows, and macOS
> Uses virtualization to deploy a cluster on a Linux virtual machine
> Suitable for Linux without virtualization support
2️⃣ Kubeadm (https://lnkd.in/epyumfKZ)
> The official CNCF tool for provisioning Kubernetes clusters
> Offers flexibility for various cluster configurations (single node, multi-node, HA, self-hosted, etc.)
> Ideal for launching minimal viable Kubernetes clusters
3️⃣ Kops (Kubernetes Operations) (https://lnkd.in/e7ApRVJP)
> Provides tools for installing, operating, and removing Kubernetes clusters on cloud platforms like AWS, Google Cloud Platform, OpenStack, and DigitalOcean
4️⃣ Microk8s (https://microk8s.io)
> Similar to Minikube, it creates single-node clusters
> Features its own set of add-ons as configuration plugins
> Exclusive to Linux environments
5️⃣ K3s (https://k3s.io)
> Works on any Linux distribution without external dependencies
> Replaces Docker with containerd as the container runtime and uses sqlite3 as the default database
> Lightweight, consuming only 512MB of RAM and 200MB of disk space.
6️⃣ Kind (Kubernetes-in-Docker) (https://kind.sigs.k8s.io)
> Runs Kubernetes clusters in Docker containers
> Supports multi-node and High-Availability clusters
> Compatible with Windows, Mac, and Linux as it runs on top of Docker
7️⃣ K3d (https://k3d.io)
> A project aiming to dockerize K3s
The choice of the Kubernetes environment depends on your project's specific needs.
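Several of the options above (Kind, K3d) are driven by a small config file. For example, a multi-node Kind cluster can be declared like this (the node count is just an example):

```yaml
# kind-cluster.yaml -- create with: kind create cluster --config kind-cluster.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```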
Once you understand K8s basics, the next step is to create a cluster, which can be done both locally and in the cloud.
Get the most out of Google Cloud Platform (GCP) with these essential gcloud commands! Here's a handy reference to help you streamline your DevOps workflows.

Setup & Configuration:
1. Initialize GCP SDK:
gcloud init
2. Authenticate to GCP:
gcloud auth login
3. Set Default Project:
gcloud config set project [PROJECT_ID]
Compute Engine:
1. List VM Instances:
gcloud compute instances list
2. Create a New VM:
gcloud compute instances create [INSTANCE_NAME] --zone=[ZONE]
3. Start/Stop/Delete VM:
gcloud compute instances start [INSTANCE_NAME] --zone=[ZONE]
gcloud compute instances stop [INSTANCE_NAME] --zone=[ZONE]
gcloud compute instances delete [INSTANCE_NAME] --zone=[ZONE]
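The start/stop/delete commands differ only in the verb, so a tiny wrapper avoids retyping the zone flag (the function and instance names are hypothetical; with DRY_RUN=1 it only prints the command, so this sketch is safe to try without touching real resources):

```shell
DRY_RUN=1

gcloud_vm() {
  # usage: gcloud_vm <start|stop|delete> <instance-name> <zone>
  cmd="gcloud compute instances $1 $2 --zone=$3"
  if [ "$DRY_RUN" = "1" ]; then
    echo "$cmd"   # just show what would run
  else
    $cmd          # actually call gcloud
  fi
}

gcloud_vm start demo-vm us-central1-a
# prints: gcloud compute instances start demo-vm --zone=us-central1-a
```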
GKE (Google Kubernetes Engine):
1. Get Credentials for Cluster:
gcloud container clusters get-credentials [CLUSTER_NAME] --zone=[ZONE]
2. List GKE Clusters:
gcloud container clusters list
3. Create/Delete GKE Cluster:
gcloud container clusters create [CLUSTER_NAME] --zone=[ZONE]
gcloud container clusters delete [CLUSTER_NAME] --zone=[ZONE]
Cloud Storage:
1. List Buckets:
gcloud storage ls
2. Create/Delete Bucket:
gcloud storage buckets create gs://[BUCKET_NAME]
gcloud storage buckets delete gs://[BUCKET_NAME]
3. Upload/Download Files:
gcloud storage cp [LOCAL_PATH] gs://[BUCKET_NAME]/[OBJECT_NAME]
gcloud storage cp gs://[BUCKET_NAME]/[OBJECT_NAME] [LOCAL_PATH]
BigQuery (uses the separate bq CLI that ships with the Cloud SDK):
1. List Datasets:
bq ls
2. Create/Delete Dataset:
bq mk [DATASET_NAME]
bq rm -r -d [DATASET_NAME]
3. Run Query:
bq query --use_legacy_sql=false 'SELECT * FROM `[PROJECT_ID].[DATASET].[TABLE]` LIMIT 10'
Deployment Manager:
1. List Deployments:
gcloud deployment-manager deployments list
2. Create/Delete Deployment:
gcloud deployment-manager deployments create [DEPLOYMENT_NAME] --config [CONFIG_FILE]
gcloud deployment-manager deployments delete [DEPLOYMENT_NAME]
IAM:
1. List Service Accounts:
gcloud iam service-accounts list
2. Create/Delete Service Account:
gcloud iam service-accounts create [ACCOUNT_NAME]
gcloud iam service-accounts delete [ACCOUNT_NAME]@[PROJECT_ID].iam.gserviceaccount.com
Cloud SQL:
1. List Instances:
gcloud sql instances list
2. Create/Delete SQL Instance:
gcloud sql instances create [INSTANCE_NAME] --tier=db-n1-standard-1 --region=[REGION]
gcloud sql instances delete [INSTANCE_NAME]
Keep these commands handy to master Google Cloud like a pro!
Stay tuned for more DevOps tips and tricks.