Tech C**P
Python and Linux instructor and developer @alirezastack
Delete elasticsearch indexes older than 1 month:

#!/bin/bash

# compute the date one month ago, e.g. 20180520
last_month=`date +%Y%m%d --date '1 month ago'`
old_es_index="myindex_*-$last_month"
echo "Deleting ES indexes $old_es_index..."
curl -X DELETE "http://localhost:9200/$old_es_index"
echo ''

NOTE: the asterisk in the curl command matches anything between myindex_ and the date suffix, for example myindex_module1-20180520.

#linux #sysadmin #bash #script #es #elasticsearch #DELETE #purge
Get a specific date, like 2 days ago, with a bash script:

#!/bin/bash
specific_date=`date --date="2 days ago" +%Y%m%d`
echo $specific_date

#linux #date #bash_script #bash
#!/bin/bash

space=$(df -k / | tail -1 | awk '{print $4}')
echo "free disk space is $space"

if [ "$space" -lt 510000 ]
then
    echo "$(date) - Purge elasticsearch indexes..."
    curl -X DELETE "http://localhost:9200/your_index_name_*"
    echo ''
else
    echo "$(date) - disk space seems OK"
fi

Put this in a crontab and you are good to go.
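For instance, a crontab entry along these lines would run the check periodically (the path and schedule here are just assumptions; adjust them to your setup):

*/30 * * * * /root/scripts/monitor_disk_space.sh >> /var/log/monitor_disk_space.log 2>&1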

#linux #sysadmin #bash #script #df #elasticsearch #es
The tail command in Linux is used to view the contents of a file from the end. It is usually used for checking log files on a server. The interesting thing about tail is that you can use it to get just the last line. So in a bash script, if you want to get the last row of the output below:

root@server:~# ls -l
total 24
-rw-r--r-- 1 root root 291 May 26 05:19 es_queries
-rw-r--r-- 1 root root 1198 Jun 19 10:34 users.json
-rwxr-xr-x 1 root root 272 Jun 19 11:22 monitor_disk_space.sh
-rwxr-xr-x 1 root root 433 Jun 19 10:00 another_script.sh

You would do:

root@server:~# ls -l | tail -1
-rwxr-xr-x 1 root root 433 Jun 19 10:00 another_script.sh
That's why we used this command together with df -k / in the previous post.
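A couple of other everyday uses of tail (the file names below are only examples):

# follow a log file and print new lines as they are appended
tail -f /var/log/syslog

# show only the last 5 lines of a file
tail -n 5 users.json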

#bash #tail #script #ls
Run a Linux command multiple times:

for i in `seq 10`; do command; done


Or equivalently, using Bash brace expansion:

for i in {1..10}; do command; done

#linux #bash #seq #repeat
Get the oldest elasticsearch index:

curl 'http://127.0.0.1:9200/_cat/indices' 2>&1 | awk '{print $3}' | grep "logstash_.*" | sort -t- -k2

DO NOT PANIC! Just enjoy it :)

First of all we use curl to get the list of indexes from elasticsearch. With awk we fetch just the 3rd column of the output; the 3rd column contains the index names (be careful to match only your own index names, as there are internal indexes too and we do not want to purge them). The grep command then filters the indexes and outputs those that start with logstash_; if yours are named differently, change the pattern. Finally the sort command sorts the result, but it first takes a delimiter with -t. sort -t- splits the column into TWO fields based on the dash (-):

If my index name is logstash_data-20180619, it is split into two fields: one is logstash_data and the other is 20180619. Now we use -k2 in order to sort based on the second field, which is the date of the index.

This is how we can get the oldest elasticsearch index. I use this for maintenance of ES. In case disk space is almost full, I delete the oldest elasticsearch index. You can even send a Slack notification using cURL too.
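As a rough sketch of that maintenance idea (the host, index pattern and free-space threshold below are assumptions; adapt them to your cluster):

#!/bin/bash
# free space (in KB) left on the root partition
free_space=$(df -k / | tail -1 | awk '{print $4}')

if [ "$free_space" -lt 510000 ]
then
    # oldest logstash_* index = first line after sorting by the date suffix
    oldest_index=$(curl -s 'http://127.0.0.1:9200/_cat/indices' | awk '{print $3}' | grep "logstash_.*" | sort -t- -k2 | head -1)
    echo "$(date) - deleting oldest index $oldest_index..."
    curl -X DELETE "http://127.0.0.1:9200/$oldest_index"
    echo ''
fi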

The possibilities are endless.

Happy bashing :)

#linux #bash #curl #grep #sort #es #elasticsearch #split #awk #script
Simple bash script to take nightly MongoDB backups:

#!/bin/sh
# name the backup folder after today's date, e.g. 061918
DIR=`date +%m%d%y`
DEST=/db_backups/$DIR
mkdir $DEST
mongodump -h <your_database_host> -d <your_database_name> -u <username> -p <password> -o $DEST

NOTE: the db_backups folder should already be created with mkdir /db_backups.
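Alternatively, mkdir -p in the script creates the parent directory too if it is missing, so that manual step can be skipped:

mkdir -p $DEST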


Put it in a crontab for nightly backups. First open the crontab:

sudo crontab -e


Create a new line (entry) in crontab and paste the below cron task:

45 1 * * * ../../scripts/db_backup.sh

NOTE: here our script is called db_backup.sh; use your own script name here, and make it executable with chmod +x /your/full_path/scripts/db_backup.sh.
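Since cron runs your job with a minimal environment and not from your script's directory, an absolute path in the entry is the safer choice, for example (the log file path is just a suggestion):

45 1 * * * /your/full_path/scripts/db_backup.sh >> /var/log/db_backup.log 2>&1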


#mongodb #backup #cron #cronjob #coderwall #mongodump #bash
In Linux bash scripting you can check command exit codes and act accordingly. For that we will use || and &&.

Let's start with a simple echo command:

echo "Hello everybody"


If for any reason we want to check the exit code of the echo command to see whether it succeeded or not, we can use this:

echo "Hello everybody" && echo "Phew! We're good." || echo "echo command FAILED!"


You can also use code blocks to run multiple commands:

echo "Hello everybody" && {
echo "Phew! We're good."
touch ME
} || {
echo "echo command FAILED!"
touch YOURSELF
}

NOTE: exit code 0 means the command executed successfully, and a non-zero exit code means something nasty happened to the previous command.


There is another way to check the exit code, and that is $?:

cp ME YOURSELF
if [ $? -eq 0 ] ; then
    echo "copy seems OK!"
else
    echo "Yuck! File could not get copied! :("
fi

After the cp command runs, $? holds the exit code of the most recently executed command.
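For example (the path here is deliberately one that does not exist):

ls /nonexistent_path
echo $?    # prints a non-zero value because ls failed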

#linux #bash #script #scripting #exit_code
In Linux we have a command called test; with it you can check whether a file or directory exists and run commands based on the result. For example, let's say we want to check if a folder exists and, if it does not, create it.

To check for a directory we use test -d, and to check for a file we use test -f. So for our example, in order to check if the directory exists we use test -d, and in case the folder does not exist we create it:

directory_to_check="/data/mysql"
test -d "$directory_to_check" || {
    echo "$directory_to_check does not exist, creating the folder..." && mkdir -p "$directory_to_check" || {
        echo "$directory_to_check directory could not be created!"
        exit 1
    }
}

NOTE: you can read more about exit codes under the #exit_code hashtag.
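The same pattern works for files with test -f; a small sketch (the file name is only an example):

file_to_check="/etc/mysql/my.cnf"
test -f "$file_to_check" || {
    echo "$file_to_check does not exist!"
    exit 1
}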

#bash #linux #directory_existence #file_existence