Tech C**P
#glance
You can use Grafana to display your OS metrics. You can use its API endpoints to pull the data out as JSON, and it also provides a web UI for you to take a look at the graphs.
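If you have the HTTP API enabled, a quick sanity check could look something like this (the host, port, and API key are placeholders for your own setup; the response comes back as JSON):
curl -H "Authorization: Bearer <your_api_key>" "http://localhost:3000/api/search?query=cpu"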
By default, when you install nginx on Linux, a logrotate config file is created in /etc/logrotate.d/nginx. Sometimes you may see that after a while the nginx access log is empty and everything is being logged into the rotated file, usually named access.log.1. This happens when the process cannot reopen its log file after rotation and keeps writing to the old file handle, which now points to access.log.1. If you take a look at the logrotate config of nginx you will see a part called postrotate that runs a command; for nginx it is as below:
postrotate
invoke-rc.d nginx rotate >/dev/null 2>&1
endscript
If you run the command between postrotate and endscript by hand, it may give the error below:
invoke-rc.d: action rotate is unknown, but proceeding anyway.
invoke-rc.d: policy-rc.d denied execution of rotate.
Just remove a file related to i-MSCP:
rm /usr/sbin/policy-rc.d
NOTE: or, if you want to be safe, rename it to something else instead of deleting it.
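For example, renaming it (the .bak suffix here is just an arbitrary choice) would be:
mv /usr/sbin/policy-rc.d /usr/sbin/policy-rc.d.bak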
Now you can run the invoke-rc.d nginx rotate command and you should see a result like below:
[ ok ] Re-opening nginx log files: nginx.
Now every log will be written to its own file instead of the rotated *.log.1 file, and file handles are closed safely.
#nginx #policy_rc #invoke_rc #log_rotate #rotate
There are times when you have no way around getting data from a third-party library, and that library has a rate limit on its endpoints. For example, I have recently used
the geopy Python library to get latitude and longitude by giving a city name to its geocode function:
from geopy.geocoders import Nominatim

city_name = 'Tehran'
geolocator = Nominatim(user_agent="geo_lookup")  # newer geopy versions require a user_agent; the name is arbitrary
location = geolocator.geocode(city_name)
print(location.latitude, location.longitude)
This library by default sends its requests to https://nominatim.openstreetmap.org/search to get geolocation data. Its rate limit is 1 request per second. To work around this limitation, use Redis to cache the results on your own server and read the cached result from there:
self.redis.hset(city_name, 'lat', location.latitude)
self.redis.hset(city_name, 'long', location.longitude)
Now read from cache in case it exists:
if self.redis.hexists(city_name, 'lat'):
    location = self.redis.hgetall(city_name)
Make sure to put a sleep(1) between requests when reading from Nominatim in order to stay under its rate limit.
NOTE: instead of Nominatim, other third-party geocoders can be used.
#python #geopy #geo #latitude #longitude #Nominatim #redis #hset #geocoders
Apply a new basic license on
Kibana:
You need to download the license first:
- https://register.elastic.co/xpack_register
The license is a JSON file that can be applied with cURL. First go to the server where Elasticsearch is running and then:
curl -XPUT 'http://172.16.133.102:9200/_xpack/license' -H "Content-Type: application/json" -d @license.json
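To confirm the new license is active, you can query the license API on the same node (host and port here are taken from the example above; adjust for your own cluster):
curl -XGET 'http://172.16.133.102:9200/_xpack/license'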
NOTE: license.json is the file that should be present in the directory from which you are issuing the cURL command.
#kibana #curl #license #elasticsearch
space=$(df -k / | tail -1 | awk '{print $4}')  # free space on / in kilobytes
echo "free disk space is $space"
if [ "$space" -lt 510000 ]  # less than roughly 500 MB free
then
echo "$(date) - Purge elasticsearch indexes..."
curl -X DELETE "http://localhost:9200/your_index_name_*"
echo ''
else
echo "$(date) - disk space seems OK"
fi
Put this in a crontab and you are good to go.
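For example, a crontab entry that runs it every 30 minutes could look like this (the path and log file here are placeholders):
*/30 * * * * /path/to/monitor_disk_space.sh >> /var/log/disk_monitor.log 2>&1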
#linux #sysadmin #bash #script #df #elasticsearch #es
The tail command in Linux is used to see the content of a file from the end. It is usually used for checking log files on a server. The interesting thing about tail is that you can also use it to get just the last line of any output. So in a bash script, if you want to get the last row of the output below:
root@server:~# ls -l
total 24
-rw-r--r-- 1 root root 291 May 26 05:19 es_queries
-rw-r--r-- 1 root root 1198 Jun 19 10:34 users.json
-rwxr-xr-x 1 root root 272 Jun 19 11:22 monitor_disk_space.sh
-rwxr-xr-x 1 root root 433 Jun 19 10:00 another_script.sh
You would do:
root@server:~# ls -l | tail -1
-rwxr-xr-x 1 root root 433 Jun 19 10:00 another_script.sh
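If you need that last row inside a script, you can capture it in a variable as usual (the variable name here is arbitrary):
last_entry=$(ls -l | tail -1)
echo "last entry is: $last_entry"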
That's why we have used this command in the previous post on df -k /.
#bash #tail #script #ls
Have you ever wanted to syntax-highlight a textarea with specific content in it? Let's say it contains JSON data. We also need to have
code folding (+/- in front of objects to collapse them). The tool that can be used for this purpose is CodeMirror:
- https://codemirror.net/index.html
One thing I want to note here is that the underlying textarea won't get updated when you enter data in the CodeMirror field. For that you need to call CodeMirror's save() method, like below:
var myEditor = CodeMirror.fromTextArea(page_content, {
  mode: "markdown",
  lineWrapping: true,
  lineNumbers: false,
  indentWithTabs: true
});
function updateTextArea() {
myEditor.save();
}
myEditor.on('change', updateTextArea);
Download all the demos from github:
- https://github.com/codemirror/codemirror
Instead of listening to the myEditor change event, you can also update the textarea on form submit.
#syntax_highlighting #syntax #codeMirror #code_folding
Pyflame is a fantastic Python profiler that uses the Linux ptrace system call to collect profiling information. It gives you a flame graph to see where you have messed things up!
One of the great great great things about this library is that you can attach it to a currently running process to profile it. A command like below will do the job:
# Attach to PID 12345 and profile it for 1 second
pyflame -p 12345
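Pyflame prints folded stack samples, so to get the actual graph you typically pipe its output into Brendan Gregg's flamegraph.pl (assuming you have the FlameGraph scripts available on your PATH):
pyflame -p 12345 | flamegraph.pl > profile.svg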
Code and installation instructions can be found on GitHub:
- https://github.com/uber/pyflame
Read more about it:
- https://pyflame.readthedocs.io/en/latest/
#python #pyflame #profiler
If for any reason you have to increase the uwsgi_pass timeout in nginx, you can use
uwsgi_read_timeout:
upstream uwsgicluster {
server 127.0.0.1:5000;
}
.
.
.
include uwsgi_params;
uwsgi_pass uwsgicluster;
uwsgi_read_timeout 3000;
You can also increase the timeout on the uWSGI side. If you are using an ini file, you need to use the harakiri parameter like below:
harakiri = 30
Its value is in seconds.
#uwsgi #nginx #uwsgi_pass #harakiri #timeout #uwsgi_read_timeout
Get the oldest elasticsearch index:
curl 'http://127.0.0.1:9200/_cat/indices' 2>&1 | awk '{print $3}' | grep "logstash_.*" | sort -t- -k2
DO NOT PANIC! Just enjoy it :)
First of all we use curl to get the list of indexes from Elasticsearch. With awk we fetch just the 3rd column of the output, which holds the index names (be careful to filter on your own index name, as there are internal indexes too and we do not want to purge them). The grep command then filters the indexes and outputs only those that start with logstash_; if yours are named differently, change it. Finally, the sort command sorts the result, but it first gets a delimiter via -t. sort -t- will split the column into TWO columns based on the dash (-): if my index name is logstash_data-20180619, it exports two columns, one is logstash_data and the other is 20180619. Now we use -k2 in order to sort based on the second column, which is the date of the index. Since the result is sorted ascending, the first line is the oldest index.
This is how we can get the oldest Elasticsearch index. I use this for maintenance of ES. In case disk space is almost full, I will delete the oldest Elasticsearch index.
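Here is a sketch of that cleanup built on the same pipeline (host and the logstash_ pattern are the ones used above; curl -s is used just to silence the progress meter):
# grab the oldest matching index: first line after sorting ascending by date
oldest=$(curl -s 'http://127.0.0.1:9200/_cat/indices' | awk '{print $3}' | grep "logstash_.*" | sort -t- -k2 | head -1)
echo "deleting oldest index: $oldest"
curl -X DELETE "http://127.0.0.1:9200/$oldest"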
You can even send a Slack notification using cURL too. The possibilities are endless.
Happy bashing :)
#linux #bash #curl #grep #sort #es #elasticsearch #split #awk #script
Simple bash script to take nightly
MongoDB backups:
#!/bin/sh
DIR=`date +%m%d%y`
DEST=/db_backups/$DIR
mkdir $DEST
mongodump -h <your_database_host> -d <your_database_name> -u <username> -p <password> -o $DEST
NOTE: the db_backups folder should already have been created with mkdir /db_backups.
Put it in a crontab for nightly backups. First open the crontab:
sudo crontab -e
Create a new line (entry) in the crontab and paste the cron task below:
45 1 * * * ../../scripts/db_backup.sh
NOTE: here our script is called db_backup.sh; use your own script name here, and make it executable with chmod +x /your/full_path/scripts/db_backup.sh
#mongodb #backup #cron #cronjob #coderwall #mongodump #bash
In order to compress a file with level of 9 (maximum level of compression), you need to
set an ENV variable. In order to not clobber your shell environment, you can use a pipe in your command instead:
tar cvf - /path/to/directory | gzip -9 - > file.tar.gz
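For reference, the environment-variable approach that the pipe avoids would look something like this (note that recent gzip versions treat the GZIP variable as obsolescent):
export GZIP=-9
tar czf file.tar.gz /path/to/directory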
#tar #gzip #compression_level
In Linux bash scripting you can check commands' exit codes and do the appropriate jobs accordingly. For that we will use || and &&.
Let's start with a simple echo command:
echo "Hello everybody"
If for any reason we want to check the exit code of the echo command to see whether it was successful or not, we can chain commands with && and ||:
echo "Hello everybody" && echo "Phew! We're good." || echo "echo command FAILED!"
You can use code block to run multiple commands:
echo "Hello everybody" && {
echo "Phew! We're good."
touch ME
} || {
echo "echo command FAILED!"
touch YOURSELF
}
NOTE: exit code 0 means the command execution was successful, and a non-zero exit code means something nasty happened to the previous command.
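A quick way to see this in action is with the true and false built-ins, which exit with 0 and a non-zero code respectively:
true && echo "exit code was 0" || echo "exit code was non-zero"
false && echo "exit code was 0" || echo "exit code was non-zero"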
There is another way to check the exit code, and that is the $? variable:
cp ME YOURSELF
if [ $? = 0 ] ; then
echo "copy seems OK!"
else
echo "Yuck! File could not get copied! :("
fi
When the cp command is run, $? will hold the exit code of the most recently executed command.
#linux #bash #script #scripting #exit_code