Tech C**P
Python and Linux instructor and programmer @alirezastack
How to truncate a log file in Linux:


> logfile


or


cat /dev/null > logfile


Both will empty logfile (actually, they truncate it to zero size). If you want to know how long it takes, you may use

dd if=/dev/null of=logfile

(which is the same as dd if=/dev/null > logfile, by the way)

You can also use:


truncate logfile --size 0


to be perfectly explicit. Or, if you do not need to keep the file at all, simply remove it:

rm logfile

(applications usually do recreate a logfile if it doesn't exist already).

However, since logfiles are usually useful, you might want to compress and save a copy. While you could do that with your own script, it is a good idea to at least try using an existing working solution, in this case logrotate, which can do exactly that and is reasonably configurable.
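
As a rough sketch of what a logrotate setup might look like, a drop-in file under /etc/logrotate.d/ could read as follows (the path and retention values are just examples):

/var/log/myapp/*.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}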

#linux #sysadmin #truncate #dd #dev_null #logfile
To see the ongoing connections in the mysql client, and which queries are taking long, use:

SHOW PROCESSLIST;

It will show you a table listing all connections from different hosts (if applicable) together with their Id. You can use this Id to kill a connection that is consuming your server's CPU, memory, etc.:

KILL <id>;
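
If you only care about long-running statements, the same information can be filtered through information_schema instead of eyeballing the full list (the 60-second cutoff below is just an example):

SELECT ID, USER, HOST, DB, TIME, STATE, INFO
FROM information_schema.PROCESSLIST
WHERE COMMAND <> 'Sleep' AND TIME > 60
ORDER BY TIME DESC;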

#mysql #client #kill #processlist #sysadmin #dba #linux
The watch Linux command is used to run a command repeatedly at regular intervals.

The command below is the simplest form of watch:
watch YOUR_COMMAND

For instance:
watch df -h

The command above runs df -h (check disk space) every 2 seconds by default.

In order to change the interval:
watch -n 5 df -h

-n or --interval specifies the update interval in seconds. watch will not allow intervals quicker than 0.1 seconds.

In case you want to see the differences in your command's output, use -d or --differences. It will highlight the parts of the output that change between runs. For example, in our command, if disk space usage changes we will see the new value highlighted.
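
Combining the two options, the command below refreshes every 5 seconds and highlights any change in the df output:

watch -d -n 5 df -h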


SIDE NOTE: the -h flag of df shows disk space in a human-readable format (K, M, G suffixes).


#linux #sysadmin #watch
To make a user a sudoer, use usermod as below:
usermod -aG sudo username

If it errors saying the sudo group does not exist, check the list of existing groups (e.g. in /etc/group); the admin group may be root, as below:
usermod -aG root username
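
Group membership is only picked up at login time, so log in again and verify that it took effect (the username is a placeholder):

groups username
sudo -l -U username    # as root: lists the sudo privileges of that user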

#linux #sysadmin #usermod #sudo #sudoer #root
1. List all Open Files with lsof Command

> lsof
COMMAND  PID  USER  FD   TYPE  DEVICE  SIZE/OFF  NODE    NAME
init     1    root  cwd  DIR   253,0   4096      2       /
init     1    root  rtd  DIR   253,0   4096      2       /
init     1    root  txt  REG   253,0   145180    147164  /sbin/init
init     1    root  mem  REG   253,0   1889704   190149  /lib/libc-2.12.so

The FD column stands for File Descriptor. Common values in this column are:
- cwd current working directory
- rtd root directory
- txt program text (code and data)
- mem memory-mapped file


To get the count of open files you can pipe lsof into wc -l as follows:

lsof | wc -l


2. List Files Opened by a Specific User

lsof -u alireza
COMMAND  PID   USER     FD   TYPE  DEVICE  SIZE/OFF  NODE  NAME
sshd     1838  alireza  cwd  DIR   253,0   4096      2     /
sshd     1838  alireza  rtd  DIR   253,0   4096      2     /
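
You can also limit the listing to a single process by its PID, which is handy when you only care about one daemon (the PID below is just the example from the output above):

lsof -p 1838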

#linux #sysadmin #lsof #wc #file_descriptor
Delete elasticsearch indexes older than 1 month:

#!/bin/bash

last_month=$(date +%Y%m%d --date '1 month ago')
old_es_index="myindex_*-$last_month"
echo "Deleting ES indexes $old_es_index..."
curl -X DELETE "http://localhost:9200/$old_es_index"
echo ''

NOTE: the asterisk in the index pattern matches anything between myindex_ and the date. For example myindex_module1-20180520.
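
To double-check which indexes actually exist before or after running the script, listing them is handy (assuming a default Elasticsearch on localhost:9200):

curl 'http://localhost:9200/_cat/indices?v'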

#linux #sysadmin #bash #script #es #elasticsearch #DELETE #purge
space=$(df -k / | tail -1 | awk '{print $4}')
echo "free disk space is ${space} KB"

if [ "$space" -lt 510000 ]   # less than roughly 500 MB free
then
    echo "$(date) - Purge elasticsearch indexes..."
    curl -X DELETE "http://localhost:9200/your_index_name_*"
    echo ''
else
    echo "$(date) - disk space seems OK"
fi

Put this in a crontab (an example entry is below) and you are good to go.
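
A sketch of the crontab entry, assuming the script is saved as /usr/local/bin/purge_es.sh and made executable (both the path and the schedule are just examples):

*/30 * * * * /usr/local/bin/purge_es.sh >> /var/log/purge_es.log 2>&1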

#linux #sysadmin #bash #script #df #elasticsearch #es
nethogs is used to monitor network traffic per process. You can see which processes use the most bandwidth and hog the network.

Installation on Debian:

apt-get install nethogs

You can give nethogs a network interface to see what's going on under the hood:

nethogs eth0


The output would look something like this:

PID    USER      PROGRAM                  DEV     SENT     RECEIVED
9023   root      python                   eth0    6.083    175.811 KB/sec
20745  root      python                   eth0    2.449     45.715 KB/sec
11934  www-da..  nginx: worker process    eth0  131.580     20.238 KB/sec
25925  root      /usr/bin/python          eth0    3.674     10.090 KB/sec

While nethogs is open, you can press r to sort by RECEIVED or s to sort by SENT traffic. To change the unit the rates are shown in (instead of KB/sec), press m a few times and watch the output.
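
The refresh rate can be tuned from the command line too; as far as I remember, -d sets the delay between updates in seconds:

nethogs -d 5 eth0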

#network #sysadmin #linux #nethogs #nethog #network #eth0
How to check MongoDB replication lag in Icinga2 and get notified when it is over 15 seconds?

We assume here that you have a replica set in place. First download the python script for our nagios plugin:

cd /usr/lib/nagios/plugins
git clone https://github.com/mzupan/nagios-plugin-mongodb.git

Now the Icinga2 part. You first need to create a command for replication lag check:

cd /etc/icinga2/conf.d/commands

Create a new file replication_lag.conf:

object CheckCommand "check_replication_lag" {
  import "plugin-check-command"
  command = [ PluginDir + "/nagios-plugin-mongodb/check_mongodb.py", "-A", "replication_lag" ]
  arguments = {
    "-H" = "$mongo_host$"
    "-P" = "$mongo_port$"
  }
}
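
To actually get alerted at the 15-second mark mentioned above, warning and critical thresholds can be passed to the plugin as well. Assuming the plugin's usual -W/-C flags (the 30-second critical value is just an example), the command line in the object becomes:

command = [ PluginDir + "/nagios-plugin-mongodb/check_mongodb.py", "-A", "replication_lag", "-W", "15", "-C", "30" ]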


Create a new file in the services folder, also called replication_lag.conf:

apply Service for (display_name => config in host.vars.replication) {
  import "generic-service"
  check_command = "check_replication_lag"
  vars += config
  assign where host.vars.replication
}


This service gets applied wherever replication is found in the host config. Now add the part below to the configuration of the secondary MongoDB hosts:

vars.replication["Secondary DB"] = {
  mongo_host = "slave.example.com"
  mongo_port = 27017
}

#sysadmin #icinga2 #mongodb #replication #replication_lag #nagios_plugin
I have a script that checks a source folder for new files; if there are files in the source folder, it moves them to a destination.

The problem I encountered recently is that the files are huge and another process may still be in the middle of copying them into the source folder, so my script would try to move an incomplete file to the destination. Let's say a file is 4 GB in size and only 1 GB of it has been copied so far. I have to wait until the full 4 GB is there and no other process has a handle on the file; only then can I safely move it.

You can use the lsof command to check whether any process is still using the source file:


if [[ `lsof -- /var/my-folder/my-big-file.tar.gz` ]]
then
echo "File is being used by a process."
exit 1
fi


NOTE: you can pass the file directly to lsof after --, or you can use grep as follows:


lsof | grep /var/my-folder/my-big-file.tar.gz


NOTE2: if you are in a loop use break instead of exit.

NOTE3: if you get command not found, install it using apt-get install lsof
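
Putting it together, a minimal sketch of such a mover script (the folder paths and the .tar.gz pattern are just examples):

#!/bin/bash
# Move completed files from the source folder to the destination,
# skipping anything that some process still has open.
src=/var/my-folder
dst=/data/archive

for f in "$src"/*.tar.gz
do
    [ -e "$f" ] || continue             # no matching files at all
    if lsof -- "$f" > /dev/null 2>&1    # exit status 0 means the file is open
    then
        echo "Skipping $f: still in use."
        continue
    fi
    mv "$f" "$dst"/
done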

#linux #sysadmin #lsof #grep
How to limit bandwidth of rsync linux command?

rsync is an awesome tool for moving files via ssh to another server, or from a server to the local system, like scp does, but far better in terms of features and its incremental copy mechanism. To limit bandwidth, use --bwlimit as below:

The general form:

rsync --bwlimit=<kb/second> <source> <dest>

An example limiting the transfer to roughly 2 MB per second:

rsync --bwlimit=2000 /backup/folder user@example-host:/remote/backup/folder/
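
In practice you would typically combine it with the archive, verbose and compression flags; a common invocation might look like this (host and paths are just examples):

rsync -avz --partial --bwlimit=2000 /backup/folder user@example-host:/remote/backup/folder/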

#linux #sysadmin #rsync #bandwidth
How to remove a user on CentOS 6?

userdel my-target-user

In case you want to remove the user's associated files (home directory and mail spool) too, use the -r parameter:

userdel -r my-target-user

Yes! As simple as that.

#linux #sysadmin #userdel #user #centos
How to SSH login without password?

You want to use Linux and OpenSSH to automate your tasks. Therefore you need an automatic login from host A / user a to host B / user b. You don't want to enter any passwords, because you want to call ssh from within a shell script.

How to do it?
First log in on A as user a and generate a pair of authentication keys. Do not enter a passphrase:

a@A:~> ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/a/.ssh/id_rsa):
Created directory '/home/a/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/a/.ssh/id_rsa.
Your public key has been saved in /home/a/.ssh/id_rsa.pub.
The key fingerprint is:
3e:4f:05:79:3a:9f:96:7c:3b:ad:e9:58:37:bc:37:e4 a@A


Now use ssh to create a directory ~/.ssh as user b on B. (The directory may already exist, which is fine):

a@A:~> ssh b@B mkdir -p .ssh
b@B's password:


Finally append a's new public key to b@B:.ssh/authorized_keys and enter b's password one last time:

a@A:~> cat .ssh/id_rsa.pub | ssh b@B 'cat >> .ssh/authorized_keys'
b@B's password:


From now on you can log into B as b from A as a without password:

a@A:~> ssh b@B
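
If B still asks for a password after this, it is usually a permissions problem: sshd ignores keys that live in directories that are too open. Tightening the permissions on B normally fixes it:

a@A:~> ssh b@B 'chmod 700 .ssh; chmod 600 .ssh/authorized_keys'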


#linux #sysadmin #ssh #password_less #ssh_login
How to get default gateway IP address?

ip route | grep default


The output would be something like below:

default via 192.168.1.1 dev eth0 onlink


NOTE: the interface name and gateway address can be different on your system.
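
If you only need the address itself, for example inside a script, awk can print just the field that follows "via":

ip route | awk '/^default/ {print $3}'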

#linux #sysadmin #ip #route #default_gateway
Virt-builder is a tool for quickly building new virtual machines. You can build a variety of VMs for local or cloud use, usually within a few minutes or less. Virt-builder also has many ways to customize these VMs. Everything is run from the command line and nothing requires root privileges, so automation and scripting is simple.

To see available virtual machines:

virt-builder --list


Sample command to create a debian-9 image:

sudo virt-builder debian-9 --size=50G --hostname prod.example.com --network --install network-manager --root-password password:YOUR_PASS


The above command creates a Debian 9 image with a disk size of 50 GB and sets the hostname to prod.example.com. --network enables networking on the guest and --install installs packages on the target OS. The last parameter sets the root password to YOUR_PASS.

To read more about the extra parameters:
- http://libguestfs.org/virt-builder.1.html

#linux #sysadmin #virt_builder #debian #image
You can login to a server without entering a password by a simple command as below:

ssh-copy-id -p 22 USERNAME@YOUR_HOST_IP


By issuing the above command, your public key is appended to ~/.ssh/authorized_keys on the server after you enter the password once. That is all you need.

#linux #sysadmin #ssh #passwordless_login #ssh_copy_id #authorized_keys #public_key
How do I store & manage passwords for servers and other credentials in Linux? gopass

Password management should be simple and follow Unix philosophy. With pass, each secret lives inside of a gpg encrypted file whose filename is the title of the website or resource that requires the secret. These encrypted files may be organized into meaningful folder hierarchies, copied from computer to computer, and, in general, manipulated using standard command line file management utilities.

gopass is a rewrite of the pass password manager in Go with the aim of making it cross-platform and adding additional features. Our target audience are professional developers and sysadmins (and especially teams of those) who are well versed with a command line interface. One explicit goal for this project is to make it more approachable to non-technical users. We go by the UNIX philosophy and try to do one thing and do it well, providing a stellar user experience and a sane, simple interface:

- https://github.com/gopasspw/gopass
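
A few typical gopass commands, assuming it is installed and initialized with your GPG key (the entry names are just examples):

gopass insert servers/example-host/root    # store a new secret
gopass show servers/example-host/root      # print it back
gopass ls                                  # browse the whole store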

#unix #linux #sysadmin #pass #gopass #password_management #golang #github
How to copy a file in the same path with a different name?

Well you just need to use curly braces, {}, to make it happen:

What you usually MIGHT do:

cp /usr/local/share/lib/sample.conf /usr/local/share/lib/settings.conf



You can take the smarter path instead:

cp /usr/local/share/lib/{sample,settings}.conf


NOTE: if the extensions of the two files differ, you can put them inside the curly braces as well, e.g. {sample.conf,settings.cfg}.
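
A related trick: leaving the first alternative empty gives you a quick backup copy of a file (the path is just an example):

cp /usr/local/share/lib/sample.conf{,.bak}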

#linux #sysadmin #cp #copy #trick