How to check whether name servers (NS) are propagated in the Domain Name System (DNS)?
Occasionally sysadmins/devops migrate their server to a new server with a new IP address, due to server lag, data loss, you name it. Until the DNS records are updated everywhere, the domain name still points to the old IP address, and it sometimes takes a couple of days for the change to fully propagate.
There are some network tools like nslookup/dig that can help in checking DNS propagation. Let's say your name server (NS) is ns1.example.com (in the example below it is ns1-5-61-24-199.parsdev.net). Start nslookup in interactive mode against that name server:
nslookup - ns1-5-61-24-199.parsdev.net
At the prompt, type your domain and hit Enter:
nillkin24.ir
If it resolves to what you expect, then it works. It should give you something like:
Server: ns1-5-61-24-199.parsdev.net
Address: 5.61.24.199#53
NOTE: it may still take a while to propagate to the rest of the internet; that's out of your control.
Using dig:
dig @ns1-5-61-24-199.parsdev.net nillkin24.ir
It prints lots of information; you should see an ANSWER SECTION with a result like the one below:
;; ANSWER SECTION:
nillkin24.ir. 14400 IN A 5.61.24.199
If you see the correct IP address, it means that it has been properly cached by remote name servers.
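To spot-check propagation on several public resolvers at once, a small shell loop helps (a sketch; the resolver IPs are well-known public DNS servers and the domain is the one from this example):
# Compare answers from a few public resolvers
for r in 8.8.8.8 1.1.1.1 9.9.9.9; do
    echo -n "$r -> "
    dig +short @$r nillkin24.ir A
done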
#sysadmin #nslookup #dig #ns #dns #name_server
You are on a server and all of a sudden you need your public IP address. You can get it using cURL and the terminal:
$ curl ifconfig.co
142.17.150.17
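If the box has both IPv4 and IPv6 connectivity, you can force one or the other with curl's -4/-6 flags (a quick sketch against the same service):
$ curl -4 ifconfig.co
$ curl -6 ifconfig.co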
The website will just spit out the IP address with no bullshit around it! It is especially handy for sysadmins.
#linux #sysadmin #curl #ifconfig #ifconfigco
View open ports without netstat or any other tool:
# Get all open ports in hex format
declare -a open_ports=($(cat /proc/net/tcp | grep -v "local_address" | awk '{ print $2 }' | cut -d':' -f2))
# Show all open ports and decode hex to dec
for port in ${open_ports[*]}; do echo $((0x${port})); done
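The list has one entry per socket, so ports can repeat; the same loop piped through sort dedupes and orders them (a small variation on the above):
for port in ${open_ports[*]}; do echo $((0x${port})); done | sort -un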
#linux #ports #netstat #tcp #open_ports #sysadmin
How to truncate a log file in Linux?
:> logfile
or
cat /dev/null > logfile
If you want to be more eloquent, use the second form; either will empty logfile (actually, they will truncate it to zero size). If you want to know how long it "takes", you may use:
dd if=/dev/null of=logfile
(which is the same as dd if=/dev/null > logfile, by the way).
You can also use:
truncate logfile --size 0
to be perfectly explicit. Or, if you don't want the logfile anymore, you can simply rm logfile (applications usually recreate a logfile if it doesn't exist already).
However, since logfiles are usually useful, you might want to compress and save a copy. While you could do that with your own script, it is a good idea to at least try using an existing working solution, in this case logrotate, which can do exactly that and is reasonably configurable.
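For reference, a minimal logrotate config (a sketch; the path and retention are assumptions, not from the original) that keeps four compressed weekly copies could look like:
/var/log/myapp/logfile {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}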
#linux #sysadmin #truncate #dd #dev_null #logfile
To see ongoing processes in the mysql client and check which queries are taking longer, use:
SHOW PROCESSLIST;
It will show you a table with a list of all connections from different hosts (if applicable) and their PID number. You can use this number to kill a process that consumes your server's CPU, memory, etc.:
KILL <pid>;
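The same check also works non-interactively from a shell (a quick sketch; adjust the credentials to your setup):
$ mysql -u root -p -e 'SHOW FULL PROCESSLIST\G'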
#mysql #client #kill #processlist #sysadmin #dba #linux
The watch linux command is used to run a command at regular intervals. The command below is the simplest form of watch:
watch YOUR_COMMAND
For instance:
watch df -h
The command above runs df -h (check disk space) every 2 seconds by default. In order to change the interval:
watch -n 5 df -h
-n or --interval specifies the update interval in seconds. The command will not allow an interval quicker than 0.1 seconds. In case you want to see the differences in your command's output, use -d or --differences. It will highlight the parts of the output that change. For example, in our command, if disk space usage changes we will see the new result highlighted.
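The two flags combine nicely, e.g. refreshing every 5 seconds with changes highlighted:
watch -n 5 -d df -h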
SIDE NOTE: -h in the df command shows disk space in a human-readable format (K, M, G units).
#linux #sysadmin #watch
1. List all Open Files with lsof Command
> lsof
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
init 1 root cwd DIR 253,0 4096 2 /
init 1 root rtd DIR 253,0 4096 2 /
init 1 root txt REG 253,0 145180 147164 /sbin/init
init 1 root mem REG 253,0 1889704 190149 /lib/libc-2.12.so
The FD column stands for File Descriptor. This column's values are as below:
- cwd: current working directory
- rtd: root directory
- txt: program text (code and data)
- mem: memory-mapped file
To get the count of open files you can use wc -l with lsof as follows:
lsof | wc -l
2. List User Specific Opened Files
lsof -u alireza
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sshd 1838 alireza cwd DIR 253,0 4096 2 /
sshd 1838 alireza rtd DIR 253,0 4096 2 /
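A couple of other everyday invocations in the same spirit (the process name and port below are just examples):
lsof -c sshd    # files opened by processes whose names start with sshd
lsof -i :22     # network files (sockets) involving port 22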
#linux #sysadmin #lsof #wc #file_descriptor
Delete elasticsearch indexes older than 1 month:
#!/bin/bash
last_month=`date +%Y%m%d --date '1 month ago'`
old_es_index="myindex_*-$last_month"
echo "Deleting ES indexes $old_es_index..."
curl -X DELETE "http://localhost:9200/$old_es_index"
echo ''
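Before deleting, you can dry-run the pattern inside the same script and see which indexes it matches (a quick check via the _cat API, reusing $old_es_index):
curl "http://localhost:9200/_cat/indices/$old_es_index?v"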
NOTE: the asterisk in the curl command matches anything between myindex_ and the date suffix, for example myindex_module1-20180520.
#linux #sysadmin #bash #script #es #elasticsearch #DELETE #purge
#!/bin/bash
# Free space (in 1K blocks) left on the root filesystem
space=$(df -k / | tail -1 | awk '{print $4}')
echo "free disk space is $space"
# Purge when less than ~500 MB is free
if [ "$space" -lt 510000 ]
then
    echo "$(date) - Purge elasticsearch indexes..."
    curl -X DELETE "http://localhost:9200/your_index_name_*"
    echo ''
else
    echo "$(date) - disk space seems OK"
fi
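A matching crontab entry (a sketch; the script path and 10-minute schedule are assumptions) would be:
*/10 * * * * /usr/local/bin/purge_es_on_low_disk.sh >> /var/log/purge_es.log 2>&1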
Put this in a crontab and you are good to go.
#linux #sysadmin #bash #script #df #elasticsearch #es
nethogs is used to monitor network traffic. You can see which processes use the most bandwidth and hog the network. Installation on Debian:
apt-get install nethogs
You can give nethogs a network interface to see what's going on under the hood:
nethogs eth0
The output would look something like:
PID USER PROGRAM DEV SENT RECEIVED
9023 root python eth0 6.083 175.811 KB/sec
20745 root python eth0 2.449 45.715 KB/sec
11934 www-da.. nginx: worker process eth0 131.580 20.238 KB/sec
25925 root /usr/bin/python eth0 3.674 10.090 KB/sec
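nethogs also accepts a refresh delay in seconds (a small example; 5 is arbitrary):
nethogs -d 5 eth0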
When nethogs is open, you can press r in order to sort based on RECEIVED or press s to sort based on SENT packets. To change the unit the rates are shown in (KB/sec by default), press m multiple times and watch the output.
#network #sysadmin #linux #nethogs #nethog #network #eth0
How to check MongoDB replication lag in Icinga2 and get notified when it is over 15 seconds?
We assume here that you have a replica set in place. First download the python script for our nagios plugin:
cd /usr/lib/nagios/plugins
git clone git://github.com/mzupan/nagios-plugin-mongodb.git
Now the Icinga2 part. You first need to create a command for the replication lag check:
cd /etc/icinga2/conf.d/commands
Create a new file replication_lag.conf:
object CheckCommand "check_replication_lag" {
import "plugin-check-command"
command = [ PluginDir + "/nagios-plugin-mongodb/check_mongodb.py", "-A", "replication_lag" ]
arguments = {
"-H" = "$mongo_host$"
"-P" = "$mongo_port$"
}
}
Create a new file in the services folder called replication_lag.conf:
apply Service for (display_name => config in host.vars.replication) {
import "generic-service"
check_command = "check_replication_lag"
vars += config
assign where host.vars.replication
}
This service gets enabled wherever it finds replication in the host config. Now, in the secondary MongoDB hosts' configuration, add the part below:
vars.replication["Secondary DB"] = {
    mongo_host = "slave.example.com"
    mongo_port = 27017
}
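To wire in the 15-second threshold from the title, you can first test the plugin by hand; its -W/-C flags set the warning/critical lag in seconds (host, port and values here are assumptions, so verify the flags against the plugin's --help):
/usr/lib/nagios/plugins/nagios-plugin-mongodb/check_mongodb.py -H slave.example.com -P 27017 -A replication_lag -W 15 -C 30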
#sysadmin #icinga2 #mongodb #replication #replication_lag #nagios_plugin
I have a script that checks a source folder for new files; if there are files in the source folder, it moves them to a destination.
The problem I encountered recently was that the files are huge, and a file may still be in the middle of being copied into the source folder by another process, so my script would try to move an incomplete file to the destination. Let's say the file is 4GB in size and only 1GB of it has been copied so far. I have to wait until the file is the full 4GB and no other handler is using it; only then can I safely move the file.
You can use the lsof command in order to check which processes are using the source file:
if [[ `lsof -- /var/my-folder/my-big-file.tar.gz` ]]
then
echo "File is being used by a process."
exit 1
fi
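If you would rather wait for the copy to finish than bail out, a small polling loop works too (a sketch; the path and the 10-second interval are just examples). lsof exits non-zero when nobody holds the file open, which ends the loop:
while lsof -- /var/my-folder/my-big-file.tar.gz > /dev/null 2>&1
do
    sleep 10   # still in use; check again shortly
done
mv /var/my-folder/my-big-file.tar.gz /var/destination/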
NOTE: you can give the file directly to lsof using --, or you can use the grep command as follows:
lsof | grep /var/my-folder/my-big-file.tar.gz
NOTE2: if you are in a loop, use break instead of exit.
NOTE3: if you get "command not found", install it using apt-get install lsof.
#linux #sysadmin #lsof #grep
How to limit bandwidth of the rsync linux command?
rsync is an awesome tool for moving files via ssh to another server, or from a server to the local system, like what scp does but far better in terms of features and its incremental copy mechanism. To limit bandwidth, use --bwlimit like below.
The general form:
rsync --bwlimit=<kb/second> <source> <dest>
An example of usage with a 2MB per second transfer limit:
rsync --bwlimit=2000 /backup/folder user@example-host:/remote/backup/folder/
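In practice you will usually combine it with the usual archive/compress flags (a sketch; host and paths are from the example above):
rsync -avz --bwlimit=2000 /backup/folder user@example-host:/remote/backup/folder/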
#linux #sysadmin #rsync #bandwidth
Changing java security restriction for “Network connect error” issue in KVM:
https://xiaoxiaoke.wordpress.com/2015/10/06/changing-java-security-restriction-for-network-connect-error-issue-in-kvm/
I am using KVM to manage the servers in a rack. When launching the session jnlp files with javaws, I saw the “Network connect error” message. This is due to the default java security re…
#linux #devops #sysadmin #kvm #hp
How to SSH login without password?
You want to use Linux and OpenSSH to automate your tasks. Therefore you need an automatic login from host A / user a to Host B / user b. You don't want to enter any passwords, because you want to call ssh from a within a shell script.
How to do it?
First log in on A as user a and generate a pair of authentication keys. Do not enter a passphrase:
a@A:~> ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/a/.ssh/id_rsa):
Created directory '/home/a/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/a/.ssh/id_rsa.
Your public key has been saved in /home/a/.ssh/id_rsa.pub.
The key fingerprint is:
3e:4f:05:79:3a:9f:96:7c:3b:ad:e9:58:37:bc:37:e4 a@A
Now use ssh to create the directory ~/.ssh as user b on B. (The directory may already exist, which is fine):
a@A:~> ssh b@B mkdir -p .ssh
b@B's password:
Finally append a's new public key to b@B:.ssh/authorized_keys and enter b's password one last time:
a@A:~> cat .ssh/id_rsa.pub | ssh b@B 'cat >> .ssh/authorized_keys'
b@B's password:
From now on you can log into B as b from A as a without password:
a@A:~> ssh b@B
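SIDE NOTE: most modern systems ship ssh-copy-id, which wraps the mkdir and append steps into one command (same end result):
a@A:~> ssh-copy-id b@B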
#linux #sysadmin #ssh #password_less #ssh_login