How to check whether name server (NS) changes have propagated in the Domain Name System (DNS)?
Occasionally sysadmins/devops migrate their server to a new one with a new IP address, due to server lag, data loss, you name it. At that point the domain name still points to the old IP address, and it sometimes takes a couple of days for the change to fully propagate.
There are some network tools like nslookup/dig that can help in checking DNS propagation. Let's say your name server (NS) is ns1.example.com (in the examples below, ns1-5-61-24-199.parsdev.net):
nslookup - ns1-5-61-24-199.parsdev.net
At the prompt, type your domain and hit enter:
nillkin24.ir
If it resolves to what you expected, then it works. It should give you something like:
Server: ns1-5-61-24-199.parsdev.net
Address: 5.61.24.199#53
NOTE: it may still take a while to propagate to the rest of the internet; that's out of your control.
Using dig:
dig @ns1-5-61-24-199.parsdev.net nillkin24.ir
It prints lots of information; you should see the ANSWER SECTION with a result like below:
;; ANSWER SECTION:
nillkin24.ir. 14400 IN A 5.61.24.199
If you see the correct IP address, your name server is serving the new record correctly.
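To spot-check what the rest of the internet currently sees, you can also point dig at a public resolver instead of your own NS; a quick sketch using Google's 8.8.8.8 (any public resolver works the same way):
# once propagation reaches this resolver, it should print the new IP (5.61.24.199)
dig @8.8.8.8 nillkin24.ir +short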
#sysadmin #nslookup #dig #ns #dns #name_server
You are on a server and all of a sudden you need your public
IP address. You can do it using cURL from the terminal:
$ curl ifconfig.co
142.17.150.17
The website will just spit out the IP address with no bullshit around it! That makes it especially handy for sysadmins.
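Because the output is just the bare address, it drops straight into scripts; a tiny sketch (the variable name is arbitrary, and it assumes ifconfig.co is reachable from the box):
# store the public IP for later use
public_ip=$(curl -s ifconfig.co)
echo "public IP is $public_ip"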
#linux #sysadmin #curl #ifconfig #ifconfigco
View open ports without
netstat or other tools:
# Get all open ports in hex format
declare -a open_ports=($(cat /proc/net/tcp | grep -v "local_address" | awk '{ print $2 }' | cut -d':' -f2))
# Show all open ports and decode hex to dec
for port in ${open_ports[*]}; do echo $((0x${port})); done
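The same trick extends to IPv6 listeners via /proc/net/tcp6; a small sketch that reads both tables and prints a sorted, de-duplicated list of decoded ports:
# decode local ports from both the IPv4 and IPv6 TCP tables
for f in /proc/net/tcp /proc/net/tcp6; do
    grep -v "local_address" "$f" | awk '{ print $2 }' | cut -d':' -f2
done | while read -r port; do echo $((0x${port})); done | sort -nu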
#linux #ports #netstat #tcp #open_ports #sysadmin
How to truncate a log file in
Linux?
:> logfile
or
cat /dev/null > logfile
Both will empty logfile (actually they will truncate it to zero size), the second being the more eloquent variant. If you want to know how long it "takes", you may use
dd if=/dev/null of=logfile
(which is the same as
dd if=/dev/null > logfile
, by the way). You can also use:
truncate logfile --size 0
to be perfectly explicit or, if you do not want the file at all,
rm logfile
(applications usually do recreate a logfile if it doesn't exist already).
However, since logfiles are usually useful, you might want to compress and save a copy. While you could do that with your own script, it is a good idea to at least try using an existing working solution, in this case logrotate, which can do exactly that and is reasonably configurable.
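If you do want to roll your own before (or instead of) setting up logrotate, a minimal sketch that keeps a dated, compressed copy and then truncates the live file (the path is just a placeholder):
# compress a copy, then truncate the original in place
gzip -c /var/log/myapp.log > /var/log/myapp.log.$(date +%F).gz && : > /var/log/myapp.log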
#linux #sysadmin #truncate #dd #dev_null #logfile
To get the ongoing processes in the mysql client and see which queries are taking longer, use:
SHOW PROCESSLIST;
It will show you a table with a list of all connections from different hosts (if applicable) and their process Id (PID). You can use this number to kill a process that is eating up your server's CPU, memory, etc.:
KILL <pid>;
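If you would rather check this from the shell (e.g. from a cron job), the same information is exposed via information_schema; a small sketch, assuming the mysql client can connect non-interactively:
# list queries that have been running for more than 60 seconds
mysql -e "SELECT ID, USER, HOST, TIME, INFO FROM information_schema.PROCESSLIST WHERE COMMAND = 'Query' AND TIME > 60;"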
#mysql #client #kill #processlist #sysadmin #dba #linux
watch
is a Linux command used to run another command at regular intervals. The command below is the simplest form of watch:
watch YOUR_COMMAND
For instance:
watch df -h
The command above runs
df -h
(check disk space) every 2 seconds by default. In order to change the interval:
watch -n 5 df -h
-n
or --interval
specifies the update interval in seconds. The command will not allow an interval quicker than 0.1 seconds. In case you want to see the differences in your command output, use
-d
or --differences
. It will highlight when part of your command output changes. For example, in our command, if the disk space usage changes we will see the new result highlighted.
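The two flags combine nicely; for example, to re-check disk space every 5 seconds and highlight whatever changed since the last run:
watch -d -n 5 df -h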
SIDE NOTE:
-h
in the df
command shows disk space in a human-readable format (K, M, G) instead of raw 1K blocks.
#linux #sysadmin #watch
1. List all Open Files with lsof Command
> lsof
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
init 1 root cwd DIR 253,0 4096 2 /
init 1 root rtd DIR 253,0 4096 2 /
init 1 root txt REG 253,0 145180 147164 /sbin/init
init 1 root mem REG 253,0 1889704 190149 /lib/libc-2.12.so
The FD column stands for File Descriptor. The values in this column are as below:
- cwd: current working directory
- rtd: root directory
- txt: program text (code and data)
- mem: memory-mapped file
To get the count of open files you can use wc -l with lsof as follows:
lsof | wc -l
2. List User Specific Opened Files
lsof -u alireza
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sshd 1838 alireza cwd DIR 253,0 4096 2 /
sshd 1838 alireza rtd DIR 253,0 4096 2 /
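The wc -l trick from above combines with the user filter too, e.g. to count how many files a single user has open:
lsof -u alireza | wc -l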
#linux #sysadmin #lsof #wc #file_descriptor
Delete
elasticsearch indexes older than 1 month:
#!/bin/bash
# date from one month ago, e.g. 20180520
last_month=`date +%Y%m%d --date '1 month ago'`
# index pattern to purge, e.g. myindex_*-20180520
old_es_index="myindex_*-$last_month"
echo "Deleting ES indexes $old_es_index..."
curl -X DELETE "http://localhost:9200/$old_es_index"
echo ''
NOTE: the asterisk in the index pattern will match anything between myindex_ and the date (e.g. -20180520). For example myindex_module1-20180520.
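Before deleting anything, it can be worth listing what the pattern actually matches; a quick sketch using the _cat API (assuming Elasticsearch listens on localhost:9200, as in the script above):
curl 'http://localhost:9200/_cat/indices/myindex_*-20180520?v'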
#linux #sysadmin #bash #script #es #elasticsearch #DELETE #purge
The script below purges Elasticsearch indexes whenever the root filesystem has less than roughly 500 MB free:
# free space (in KB) left on the root filesystem
space=$(df -k / | tail -1 | awk '{print $4}')
echo "free disk space is $space"
if [ "$space" -lt 510000 ]
then
    echo "$(date) - Purge elasticsearch indexes..."
    curl -X DELETE "http://localhost:9200/your_index_name_*"
    echo ''
else
    echo "$(date) - disk space seems OK"
fi
Put this in a crontab and you are good to go.
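For example, a crontab entry that runs it every 10 minutes could look like this (the script path, schedule and log destination are just placeholders):
*/10 * * * * /usr/local/bin/purge_es_when_low.sh >> /var/log/purge_es.log 2>&1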
#linux #sysadmin #bash #script #df #elasticsearch #es
nethogs
is used to monitor network traffic. You can see which processes use the most bandwidth and hog the network. Installation on Debian:
apt-get install nethogs
You can give nethogs a network interface to see what's going on under the hood:
nethogs eth0
The output would look something like:
PID USER PROGRAM DEV SENT RECEIVED
9023 root python eth0 6.083 175.811 KB/sec
20745 root python eth0 2.449 45.715 KB/sec
11934 www-da.. nginx: worker process eth0 131.580 20.238 KB/sec
25925 root /usr/bin/python eth0 3.674 10.090 KB/sec
When nethogs is open, you can press r to sort based on RECEIVED or press s to sort based on SENT packets. To change the unit the rates are shown in (KB/sec, KB, B, MB), press m multiple times and watch the output.
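The refresh rate can also be set from the command line; a small sketch (if I recall correctly, -d takes the refresh delay in seconds):
nethogs -d 5 eth0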
#network #sysadmin #linux #nethogs #nethog #network #eth0
How to check
MongoDB replication lag in Icinga2 and get notified when it is over 15 seconds? We assume here that you have a replica set in place. First, download the Python script for our Nagios plugin:
cd /usr/lib/nagios/plugins
git clone git://github.com/mzupan/nagios-plugin-mongodb.git
Now the Icinga2 part. You first need to create a command for the replication lag check:
cd /etc/icinga2/conf.d/commands
Create a new file replication_lag.conf:
object CheckCommand "check_replication_lag" {
import "plugin-check-command"
command = [ PluginDir + "/nagios-plugin-mongodb/check_mongodb.py", "-A", "replication_lag" ]
arguments = {
"-H" = "$mongo_host$"
"-P" = "$mongo_port$"
}
}
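Note that nothing above actually sets the 15-second threshold mentioned in the title. If I remember correctly, check_mongodb.py accepts -W/-C warning/critical options, so a hedged variant of the command line in the block above would be:
command = [ PluginDir + "/nagios-plugin-mongodb/check_mongodb.py", "-A", "replication_lag", "-W", "10", "-C", "15" ]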
Create a new file in the services folder called replication_lag.conf:
apply Service for (display_name => config in host.vars.replication) {
import "generic-service"
check_command = "check_replication_lag"
vars += config
assign where host.vars.replication
}
This service gets enabled wherever it finds replication in the host config. Now add the block below to the configuration of the secondary MongoDB hosts:
vars.replication["Secondary DB"] = {
    mongo_host = "slave.example.com"
    mongo_port = 27017
}
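Before reloading Icinga2, it can help to run the plugin by hand against the same host and port to make sure it returns a sensible status line (this assumes the script is executable and its Python dependencies, e.g. pymongo, are installed):
/usr/lib/nagios/plugins/nagios-plugin-mongodb/check_mongodb.py -H slave.example.com -P 27017 -A replication_lag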
#sysadmin #icinga2 #mongodb #replication #replication_lag #nagios_plugin