Did you push a very large file into Git? Does everyone yell at you about your commit and your uselessness? Are you a junky punk like me who just ruins things? Oh, I'm kidding...
Because of that big file, cloning the repo again would take a long, long time. Removing the file locally and pushing again would not solve the problem, as that big file is still in Git's history.
If you want to remove the large file from your Git history, so that everyone who clones the repo does not have to wait for that large file, just do as follows:
git filter-branch --tree-filter 'rm -f path/to/your/bigfile' HEAD
git push origin master --force
Note that you should run these commands from the root of the Git repo.
If you need to do this, be sure to keep a copy of your repo around in case something goes wrong.
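A variant worth knowing (my addition, not from the original post) uses --index-filter, which is faster because it rewrites the index without checking out every commit, and --ignore-unmatch so commits that never contained the file do not abort the rewrite:
git filter-branch --index-filter 'git rm --cached --ignore-unmatch path/to/your/bigfile' HEAD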
#git #clone #rm #remove #large_file #blob #rebase #filter_branch
How much do you know about rebooting/shutting down your Linux server?
* First of all, you should be root or a sudoer to be able to reboot the server.
The command below reboots the server immediately:
reboot
In case you want to reboot at a specific time, you can use the shutdown command! Yes, you have to use shutdown for rebooting the server; it has historical reasons.
shutdown -r time "message"
The time parameter can be now, a time in the format hh:mm, or a delay in the format +m, where m stands for minutes; now is a shortcut for +0. The message part will be broadcast to all users who are logged in.
NOTE: the shutdown command is recommended over the reboot command.
So, with the explanation given so far, you can reboot your system after 5 minutes with the command below:
shutdown -r +5 "Server is going down for kernel upgrade. Please save your work ASAP."
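If you schedule a reboot and then change your mind, you can cancel the pending shutdown (my addition, not in the original post):
shutdown -c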
To see the history of the last reboots:
last reboot
To reboot a remote server you can use ssh:
ssh root@server1 /sbin/reboot
#linux #cyberciti #reboot #shutdown #remote_reboot
Create a tar.gz file using the tar command:
tar -zcvf tar-archive-name.tar.gz source-folder-name
-z: Tells tar to filter the archive through gzip (compressing it here when creating, decompressing when extracting).
-c: Create a new archive.
-v: The "v" stands for "verbose"; it lists the files one by one as they are added to the archive.
-f: Tells tar that the next argument is the archive file name to work with.
This will compress the contents of source-folder-name into a tar.gz archive named tar-archive-name.tar.gz.
To extract a tar.gz compressed archive you can use the following command:
tar -zxvf tar-archive-name.tar.gz
-x: Tells tar to extract the files.
This will extract the archive's contents (the source-folder-name directory) into the current directory.
#linux #tar #targz #zip #compress
If you want to see how much actual data is stored in your MyISAM and InnoDB tables, run the query below:
SELECT IFNULL(B.engine,'Total') "Storage Engine",
CONCAT(LPAD(REPLACE(FORMAT(B.DSize/POWER(1024,pw),3),',',''),17,' '),' ',
SUBSTR(' KMGTP',pw+1,1),'B') "Data Size", CONCAT(LPAD(REPLACE(
FORMAT(B.ISize/POWER(1024,pw),3),',',''),17,' '),' ',
SUBSTR(' KMGTP',pw+1,1),'B') "Index Size", CONCAT(LPAD(REPLACE(
FORMAT(B.TSize/POWER(1024,pw),3),',',''),17,' '),' ',
SUBSTR(' KMGTP',pw+1,1),'B') "Table Size"
FROM (SELECT engine,SUM(data_length) DSize,SUM(index_length) ISize,
SUM(data_length+index_length) TSize FROM information_schema.tables
WHERE table_schema NOT IN ('mysql','information_schema','performance_schema')
AND engine IS NOT NULL GROUP BY engine WITH ROLLUP) B,
(SELECT 3 pw) A ORDER BY TSize;
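If you prefer a per-database breakdown instead of a per-engine one, a simpler query along these lines should also work (my addition, not from the original post):
SELECT table_schema AS "Database",
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS "Size (MB)"
FROM information_schema.tables
GROUP BY table_schema;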
#mysql #myisam #innodb #storage_engine #se
You can kill a process with the kill command. But have you thought about what happens to the running process when you issue the kill command? It will do something nasty mid-operation if you do not handle the kill signal gracefully: if you are writing to a file, it will corrupt the file; if you are sending RPC messages, you will break the process in between and drop all the messages.
To handle signals you can use the signal Python module. A sample of the signal handling is available as a gist on GitHub:
- https://gist.github.com/alirezastack/ae4e12a21ccb91264b69e1d14a53c044
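The gist itself is not reproduced here, but a minimal sketch of such a handler could look like this (my own illustration; the real gist may differ):
import signal
import time

shutdown = False

def handle_signal(signum, frame):
    # Flag the main loop to stop, so the current unit of work can finish
    # instead of being killed in the middle of it.
    global shutdown
    shutdown = True

signal.signal(signal.SIGINT, handle_signal)   # CTRL+C
signal.signal(signal.SIGTERM, handle_signal)  # plain `kill <pid>`

while not shutdown:
    # ... do one unit of work here (write a chunk, send one RPC message, ...)
    time.sleep(1)

print('Signal received, cleaned up, exiting gracefully')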
This approach handles SIGINT and SIGTERM and ends the loop gracefully.
To test it, run the script in a terminal, find the PID of the process, and finally kill it:
sudo kill 4773
The above command will issue SIGTERM and the script will handle it gracefully. SIGINT, on the other hand, is issued when you press CTRL+C.
#python #sigint #sigterm #signal #kill
MongoDB data types
String: You know what it is!
Integer: This type is used to store a numerical value. Integers can be 32-bit or 64-bit depending on your server.
Boolean: True/False.
Double: This type is used to store floating point values.
Arrays: [list, of, elements]
Timestamp: This can be handy for recording when a document has been modified or added.
Object: This datatype is used for embedded documents, like {"images": {"a": "ali", "b": "reza"}}.
Null: This type is used to store a Null value.
Date: This datatype is used to store the current date or time in UNIX time format. You can specify your own date/time by creating a Date object and passing day, month, and year into it.
Object ID: This datatype is used to store the document's ID.
There are some more, like Code and Regex, which are used less than the other data types.
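For illustration, a document mixing these types could be inserted from Python like this (a sketch of mine, assuming a local MongoDB and the pymongo package; testdb.users is a made-up collection):
from datetime import datetime
from pymongo import MongoClient

client = MongoClient('mongodb://localhost:27017')
doc = {
    'name': 'ali',                        # String
    'age': 30,                            # Integer
    'active': True,                       # Boolean
    'score': 4.5,                         # Double
    'tags': ['admin', 'dev'],             # Array
    'profile': {'images': {'a': 'ali'}},  # Object (embedded document)
    'deleted_at': None,                   # Null
    'created_at': datetime.utcnow(),      # Date
}
client.testdb.users.insert_one(doc)       # MongoDB assigns an ObjectId to _id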
#mongodb #data_type #mongo #database #collection #object
Kill zombie tasks in docker swarm using the steps below:
1- Run docker ps --no-trunc to find the zombie container id.
2- On the host the container is running on, look up the PID of the docker-containerd-shim process with ps aux | grep <container id>
3- kill <PID>
It sweeps entries of the zombie tasks from swarm. Happy deploying! :)
#docker #swarm #zombie #no_trunk #shim
https://stackoverflow.com/users/22656/jon-skeet
Jon Skeet has hit 1-Million reputation on Stack Overflow. He's a genius, answering 10 questions per day! (By answers I mean high-quality answers.)
Read more about Jon Skeet here:
https://stackoverflow.blog/2018/01/15/thanks-million-jon-skeet/?utm_source=so-owned&utm_medium=hero&utm_campaign=jon-skeet-milestone
#stackoverflow #skeet
Sometimes in coding we see Python code like the line below:
users.get('extra_info').get('browser')
The above code is error prone. The get method is usually used to prevent throwing an error when accessing a non-existent key. For instance, if we try to access extra_info when it does not exist, the code below will throw a KeyError exception:
> users['extra_info']
Traceback (most recent call last):
File "<input>", line 1, in <module>
KeyError: 'extra_info'
In case the users variable does not contain extra_info, get returns None and the second get is applied to that None value (raising an AttributeError). To prevent such an error you need to return {} as the default value:
users.get('extra_info', {}).get('browser')
Those curly braces (an empty dict) will do the job in case the extra_info field is not present in the users variable.
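A quick illustration with a made-up users dict (my addition, not from the original post):
users = {'name': 'ali'}  # no 'extra_info' key

print(users.get('extra_info', {}).get('browser'))  # -> None, no exception
# users.get('extra_info').get('browser')           # AttributeError: 'NoneType' object has no attribute 'get'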
#python #dictionary #get #keyError
It is always best practice to store dates in UTC. Never store dates in your local timezone, or you will go through hell in your programming experience!
Given that you have saved your dates in UTC and would like to convert them into Asia/Tehran time in Python, do as below (the pytz and python-dateutil packages must be installed and imported; utc_date is your stored UTC date string):
import dateutil.parser
import pytz

date_format = '%Y-%m-%d %H:%M:%S'
local_tz = pytz.timezone('Asia/Tehran')
submit_time = dateutil.parser.parse(utc_date)
submit_time = submit_time.replace(tzinfo=pytz.utc).astimezone(local_tz)
output = submit_time.strftime(date_format)
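For example, with a made-up input value (not from the original post):
utc_date = '2018-01-15 12:30:00'
# output -> '2018-01-15 16:00:00', since Tehran is UTC+3:30 in winter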
#python #date #utc #strftime #dateutil #timezone #pytz
To join lines in PyCharm, first select your lines, then use CTRL+Shift+J (⌃⇧J on macOS) to put them on one line.
#pycharm #tricks #join #join_lines
In MySQL, when you insert some data and then delete it, the auto increment value will not decrement; it stays at the number it had reached before you deleted the data. Likewise, in case you use a transaction, execute your insert queries, and then roll back the transaction, the auto increment will be at the last inserted record that was rolled back. One of the ways to reset the auto increment value is to use
truncate, which WILL ERASE ALL YOUR DATA and set the auto increment back to 1:
truncate table your_table;
The above command will reset your auto increment value.
One of the other ways is to use an alter statement:
ALTER TABLE table_name AUTO_INCREMENT = 1;
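To check the current counter before or after resetting it, this should also help (my addition, not from the original post):
SHOW TABLE STATUS LIKE 'your_table'; -- look at the Auto_increment column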
NOTE: for InnoDB you cannot set the auto_increment value lower than or equal to the highest current index.
InnoDB file per table, why?
And what to do if it fails to start afterwards?
Today was a big day from a technical point of view: a MySQL change saved me a lot of storage and will save a great deal of maintenance in the future.
To better explain the issue, I have to talk a little bit about the fundamental behaviour of the MySQL InnoDB storage engine!
In the past, MySQL used MyISAM as its default storage engine. It didn't support transactions, it was not fault tolerant, and data was not reliable when power outages occurred or the server was restarted in the middle of MySQL operations. Nowadays MySQL uses InnoDB as its default storage engine, which comes with transactions, fault tolerance, and more.
In InnoDB, by default, all tables of all databases reside in a single gigantic file called ibdata. As data grows and you alter your tables, the scar gets worse: the ibdata file grows very fast, and when you alter a table the ibdata file does not shrink. For example, we had a single 120GB file on a server where altering a single table with huge data would take a long time and a lot of extra storage; our server ran out of free space.
There is a mechanism in MySQL to configure InnoDB to store each table's data in its own file instead of inside the ibdata file. This mechanism has great advantages, like being able to use OPTIMIZE TABLE to shrink a table's size.
OPTIMIZE TABLE with InnoDB tables locks the table, copies the data into a new clean table (that is why the result is shrunk), drops the original table, and renames the new table to the original name. That is why you should make sure you have double the volume of your table available on disk: if you have a 30GB table, optimizing that table needs at least 30GB of free disk space.
Do not use OPTIMIZE TABLE when you have not configured InnoDB file-per-table. Running OPTIMIZE TABLE against an InnoDB table stored in ibdata1 will make things worse, because here is what it does:
- It makes the table's data and indexes contiguous inside ibdata1.
- It makes ibdata1 grow because the contiguous data is appended to ibdata1.
You can segregate Table Data and Table Indexes from ibdata1 and manage them independently using innodb_file_per_table. That way, only MVCC and Table MetaData would reside in ibdata1.
In the next post I explain how to do exactly that.
#mysql #innodb #myisam #ibdata1 #database #innodb_file_per_table
When innodb_file_per_table is enabled, InnoDB stores data and indexes for each newly created table in a separate .ibd file instead of the system tablespace.
I have to summarize the steps in order to fit everything in one post:
1- Use mysqldump to export your desired databases (call the dump SQLData.sql).
2- Drop all databases (except the mysql schema).
3- service mysql stop
4- Add the following lines to /etc/my.cnf
[mysqld]
innodb_file_per_table
innodb_flush_method=O_DIRECT
innodb_log_file_size=1G
innodb_buffer_pool_size=4G
* Sidenote: Whatever you set for innodb_buffer_pool_size, make sure innodb_log_file_size is 25% of innodb_buffer_pool_size.
5- rm -f /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile*
6- service mysql start
* If mysql does not start, it may be due to insufficient memory. Try to reduce innodb_buffer_pool_size and innodb_log_file_size accordingly.
7- Reload SQLData.sql into mysql
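Once MySQL is back up, you can verify that the setting is actually active (my addition, not part of the original steps):
SHOW VARIABLES LIKE 'innodb_file_per_table'; -- should report ON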
ibdata1 will grow but will only contain table metadata. Each InnoDB table will exist outside of ibdata1.
Suppose you have an InnoDB table named mydb.mytable. If you go into /var/lib/mysql/mydb, you will see two files representing the table:
- mytable.frm (Storage Engine Header)
- mytable.ibd (Home of Table Data and Table Indexes for mydb.mytable)
ibdata1 will never contain InnoDB data and indexes anymore.
With the innodb_file_per_table option in /etc/my.cnf, you can run OPTIMIZE TABLE mydb.mytable; and the file /var/lib/mysql/mydb/mytable.ibd will actually shrink.
#mysql #InnoDB #innodb_file_per_table #optimize_table
wget and ssh session termination
If you SSH into your server and issue a wget command to download a file, for instance, and then your SSH session disconnects, wget will continue its job in the background. Someone might ask: why should it continue, and how can it, when the connection of my terminal went away!?
This is from src/main.c of the wget sources (version 1.19.2):
/* Hangup signal handler. When wget receives SIGHUP or SIGUSR1, it
will proceed operation as usual, trying to write into a log file.
If that is impossible, the output will be turned off. */
A bit further down, the signal handler is installed:
/* Setup the signal handler to redirect output when hangup is
received. */
if (signal(SIGHUP, SIG_IGN) != SIG_IGN)
signal(SIGHUP, redirect_output_signal);
So it looks like wget is not ignoring the HUP signal; instead it installs its own handler and chooses to continue processing, with its output redirected to the log file. The check against SIG_IGN simply makes sure wget does not override an "ignore" disposition inherited from its parent (for example when it was started under nohup).
Source code of wget main.c: http://bzr.savannah.gnu.org/lh/wget/trunk/annotate/head:/src/main.c
#linux #ssh #wget #SIGHUP #SIG_IGN #SIGUSR1
If you have an Axigen mail server, you probably know that it does not have any DB backend and just logs everything into some files, usually in the path below:
/var/opt/axigen/log
There is a script that you can use to parse these logs and see overall behaviour of your mail server and common errors that happen on your server.
To download the log-parser script, head over to the Axigen link below:
- http://www.axigen.com/mail-server/download-tool.php?tool=log-parser.tar.gz
After downloading and extracting it, move the script and its data folder into your mail server. To run it first make sure it is executable:
chmod +x axigen-log-parser.sh
Now to run it you need to give the script a parameter, like parse:
sudo ./axigen-log-parser.sh parse /var/opt/axigen/log/
The above script will generate some files in /var/log/axi-parser/. CD into that path and check the results.
The supported actions are parse, maillog, split, trace, and clean.
#mail #mail_server #axigen #log_parser #log