Tech C**P
Python and Linux instructor and programmer: @alirezastack
Kill zombie tasks in Docker Swarm using the steps below:

1- Run docker ps --no-trunc to find the zombie container ID.

2- On the host where the container is running, look up the PID of its docker-containerd-shim process with ps aux | grep <container id>

3- kill <PID>

This sweeps the zombie task entries out of the swarm. Happy deploying! :)
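Step 2 is easy to script. This is a small sketch of extracting the shim PID from ps aux output; the process lines and container IDs below are made up for illustration:

```python
# Hypothetical `ps aux` output containing two docker-containerd-shim processes
ps_output = """\
root      4242  0.0  0.1 123456  7890 ?  Sl  10:00  0:00 docker-containerd-shim abcdef123456
root      5151  0.0  0.1 123456  7890 ?  Sl  10:05  0:00 docker-containerd-shim 987654fedcba"""

def shim_pid(ps_output, container_id):
    """Return the PID of the shim process serving container_id, or None."""
    for line in ps_output.splitlines():
        if 'docker-containerd-shim' in line and container_id in line:
            return int(line.split()[1])  # PID is the second column of ps aux
    return None

print(shim_pid(ps_output, 'abcdef123456'))  # 4242
```

In real use you would feed this the output of ps aux and pass the resulting PID to kill.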

#docker #swarm #zombie #no_trunc #shim
Sometimes we come across Python code like this:

users.get('extra_info').get('browser')


The above code is error prone. The get method is normally used to avoid raising an error when accessing a non-existent key. For instance, if we try to access extra_info when it does not exist, the code below throws a KeyError exception:

>>> users['extra_info']

Traceback (most recent call last):
File "<input>", line 1, in <module>
KeyError: 'extra_info'


If users does not contain extra_info, the first get returns None, and the second get is then called on that None value, raising an AttributeError. To prevent this, pass {} as the default value:

users.get('extra_info', {}).get('browser')

The empty dict does the job when the extra_info key is not present in users.
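A quick demonstration of both cases, with a made-up users dict:

```python
users = {'name': 'Alice'}  # hypothetical dict with no 'extra_info' key

# Without a default, the first get returns None and the second get blows up:
# users.get('extra_info').get('browser')  # AttributeError on NoneType

# With {} as the default, the chain degrades gracefully to None:
missing = users.get('extra_info', {}).get('browser')
print(missing)  # None

# When the key does exist, the value comes through as usual:
users['extra_info'] = {'browser': 'Firefox'}
present = users.get('extra_info', {}).get('browser')
print(present)  # Firefox
```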


#python #dictionary #get #keyError
It is always best practice to store dates in UTC. Never store dates in your local timezone; it will make your programming life miserable down the road!

Given that you have stored your dates in UTC and want to convert them to Asia/Tehran time in Python, do the following:

import dateutil.parser
import pytz

date_format = '%Y-%m-%d %H:%M:%S'
local_tz = pytz.timezone('Asia/Tehran')
submit_time = dateutil.parser.parse(utc_date)  # utc_date: your stored UTC timestamp string
submit_time = submit_time.replace(tzinfo=pytz.utc).astimezone(local_tz)
output = submit_time.strftime(date_format)
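On Python 3.9+ the same conversion can be done with the standard library alone (zoneinfo), with no pytz or dateutil needed. A sketch, using a made-up timestamp:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library since Python 3.9

date_format = '%Y-%m-%d %H:%M:%S'
utc_date = '2018-01-15 12:00:00'  # hypothetical stored UTC timestamp

# Parse the naive string, mark it as UTC, then convert to Tehran time
submit_time = datetime.strptime(utc_date, date_format).replace(tzinfo=timezone.utc)
local_time = submit_time.astimezone(ZoneInfo('Asia/Tehran'))
print(local_time.strftime(date_format))  # 2018-01-15 15:30:00 (UTC+3:30 in winter)
```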

#python #date #utc #strftime #dateutil #timezone #pytz
To join lines in PyCharm, first select your lines, then use CTRL+Shift+J (⌃⇧J) to join them into one line.

#pycharm #tricks #join #join_lines
In MySQL, when you insert rows and then delete them, the auto-increment counter does not decrement; it stays at the value the deleted rows reached. Likewise, if you run your insert queries inside a transaction and then roll it back, the counter still reflects the rolled-back inserts. One way to reset the auto-increment value is TRUNCATE, which WILL ERASE ALL YOUR DATA and set the counter back to 1:

truncate table your_table;

The above command will reset your auto increment value.


Another way is an ALTER TABLE statement:

ALTER TABLE table_name AUTO_INCREMENT = 1;

NOTE: with InnoDB you cannot set auto_increment to a value lower than or equal to the current maximum index value.
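The "counter never goes backwards" behaviour is easy to see from Python with the standard-library sqlite3 module. This is a sketch using SQLite's AUTOINCREMENT keyword, which behaves like MySQL's counter in this respect, not MySQL itself:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, v TEXT)')
conn.execute("INSERT INTO t (v) VALUES ('a')")  # gets id 1
conn.execute("INSERT INTO t (v) VALUES ('b')")  # gets id 2
conn.execute('DELETE FROM t')                   # table is empty again...

conn.execute("INSERT INTO t (v) VALUES ('c')")
# ...but the counter kept counting: the new row gets id 3, not 1
print(conn.execute('SELECT id FROM t').fetchone()[0])  # 3
```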


Today was a big day from a technical point of view: a MySQL change saved me a lot of storage and will save a great deal of maintenance in the future.

To better explain the issue, I have to talk a little bit about the fundamental behaviour of the MySQL InnoDB storage engine.

In the past, MySQL used MyISAM as its default storage engine. MyISAM does not support transactions, is not fault tolerant, and data could become unreliable when a power outage occurred or the server restarted in the middle of a write. Nowadays MySQL uses InnoDB as its default storage engine, which comes with transactions, fault tolerance, and more.

By default, InnoDB stores all tables of all databases in a single gigantic file called ibdata1. As data grows and you alter your tables, the scar gets worse: ibdata1 grows very fast, and altering a table never shrinks it. For example, we had a single 120GB file on a server; altering one table with a huge amount of data took a long time and so much extra storage that the server ran out of free space.

MySQL has a mechanism to configure InnoDB to store each table's data in its own file instead of inside ibdata1. This mechanism has great advantages, such as letting OPTIMIZE TABLE actually shrink a table.

With InnoDB tables, OPTIMIZE TABLE locks the table, copies the data into a new clean table (which is why the result is smaller), drops the original table, and renames the new table to the original name. That is why you should make sure you have twice the size of your table available on disk: if you have a 30GB table, optimizing it needs at least 30GB of free disk space.

Do not use OPTIMIZE TABLE when you have not configured InnoDB file-per-table. Running OPTIMIZE TABLE against an InnoDB table stored in ibdata1 will make things worse, because here is what it does:

- Makes the table's data and indexes contiguous inside ibdata1.

- It makes ibdata1 grow because the contiguous data is appended to ibdata1.

You can segregate Table Data and Table Indexes from ibdata1 and manage them independently using innodb_file_per_table. That way, only MVCC and Table MetaData would reside in ibdata1.

In the next post I explain how to do exactly that.

#mysql #innodb #myisam #ibdata1 #database #innodb_file_per_table
When innodb_file_per_table is enabled, InnoDB stores data and indexes for each newly created table in a separate .ibd file instead of the system tablespace.

Here is a summary of the steps, so everything fits in one post:

1- Use mysqldump to export your desired databases (call it SQLData.sql).

2- Drop all databases (except the mysql schema).

3- service mysql stop

4- Add the following lines to /etc/my.cnf

[mysqld]
innodb_file_per_table
innodb_flush_method=O_DIRECT
innodb_log_file_size=1G
innodb_buffer_pool_size=4G

* Sidenote: Whatever you set for innodb_buffer_pool_size, make sure innodb_log_file_size is 25% of it.

5- rm -f /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile*

6- service mysql start

* If MySQL does not start, it may be due to insufficient memory. Try reducing innodb_buffer_pool_size and innodb_log_file_size accordingly.

7- Reload SQLData.sql into mysql

ibdata1 will grow but only contain table metadata. Each InnoDB table will exist outside of ibdata1.


Suppose you have an InnoDB table named mydb.mytable. If you go into /var/lib/mysql/mydb, you will see two files representing the table:

- mytable.frm (Storage Engine Header)
- mytable.ibd (Home of Table Data and Table Indexes for mydb.mytable)

ibdata1 will never contain InnoDB table data and indexes again.

With the innodb_file_per_table option in /etc/my.cnf, you can run OPTIMIZE TABLE mydb.mytable; and the file /var/lib/mysql/mydb/mytable.ibd will actually shrink.

#mysql #InnoDB #innodb_file_per_table #optimize_table
wget and SSH session termination

If you SSH into your server and run a wget command to download a file, and then your SSH session disconnects, wget will continue its job in the background. You might wonder: why does it continue, and how can it, when my terminal's connection went away?


This is from src/main.c of the wget sources (version 1.19.2):

/* Hangup signal handler.  When wget receives SIGHUP or SIGUSR1, it
will proceed operation as usual, trying to write into a log file.
If that is impossible, the output will be turned off. */


A bit further down, the signal handler is installed:

/* Setup the signal handler to redirect output when hangup is
   received. */
if (signal(SIGHUP, SIG_IGN) != SIG_IGN)
  signal(SIGHUP, redirect_output_signal);

So wget does not ignore the HUP signal; instead it chooses to continue processing, with its output redirected to a log file.


Source code of wget main.c: http://bzr.savannah.gnu.org/lh/wget/trunk/annotate/head:/src/main.c
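The same pattern can be reproduced in Python (Unix only). This sketch installs a SIGHUP handler the way wget's main.c does, skipping installation if the signal is already ignored, then sends itself a SIGHUP:

```python
import os
import signal

log = []

def redirect_output_signal(signum, frame):
    # Stand-in for wget's handler: keep working, just note the hangup
    log.append('SIGHUP received, output redirected')

# Mirror wget's check: signal() returns the previous handler, so only
# install our handler if SIGHUP was not already being ignored
if signal.signal(signal.SIGHUP, signal.SIG_IGN) != signal.SIG_IGN:
    signal.signal(signal.SIGHUP, redirect_output_signal)

os.kill(os.getpid(), signal.SIGHUP)  # simulate the terminal hanging up
print(log)  # the process survives and the handler has run
```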

#linux #ssh #wget #SIGHUP #SIG_IGN #SIGUSR1
How do you store emojis like 🙄 in a MySQL database?

https://mathiasbynens.be/notes/mysql-utf8mb4

#mysql #emoji #charset #database
If you have an Axigen mail server, you probably know that it has no database backend and just logs everything to files, usually under the path below:

/var/opt/axigen/log


There is a script you can use to parse these logs and see the overall behaviour of your mail server and the common errors that occur on it.

To download the log-parser script head over to axigen link below:
- http://www.axigen.com/mail-server/download-tool.php?tool=log-parser.tar.gz

After downloading and extracting it, move the script and its data folder onto your mail server. Before running it, make sure it is executable:

chmod +x axigen-log-parser.sh


To run it, pass the script an action parameter such as parse:

sudo ./axigen-log-parser.sh parse /var/opt/axigen/log/

The above command generates some files in /var/log/axi-parser/. cd into that path and check the results.

Supported actions are parse, maillog, split, trace, and clean.

#mail #mail_server #axigen #log_parser #log
ngxtop - real-time metrics for your nginx server (and others)

ngxtop parses your nginx access log and outputs useful, top-like metrics of your nginx server, so you can tell what is happening with your server in real time. ngxtop tries to determine the correct location and format of the nginx access log file by default, so you can just run ngxtop and have a close look at all requests coming to your nginx server. But it does not limit you to nginx and the default top view: ngxtop is flexible enough for you to configure and change most of its behaviours. You can query for different things, specify your log and format, and even parse a remote Apache common access log with ease. See the sample usages below for some ideas about what you can do with it.


Installation:

pip install ngxtop


It is as easy as pie: just run it and look at the results:

ngxtop

It reports the total requests served and the total bytes sent to clients, and breaks requests down by status code. In the pictures in the next post you can see sample usages.

#linux #nginx #top #ngxtop #web_server
View top source IPs of clients #ngxtop
Default output #ngxtop
List 4xx or 5xx responses together with HTTP referer #ngxtop
Did you know you can syntax highlight your code in Google Docs using an add-on?

It is Code Block, an add-on that adds syntax highlighting to your documents. To install it, go to the Add-ons menu in Google Docs and click Get add-ons, then search for code block and install it.

After installation, a Code Block option will be added to the Add-ons menu. Click it and hit Start. Now you will have great syntax highlighting right in your toolbox. :)

Happy syntax highlighting! :D

#google #code #syntax_highlight #code_block #addon #google_doc