Tech C**P
Python and Linux instructor and programmer @alirezastack
It is sometimes tempting to log less in order to, let's say, improve performance or save disk space, and blah blah blah!

Loggers in Python are the last thing to think about when you think about performance. You can use log rotation to use less disk space. From my own experience: log as much as you can, and log every step necessary while keeping the log level verbose. Later, once you think your application is very stable, change the log level to INFO and all the debug logs are ignored.
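
As a minimal sketch of both ideas (the file name and rotation sizes here are made up), the standard library can give you rotation plus a verbose level in a few lines:

import logging
from logging.handlers import RotatingFileHandler

# Rotate at roughly 5 MB and keep 3 old files so disk usage stays bounded.
handler = RotatingFileHandler('app.log', maxBytes=5_000_000, backupCount=3)
logging.basicConfig(level=logging.DEBUG, handlers=[handler])  # switch to logging.INFO once stable

log = logging.getLogger(__name__)
log.debug('step 1: loaded config')  # dropped entirely once the level is INFO
log.info('service started')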

It has sometimes taken me hours of debugging across a few modules (in a microservice architecture) to find the culprit, something that could have been solved by putting in more logs.

Log, log and log more...

#python #log #logger #performance
A simple introduction to the Cement framework and its usage. Cement is mostly used for creating command line applications in Python.

In version 2.6 of Cement you can initialize an app using a with statement:
from cement.core.foundation import CementApp

with CementApp('myapp') as app:
    app.run()

It hides the complexities of application initialization. That is, without with, the above code would look something like this:
from cement.core.foundation import CementApp

app = CementApp('myapp')
app.setup()
app.run()
app.close()

As you can see, the with version is clearer and more straightforward, with less code. I know it's silly to have an app like the one above, but that's just an introduction to the world of Cement.

To add a logger to your app you need to set the log level in the log.logging config section, as below:
from cement.utils.misc import init_defaults
from cement.core.foundation import CementApp

defaults = init_defaults('myapp', 'log.logging')
defaults['log.logging']['level'] = 'DEBUG'
defaults['log.logging']['file'] = 'cementy.log'
defaults['log.logging']['to_console'] = True

with CementApp('myapp', config_defaults=defaults) as app:
    app.run()
    app.log.debug('This is debug')

init_defaults is used to seed the default configuration. level sets the log level to DEBUG, and file makes the log data get written into the cementy.log file.
By setting the to_console parameter you can also echo to the console whatever is written to the file. So when you run your Python application, a log file is created and the data is printed out as well.

#python #cement #framework #logging #log #level #foundation
If you want to run a script, ALWAYS log the script output into a file, or you will be bitten in the ass and have no log data for future reference.

Normally, to run a Python script you would use:
python my_script.py

Anything printed inside the script goes to stdout, so you can use the command below to redirect the script output (stdout) into a file:
python my_script.py >> my_script.log

The above command appends the output to a persistent file that can be referenced in the future.

NOTE: The above scenario is for cases when you don't use a log handler in your script, or when you are in a hurry and just want to dump the output into a file. A proper logging setup is definitely the better solution.

Finally, if you intend to run the script in the background, also capture stderr:
python my_script.py >> my_script.log 2>&1

2>&1: 1 is the file descriptor for stdout and 2 is for stderr (where errors go when something fails). This redirection says: send stderr messages to wherever stdout goes.
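
If you literally want the script running in the background, a common pattern (a sketch assuming a POSIX shell with nohup available) is to detach it with & and keep it alive after logout with nohup:

nohup python my_script.py >> my_script.log 2>&1 &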

#linux #python #script #log
How to add color to your logs in Python?

It's as easy as pie: just install coloredlogs with pip and then:

import coloredlogs, logging
logger = logging.getLogger(__name__)
coloredlogs.install(level='DEBUG')

# Some examples.
logger.debug("this is a debugging message")
logger.info("this is an informational message")
logger.warning("this is a warning message")
logger.error("this is an error message")
logger.critical("this is a critical message")

By default the install() function installs a handler on the root logger. This means that log messages from your code and log messages from the libraries that you use will all show up on the terminal.

If you don't want to see log messages from libraries, you can pass a specific logger object to the install() function. In this case only log messages originating from that logger will show up on the terminal:

coloredlogs.install(level='DEBUG', logger=logger)
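
If you also want to control the message layout, install() accepts a fmt argument (assuming a reasonably recent coloredlogs release) that mirrors the stdlib %-style formatters:

coloredlogs.install(level='DEBUG', logger=logger, fmt='%(asctime)s %(levelname)s %(message)s')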

#log #logger #coloredlogs #logging #color
If you're on Docker Swarm and you want to see the log data of a specific service, you can use --since as below:

docker service logs project_redis --since "1m" -f

It sometimes behaves unexpectedly and does not print any logs. Instead of --since you can use --tail, which is better in case you want to see the most recent logs:

docker service logs project_redis --tail 1
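
The two flags also combine: for example, to follow the log starting from the last 100 lines (the number here is just an example):

docker service logs project_redis --tail 100 -f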

#docker #swarm #since #tail #log
When you view logs in Docker you cannot grep the output directly, because part of it goes to stderr. In case you want to grep it you need to redirect that data to standard output (2>&1).

Long story short:
docker service logs --since "1m" -f app_redis 2>&1 | grep "Your search text"


#docker #log #logs #since #grep
If you have an Axigen mail server, you probably know that it does not have any DB backend and just logs everything into files, usually under the path below:

/var/opt/axigen/log


There is a script that you can use to parse these logs and see the overall behaviour of your mail server and the common errors that happen on it.

To download the log-parser script, head over to the Axigen link below:
- http://www.axigen.com/mail-server/download-tool.php?tool=log-parser.tar.gz

After downloading and extracting it, move the script and its data folder onto your mail server. To run it, first make sure it is executable:

chmod +x axigen-log-parser.sh


Now, to run it you need to pass an action parameter to the script, such as parse:

sudo ./axigen-log-parser.sh parse /var/opt/axigen/log/

The above command will generate some files in /var/log/axi-parser/. cd into that path and check the results.

The supported actions are parse, maillog, split, trace and clean.

#mail #mail_server #axigen #log_parser #log
Docker can use different log drivers for its logging mechanism: json-file, syslog, fluentd and so on. The default is json-file, and these log files are located in /var/lib/docker/containers/. You can check the log driver of a container in Docker using:

$ docker inspect -f '{{.HostConfig.LogConfig.Type}}' <CONTAINER>
json-file

Instead of <CONTAINER>, put the id of your currently running container.
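
To pick a different driver for a single container, the documented way is the --log-driver and --log-opt flags of docker run; for example (a sketch using the json-file driver's max-size and max-file options):

docker run --log-driver json-file --log-opt max-size=10m --log-opt max-file=3 <IMAGE>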


To read more about this head on to: https://docs.docker.com/config/containers/logging/configure/#configure-the-logging-driver-for-a-container


#docker #log #log_driver
By default when you install nginx on Linux, a logrotate config file is created at /etc/logrotate.d/nginx. Sometimes you may notice that after a while the nginx access log stays empty and everything is logged into the file usually named access.log.1 instead. This happens when, after rotation, the nginx process never re-opens its log file handle and keeps writing to what is now access.log.1.
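
For context, the shipped config file looks roughly like the sketch below (this assumes a Debian-style package; details vary by distribution and version):

/var/log/nginx/*.log {
        daily
        missingok
        rotate 14
        compress
        delaycompress
        notifempty
        sharedscripts
        postrotate
                invoke-rc.d nginx rotate >/dev/null 2>&1
        endscript
}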

If you take a look at the rotation config for nginx, you will see a section called postrotate that runs a command; for nginx it is as below:

postrotate
invoke-rc.d nginx rotate >/dev/null 2>&1
endscript


If you run the command between postrotate and endscript, it may give the error below:

invoke-rc.d: action rotate is unknown, but proceeding anyway.
invoke-rc.d: policy-rc.d denied execution of rotate.


Just remove the file related to i-MSCP:

rm /usr/sbin/policy-rc.d

NOTE: Or, if you want to be safe, rename it to something else.


Now you can run the invoke-rc.d command and you should see a result like below:

[ ok ] Re-opening nginx log files: nginx.

Now every log goes to its own file, not to file_name.log.1, and file handles are re-opened safely.

#nginx #policy_rc #invoke_rc #log_rotate #rotate
Do you log a lot in your Python modules, like me? If so, you have had the same problem: always hunting for where the log message starts after the time, filename, etc. Let's clarify this with a sample log:

[2012-10-02 application.py:1 _get()] DEBUG: this is a log content
[2012-10-02 db.py:1005 _fetch_all_info()] INFO: this is a log content


You can see that both lines have the same log content, but it's hard to follow because the file name, line number and function name all vary in length. To format this better we can use space padding in the formatter: the padding is given as a minimum field width before the s conversion, e.g. %(filename)15s. Now let's see the same log, but this time with space padding.

The formatter is as below:

[%(asctime)s %(filename)15s:%(lineno)4s %(funcName)20s()] %(levelname)s %(message)s


NOTE: this is not the exact formatter for the log above; it is just for demonstration!
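
For reference, a minimal runnable sketch that wires this formatter into the stdlib logging module:

import logging

logging.basicConfig(
    level=logging.DEBUG,
    format='[%(asctime)s %(filename)15s:%(lineno)4s %(funcName)20s()] %(levelname)s %(message)s',
)

logging.getLogger(__name__).debug('this is a log content')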


Now the output will look something like below:

[2012-10-02  application.py:   1                 _get()] DEBUG: this is a log content
[2012-10-02           db.py:1005      _fetch_all_info()] INFO: this is a log content


You can see that the log content is so much easier to follow with space padding. It may not be obvious on Telegram on small devices, so try it yourself :)))

#python #logging #log #logger #formatter #log_formatter #space_padding #padding