Tech C**P
Python and Linux instructor and programmer @alirezastack
ngxtop - real-time metrics for nginx server (and others)

ngxtop parses your nginx access log and outputs useful, top-like metrics for your nginx server, so you can tell what is happening with your server in real time. By default, ngxtop tries to determine the correct location and format of the nginx access log file, so you can just run ngxtop and have a close look at all requests coming to your nginx server. But it does not limit you to nginx and the default top view: ngxtop is flexible enough to let you configure and change most of its behaviour. You can query for different things, specify your own log and format, and even parse a remote Apache common access log with ease. See the sample usages below for some ideas about what you can do with it.


Installation:

pip install ngxtop


It is as easy as pie: you just need to run it and look at the results:

ngxtop

It will report the total requests served and the total bytes sent to clients, and will break down requests by their status code. In the pictures in the next post you can see sample usages.
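
Beyond the default top view, you can query for specific things. A few examples of typical ngxtop usage (flags may differ slightly between versions, check ngxtop --help):

# top client IP addresses
ngxtop top remote_addr

# list 4xx/5xx responses together with their referer
ngxtop -i 'status >= 400' print request status http_referer

# parse an Apache access log from a remote server, in common log format
ssh remote tail -f /var/log/apache2/access.log | ngxtop -f common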

#linux #nginx #top #ngxtop #web_server
504 Gateway timeout

It is a known issue to many people, even those who are not in the programming field. A 504 timeout happens when the response to a request takes longer than expected. There are times when you know for sure that you need to increase this time, for example when users export a huge Excel file as a report. In nginx you can increase this timeout to whatever seems appropriate from the programmer's point of view.

In nginx.conf, usually located in /etc/nginx/, do as follows:

proxy_connect_timeout 600s;
proxy_send_timeout    600s;
proxy_read_timeout    600s;
send_timeout          600s;
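
These directives go in the http, server, or location context; a minimal sketch with an assumed backend address, placing them next to the proxy_pass they affect:

server {
    location / {
        proxy_pass http://127.0.0.1:8000;

        proxy_connect_timeout 600s;
        proxy_send_timeout    600s;
        proxy_read_timeout    600s;
        send_timeout          600s;
    }
}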

More info: http://nginx.org/en/docs/http/ngx_http_proxy_module.html


#504 #gateway_timeout #timeout #nginx #proxy_read_timeout #proxy_send_timeout
How do you do maintenance on your infrastructure when a new BIG deployment comes in?

You can use nginX to display a nice page about your maintenance. First of all, create your fancy maintenance landing page in HTML format and name it maintenance_off.html (it gets renamed to maintenance_on.html when you want maintenance mode active, as explained below).

Inside the server block of your nginX configuration we do the following:

server {
    ...

    location / {
        if (-f /webapps/your_app/maintenance_on.html) {
            return 503;
        }

        ...
    }

    # Error pages.
    error_page 503 /maintenance_on.html;
    location = /maintenance_on.html {
        root /webapps/your_app/;
    }

    ...
}

What happens here is that we first check for a file named maintenance_on.html; if it is present we return a 503 error, the status code for a temporarily unavailable server. This check sits inside the location block for / (the root location).

Outside of that location we need to serve /maintenance_on.html for error 503:

error_page 503 /maintenance_on.html;


And with an exact match for the file (`location =`) we serve that static page from the root /webapps/your_app/.

The point of this setup is that while the file is named maintenance_off.html, maintenance mode is ignored; when we rename it to maintenance_on.html, error 503 is returned.
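
So enabling or disabling maintenance mode is just a rename (paths taken from the example above):

# turn maintenance mode on
mv /webapps/your_app/maintenance_off.html /webapps/your_app/maintenance_on.html

# turn it back off
mv /webapps/your_app/maintenance_on.html /webapps/your_app/maintenance_off.html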


#linux #nginx #maintenance #maintenance_mode #location #error_page
There are many other ways to put a website in maintenance mode, for example allowing specific IP addresses to see the website while everyone else sees the maintenance page:

server {
    ..
    set $maintenance on;

    if ($remote_addr ~ (34.34.133.12|53.13.53.12)) {
        set $maintenance off;
    }

    if ($maintenance = on) {
        return 503;
    }

    location /maintenance {
    }

    error_page 503 @maintenance;

    location @maintenance {
        root /var/www/html/maintenance;
        rewrite ^(.*)$ /index.html break;
    }
    ..
}

This is it. If you don't know what the above code does, then SEARCH. :)

#nginx #linux #maintenance #maintenance_mode #503
In case you want to serve your website's static files in nginX, you can add a new location directive to the server block that corresponds to your website:

server {

    # your rest of codes in server block...

    location / {

        location ~ \.(css|ico|jpg|png) {
            root /etc/nginx/www/your_site/statics;
        }

        # your rest of codes...

    }
}

This is it, whether the rest of the block proxies to uwsgi, FPM, etc.

#web_server #nginx #static #location #serve
When you redirect in nginX you would use a 301 or 302 status code, like the code below:

location = /signup {
    return 302 https://docs.google.com/forms;
}


But there is a tiny tip here that needs to be told. If you want to pass query parameters along to the destination link, which here is https://docs.google.com/forms, it won't work. Query string parameters are held in the $args variable in nginX, so you need to pass that variable along, like in the following code:

location = /signup {
    return 302 https://docs.google.com/forms$is_args$args;
}

The $is_args variable will be set to "?" if the request line has arguments, or to an empty string otherwise.
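
A quick way to verify the redirect and the forwarded arguments (the hostname and parameter here are just placeholders):

curl -sI "http://localhost/signup?entry=123" | grep -i '^location'
# Location: https://docs.google.com/forms?entry=123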


#nginx #web_server #redirect #302 #is_args #args
By default, when you install nginX on Linux, a logrotate config file is created at /etc/logrotate.d/nginx. Sometimes you may see that after a while the nginX access log is empty and everything is being logged into a file usually named access.log.1. This happens when the nginx process cannot re-open its log file after rotation and keeps writing to the old file handle, which now points to access.log.1.
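
For reference, a typical /etc/logrotate.d/nginx on a Debian-based system looks roughly like this (exact options vary between distributions and package versions):

/var/log/nginx/*.log {
        daily
        missingok
        rotate 14
        compress
        delaycompress
        notifempty
        create 0640 www-data adm
        sharedscripts
        postrotate
                invoke-rc.d nginx rotate >/dev/null 2>&1
        endscript
}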

If you take a look at nginX's logrotate config you will see a part called postrotate that runs a command; for nginx it is as below:

postrotate
    invoke-rc.d nginx rotate >/dev/null 2>&1
endscript


If you run the command between postrotate and endscript, it may give the error below:

invoke-rc.d: action rotate is unknown, but proceeding anyway.
invoke-rc.d: policy-rc.d denied execution of rotate.


Just remove a file related to i-MSCP:

rm /usr/sbin/policy-rc.d

NOTE: Or, if you want to be safe, rename it to something else instead.


Now you can run the invoke-rc.d command and you should see a result like the one below:

[ ok ] Re-opening nginx log files: nginx.

Now every log will be written to its own file rather than to file_name.log.1, and file handles are closed and reopened safely.

#nginx #policy_rc #invoke_rc #log_rotate #rotate
If for any reason you have to increase the uwsgi_pass timeout in nginX, you can use uwsgi_read_timeout:

upstream uwsgicluster {
    server 127.0.0.1:5000;
}

.
.
.


include uwsgi_params;
uwsgi_pass uwsgicluster;
uwsgi_read_timeout 3000;


You can also increase the timeout on the uwsgi side. If you are using an ini file you need to use the harakiri parameter, like below:

harakiri = 30

Its value is in seconds.
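
For context, a minimal uwsgi ini sketch matching the upstream above (the module name, process count, and other options are assumptions):

[uwsgi]
module = app:app
master = true
processes = 4
socket = 127.0.0.1:5000
harakiri = 30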

#uwsgi #nginx #uwsgi_pass #harakiri #timeout #uwsgi_read_timeout
Today I had a problem on nginX. I don't know where to start! :|

Fair enough, this is my nginx stanza:

location /geo {
    add_header 'Access-Control-Allow-Origin' '*';
    if ( $arg_callback ) {
        echo_before_body '$arg_callback(';
        echo_after_body ');';
    }

    proxy_pass https://api.example.com/geo;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
}

NOTE: each block like this in an nginX configuration is called a stanza. I bet you didn't know about this one! :D

The echo_before_body directive prepends something to the response returned by nginX.

echo_after_body appends something to the response.

proxy_pass proxies your requests to a backend server.

$arg_callback holds the value of the callback parameter in the URL. So, for example, if you use $arg_type you get the value of the type argument provided in the URL: http://sample.com?type=SOMETHING

So far so good. The problem was that when I called the URL with the callback parameter, https://api.example.com/geo?callback=test, it generated a /geo/geo URL on the backend instead of /geo. To circumvent the issue I used $request_uri in the proxy_pass directive: proxy_pass https://api.fax.plus$request_uri;. The route was fine after that, but there was one big problem left: responses were returned in binary format instead of JSON. I removed the Upgrade, Connection and proxy_http_version lines and it worked like a charm!
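
Putting those changes together, the working stanza presumably ends up looking something like this (reconstructed from the description above, using the example hostname):

location /geo {
    add_header 'Access-Control-Allow-Origin' '*';
    if ( $arg_callback ) {
        echo_before_body '$arg_callback(';
        echo_after_body ');';
    }

    # pass the original URI (and query string) through unchanged
    proxy_pass https://api.example.com$request_uri;
}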

Don't ask me! I don't know what the Upgrade and Connection headers are.

The output looks like the response below for a URL similar to http://api.example.com/geo?callback=test:

test(
    {
        "username": "alireza",
        "password": "123456"
    }
)

#nginx #stanza #proxy_pass #echo_before_body #echo_after_body
How to implement an email tracking solution?

nginX has a module called empty_gif that generates a 1x1 pixel image. Such an image is usually put at the end of campaign emails in order to track how many users have opened the email. The nginX code is:

location = /empty.gif {
    empty_gif;
    expires -1;
    post_action @track;
}

location @track {
    internal;
    proxy_pass http://tracking-backend/track$is_args$args;
    proxy_set_header x-ip $remote_addr;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
}


The above code checks whether the /empty.gif URL is requested; if so, it serves the image, sets expires to -1 so the image is not cached, and finally uses post_action, which issues a subrequest after the current request has finished. The parameters you need to pass are put after the image link in the email, like:

https://www.example.com/empty.gif?token=SOMETHING_TO_TRACK
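
In the campaign email itself the pixel is just a tiny image tag pointing at that URL (the token value is whatever your backend uses to identify the recipient):

<img src="https://www.example.com/empty.gif?token=SOMETHING_TO_TRACK" width="1" height="1" alt="">
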
#nginx #email #empty_gif #email_tracking #pixel
nginX by default does not add CORS headers to responses with error status codes like 401. There are two options here to add the CORS headers anyway:

1- If your nginX version is new enough (1.7.5 or later) you can add the always parameter to add_header:
add_header 'Access-Control-Allow-Origin' $http_origin always;

2- If you get the error invalid number of arguments in "add_header" directive, then use more_set_headers from the headers-more module:
more_set_headers -s '401 400 403 404 500' 'Access-Control-Allow-Origin: $http_origin';
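
For the first option, a minimal sketch in context (the location path and backend address are assumptions):

location /api/ {
    proxy_pass http://127.0.0.1:8000;
    add_header 'Access-Control-Allow-Origin' $http_origin always;
}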

NOTE: when the CORS headers are missing on an error response, then in $.ajax or any similar method you won't have access to the status code. So be extremely cautious!!!

#nginX #web_server #CORS #more_set_headers #add_header #ajax
While the configuration can be tested with the command service nginx configtest, there is a more convenient way to do this and immediately see what's wrong.

There is a command, nginx -t, which tests the configuration and displays error messages. Both commands must be run with sudo, or permission denied messages might be shown (regarding SSL certificates, for example).

Command to test and reload if it is OK:
sudo nginx -t && sudo service nginx reload


#nginx #test #configtest