Make your Django application blazing fast with these tips:

1- Use a separate media server:
Django deliberately doesn’t serve media for you, and it’s designed that way to save you from yourself. If you try to serve media from the same Apache instance that’s serving Django, you’re going to absolutely kill performance. Apache reuses processes between requests, so once a process caches all the code and libraries for Django, those stick around in memory. If you aren’t using that process to service a Django request, all that memory overhead is wasted.
So, set up all your media to be served by a different web server entirely. Ideally, this is a physically separate machine running a high-performance web server like lighttpd or tux. If you can’t afford the separate machine, at least have the media server be a separate process on the same machine.
For more information on how to serve static files separately, see:
- https://docs.djangoproject.com/en/dev/howto/static-files/#howto-static-files
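As a rough sketch of what the settings side can look like (the domain and path below are placeholders, not from the original tip):

# settings.py
STATIC_URL = "https://static.example.com/"      # static files served by the separate media server
STATIC_ROOT = "/var/www/example.com/static/"    # collectstatic gathers the files here

Run python manage.py collectstatic and point the media server at STATIC_ROOT.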
2- Use a separate database server:
If you can afford it, stick your database server on a separate machine, too. All too often Apache and PostgreSQL (or MySQL or whatever) compete for system resources in a bad way. A separate DB server — ideally one with lots of RAM and fast (10k or better) drives — will seriously improve the number of hits you can dish out.
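On the Django side this just means pointing DATABASES at the other machine; a minimal sketch (names, credentials and host are placeholders):

# settings.py
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",
        "USER": "myuser",
        "PASSWORD": "change-me",
        "HOST": "db.example.com",   # the separate database server
        "PORT": "5432",
    }
}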
3- Turn off KeepAlive:
I don’t totally understand how KeepAlive works, but turning it off on our Django servers increased performance by something like 50%. Of course, don’t do this if the same server is also serving media… but you’re not doing that, right?
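If you want to try this yourself, KeepAlive is a core Apache directive and turning it off is a one-liner in the main config or in the vhost that serves Django (the path varies by distribution):

# e.g. /etc/apache2/apache2.conf or your Django vhost
KeepAlive Off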
4- Use memcached:
Although Django has support for a number of cache backends, none of them perform even half as well as memcached does. If you find yourself needing the cache, do yourself a favor and don’t even play around with the other backends; go straight for memcached.
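For reference, wiring Django to memcached is only a few lines in settings; a sketch assuming Django 3.2+ with the pymemcache client (older versions used the MemcachedCache backend instead):

# settings.py
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.PyMemcacheCache",
        "LOCATION": "127.0.0.1:11211",
    }
}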
#python #django #memcached
How to reverse a string in Python?
With slicing it is as easy as pie! The general format of string slicing is like below:
'YOUR_STRING'[begin : end : step]
We do a little bit of magic here and set step to -1. A step of -1 reads the string from the last character back to the first and leaves begin and end intact:
'hello'[::-1]
The output would be like below:
>>> 'hello'[::-1]
'olleh'
#python #string #slicing #step #reverse
Transactions in Redis
MULTI, EXEC, DISCARD and WATCH are the foundation of transactions in Redis. They allow the execution of a group of commands in a single step, with two important guarantees:

- All the commands in a transaction are serialized and executed sequentially. It can never happen that a request issued by another client is served in the middle of the execution of a Redis transaction. This guarantees that the commands are executed as a single isolated operation.
- Either all of the commands or none are processed, so a Redis transaction is also atomic. The EXEC command triggers the execution of all the commands in the transaction, so if a client loses the connection to the server in the context of a transaction before calling the EXEC command, none of the operations are performed; if instead the EXEC command is called, all the operations are performed. When using the append-only file, Redis makes sure to use a single write(2) syscall to write the transaction on disk. However, if the Redis server crashes or is killed by the system administrator in some hard way, it is possible that only a partial number of operations are registered. Redis will detect this condition at restart, and will exit with an error. Using the redis-check-aof tool it is possible to fix the append-only file, removing the partial transaction so that the server can start again.
Sample usage of the transaction:
> MULTI
OK
> INCR foo
QUEUED
> INCR bar
QUEUED
> EXEC
1) (integer) 1
2) (integer) 1
As it is possible to see from the session above, EXEC returns an array of replies, where every element is the reply of a single command in the transaction, in the same order the commands were issued.
In the next post we will talk about the WATCH and DISCARD commands too.

#redis #transaction #multi #exec #discard #watch
Transactions in Redis part2
DISCARD can be used in order to abort a transaction. In this case, no commands are executed and the state of the connection is restored to normal. We can discard a transaction like below:
> SET foo 1
OK
> MULTI
OK
> INCR foo
QUEUED
> DISCARD
OK
> GET foo
"1"
As you can see, the foo variable has not been incremented and its value is still 1, not 2.

Optimistic locking using check-and-set:

WATCH is used to provide a check-and-set (CAS) behavior to Redis transactions. WATCHed keys are monitored in order to detect changes against them. If at least one watched key is modified before the EXEC command, the whole transaction aborts, and EXEC returns a Null reply to notify that the transaction failed.

WATCH mykey
val = GET mykey
val = val + 1
MULTI
SET mykey $val
EXEC
Using the above code, if there are race conditions and another client modifies the result of val in the time between our call to WATCH and our call to EXEC, the transaction will fail.

#redis #transaction #multi #exec #discard #watch
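As a teaser for the Python side, the same check-and-set loop can be sketched with redis-py's pipeline and its WATCH support (a sketch, assuming a local Redis and the mykey key from above):

import redis

r = redis.Redis()

while True:
    with r.pipeline() as p:
        try:
            p.watch("mykey")                # WATCH mykey; the pipeline now runs commands immediately
            val = int(p.get("mykey") or 0)  # GET mykey
            p.multi()                       # start buffering commands (MULTI)
            p.set("mykey", val + 1)         # SET mykey $val
            p.execute()                     # EXEC; raises WatchError if mykey changed meanwhile
            break
        except redis.WatchError:
            continue                        # another client touched mykey, retry the loop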
Transactions in Redis part3
In order to implement a transaction in Python you need to use a pipeline; there is no need to issue MULTI, EXEC, etc. yourself:

import redis

r = redis.Redis()
p = r.pipeline()
p.set("transError", var)  # var is whatever value you want to store
p.execute()
MULTI, SET, and EXEC are sent when p.execute() is called. To omit the MULTI/EXEC pair, use r.pipeline(transaction=False).

More info: http://redis-py.readthedocs.io/en/latest/#redis.Redis.pipeline
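For example, a plain (non-transactional) pipeline could look like this (a sketch; the key and value are just examples):

import redis

r = redis.Redis()
p = r.pipeline(transaction=False)  # no MULTI/EXEC wrapper, just pipelining
p.set("transError", 1)
p.get("transError")
print(p.execute())  # e.g. [True, b'1']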
#python #redis #transaction #multi #exec
In PyCharm I had written something like the line below in multiple places in a file:

method_name='get_account'
I wanted to add _v2 to all the method names, so what I did was to use regex in PyCharm's replace functionality. Press Command+R to open the replace dialog. In the dialog there is an option called Regex; tick the checkbox in front of it and in the find section write:

method_name='(.*)'
It will find all lines, whatever the method name is: .* matches the name and the parentheses capture it into a group (you can capture something you have found by wrapping it in parentheses).
Now we can access the captured group using $1. We now need to put the code below in the replace section:

method_name='$1_v2'
The above code will keep the method name via $1 and append _v2 to all the methods.

#pycharm #regex #find #replace
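The same transformation can be scripted outside the IDE with Python's re module (a sketch, not from the original post; reading/writing the file is omitted):

import re

line = "method_name='get_account'"
print(re.sub(r"method_name='(.*)'", r"method_name='\1_v2'", line))
# method_name='get_account_v2'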
Earlier we explained about capped collections in MongoDB. Today we just want to add something more to it.

Query a Capped Collection:
If you perform a find() on a capped collection with no ordering specified, MongoDB guarantees that the ordering of results is the same as the insertion order.

To retrieve documents in reverse insertion order, issue find() along with the sort() method with the $natural parameter set to -1, as shown in the following example:

db.cappedCollection.find().sort( { $natural: -1 } )
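For context, a capped collection like the one above can be created in the mongo shell with an explicit size in bytes (the name and size here are just examples):

db.createCollection( "cappedCollection", { capped: true, size: 524288000 } )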
#mongodb #mongo #capped_collection #natural #natural_order
If you go to your MongoDB data directory, where all database data is stored, you will see:
$ ls -lh YOUR_DATABASE.*
-rw------- 1 mongodb mongodb 64M Feb 6 07:26 YOUR_DATABASE.0
-rw------- 1 mongodb mongodb 512M Feb 6 07:26 YOUR_DATABASE.1
-rw------- 1 mongodb mongodb 16M Feb 6 07:26 YOUR_DATABASE.ns
If you have given a size of 524288000 in the collection creation, then you would see 512MB for your DB size. You can also see the whole size inside the mongo shell:

rs0:PRIMARY> show dbs
local 0.203GB
YOUR_DATABASE 0.578GB
#mongodb #mongo #capped_collection
Those who are looking to scale their cache servers (redis/memcached) horizontally can use twemproxy. It is created by twitter:

https://github.com/twitter/twemproxy
#cache #redis #memcached #twemproxy
How to monitor network cards with Icinga2? (part-1)

Yesterday I was given the task of monitoring the network cards of all our servers and infrastructure, checking the bandwidth in/out and sending alarms based on some criteria. In Icinga2 we have a nagios plugin called check_nwc_health. Download the script from https://labs.consol.de/nagios/check_nwc_health/index.html

Move the script to /usr/lib/nagios/plugins on the server where you have installed Icinga2. If you run it with no arguments you will get some help output that can be useful.

Some important usages of the script:
- list interfaces of a specific server (we assume snmp has been installed on the destination server):
./check_nwc_health --mode list-interfaces --hostname YOUR_TARGET_SERVER_IP --community YOUR_COMMUNITY_STRING
The output would be something like below (it can be different in your case):
000001 lo
000002 Device 1af4:0001 2
000003 Device 1af4:0001 3
000004 docker0
OK - have fun
The interface names are given after the serial numbers: lo, Device 1af4:0001 2 or docker0. These interface names are important and will be used in icinga2 to add the network cards to hosts.

Another mode for the script is interface-usage, which shows the in/out bandwidth. The output can be something like the following:

OK - interface Device 1af4:0001 2 (alias eth0) usage is in:0.00% (7058.67bit/s) out:0.00% (5603.67bit/s) | 'Device 1af4:0001 2_usage_in'=0%;80;90;0;100 'Device 1af4:0001 2_usage_out'=0%;80;90;0;100 'Device 1af4:0001 2_traffic_in'=7058.67;0;0;0;0 'Device 1af4:0001 2_traffic_out'=5603.67;0;0;0;0
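For reference, the invocation that produces output like the above should look roughly like this (an assumption mirroring the list-interfaces call; --name takes an interface name from the listing):

./check_nwc_health --mode interface-usage --hostname YOUR_TARGET_SERVER_IP --community YOUR_COMMUNITY_STRING --name 'Device 1af4:0001 2'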
OK, the important part is over: we can now list all of a server's network interfaces and get the usage of a specific one. In the next part we will explain the Icinga2 side: adding the command and the service to icinga2.

#icinga2 #icinga #nagios #check_nwc_health #network #monitor
How to monitor network cards with Icinga2? (part-2)

Ok, so far we have added the plugin to the nagios plugins folder and run some tests against the target server's network interfaces. Now we need to add a command to Icinga2 so we can use it in the service section of Icinga2. To create a new command, create a new file at /etc/icinga2/conf.d/commands/check_nwc_command.conf with the following content:

object CheckCommand "YOUR_COMMAND_NAME" {
import "plugin-check-command"
command = [ PluginDir + "/check_nwc_health", "--mode", "interface-usage" ]
arguments = {
"-H" = "$address$"
"-C" = "$community$"
"--name" = "$int$"
}
}
In brief, it creates a new command called YOUR_COMMAND_NAME that calls the script check_nwc_health with the interface-usage argument to get the bandwidth data.

Now we need to use this command in a service. We have to create a new service, which will be referenced from our hosts' configuration, in /etc/icinga2/conf.d/services/if_traffic.conf:

apply Service for (display_name => config in host.vars.int) {
import "generic-service"
check_command = "YOUR_COMMAND_NAME"
vars += config
assign where host.vars.int
}
Again, in brief: the service will be applied to hosts that have an int variable section in their configuration, which we will see a little bit later. YOUR_COMMAND_NAME is the name that we gave in the first part when creating the command.

The final part is to add this service to your desired host. Go to /etc/icinga2/conf.d/hosts and open the file which relates to your host. Host files start with something like:

object Host "host-54 (Infra)" {
Add the service like below into your host:
vars.int["YOUR DISPLAY NAME"] = {
int = "Device 1af4:0001 2"
community = "YOUR SERVER COMMUNITY STRING"
}
int is where we give the interface name; this should be taken from the output of list-interfaces in part-1.

You can go even further like me :) and feed these data into a Grafana dashboard to have a better understanding of what is happening around you.

#icinga2 #icinga #service #host #command #nagios #interface #network
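After editing the command, service, and host files, it is a good idea to validate the configuration and reload Icinga2 before expecting results; on a typical systemd-based install that looks something like:

# validate the Icinga2 configuration before applying it
icinga2 daemon -C
# reload so the new command/service/host vars take effect
systemctl reload icinga2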