Tech C**P
Python and Linux instructor and programmer @alirezastack
Docker supports different log drivers for its logging mechanism: json-file, syslog, fluentd and so on. The default is json-file, and these log files are stored under /var/lib/docker/containers/. You can check which log driver a container uses with:

$ docker inspect -f '{{.HostConfig.LogConfig.Type}}' <CONTAINER>
json-file

Replace <CONTAINER> with the ID of your currently running container.
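
If you prefer doing the same check from Python, a rough sketch with the Docker SDK for Python (docker-py) would look like the following; the container name here is just an illustrative placeholder:

import docker

client = docker.from_env()
container = client.containers.get('my-container')  # illustrative container name/ID
# attrs holds the raw `docker inspect` output, so the log driver sits under HostConfig
print(container.attrs['HostConfig']['LogConfig']['Type'])  # e.g. json-file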


To read more about this head on to: https://docs.docker.com/config/containers/logging/configure/#configure-the-logging-driver-for-a-container


#docker #log #log_driver
I usually use the Linux copy command cp to copy files from my project into the production environment. By accident I copied the wrong files from a different repo into the production project and messed up the repo: some files ended up in a modified state, and many untracked files were added to the project.

To revert all the C**P and get your project back to a clean slate, you just need to do 2 things:

1- git checkout .

2- git clean -f


The first command reverts all modified files to their previous state. The second removes all untracked files.

Happy copying :)

#git #revert #checkout #clean #git_clean
CoreOS, the company behind Container Linux (a popular container-focused OS), has been acquired by Red Hat for $250 million.

Read on:
- https://coreos.com/blog/coreos-agrees-to-join-red-hat/?utm_source=DevOps%27ish&utm_campaign=c766654b17-EMAIL_CAMPAIGN_2018_02_04&utm_medium=email&utm_term=0_eab566bc9f-c766654b17-46016105

#linux #coreos #redhat #docker
Configure Linux iptables Firewall for MongoDB

In order to harden your network infrastructure on Linux, you need to control all incoming and outgoing traffic and only allow connections from trusted servers. For that we use iptables. Each rule in iptables is appended to a chain such as INPUT, which controls incoming traffic, or OUTPUT, which controls outgoing traffic.

With the rules below we explicitly allow traffic to the mongod instance from the application server. In the following examples, replace <ip-address> with the IP address of the application server:

iptables -A INPUT -s <ip-address> -p tcp --destination-port 27017 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -d <ip-address> -p tcp --source-port 27017 -m state --state ESTABLISHED -j ACCEPT

The first rule allows all incoming traffic from <ip-address> on port 27017, which lets the application server connect to the mongod instance. The second rule allows outgoing traffic from mongod back to the application server.

The default policy for iptables chains is to allow all traffic. After completing all iptables configuration changes, you must change the default policy to DROP so that any traffic not explicitly allowed above cannot reach components of the MongoDB deployment. Issue the following commands to change the policy:

iptables -P INPUT DROP
iptables -P OUTPUT DROP

DANGER: do not issue the above commands if you are not fully aware of what you are doing!


#mongodb #linux #iptables #security
Make your Django application blazing fast with these tips:

1- Use a separate media server:
Django deliberately doesn’t serve media for you, and it’s designed that way to save you from yourself. If you try to serve media from the same Apache instance that’s serving Django, you’re going to absolutely kill performance. Apache reuses processes between each request, so once a process caches all the code and libraries for Django, those stick around in memory. If you aren’t using that process to service a Django request, all the memory overhead is wasted.

So, set up all your media to be served by a different web server entirely. Ideally, this is a physically separate machine running a high-performance web server like lighttpd or tux. If you can’t afford the separate machine, at least have the media server be a separate process on the same machine.

For more information on how to serve static files separately:
- https://docs.djangoproject.com/en/dev/howto/static-files/#howto-static-files
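
As a minimal sketch of the Django side of this (the URL and path below are illustrative assumptions, not from the docs above), you mostly just tell Django where static files are served from and where to collect them:

# settings.py
STATIC_URL = 'https://static.example.com/static/'  # illustrative: URL prefix served by the separate media server
STATIC_ROOT = '/var/www/static/'                    # illustrative: collectstatic copies files here

Running python manage.py collectstatic then gathers all static files into STATIC_ROOT so the separate server can serve them directly.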


2- Use a separate database server:
If you can afford it, stick your database server on a separate machine, too. All too often Apache and PostgreSQL (or MySQL or whatever) compete for system resources in a bad way. A separate DB server — ideally one with lots of RAM and fast (10k or better) drives — will seriously improve the number of hits you can dish out.


3- Turn off KeepAlive:
I don’t totally understand how KeepAlive works, but turning it off on our Django servers increased performance by something like 50%. Of course, don’t do this if the same server is also serving media… but you’re not doing that, right?


4- Use memcached:
Although Django has support for a number of cache backends, none of them perform even half as well as memcached does. If you find yourself needing the cache, do yourself a favor and don’t even play around with the other backends; go straight for memcached.
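
A minimal settings sketch, assuming the python-memcached binding and a memcached instance on the default local port (newer Django releases also ship a PyMemcacheCache backend):

# settings.py
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',  # requires the python-memcached package
        'LOCATION': '127.0.0.1:11211',  # address:port of the memcached server
    }
}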


#python #django #memcached
How to reverse a string in python?

With slicing it is as easy as pie! The general format of string slicing is:

'YOUR_STRING'[begin : end : step]

The trick here is to set step to -1: it reads the string from the last character back to the first, while begin and end are left at their defaults:

'hello'[::-1]

The output would be like below:

>>> 'hello'[::-1]
'olleh'
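
If the slice syntax feels too magical, reversed() together with join() gives the same result:

>>> ''.join(reversed('hello'))
'olleh'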

#python #string #slicing #step #reverse
Transactions in Redis

MULTI, EXEC, DISCARD and WATCH are the foundation of transactions in Redis. They allow the execution of a group of commands in a single step, with two important guarantees:

- All the commands in a transaction are serialized and executed sequentially. It can never happen that a request issued by another client is served in the middle of the execution of a Redis transaction. This guarantees that the commands are executed as a single isolated operation.

- Either all of the commands or none are processed, so a Redis transaction is also atomic. The EXEC command triggers the execution of all the commands in the transaction, so if a client loses the connection to the server in the context of a transaction before calling the EXEC command, none of the operations are performed; if instead EXEC is called, all the operations are performed. When using the append-only file, Redis makes sure to use a single write(2) syscall to write the transaction to disk. However, if the Redis server crashes or is killed by the system administrator in some hard way, it is possible that only a partial number of operations are registered. Redis will detect this condition at restart and will exit with an error. Using the redis-check-aof tool it is possible to fix the append-only file: it will remove the partial transaction so that the server can start again.


Sample usage of the transaction:

> MULTI
OK
> INCR foo
QUEUED
> INCR bar
QUEUED
> EXEC
1) (integer) 1
2) (integer) 1

As it is possible to see from the session above, EXEC returns an array of replies, where every element is the reply of a single command in the transaction, in the same order the commands were issued.

In the next post we will talk about WATCH and DISCARD commands too.

#redis #transaction #multi #exec #discard #watch
Transactions in Redis part2

DISCARD can be used in order to abort a transaction. In this case, no commands are executed and the state of the connection is restored to normal.


We can discard a transaction like below:

> SET foo 1
OK
> MULTI
OK
> INCR foo
QUEUED
> DISCARD
OK
> GET foo
"1"

As you can see, foo has not been incremented and its value is still 1, not 2.


Optimistic locking using check-and-set:

WATCH is used to provide a check-and-set (CAS) behavior to Redis transactions.

WATCHed keys are monitored in order to detect changes against them. If at least one watched key is modified before the EXEC command, the whole transaction aborts, and EXEC returns a Null reply to notify that the transaction failed.

WATCH mykey
val = GET mykey
val = val + 1
MULTI
SET mykey $val
EXEC

Using the above code, if there is a race condition and another client modifies mykey in the time between our call to WATCH and our call to EXEC, the transaction will fail.
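
With redis-py the same optimistic-locking pattern is usually written with a pipeline. Here is a rough sketch assuming a local Redis and the same mykey as above; the retry loop is my addition to illustrate handling the failed transaction:

import redis

r = redis.Redis()

with r.pipeline() as pipe:
    while True:
        try:
            pipe.watch('mykey')                # watch the key for changes
            val = int(pipe.get('mykey') or 0)  # after watch() the pipeline executes commands immediately
            pipe.multi()                       # switch back to queueing commands
            pipe.set('mykey', val + 1)
            pipe.execute()                     # raises WatchError if mykey changed since watch()
            break
        except redis.WatchError:
            continue                           # another client touched mykey, retry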


#redis #transaction #multi #exec #discard #watch
Transactions in Redis part3

To implement a transaction in Python with redis-py you use a pipeline; there is no separate exec, multi, etc. to call.

import redis

r = redis.Redis()
p = r.pipeline()
p.set("transError", "some value")  # value is illustrative; the command is only queued here
p.execute()

MULTI, SET and EXEC are sent when p.execute() is called. To omit the MULTI/EXEC pair, use r.pipeline(transaction=False).

More info: http://redis-py.readthedocs.io/en/latest/#redis.Redis.pipeline

#python #redis #transaction #multi #exec
In PyCharm I had something like the line below repeated on multiple lines in a file:


method_name='get_account'


I wanted to add _v2 to all the method names, so I used the regex support in PyCharm's replace functionality. Press Command+R to open the Replace dialog, tick the Regex checkbox, and in the find field write:


method_name='(.*)'


It will match all such lines regardless of the name: .* matches the name itself, and the parentheses capture whatever was matched into a group.

We can then reference the captured group as $1, so put the following in the replace field:


method_name='$1_v2'


The replacement keeps the original method name via $1 and appends _v2 to every method.
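
The same substitution can be done outside the IDE with Python's re module; a small sketch:

import re

line = "method_name='get_account'"
# in Python regexes the captured group is referenced as \1 instead of PyCharm's $1
print(re.sub(r"method_name='(.*)'", r"method_name='\1_v2'", line))
# method_name='get_account_v2'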


#pycharm #regex #find #replace
Earlier we explained capped collections in MongoDB. Today we just want to add a bit more to it.


Query a Capped Collection:
If you perform a find() on a capped collection with no ordering specified, MongoDB guarantees that the ordering of results is the same as the insertion order.

To retrieve documents in reverse insertion order, issue find() along with the sort() method with the $natural parameter set to -1, as shown in the following example:

db.cappedCollection.find().sort( { $natural: -1 } )
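
A rough pymongo equivalent (the database name is an assumption) looks like this:

from pymongo import MongoClient

db = MongoClient().test  # database name is illustrative
# iterate the capped collection in reverse insertion order
for doc in db.cappedCollection.find().sort('$natural', -1):
    print(doc)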


#mongodb #mongo #capped_collection #natural #natural_order
If you go to your MongoDB data directory, where all database data is stored, you will see something like:

$ ls -lh YOUR_DATABASE.*
-rw------- 1 mongodb mongodb 64M Feb 6 07:26 YOUR_DATABASE.0
-rw------- 1 mongodb mongodb 512M Feb 6 07:26 YOUR_DATABASE.1
-rw------- 1 mongodb mongodb 16M Feb 6 07:26 YOUR_DATABASE.ns

If you have given a size of 524288000 in collection creation, then you would see 512MB for your DB size. You can also see the whole size from inside the mongo shell:

rs0:PRIMARY> show dbs
local 0.203GB
YOUR_DATABASE 0.578GB
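
For reference, a capped collection with that size could be created from Python with pymongo like this (database and collection names are placeholders):

from pymongo import MongoClient

db = MongoClient().YOUR_DATABASE
# capped collection limited to 524288000 bytes, matching the size discussed above
db.create_collection('YOUR_COLLECTION', capped=True, size=524288000)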

#mongodb #mongo #capped_collection