If you work with
Flask and have an API under heavy load, you can set up a cache for your endpoints. One of the tools is Flask-Cache, which can work with different cache backends like Memcached, Redis, Simple, etc. To install it:
pip install Flask-Cache
To set it up:
from flask import Flask
from flask_cache import Cache  # on very old Flask versions: from flask.ext.cache import Cache
app = Flask(__name__)
# Check the Configuring Flask-Cache section for more details
cache_config = {
    "CACHE_TYPE": "redis",
    "CACHE_REDIS_HOST": "127.0.0.1",
    "CACHE_REDIS_PORT": 6379,
    "CACHE_REDIS_DB": 3
}
cache = Cache(app, config=cache_config)
Another redis implementation:
#: the_app/custom.py
class RedisCache(BaseCache):
    def __init__(self, servers, default_timeout=500):
        pass

def redis(app, config, args, kwargs):
    args.append(app.config['REDIS_SERVERS'])
    return RedisCache(*args, **kwargs)
Now to use the cache on a function, use its decorator:
import random

@cache.memoize(timeout=50)
def big_foo(a, b):
    return a + b + random.randrange(0, 1000)
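What memoize does can be sketched roughly in plain Python: a dict with expiry timestamps standing in for the Redis backend. This is an illustrative sketch, not Flask-Cache's actual internals:

```python
import time
import functools

def memoize(timeout):
    """Cache a function's result per argument set for `timeout` seconds."""
    def decorator(fn):
        store = {}  # cache key -> (expiry timestamp, value)

        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            key = (args, tuple(sorted(kwargs.items())))
            hit = store.get(key)
            if hit is not None and hit[0] > time.time():
                return hit[1]  # still fresh: serve the cached value
            value = fn(*args, **kwargs)
            store[key] = (time.time() + timeout, value)
            return value
        return wrapper
    return decorator

calls = []  # track how many times the real function body runs

@memoize(timeout=50)
def big_foo(a, b):
    calls.append(1)
    return a + b
```

A second call with the same arguments inside the timeout window is served from the cache, so the function body runs only once per argument set.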
#python #flask #cache #redis #memcached
Publish & Subscribe for dummies: open 2 different windows (panes) in your terminal and go to the redis console:
redis-cli
Now to subscribe into a specific channel:
127.0.0.1:6379> SUBSCRIBE first second
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "first"
3) (integer) 1
1) "subscribe"
2) "second"
3) (integer) 2
As you can see we have subscribed to 2 channels called first and second. In another window open the redis console again with the redis-cli command and try to publish to those channels:
PUBLISH second Hello
Now you should see the output below in the first window where you have subscribed to channels:
1) "message"
2) "second"
3) "Hello"
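The pattern itself is easy to sketch in plain Python, with a dict of channel-to-subscriber callbacks standing in for the Redis server (an illustrative toy, not the redis client):

```python
class PubSub:
    """A toy in-process publish/subscribe hub."""
    def __init__(self):
        self.channels = {}  # channel name -> list of subscriber callbacks

    def subscribe(self, channel, callback):
        self.channels.setdefault(channel, []).append(callback)

    def publish(self, channel, message):
        # The publisher does not know who is listening; it just hands
        # the message to whoever subscribed to the channel.
        subscribers = self.channels.get(channel, [])
        for callback in subscribers:
            callback(message)
        return len(subscribers)  # like Redis, report the receiver count

hub = PubSub()
received = []
hub.subscribe("second", received.append)
count = hub.publish("second", "Hello")  # received is now ["Hello"]
```

Publishing to a channel with no subscribers simply reaches nobody, just as in the redis-cli session above.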
If you want to know more about the pub-sub scenario: https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern
#redis #pub_sub #publish #subscribe #redis_cli
What is
LPUSH in Redis?
Insert all the specified values at the head of the list stored at key. If key does not exist, it is created as an empty list before performing the push operations. When key holds a value that is not a list, an error is returned.
It is possible to push multiple elements using a single command call just by specifying multiple arguments at the end of the command. Elements are inserted one after the other at the head of the list, from the leftmost element to the rightmost element. So for instance the command LPUSH mylist a b c will result in a list containing c as the first element, b as the second element and a as the third element.
Return value:
Integer reply: the length of the list after the push operations.
For instance:
redis> LPUSH mylist "world"
(integer) 1
redis> LPUSH mylist "hello"
(integer) 2
redis> LRANGE mylist 0 -1
1) "hello"
2) "world"
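LPUSH's head-insertion order can be mimicked with a plain Python list (a sketch of the semantics, not the redis client):

```python
def lpush(lst, *values):
    """Insert values at the head; the leftmost argument is pushed first."""
    for v in values:      # each push lands in front of the previous one
        lst.insert(0, v)
    return len(lst)       # like Redis: length of the list after the pushes

def lrange(lst, start, stop):
    """LRANGE-style range; stop is inclusive, and -1 means the last element."""
    if stop == -1:
        return lst[start:]
    return lst[start:stop + 1]

mylist = []
lpush(mylist, "world")   # mylist is now ["world"]
lpush(mylist, "hello")   # mylist is now ["hello", "world"]
```

Pushing several values in one call reverses their order, which is exactly why LPUSH mylist a b c yields c, b, a.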
NOTE1: the time complexity of the LPUSH command is O(1), so it is the best from a performance point of view.
NOTE2: `LRANGE` is used to get list members; if you use 0 to -1 it will return all list elements.
#redis #list #lpush #push
As you may remember we have explained
redis pubsub, in which someone subscribes to a specific channel and then another console publishes messages to that channel. The publisher has no idea who is listening on the other side; it just publishes messages, which are then received by subscribers. Today we are gonna give a python example of redis pubsub using the threading and redis python modules. Take a look at the code in the link below:
https://gist.github.com/alirezastack/ff2515cc434360f544d8a9341155947e
I prefer not to clutter the post by pasting the whole script here. :)
The subscribe method is used to subscribe to a given channel, here called test. The start method causes the run method to be called in a new thread; both run and start are methods of the Thread superclass. When you run the script it will subscribe to the test channel and wait for new messages.
NOTE: you can publish to the test channel in the redis console (`redis-cli`) as below:
127.0.0.1:6379> publish test hello_python
(integer) 1
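The start/run relationship can be seen in a minimal, Redis-free sketch: a Thread subclass overrides run, and calling start spawns the thread that executes it (the "publish" here is simulated with an Event, purely for illustration):

```python
import threading

class Listener(threading.Thread):
    """A toy 'subscriber' thread: start() makes Thread invoke run()."""
    def __init__(self):
        super().__init__()
        self.messages = []
        self._event = threading.Event()

    def run(self):
        # In the real gist this loop would block on redis pubsub messages;
        # here we just wait for one simulated publish.
        self._event.wait()
        self.messages.append("hello_python")

    def publish(self):
        self._event.set()  # simulate a message arriving on the channel

listener = Listener()
listener.start()          # spawns the thread, which runs run()
listener.publish()
listener.join(timeout=5)  # wait for run() to finish
```

Note that you never call run() directly; calling it yourself would execute it in the current thread instead of a new one.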
Use cases are endless! You can use it as part of the messaging infrastructure in your microservices environment, or as a chat system, you name it! :)
#python #redis #pubsub #publish #subscribe #thread
Did you know that you can monitor redis commands live from within
redis-cli console? Go to the redis client by typing redis-cli in your terminal, then type the monitor command and press enter:
127.0.0.1:6379> monitor
OK
1514301845.678553 [0 127.0.0.1:59388] "COMMAND"
1514301859.676761 [0 127.0.0.1:59388] "HSET" "user" "name" "ali"
It will log everything that happens on your redis server.
#redis #redis_cli #cli #monitor
MySQL insert
If you have bulk scripts for your email/sms like me, and you are sending to thousands of users, you will definitely get stuck in the middle of the bulk notification, or it will be very slow and unacceptable. First of all you must initiate only ONE MySQL connection. If you are inserting your data one by one, you're dead again! Try to use executemany, which will insert data into MySQL in bulk, not one by one:
client.executemany(
"""INSERT INTO email (name, spam, email, uid, email_content)
VALUES (%s, %s, %s, %s, %s)""",
[
("Ali", 0, 'mail1@example.com', 1, 'EMAIL_CONTENT'),
("Reza", 1, 'mail2@example.com', 2, 'EMAIL_CONTENT'),
("Mohsen", 1, 'mail3@example.com', 3, 'EMAIL_CONTENT')
] )
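The same DB-API executemany idiom can be tried end-to-end with the stdlib sqlite3 module (sqlite standing in for MySQL here; the table is created just for the demo, and sqlite uses ? placeholders where MySQL drivers use %s):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # ONE connection for the whole batch
cur = conn.cursor()
cur.execute(
    "CREATE TABLE email (name TEXT, spam INTEGER, email TEXT, "
    "uid INTEGER, email_content TEXT)"
)

rows = [
    ("Ali", 0, "mail1@example.com", 1, "EMAIL_CONTENT"),
    ("Reza", 1, "mail2@example.com", 2, "EMAIL_CONTENT"),
    ("Mohsen", 1, "mail3@example.com", 3, "EMAIL_CONTENT"),
]

# One executemany call for all rows instead of one INSERT per row
cur.executemany(
    "INSERT INTO email (name, spam, email, uid, email_content) "
    "VALUES (?, ?, ?, ?, ?)",
    rows,
)
conn.commit()
count = cur.execute("SELECT COUNT(*) FROM email").fetchone()[0]
```

The driver batches the rows for you; with MySQL clients such as mysqlclient or PyMySQL the call looks the same apart from the %s placeholders.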
Another note for bulk insertion: avoid disk IO where possible, and use redis, memcached or the like for data such as users' phone numbers or emails. It will tremendously improve the performance of your bulk script.
#python #mysql #executemany #redis #bulk #email #sms
Transactions in Redis
MULTI, EXEC, DISCARD and WATCH are the foundation of transactions in Redis. They allow the execution of a group of commands in a single step, with two important guarantees:
- All the commands in a transaction are serialized and executed sequentially. It can never happen that a request issued by another client is served in the middle of the execution of a Redis transaction. This guarantees that the commands are executed as a single isolated operation.
- Either all of the commands or none are processed, so a Redis transaction is also atomic. The EXEC command triggers the execution of all the commands in the transaction, so if a client loses the connection to the server in the context of a transaction before calling the EXEC command, none of the operations are performed; if instead the EXEC command is called, all the operations are performed. When using the append-only file, Redis makes sure to use a single write(2) syscall to write the transaction on disk. However, if the Redis server crashes or is killed by the system administrator in some hard way, it is possible that only a partial number of operations are registered. Redis will detect this condition at restart and will exit with an error. Using the redis-check-aof tool it is possible to fix the append-only file, removing the partial transaction so that the server can start again.
Sample usage of the transaction:
> MULTI
OK
> INCR foo
QUEUED
> INCR bar
QUEUED
> EXEC
1) (integer) 1
2) (integer) 1
As it is possible to see from the session above, EXEC returns an array of replies, where every element is the reply of a single command in the transaction, in the same order the commands were issued.
In the next post we will talk about the WATCH and DISCARD commands too.
#redis #transaction #multi #exec #discard #watch
Transactions in Redis part2
DISCARD can be used in order to abort a transaction. In this case, no commands are executed and the state of the connection is restored to normal. We can discard a transaction like below:
> SET foo 1
OK
> MULTI
OK
> INCR foo
QUEUED
> DISCARD
OK
> GET foo
"1"
As you can see the foo variable has not been incremented and its value is 1, not 2.
Optimistic locking using check-and-set: WATCH is used to provide a check-and-set (CAS) behavior to Redis transactions. WATCHed keys are monitored in order to detect changes against them. If at least one watched key is modified before the EXEC command, the whole transaction aborts, and EXEC returns a Null reply to notify that the transaction failed.
WATCH mykey
val = GET mykey
val = val + 1
MULTI
SET mykey $val
EXEC
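The retry logic behind this check-and-set can be simulated in plain Python without a server: a dict plus per-key change counters stands in for WATCH (purely illustrative, not how Redis is implemented):

```python
class MiniStore:
    """Toy key-value store with WATCH-like optimistic locking."""
    def __init__(self):
        self.data = {}
        self.versions = {}  # key -> how many times the key has changed

    def get(self, key):
        return self.data.get(key, 0)

    def watch(self, key):
        return self.versions.get(key, 0)  # snapshot the key's version

    def set(self, key, value):
        self.data[key] = value
        self.versions[key] = self.versions.get(key, 0) + 1

    def exec_set(self, key, value, watched_version):
        # EXEC-like step: apply only if nobody touched the key since WATCH
        if self.versions.get(key, 0) != watched_version:
            return None  # transaction aborted, like EXEC's Null reply
        self.set(key, value)
        return True

store = MiniStore()
store.set("mykey", 10)

snapshot = store.watch("mykey")              # WATCH mykey
val = store.get("mykey") + 1                 # val = GET mykey; val = val + 1
ok = store.exec_set("mykey", val, snapshot)  # MULTI ... SET ... EXEC
```

If another writer bumps the key's version between watch() and exec_set(), exec_set() refuses to apply the update, and the caller would retry the whole sequence.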
Using this WATCH/MULTI/EXEC sequence, if there is a race condition and another client modifies the result of val in the time between our call to WATCH and our call to EXEC, the transaction will fail.
#redis #transaction #multi #exec #discard #watch
Transactions in Redis part3
In order to implement a transaction in Python you need to use a pipeline; there is no such thing as exec, multi, etc.:
import redis

r = redis.Redis()
p = r.pipeline()
p.set("transError", var)
p.execute()
MULTI, SET and EXEC are sent when p.execute() is called. To omit the MULTI/EXEC pair, use r.pipeline(transaction=False).
More info: http://redis-py.readthedocs.io/en/latest/#redis.Redis.pipeline
#python #redis #transaction #multi #exec
Those who are looking to scale their cache server
horizontally (redis/memcached) can use twemproxy. It was created by Twitter:
https://github.com/twitter/twemproxy
#cache #redis #memcached #twemproxy
There are times when you have no choice but to get data from a third-party library, and that library has a rate limit on its endpoints. For example, I have recently used the
geopy python library to get latitude and longitude by giving a city name to the function:
from geopy.geocoders import Nominatim

city_name = 'Tehran'
geolocator = Nominatim()
location = geolocator.geocode(city_name)
print(location.latitude, location.longitude)
This library by default sends its requests to https://nominatim.openstreetmap.org/search to get geo location data. Its rate limit is 1 request per second. To circumvent these problems and limitations, use redis to cache results on your server and read the cached result from your own system:
self.redis.hset(city_name, 'lat', lat)
self.redis.hset(city_name, 'long', longitude)
Now read from the cache if it exists:
if self.redis.hexists(city_name, 'lat'):
location = self.redis.hgetall(city_name)
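The overall cache-aside flow can be sketched without redis or geopy: a dict stands in for the redis hashes, and a stub geocoder stands in for the rate-limited Nominatim call (all names and the stub's coordinates are illustrative):

```python
import time

cache = {}  # city_name -> {'lat': ..., 'long': ...}, stand-in for redis

def fake_geocode(city_name):
    """Stub for the rate-limited Nominatim request."""
    known = {"Tehran": (35.6892, 51.3890)}  # made-up lookup table
    return known[city_name]

def get_location(city_name):
    if city_name in cache:           # like redis.hexists(city_name, 'lat')
        return cache[city_name]      # cache hit: no external request made
    lat, longitude = fake_geocode(city_name)
    cache[city_name] = {"lat": lat, "long": longitude}
    time.sleep(1)                    # respect the 1 request/second limit
    return cache[city_name]
```

Only the first lookup for a city pays the network round-trip and the one-second pause; every later lookup is answered from the cache.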
Make sure to put a sleep(1) when reading from Nominatim in order to bypass its limitation.
NOTE: instead of Nominatim, other 3rd parties can be used.
#python #geopy #geo #latitude #longitude #Nominatim #redis #hset #geocoders
To get an expiration time of a redis key you can use
TTL like below:
TTL "YOUR_KEY_NAME"
To read more about setting expiration time and or other options:
- https://redis.io/commands/expire
#redis #expiration_time #expire #ttl