In order to get a random document from a MongoDB collection, you can use the aggregation framework:
db.users.aggregate( [ { $sample: { size: 1 } } ] )
NOTE: MongoDB 3.2 introduced $sample to the aggregation pipeline. Read more here:
- https://www.mongodb.com/blog/post/how-to-perform-random-queries-on-mongodb
This method is an efficient way of getting random data even from a huge collection, e.g. 100M records.
#mongodb #mongo #aggregate #sample #random
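The same sampling works from Python. A sketch: the pipeline is just data, built here by a small helper; the commented-out lines show how you would run it with pymongo against a hypothetical mydb.users collection (the helper name and collection are illustrative, not from the original tip).

```python
def sample_pipeline(n):
    """Build a $sample aggregation pipeline returning n random documents."""
    return [{"$sample": {"size": n}}]

# Running it with pymongo (assumes a local MongoDB instance):
# import pymongo
# client = pymongo.MongoClient()
# random_user = next(client.mydb.users.aggregate(sample_pipeline(1)))
```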
In pymongo you can give a name to your connections. This definitely helps to debug issues or trace requests when reading the MongoDB logs. The most important part of this scenario is when you use a microservice architecture and have tens of modules that work independently of each other and all send their requests to MongoDB:

mc = pymongo.MongoClient(host, port, appname='YOUR_APP_NAME')
Now if you look at the MongoDB log you would see:
I COMMAND [conn173140] command MY_DB.users appName: "YOUR_APP_NAME" command: find { find: "deleted_users", filter: {}, sort: { acquired_date: 1 }, skip: 19973, limit: 1000, $readPreference: { mode: "secondaryPreferred" }, $db: "blahblah" } planSummary: COLLSCAN keysExamined:0 docsExamined:19973 hasSortStage:1 cursorExhausted:1 numYields:312 nreturned:0 reslen:235 locks:{ Global: { acquireCount: { r: 626 } }, Database: { acquireCount: { r: 313 } }, Collection: { acquireCount: { r: 313 } } } protocol:op_query 153ms
In the above log you can see YOUR_APP_NAME as the appName field, so you know exactly which service issued the query.
#mongodb #mongo #pymongo #appname
15 Useful Flask Extensions and Libraries That I Use in Every Project (including Flask-Limiter for rate limiting):
- https://nickjanetakis.com/blog/15-useful-flask-extensions-and-libraries-that-i-use-in-every-project#flask-limiter
#flask #rate_limiter #mail #celery
Send Google Forms submissions to Slack:
var POST_URL = "https://hooks.slack.com/services/YOUR_TOKEN";

function onSubmit(e) {
  // collect the answers from the submitted form
  var response = e.response.getItemResponses();
  var email = response[0].getResponse();
  var field2 = response[1].getResponse();
  var field3 = response[2].getResponse();
  var field4 = response[3].getResponse();

  // build the Slack message text
  var d = "*SUBMITTED FORM*\n>>>Email: " + email + "\n";
  d += "other fields: \n" + field2 + " " + field3 + " " + field4;

  // Slack incoming webhooks accept a form-encoded "payload" parameter
  // holding the JSON message; JSON.stringify escapes quotes safely
  var payload = { "payload": JSON.stringify({ "text": d }) };
  var options = {
    "method": "post",
    "payload": payload
  };
  UrlFetchApp.fetch(POST_URL, options);
}
You need to add the JavaScript code above in the Script Editor section of your Google Form. When you are in form-editing mode, click the three dots in the top corner and click on Script Editor. When you're done, click save and give your project script a name. Now, on the script editor page, click Edit -> All your triggers and bind your script to the form's onSubmit event. Read more about webhooks for Slack here:
- https://api.slack.com/incoming-webhooks
That's all! Now you will have all your submitted forms in Slack. Voila!
#google #slack #forms #google_forms #webhook #hook #javascript
Minio is an object storage server that is compatible with Amazon S3. You can run your own object storage server using Docker:
- https://docs.minio.io/docs/minio-docker-quickstart-guide
And you can use its Python SDK to talk to its endpoint API:
- https://github.com/minio/minio-py
Its usage is very simple and elegant. If you are unfamiliar with object storage, read more here:
- https://en.wikipedia.org/wiki/Object_storage
#minio #python #sdk #docker #object_storage
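A minimal sketch of talking to a Minio endpoint with minio-py. Everything here is a placeholder assumption: the endpoint, the credentials, the bucket name and the upload_backup helper are illustrative, and the minio package must be installed (pip install minio); the import sits inside the function so the sketch stays readable top-down.

```python
import os

def upload_backup(file_path):
    """Upload a local file to a Minio bucket, creating the bucket if needed."""
    from minio import Minio  # pip install minio

    # endpoint and credentials are placeholders for your own server
    client = Minio("my-minio-server:9000",
                   access_key="YOUR_ACCESS_KEY",
                   secret_key="YOUR_SECRET_KEY",
                   secure=False)

    if not client.bucket_exists("backups"):
        client.make_bucket("backups")

    # store the file as an object named after its basename
    client.fput_object("backups", os.path.basename(file_path), file_path)
```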
When you redirect in nginX you use a 301 or 302 code, like the code below:

location = /singup {
    return 302 https://docs.google.com/forms;
}

But there is a tiny tip here that needs to be told. If you want to pass query parameters through to the destination link, which here is https://docs.google.com/forms, the code above won't work. Query-string parameters are held in the $args variable in nginX, so you need to pass that variable along, like the following code:

location = /singup {
    return 302 https://docs.google.com/forms$is_args$args;
}
The variable $is_args will be set to "?" if the request line has arguments, or to an empty string otherwise.
#nginx #web_server #redirect #302 #is_args #args
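What nginx evaluates here can be illustrated in Python (a sketch mimicking the variable expansion, not nginx itself; the redirect_target helper is made up for illustration): $is_args is "?" when a query string is present and empty otherwise, so appending $is_args$args preserves the query string without ever producing a dangling "?".

```python
def redirect_target(base, args):
    """Mimic `return 302 <base>$is_args$args;` for a given query string."""
    is_args = "?" if args else ""
    return base + is_args + args
```

So a request to /singup?ref=tg redirects to https://docs.google.com/forms?ref=tg, while a bare /singup redirects to the clean URL.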
Create and assign a self-signed certificate for your MongoDB instance with the bash script below:
- https://gist.github.com/alirezastack/30c8849e4add4329dcc2633fbb06a638
#mongodb #ssl #self_signed
In docker swarm mode you can list nodes with docker node ls. If you want to assign a label to a node, you can use the command below to update its labels. For example, you can assign a key=value pair like role=storage to one of the nodes listed by the first command:

docker node update --label-add role=storage YOUR_HOSTNAME

Read more here:
- https://docs.docker.com/engine/swarm/manage-nodes/#update-a-node
The great thing about this labeling is that in a docker compose file you can tell docker which service should get deployed on which server (node):
deploy:
  replicas: 4
  placement:
    constraints:
      - node.labels.role == storage
NOTE: role is something we have defined ourselves. You can define your own labels as requirements vary.
#docker #node #swarm #label #role
Congratulations to all football fans and friends of Iran on Iran's victory over Morocco. Here's to more success!
🇮🇷🇮🇷🇮🇷🇮🇷🇮🇷
Friends, please bear with this post, which is rather unrelated to the channel's topic; apologies in advance.
If you have worked with htop, you will definitely love Glances, an advanced real-time system monitoring tool for Linux. When you run it on Linux you see IO, CPU, RAM, network bandwidth, the latest system errors, etc. in one glance! By default it displays the heaviest processes on top. Read about its UI, installation, etc. here:
- https://www.tecmint.com/glances-an-advanced-real-time-system-monitoring-tool-for-linux/
#linux #htop #glance
You can use Grafana to display your OS metrics. You can use its API endpoints to get data in JSON or XML, and it also provides a web UI for you to take a look at the graphs.

By default, when you install nginX on Linux, a logrotate config file is created in /etc/logrotate.d/nginx. Sometimes you may see that after a while the nginX access log is empty and everything is logged into a file usually named access.log.1. This happens when a process cannot close its file handler and has to keep writing into access.log.1. If you take a look at nginX's rotation config you will see a part called postrotate that runs a command; for nginx it is as below:
postrotate
    invoke-rc.d nginx rotate >/dev/null 2>&1
endscript
If you run the command between postrotate and endscript, it may give the error below:

invoke-rc.d: action rotate is unknown, but proceeding anyway.
invoke-rc.d: policy-rc.d denied execution of rotate.
Just remove the file related to i-MSCP:

rm /usr/sbin/policy-rc.d

NOTE: if you want to be safe, rename it to something else instead of removing it.
Now you can run the invoke-rc.d command and you should see a result like below:

[ ok ] Re-opening nginx log files: nginx.
Now every log will be directed to its own file, not it_file_name.log.1, and file handlers are closed safely.
#nginx #policy_rc #invoke_rc #log_rotate #rotate
There are times when you have no way but to get data from a third-party library, and that library has a rate limit on its endpoints. For example, I have recently used the geopy Python library to get latitude and longitude by giving a city name to the function:

from geopy.geocoders import Nominatim

city_name = 'Tehran'
geolocator = Nominatim()
location = geolocator.geocode(city_name)
print(location.latitude, location.longitude)
This library by default sends its requests to https://nominatim.openstreetmap.org/search to get geolocation data. Its rate limit is 1 request per second. To circumvent these limitations, use redis to cache results on your own server and read the cached result from there:

self.redis.hset(city_name, 'lat', lat)
self.redis.hset(city_name, 'long', longitude)
Now read from cache in case it exists:
if self.redis.hexists(city_name, 'lat'):
    location = self.redis.hgetall(city_name)
Make sure to put a sleep(1) between requests to Nominatim in order to stay within its rate limit.
NOTE: other 3rd parties can be used instead of Nominatim.
#python #geopy #geo #latitude #longitude #Nominatim #redis #hset #geocoders
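Putting the pieces above together, a sketch of a cache-first lookup. The cached_geocode function and its parameters are illustrative, not from the original snippets: redis_client is anything exposing redis-py's hash methods (hexists/hgetall/hset), and geolocator is anything exposing geopy's geocode.

```python
import time

def cached_geocode(city_name, redis_client, geolocator):
    """Return (lat, long) for a city, querying Nominatim only on a cache miss."""
    if redis_client.hexists(city_name, 'lat'):
        cached = redis_client.hgetall(city_name)
        return float(cached['lat']), float(cached['long'])

    # cache miss: hit the geocoder, then store the result for next time
    location = geolocator.geocode(city_name)
    redis_client.hset(city_name, 'lat', location.latitude)
    redis_client.hset(city_name, 'long', location.longitude)
    time.sleep(1)  # respect Nominatim's 1 request/second limit
    return location.latitude, location.longitude
```

Repeated lookups for the same city then never touch Nominatim again, so the 1 req/s limit only applies to cities you have not seen before.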