Tech C**P
Python and Linux instructor and developer @alirezastack
If you have followed our MongoDB SSL configuration, you should by now know that we can generate an SSL certificate using Let's Encrypt. I used dehydrated, which works well with Cloudflare.

To make the procedure automatic, I created a sample shell script that, after the automatic renewal, also renews the PEM files for MongoDB:

#!/bin/bash

echo 'Combining the new mongo private key PEM file and cert PEM file...'
# Run the redirection under sudo too, since both source and target files are root-owned.
sudo sh -c 'cat /etc/dehydrated/certs/mongo.example.com/privkey.pem /etc/dehydrated/certs/mongo.example.com/cert.pem > /etc/ssl/mongo.pem'
echo 'Saved the new file in /etc/ssl/mongo.pem'

sudo touch /etc/ssl/ca.pem
# 644 is sufficient here; world-writable (777) permissions on a CA file are unsafe.
sudo chmod 644 /etc/ssl/ca.pem
echo 'Truncating ca.pem and generating a new one in /etc/ssl/ca.pem...'
sudo truncate -s 0 /etc/ssl/ca.pem
echo 'Generating ca.pem using openssl from input /etc/ssl/ca.crt...'
sudo openssl x509 -in /etc/ssl/ca.crt -out /etc/ssl/ca.pem -outform PEM
echo 'ca.pem was generated successfully in /etc/ssl'

echo 'Appending the chain.pem content to the newly created /etc/ssl/ca.pem...'
# `sudo cat file >> target` would do the redirection without sudo, so pipe through tee instead.
sudo cat /etc/dehydrated/certs/mongo.example.com/chain.pem | sudo tee -a /etc/ssl/ca.pem > /dev/null
echo 'done!'
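The core of this script is just concatenating the private key and the certificate into one file. A minimal sketch of that step with throwaway files (the paths and contents here are stand-ins, not the real dehydrated layout):

```shell
# mongo.pem is simply the private key followed by the certificate.
# Demonstrated with fake one-line files in a temp directory; all paths
# below are stand-ins for the real /etc/dehydrated and /etc/ssl ones.
tmp=$(mktemp -d)
printf 'FAKE-PRIVATE-KEY\n' > "$tmp/privkey.pem"
printf 'FAKE-CERT\n' > "$tmp/cert.pem"

cat "$tmp/privkey.pem" "$tmp/cert.pem" > "$tmp/mongo.pem"
cat "$tmp/mongo.pem"   # prints FAKE-PRIVATE-KEY, then FAKE-CERT

rm -r "$tmp"
```

MongoDB then reads the key and certificate from this single combined file via its sslPEMKeyFile option.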

#mongodb #mongo #ssl #pem #openssl #lets_encrypt
Today I fixed a really C**Py bug that had been bugging me for ages, day and night!

I use a scheduler to get data from MongoDB; one of the servers is outside of Iran and another is in Iran. Sometimes querying the DB takes forever and freezes the data-gathering procedure. I had to restart the whole thing (like Windows) to reset the connection. I know, it was stupid! :|

I found the parameter below, which you can set on your pymongo.MongoClient:

socketTimeoutMS=10000

socketTimeoutMS: (integer or None) Controls how long (in milliseconds) the driver will wait for a response after sending an ordinary (non-monitoring) database operation before concluding that a network error has occurred. Defaults to `None` (no timeout).
When you don't set it, there is no timeout at all! So I set it to 20000 ms (20 seconds) to solve this nasty problem.

#mongodb #mongo #socketTimeoutMS #timeout #socket_timeout
In Grafana, if you are connected to MySQL, you need to provide three values in your SELECT query: the time column, which must be aliased time_sec; the countable value, which must be aliased value; and the label displayed on your graph, which must be aliased metric:

SELECT
UNIX_TIMESTAMP(your_date_field) as time_sec,
count(*) as value,
'your_label' as metric
FROM table
WHERE status='success'
GROUP BY your_date_field
ORDER BY your_date_field ASC


To read more about Grafana head over here:

- http://docs.grafana.org/features/datasources/mysql/#using-mysql-in-grafana


#mongodb #mongo #mysql #grafana #dashboard #chart
To the friends who asked about private tutoring: unfortunately there is no opportunity for it before the end of the year, as we are a bit busy toward year's end. God willing, after the New Year holidays, if the workload lightens, I will let you know. Thank you for your encouraging messages about continuing this work.…
Hello, good morning.
To the friends who were asking about private classes before the New Year holidays: given my lighter workload, I can offer private tutoring on the following specialized topics from Khordad onward:

- Python
- MongoDB
- Microservices

To get in touch with me, contact the ID @alirezastack.

Thanks to all friends for your support 💐
In order to verify that your certificate was generated successfully, use openssl:

openssl verify -verbose -CAfile /etc/ssl/ca.pem /etc/ssl/mongo.pem
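If you want to try the command without the real files, you can generate a throwaway self-signed certificate, which acts as its own CA, and verify it against itself (all paths below are temporary stand-ins):

```shell
# Exercise `openssl verify` end-to-end with a throwaway self-signed cert.
# A self-signed certificate is its own CA, so verification prints "... OK".
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj '/CN=test-ca' \
    -keyout "$tmp/ca.key" -out "$tmp/ca.pem" 2>/dev/null
openssl verify -CAfile "$tmp/ca.pem" "$tmp/ca.pem"
rm -r "$tmp"
```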

#openssl #verify #pem #ca #mongodb #ssl
Run the newest Elasticsearch image on Linux using Docker:

docker run -d -p 9200:9200 -p 9300:9300 -v /srv/esdata:/usr/share/elasticsearch/data -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.2.4

#docker #es #elasticsearch
Access an application on a remote machine without having access to its port from your browser. Sometimes firewalls block all ports to the outside world; in such cases you can do port forwarding from the remote machine to your local machine in order to be able to see the application UI. You can use ssh for this:

ssh -L 5601:localhost:8085 YOUR_HOST

This forwards TCP port 5601 on your local machine over the SSH tunnel: any connection to localhost:5601 on your machine is sent to YOUR_HOST, which then makes a TCP connection to port 8085 on its own localhost. Any other host name or IP address could be used instead of localhost to specify the host to connect to.

Now if you head over to your browser, you can enter the URL localhost:5601 to see the remote application.

#linux #ssh #port_forwarding #forwarding #remote_forwarding
Elasticsearch gives below error:

Config: Error 403 Forbidden: blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];: [cluster_block_exception] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];

This error may happen when the server storage is totally full and Elasticsearch puts your indexes in read-only mode. If you now have enough free space and are sure Elasticsearch is otherwise behaving normally, remove the read-only block from the index:

curl -XPUT -H "Content-Type: application/json" 'http://localhost:9200/.monitoring-*/_settings' -d '{"index.blocks.read_only_allow_delete": null}'

#elasticsearch #read_only #index #cluster_block_exception
Delete elasticsearch indexes older than 1 month:

#!/bin/bash

last_month=`date +%Y%m%d --date '1 month ago'`
old_es_index="myindex_*-$last_month"
echo "Deleting ES indexes $old_es_index..."
curl -X DELETE "http://localhost:9200/$old_es_index"
echo ''

NOTE: the asterisk in the curl command matches anything between myindex_ and the date suffix, e.g. myindex_module1-20180520.
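To sanity-check the computed suffix without touching Elasticsearch, GNU date also accepts a fixed reference date combined with a relative offset (the myindex_ prefix is just a stand-in, as above):

```shell
# Verify the index-pattern logic against a fixed date instead of "now".
last_month=$(date --date='2018-06-20 1 month ago' +%Y%m%d)
old_es_index="myindex_*-$last_month"
echo "$old_es_index"   # prints myindex_*-20180520 (quotes stop glob expansion)
```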

#linux #sysadmin #bash #script #es #elasticsearch #DELETE #purge
Get a specific date, like 2 days ago, in a bash script:

#!/bin/bash
specific_date=`date --date="2 days ago" +%Y%m%d`
echo "$specific_date"

#linux #date #bash_script #bash