Write a pandas DataFrame into Google BigQuery:
https://pandas.pydata.org/pandas-docs/version/0.21/generated/pandas.DataFrame.to_gbq.html#pandas-dataframe-to-gbq
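A minimal sketch (the project, dataset, and table names are placeholders; to_gbq needs the pandas-gbq dependency installed):
import pandas as pd

df = pd.DataFrame({'name': ['a', 'b'], 'score': [1, 2]})
# if_exists='append' adds rows when the table already exists
df.to_gbq('my_dataset.my_table', project_id='my-project', if_exists='append')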
#pandas #bg #bigquery
Months ago we talked about how to get MongoDB data changes. The problem with that article was that if for any reason your script
was stopped, you would lose the data from the downtime period.
Now we have a new solution that lets you resume reading from the point in time you last read. MongoDB uses the BSON Timestamp internally, for things like the replication oplog. We can use that same Timestamp and store it somewhere so we can read from the exact point
we reached last time.
In Python you can import it like below:
from bson.timestamp import Timestamp
Now, to read data from that point, load the timestamp from wherever you saved it and query the oplog from there:
import pymongo

client = pymongo.MongoClient('localhost', 27017)
oplog = client.local.oplog.rs  # on a replica set the oplog lives in the `local` db
ts = YOUR_TIMESTAMP_HERE  # the bson Timestamp you stored earlier
cursor = oplog.find({'ts': {'$gt': ts}},
                    cursor_type=pymongo.CursorType.TAILABLE_AWAIT,
                    oplog_replay=True)
After traversing the cursor and catching MongoDB changes, you can store the new timestamp, which resides in the ts field of each document you fetch from the oplog. Now use a while True loop and keep reading data until the cursor is no longer alive (a minimal sketch of this loop is at the end of this post). The point of this post is that you can store ts somewhere and resume reading from the point where you stored it.
If you remember, we previously got the latest changes with the query below:
last = oplog.find().sort('$natural', pymongo.DESCENDING).limit(1).next()
ts = last['ts']
We read the last ts and started from the most recent record only, and that's why we were missing data.
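As mentioned above, here is a minimal sketch of the tailing loop. handle_change and save_timestamp are hypothetical helpers for processing a change and persisting its ts:
import time

while True:
    for doc in cursor:
        handle_change(doc)          # hypothetical: react to the change
        save_timestamp(doc['ts'])   # hypothetical: persist ts so we can resume here
    if not cursor.alive:
        break  # cursor was closed; stop (or reopen from the saved ts)
    time.sleep(1)  # no new entries yet; the TAILABLE_AWAIT cursor stays open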
#mongodb #mongo #replication #oplog #timestamp #cursor
There is always a risk, and a potential problem, when altering a production MySQL table.
Percona has released a toolkit that contains a command called pt-online-schema-change. It will change a table's schema live in production, without downtime.
Installation steps on Debian:
1- wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb
2- sudo dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb
3- sudo apt-get update
4- sudo apt-get install percona-toolkit
Now you have the Percona Toolkit on your Debian server. Use the pt-online-schema-change command for your table alterations.
#mysql #percona #schema #alter_table #online_schema_change #percona_toolkit #pt_online_schema_change
In order to do a dry run before the real execution, use --dry-run:
pt-online-schema-change --dry-run h=127.0.0.1,D=YOUR_DB,t=YOUR_TABLE --alter "ADD COLUMN (foobar varchar(30) DEFAULT NULL);"
Now after the dry run you can execute the alter command:
pt-online-schema-change --execute h=127.0.0.1,D=YOUR_DB,t=YOUR_TABLE --alter "ADD COLUMN (foobar varchar(30) DEFAULT NULL);"
#mysql #percona #schema #alter_table #online_schema_change #percona_toolkit #pt_online_schema_change
https://dba.stackexchange.com/questions/187630/problem-with-aborted-pt-online-schema-change-command
#mysql #trigger #percona #online_schema_change
In order to add multiple columns in an ALTER TABLE command, use ONE --alter option and comma-separate the ADD COLUMN statements.
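For example, a hedged sketch (the column definitions here are made up):
pt-online-schema-change --execute h=127.0.0.1,D=YOUR_DB,t=YOUR_TABLE --alter "ADD COLUMN foo varchar(30) DEFAULT NULL, ADD COLUMN bar int DEFAULT 0"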
#percona
Do you think SSH sucks? Do you think SSH sucks especially when you are on an unstable network, like what we have in IRAN, and when DPI (Deep Packet Inspection) is underway? OK, I have mosh for you. mosh stands for Mobile Shell. It reconnects itself and you never have to log in again. Even if you change your internet connection you are safe and your shell stays open :)
Its security is like SSH's, as it uses the SSH authentication mechanism for login. You need to open UDP ports 60000 to 61000. Or you can give the -p parameter to connect on a specific port:
mosh -p 60010 admin@my_server.com
One of the caveats of mosh is that you cannot scroll back through previous output, as its buffer is limited to the window you are currently viewing.
Install it using apt-get install mosh.
For further instructions head over to the link below:
https://mosh.org/
#ssh #mosh #terminal
To test mosh, disconnect from the internet while you're logged into your server and wait for a message from mosh that says you last connected to the server 10 secs ago, etc. Now connect to the internet again. Voila! You're back again. This is amazing :))))
#mosh #test #reconnect
In order to get a random document from a MongoDB collection you can use the aggregation framework:
db.users.aggregate( [ { $sample: { size: 1 } } ] )
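From Python with pymongo, a minimal equivalent sketch (the database and collection names are illustrative):
import pymongo

client = pymongo.MongoClient('localhost', 27017)
# aggregate() returns a cursor; next() pulls the single sampled document
random_doc = next(client.my_db.users.aggregate([{'$sample': {'size': 1}}]))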
NOTE: MongoDB 3.2 introduced $sample to the aggregation pipeline.
Read more here: https://www.mongodb.com/blog/post/how-to-perform-random-queries-on-mongodb
This method is the fastest and most efficient way of getting random documents from a huge collection, e.g. one with 100M records.
#mongodb #mongo #aggregate #sample #random
In pymongo you can give a name to your connections. This definitely helps to debug issues or trace logs when reading MongoDB logs. The most important part of this scenario is when you are using a microservice architecture and have tens of modules which work independently from each other and send their requests to MongoDB:
mc = pymongo.MongoClient(host, port, appname='YOUR_APP_NAME')
Now if you look at the MongoDB log you would see:
I COMMAND [conn173140] command MY_DB.users appName: "YOUR_APP_NAME" command: find { find: "deleted_users", filter: {}, sort: { acquired_date: 1 }, skip: 19973, limit: 1000, $readPreference: { mode: "secondaryPreferred" }, $db: "blahblah" } planSummary: COLLSCAN keysExamined:0 docsExamined:19973 hasSortStage:1 cursorExhausted:1 numYields:312 nreturned:0 reslen:235 locks:{ Global: { acquireCount: { r: 626 } }, Database: { acquireCount: { r: 313 } }, Collection: { acquireCount: { r: 313 } } } protocol:op_query 153ms
In the log above you can see YOUR_APP_NAME.
#mongodb #mongo #pymongo #appname
https://nickjanetakis.com/blog/15-useful-flask-extensions-and-libraries-that-i-use-in-every-project#flask-limiter
#flask #rate_limiter #mail #celery
Send Google Forms to Slack:
// read more about Slack webhooks here: https://api.slack.com/incoming-webhooks
var POST_URL = "https://hooks.slack.com/services/YOUR_TOKEN";

function onSubmit(e) {
  var response = e.response.getItemResponses();

  // e.response.getRespondentEmail() is also available when email collection is on
  var email = response[0].getResponse();
  var field2 = response[1].getResponse();
  var field3 = response[2].getResponse();
  var field4 = response[3].getResponse();

  var d = "*SUBMITTED FORM*\n>>>Email: " + email + "\n";
  d += "other fields: \n" + field2 + field3 + field4;

  // JSON.stringify safely escapes quotes and newlines in user input
  var payload = { "payload": JSON.stringify({ "text": d }) };
  var options = {
    "method": "post",
    "payload": payload
  };
  UrlFetchApp.fetch(POST_URL, options);
}
You need to add the JavaScript code above to the Script Editor section of your Google Form. When you are in form editing mode, click the three dots in the top corner and click on Script Editor. When you're done, click save and give your project script a name. Now on the script editor page click Edit -> All your triggers and bind your script to the form's onSubmit event.
Read more about webhooks for Slack here:
- https://api.slack.com/incoming-webhooks
That's all! Now you will have all your submitted forms in Slack. Voila!
#google #slack #forms #google_forms #webhook #hook #javascript
Minio is an object storage server that is compatible with Amazon S3. You can run your own object storage server using Docker:
- https://docs.minio.io/docs/minio-docker-quickstart-guide
And you can use its Python SDK in order to talk to its endpoint API:
- https://github.com/minio/minio-py
Its usage is very simple and elegant. If you are unfamiliar with object storage, read more here:
- https://en.wikipedia.org/wiki/Object_storage
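For a taste of the SDK, a minimal upload sketch with minio-py (the endpoint, credentials, bucket, and file names are placeholders):
from minio import Minio

client = Minio('localhost:9000', access_key='YOUR_ACCESS_KEY',
               secret_key='YOUR_SECRET_KEY', secure=False)
if not client.bucket_exists('my-bucket'):
    client.make_bucket('my-bucket')
client.fput_object('my-bucket', 'report.csv', '/tmp/report.csv')  # upload a local file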
#minio #python #sdk #docker #object_storage
When you redirect in nginx you would use a 301 or 302 code, like the code below:
location = /signup {
    return 302 https://docs.google.com/forms;
}
But there is a tiny tip here that needs to be told. If you want to pass query parameters to the destination link, which here is https://docs.google.com/forms, it won't work. Query string parameters are held in the $args variable in nginx, so you need to pass that variable along, like in the following code:
location = /signup {
    return 302 https://docs.google.com/forms$is_args$args;
}
The variable $is_args will be set to "?" if a request line has arguments, or to an empty string otherwise.
#nginx #web_server #redirect #302 #is_args #args
Create and assign a self-signed certificate with the bash script below:
- https://gist.github.com/alirezastack/30c8849e4add4329dcc2633fbb06a638
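For orientation, a minimal sketch of what such a script typically does (the linked gist may differ; the host name and paths are placeholders):
# generate a self-signed certificate and key, then combine them into the
# single .pem file that mongod expects
openssl req -newkey rsa:2048 -nodes -x509 -days 365 \
    -subj "/CN=my.mongo.host" \
    -keyout mongodb.key -out mongodb.crt
cat mongodb.key mongodb.crt > mongodb.pem

# point mongod at the combined pem file
mongod --sslMode requireSSL --sslPEMKeyFile /path/to/mongodb.pem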
#mongodb #ssl #self_signed