Friends who are familiar with the algorithm below and can help with a master's degree project, please contact the user below (fee negotiable):
Metropolis–Hastings algorithm (a Markov chain Monte Carlo (MCMC) method)
👤 @shararehadipour
MySQL bulk insert
If you have bulk scripts for your email/SMS like me, and you are sending to thousands of users, you will definitely get stuck in the middle of the bulk notification, or it will be unacceptably slow. First of all, you must initiate only
ONE
MySQL connection. And if you are inserting your data one row at a time, you're dead again! Try to use executemany, which inserts data into MySQL in bulk, not one by one:
import pymysql  # assuming PyMySQL; any DB-API 2.0 driver works the same

# connection parameters are illustrative
connection = pymysql.connect(host="localhost", user="root",
                             password="secret", database="notifications")
client = connection.cursor()
client.executemany(
    """INSERT INTO email (name, spam, email, uid, email_content)
    VALUES (%s, %s, %s, %s, %s)""",
    [
        ("Ali", 0, 'mail1@example.com', 1, 'EMAIL_CONTENT'),
        ("Reza", 1, 'mail2@example.com', 2, 'EMAIL_CONTENT'),
        ("Mohsen", 1, 'mail3@example.com', 3, 'EMAIL_CONTENT')
    ])
connection.commit()
Another note on bulk insertion: avoid disk IO where possible, and use Redis, memcached, or similar for data like users' phone numbers or emails. It will tremendously improve the performance of your bulk script.
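Here is a runnable sketch of that caching idea. It uses a plain in-process dict as a stand-in for Redis (the helpers fetch_email_from_db and get_email are made up for illustration), but with redis-py the get/set calls against a running server would look almost identical:

```python
import time

# Stand-in for Redis: an in-process dict. With redis-py this would be
# r = redis.Redis(); r.get(...) / r.set(...) against a running server.
cache = {}

def fetch_email_from_db(uid):
    # Hypothetical slow disk/DB lookup.
    time.sleep(0.01)
    return f"user{uid}@example.com"

def get_email(uid):
    # Check the cache first; only hit the disk on a miss.
    if uid not in cache:
        cache[uid] = fetch_email_from_db(uid)
    return cache[uid]

# 10,000 lookups, but only 100 distinct users, so only 100 slow reads.
emails = [get_email(uid % 100) for uid in range(10_000)]
print(len(emails), "lookups,", len(cache), "disk reads")
```

The 9,900 cache hits cost almost nothing; only the 100 misses pay the disk/DB price.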
#python #mysql #executemany #redis #bulk #email #sms
It's Dangerous!
Today I had to implement email unsubscription, and for that I had to pass some data alongside the link as a token in the emails. The best candidate for this scenario is
itsdangerous
. Without it, I would have to store tokens in a Redis DB and match them against the token received from the unsubscription link in the email; that all adds complexity.
Sometimes you just want to send some data to untrusted environments. But how do you do this safely? The trick involves signing. Given a key only you know, you can cryptographically sign your data and hand it over to someone else. When you get the data back, you can easily ensure that nobody tampered with it.
Granted, the receiver can decode the contents and look into the package, but they can not modify the contents unless they also have your secret key. So if you keep the key secret and complex, you will be fine.
Internally itsdangerous uses HMAC and SHA1 for signing by default and bases the implementation on the Django signing module. It also however supports JSON Web Signatures (JWS). The library is BSD licensed and written by Armin Ronacher though most of the copyright for the design and implementation goes to Simon Willison and the other amazing Django people that made this library possible.
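The core signing trick is easy to sketch with nothing but the standard library. This is not itsdangerous' exact wire format, just the same HMAC-SHA1 idea; sign and verify are hypothetical helper names:

```python
import base64
import hashlib
import hmac

SECRET_KEY = b"secret-key"  # known only to you / your server

def sign(data: bytes) -> bytes:
    # Append an HMAC-SHA1 signature to the payload.
    sig = hmac.new(SECRET_KEY, data, hashlib.sha1).digest()
    return data + b"." + base64.urlsafe_b64encode(sig)

def verify(token: bytes) -> bytes:
    data, _, sig = token.rpartition(b".")
    expected = base64.urlsafe_b64encode(
        hmac.new(SECRET_KEY, data, hashlib.sha1).digest())
    # compare_digest is a constant-time comparison (resists timing attacks).
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature: data was tampered with")
    return data

token = sign(b"user-42")
print(verify(token))  # b'user-42'
```

Anyone can read the payload, but changing a single byte of it makes verify raise, because they cannot recompute the signature without the secret key.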
Example Use Cases:
- You can serialize and sign a user ID for unsubscribing from newsletters into URLs. This way you don't need to generate one-time tokens and store them in the database. The same goes for any kind of account activation link and similar things.
- Signed objects can be stored in cookies or other untrusted sources which means you don’t need to have sessions stored on the server, which reduces the number of necessary database queries.
- Signed information can safely do a roundtrip between server and client in general which makes them useful for passing server-side state to a client and then back.
To install it using pip:
pip install itsdangerous
Sample code:
>>> from itsdangerous import URLSafeSerializer
>>> s = URLSafeSerializer('secret-key')
>>> s.dumps([1, 2, 3, 4])
'WzEsMiwzLDRd.wSPHqC0gR7VUqivlSukJ0IeTDgo'
>>> s.loads('WzEsMiwzLDRd.wSPHqC0gR7VUqivlSukJ0IeTDgo')
[1, 2, 3, 4]
#python #itsdangerous #URLSafeSerializer
What is cronjob?
cron is a Unix, Solaris, and Linux utility that allows tasks to be automatically run in the background at regular intervals by the cron daemon.
What is crontab?
Crontab (CRON TABle) is a file which contains the schedule of cron entries to be run at specified times.
What is a cron job?
A cron job (or cron schedule) is a specific set of execution instructions specifying the day, time, and command to execute.
To see the list of current cronjobs use:
crontab -l
To edit the crontab file, or create one if it doesn't already exist, issue the command below:
crontab -e
It may be opened in nano by default. In case you want to change your default editor for crontab, use the command below:
export EDITOR=vim
NOTE:
if you want this setting to persist, you have to put the above export command inside ~/.bashrc
The general form of a cronjob is like below:
* * * * * command to be executed
In total we have 5 stars. From left to right, the first star is the minute you want to run your cronjob at (0 - 59).
Second star is hour (0 - 23).
Third star is day of month (1 - 31).
Fourth star refers to month (1 - 12).
And the last star refers to the day of the week (0 - 6). Be careful: 0 is Sunday!
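As a quick reference, the five fields can be annotated like this (lines starting with # are comments in a crontab):

```
#  +------------- minute (0 - 59)
#  |  +---------- hour (0 - 23)
#  |  |  +------- day of month (1 - 31)
#  |  |  |  +---- month (1 - 12)
#  |  |  |  |  +- day of week (0 - 6, Sunday = 0)
#  |  |  |  |  |
   *  *  *  *  *  command to be executed
```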
Now let's create a sample cronjob that restarts our eshop service at 22:00:00 every day:
0 22 * * * svc -k /etc/service/eshop
The remaining stars say: run this command on every day of the month, in every month, on every day of the week.
#linux #cron #cronjob #crontab
Do you know what kind of RAID technology is suitable for you? Do you know whether RAID 0 is fault tolerant or not? What kind of RAID doubles or triples your read performance or write performance?
This is just the tip of the iceberg. Knowing and memorizing these parameters is hard, so there is an online tool just for this purpose. It calculates capacity, speed, and fault tolerance characteristics for RAID0, RAID1, RAID5, RAID6, and RAID10 setups:
http://www.raid-calculator.com/
#hardware #raid #disk #raid_calculator
To see your current database in MySQL:
mysql> SELECT DATABASE();
+------------+
| DATABASE() |
+------------+
| my_data    |
+------------+
1 row in set (0.00 sec)
As shown above, my current database is my_data. I usually forget what DB I'm currently on when I use tmux and my session has been kept open for weeks. That's why! :)
#mysql #database #current_database #current_db
Change streams are a new feature in MongoDB 3.6 that you can use to watch real-time data modifications. In older versions you had to tail the oplog to get changes.
for change in db.collection.watch():
    print(change)
The ChangeStream iterable blocks until the next change document is returned or an error is raised. If the next() method encounters a network error when retrieving a batch from the server, it will automatically attempt to recreate the cursor such that no change events are missed. Any error encountered during the resume attempt indicates there may be an outage and will be raised.
#mongodb #mongo #mongo36 #change_stream #stream #etl
Did you push a very large file into git? Does everyone yell at you about your commit and your uselessness? Are you a junky punk like me who just ruins things? Oh, I'm kidding...
Because of that big file, cloning the repo again takes a long, long time. Removing the file locally and pushing again does not solve the problem, as the big file is still in Git's history.
If you want to remove the large file from your git history, so that everyone who clones the repo does not have to wait for that large file, do as follows (the -f makes rm succeed even on commits where the file does not exist, so filter-branch doesn't abort):
git filter-branch --tree-filter 'rm -f path/to/your/bigfile' HEAD
git push origin master --force
Note that you should run these commands from the root of the git repo.
If you need to do this, be sure to keep a copy of your repo around in case something goes wrong.
#git #clone #rm #remove #large_file #blob #rebase #filter_branch
How much do you know about rebooting/shutting down your Linux server?
* First of all, you should be root or a sudoer to be able to reboot the server.
The command below reboots the server immediately:
reboot
In case you want to reboot at a specific time, you can use the shutdown command! Yes, you have to use shutdown for rebooting the server; it has historical reasons.
shutdown -r time "message"
The time parameter can be now, or in the format hh:mm, or in the format +m which stands for minutes. now is a shortcut for +0. The message part is broadcast to all users who are logged in.
NOTE:
the shutdown command is recommended over the reboot command.
So, by the explanation given so far, you can reboot your system after 5 minutes with the command below:
shutdown -r +5 "Server is going down for kernel upgrade. Please save your work ASAP."
To see the history log of recent reboots:
last reboot
To reboot a remote server you can use ssh:
ssh root@server1 /sbin/reboot
#linux #cyberciti #reboot #shutdown #remote_reboot
Create a tar.gz file using the tar command:
tar -zcvf tar-archive-name.tar.gz source-folder-name
-z: tells tar to filter the archive through gzip (compress when creating, uncompress when extracting).
-c: tells tar to create a new archive.
-v: stands for "verbose"; it lists all of the files one by one as they are archived.
-f: tells tar that you are going to give it a file name to work with.
This will compress the contents of source-folder-name into a tar.gz archive named tar-archive-name.tar.gz.
To extract a tar.gz compressed archive you can use the following command:
tar -zxvf tar-archive-name.tar.gz
-x: tells tar to extract the files.
This will extract the archive to the folder tar-archive-name.
#linux #tar #targz #zip #compress
If you want to see how much actual data is stored in your MyISAM and InnoDB storage engines, run the query below:
SELECT IFNULL(B.engine,'Total') "Storage Engine",
CONCAT(LPAD(REPLACE(FORMAT(B.DSize/POWER(1024,pw),3),',',''),17,' '),' ',
SUBSTR(' KMGTP',pw+1,1),'B') "Data Size", CONCAT(LPAD(REPLACE(
FORMAT(B.ISize/POWER(1024,pw),3),',',''),17,' '),' ',
SUBSTR(' KMGTP',pw+1,1),'B') "Index Size", CONCAT(LPAD(REPLACE(
FORMAT(B.TSize/POWER(1024,pw),3),',',''),17,' '),' ',
SUBSTR(' KMGTP',pw+1,1),'B') "Table Size"
FROM (SELECT engine,SUM(data_length) DSize,SUM(index_length) ISize,
SUM(data_length+index_length) TSize FROM information_schema.tables
WHERE table_schema NOT IN ('mysql','information_schema','performance_schema')
AND engine IS NOT NULL GROUP BY engine WITH ROLLUP) B,
(SELECT 3 pw) A ORDER BY TSize;
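If you only need a rough per-database total rather than the per-engine breakdown above, a simpler query along these lines also works (a sketch over information_schema, not formatted like the one above):

```sql
-- Approximate on-disk size per database, in megabytes.
SELECT table_schema AS db,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS size_mb
FROM information_schema.tables
GROUP BY table_schema
ORDER BY size_mb DESC;
```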
#mysql #myisam #innodb #storage_engine #se
You can kill a process with the kill command. But have you thought about what happens to the running process when you issue the kill command? It will do something nasty in between if you do not handle the kill signal gracefully. If you are writing to a file, it will corrupt the file; if you are sending RPC messages, you will break the process in between and drop all messages.
To handle signals you can use the signal Python module. A sample of signal handling is available as a gist on GitHub:
- https://gist.github.com/alirezastack/ae4e12a21ccb91264b69e1d14a53c044
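The gist is not reproduced here, but its core idea (catch the signal, set a flag, and let the loop finish its current work) can be sketched like this; a minimal standalone example, not the gist's exact code:

```python
import os
import signal
import time

shutdown_requested = False

def handle_signal(signum, frame):
    # Just set a flag; the main loop decides when it is safe to stop.
    global shutdown_requested
    shutdown_requested = True

# Register the same handler for SIGTERM (kill <pid>) and SIGINT (CTRL+C).
signal.signal(signal.SIGTERM, handle_signal)
signal.signal(signal.SIGINT, handle_signal)

processed = 0
while not shutdown_requested:
    processed += 1  # stand-in for one unit of real work (a file write, an RPC, ...)
    if processed == 3:
        # Simulate `kill <pid>` being issued from another terminal.
        os.kill(os.getpid(), signal.SIGTERM)
        time.sleep(0.1)  # give the signal time to be delivered

print(f"exiting gracefully after {processed} items")
```

The loop finishes its current iteration instead of dying mid-write, which is exactly what protects your files and RPC messages.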
The above method will handle SIGINT and SIGTERM and end the loop gracefully.
To test it, you can run the script in a terminal, then find the pid of the process and finally kill it:
sudo kill 4773
The above command issues SIGTERM, and the script will handle it gracefully. SIGINT, on the other hand, is issued when you press CTRL+C.
#python #sigint #sigterm #signal #kill
MongoDB data types
String: You know what it is!
Integer: Used to store a numerical value. Integers can be 32 bit or 64 bit depending on your server.
Boolean: True/False.
Double: Used to store floating point values.
Arrays: [list, of, elements]
Timestamp: Handy for recording when a document has been modified or added.
Object: Used for embedded documents, like {"images": {"a": "ali", "b": "reza"}}.
Null: Used to store a Null value.
Date: Used to store a date or time in UNIX time format. You can specify your own date time by creating a Date object and passing day, month, and year into it.
Object ID: Used to store the document's ID.
There are some more, like Code and Regex, which are used less than the other data types.
#mongodb #data_type #mongo #database #collection #object