MongoDB data types
String: You know what it is!
Integer: This type is used to store a numerical value. Integers can be 32-bit or 64-bit, depending on your server.
Boolean: True/False.
Double: This type is used to store floating point values.
Arrays: [list, of, elements]
Timestamp: This can be handy for recording when a document has been modified or added.
Object: This datatype is used for embedded documents, like {"images": {"a": "ali", "b": "reza"}}.
Null: This type is used to store a Null value.
Date: This datatype is used to store a date or time in UNIX time format. You can specify your own date and time by creating a Date object and passing day, month and year into it.
Object ID: This datatype is used to store the document's ID.
There are some more, like Code and Regex, which are used less than the other data types.
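As a small illustration, here is a minimal pymongo sketch (the collection name and values are made up) that inserts one document exercising several of these types:
import datetime

from bson import ObjectId
from pymongo import MongoClient

client = MongoClient("localhost", 27017)
doc = {
    "_id": ObjectId(),                          # Object ID
    "name": "ali",                              # String
    "age": 30,                                  # Integer
    "active": True,                             # Boolean
    "score": 4.5,                               # Double
    "tags": ["db", "mongo"],                    # Array
    "images": {"a": "ali", "b": "reza"},        # Object (embedded document)
    "nickname": None,                           # Null
    "created_at": datetime.datetime.utcnow(),   # Date
}
client.test.users.insert_one(doc)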
#mongodb #data_type #mongo #database #collection #object
InnoDB file per table: why?
And if it was not enabled from the start, what can you do?
Today was a big day from a technical point of view: a MySQL change that saved me a lot of storage, and a great deal of maintenance in the future.
To better explain the issue, I have to talk a little bit about the fundamental behaviour of the MySQL InnoDB storage engine.
In the past, MySQL used MyISAM as its default storage engine. It didn't support transactions, it was not fault tolerant, and data was not reliable when a power outage occurred or the server was restarted in the middle of MySQL operations. Nowadays MySQL uses InnoDB as its default storage engine, which supports transactions, is fault tolerant, and more.
In InnoDB, by default, all tables of all databases reside in a single gigantic file called ibdata. When data grows and you alter your tables, the scar gets worse: the ibdata file grows very fast, and altering a table does not shrink it. For example, we had a single 120GB file on a server where altering one table with a huge amount of data took a long time and so much storage that our server ran out of free space.
There is a mechanism in MySQL to configure InnoDB to store each table's data in its own file instead of inside the ibdata file. This mechanism has great advantages, like being able to use OPTIMIZE TABLE to shrink a table's size.
OPTIMIZE TABLE on an InnoDB table locks the table, copies the data into a new clean table (that's why the result is shrunk), drops the original table and renames the new table to the original name. That is why you should make sure you have double the volume of your table available on disk: if you have a 30GB table, optimizing it needs at least 30GB of free disk space.
Do not use OPTIMIZE TABLE on a table when you have not configured innodb_file_per_table. Running OPTIMIZE TABLE against an InnoDB table stored in ibdata1 will make things worse, because here is what it does:
- Makes the table's data and indexes contiguous inside ibdata1.
- Makes ibdata1 grow, because the contiguous data is appended to ibdata1.
You can segregate Table Data and Table Indexes from ibdata1 and manage them independently using innodb_file_per_table. That way, only MVCC and Table MetaData would reside in ibdata1.
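As a quick safety check before running OPTIMIZE TABLE, here is a minimal sketch (assuming the MySQLdb driver; the database and table names are hypothetical) that refuses to optimize unless file-per-table is enabled:
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="root", passwd="your_password", db="my_db")
cur = conn.cursor()

cur.execute("SHOW VARIABLES LIKE 'innodb_file_per_table'")
_, value = cur.fetchone()

if value == "ON":
    # Safe: the rebuilt table lands in its own .ibd file and actually shrinks.
    cur.execute("OPTIMIZE TABLE my_big_table")
    print(cur.fetchall())
else:
    # Unsafe: optimizing now would only append the rebuilt data to ibdata1.
    print("Enable innodb_file_per_table first.")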
In the next post I explain how to do exactly that.
#mysql #innodb #myisam #ibdata1 #database #innodb_file_per_table
Upgrade MongoDB from 3.4 to 3.6:
Here we presume you are on Debian 8 (jessie).
1- Import the public key:
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 2930ADAE8CAF5059EE73BB4B58712A2291FA4AD5
2- Create the apt sources file:
echo "deb http://repo.mongodb.org/apt/debian jessie/mongodb-org/3.6 main" | sudo tee / etc/apt/sources.list.d/mongodb-org-3.6.list
3- Update the package index:
sudo apt-get update
4- Install the MongoDB packages:
sudo apt-get install -y mongodb-org=3.6.2 mongodb-org-server=3.6.2 mongodb-org-shell=3.6.2 mongodb-org-mongos=3.6.2 mongodb-org-tools=3.6.2
* It will ask about overwriting the config file; if you want to keep the old settings, take a backup of the config first and then overwrite it.
#mongodb #mongo #mongodb36 #database #upgrade #mongodb34
Check grants of a specific user on MySQL:
SELECT sql_grants FROM common_schema.sql_show_grants WHERE user='app';
Please make sure that you have permission to read the grants list; otherwise permission denied will be returned.
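If you don't have common_schema installed, the built-in SHOW GRANTS statement gives similar output. A minimal sketch, assuming the MySQLdb driver (adjust the host part to match the account):
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="root", passwd="your_password")
cur = conn.cursor()
cur.execute("SHOW GRANTS FOR 'app'@'%'")
for (grant,) in cur.fetchall():
    print(grant)  # one GRANT statement per row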
#mysql #grants #sql_grants #database
OperationalError: (2013, 'Lost connection to MySQL server during query')
Usually it indicates network connectivity trouble and you should check the condition of your network if this error occurs frequently. If the error message includes “during query,” this is probably the case you are experiencing.
Sometimes the “during query” form happens when millions of rows are being sent as part of one or more queries. If you know that this is happening, you should try increasing net_read_timeout from its default of 30 seconds to 60 seconds or longer, sufficient for the data transfer to complete.
More rarely, it can happen when the client is attempting the initial connection to the server. In this case, if your connect_timeout value is set to only a few seconds, you may be able to resolve the problem by increasing it to ten seconds, perhaps more if you have a very long distance or slow connection. You can determine whether you are experiencing this more uncommon cause by using SHOW GLOBAL STATUS LIKE 'Aborted_connects'. It will increase by one for each initial connection attempt that the server aborts. You may see “reading authorization packet” as part of the error message; if so, that also suggests that this is the solution that you need.
If the cause is none of those just described, you may be experiencing a problem with BLOB values that are larger than max_allowed_packet, which can cause this error with some clients. Sometimes you may see an ER_NET_PACKET_TOO_LARGE error, and that confirms that you need to increase max_allowed_packet.
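Before changing anything, you can check where you stand. A minimal diagnostic sketch, assuming the MySQLdb driver and sufficient privileges:
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="root", passwd="your_password")
cur = conn.cursor()

# Inspect the counters and settings discussed above.
for query in (
    "SHOW GLOBAL STATUS LIKE 'Aborted_connects'",
    "SHOW GLOBAL VARIABLES LIKE 'net_read_timeout'",
    "SHOW GLOBAL VARIABLES LIKE 'connect_timeout'",
    "SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet'",
):
    cur.execute(query)
    print(cur.fetchone())

# Raising a value takes effect for new connections, e.g.:
# cur.execute("SET GLOBAL net_read_timeout = 60")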
#database #mysql #OperationalError #connection
In order to connect directly to MongoDB from your host:
mongo YOUR_REMOTE_MONGO_SERVER:27017
If your MongoDB port is different, use your desired port rather than 27017. The interesting thing about this command is that you can give the database name that you want to work on:
mongo YOUR_REMOTE_MONGO_SERVER:27017/YOUR_DB
Now, after connecting, if you use the db command you should see your current db:
rs0:PRIMARY> db
YOUR_DB
The rs0:PRIMARY prompt is shown because we use replication; your case may be different.
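The same connection works from Python too. A minimal pymongo sketch (host and database names are placeholders); the database given in the URI becomes the default database:
from pymongo import MongoClient

client = MongoClient("mongodb://YOUR_REMOTE_MONGO_SERVER:27017/YOUR_DB")
db = client.get_default_database()  # resolves to YOUR_DB from the URI
print(db.name)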
#database #mongodb #mongo
In order to connect to a MongoDB replica set in Python, you can give all server node addresses to MongoClient. Addresses passed to MongoClient() are called the seeds. As long as at least one of the seeds is online, MongoClient discovers all the members in the replica set, and determines which is the current primary and which are secondaries or arbiters.
Sample usages:
>>> MongoClient('localhost', replicaset='foo')
MongoClient(host=['localhost:27017'], replicaset='foo', ...)
>>> MongoClient('localhost:27018', replicaset='foo')
MongoClient(['localhost:27018'], replicaset='foo', ...)
>>> MongoClient('localhost', 27019, replicaset='foo')
MongoClient(['localhost:27019'], replicaset='foo', ...)
>>> MongoClient('mongodb://localhost:27017,localhost:27018/?replicaSet=foo')
MongoClient(['localhost:27017', 'localhost:27018'], replicaset='foo', ...)
Read full details here:
- http://api.mongodb.com/python/current/examples/high_availability.html#connecting-to-a-replica-set
#database #mongodb #mongo #replica_set #replication #pymongo #arbiter #master #primary #slave
Backup and restore for CouchDB:
- https://github.com/danielebailo/couchdb-dump
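If you only need a quick one-off dump, a minimal sketch over CouchDB's standard HTTP API can work too (this is not the linked tool; host, credentials and database name are placeholders):
import json

import requests

# _all_docs with include_docs=true returns every document in the database.
resp = requests.get(
    "http://127.0.0.1:5984/your_db/_all_docs",
    params={"include_docs": "true"},
    auth=("admin", "password"),
)
resp.raise_for_status()

with open("your_db_dump.json", "w") as f:
    json.dump(resp.json()["rows"], f)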
#database #couchdb #couch #couchBase #backup
If you want to make an exact copy of a table's structure from another database into a target database in MySQL, you can do it like below:
create table new_table like target_database.target_table
The above command will create a table named new_table with the same definition as target_table from the target_database database. Note that it clones the columns and indexes only, not the rows.
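Since the rows are not copied, you need a second statement for the data. A minimal sketch, assuming the MySQLdb driver and the hypothetical names above:
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="root", passwd="your_password", db="my_db")
cur = conn.cursor()
# Clone the table definition (columns, indexes), then copy the rows.
cur.execute("CREATE TABLE new_table LIKE target_database.target_table")
cur.execute("INSERT INTO new_table SELECT * FROM target_database.target_table")
conn.commit()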
#mysql #database #create_table #table #copy_table
Turn a MySQL table into utf8mb4 to store emojis:
ALTER TABLE YOUR_TABLE convert to character set utf8mb4 collate utf8mb4_general_ci;
Moreover, you also need to change the column character set:
ALTER TABLE YOUR_TABLE CHANGE YOUR_COLUMN_NAME YOUR_COLUMN_NAME VARCHAR(250) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
Be careful: now you have to do more things, like setting the character set after connection initiation in Python:
your_mysql_client = MySQLdb.connect(...)
your_mysql_client.set_character_set('utf8mb4')
Now, before executing your query, you also need to set the character set on the cursor:
my_cursor.execute("SET NAMES utf8mb4;")
my_cursor.execute(YOUR_QUERY)
#database #mysql #character_set #utf8mb4 #cursor #emoji
With mysqldump you can export databases. With the --port parameter you can specify which port it should connect to. But if you provide localhost for the --host parameter, MySQL will use a Unix socket and the port will be ignored; use 127.0.0.1 instead to force a TCP connection.
So be careful with it!
#mysql #mysqldump #port #port_ignorance #3306 #backup #database_backup #sockets #ip_address #localhost
Table compression in MySQL:
https://dev.mysql.com/doc/refman/5.5/en/innodb-compression-usage.html
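As a taste of what the linked page describes, here is a minimal sketch (assuming the MySQLdb driver, innodb_file_per_table enabled and, on 5.5, innodb_file_format=Barracuda) that creates a compressed table:
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="root", passwd="your_password", db="my_db")
cur = conn.cursor()
# ROW_FORMAT=COMPRESSED stores the table in compressed pages (here 8KB).
cur.execute(
    "CREATE TABLE compressed_example ("
    "  id INT PRIMARY KEY,"
    "  payload TEXT"
    ") ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8"
)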
#database #mysql #compression #innodb
https://www.jetbrains.com/research/devecosystem-2018/databases/
It seems MongoDB is trying to be number one in the next decade.
#database #survey
What does select_related do in Django?
select_related does a join on the DB side when needed, and reduces the number of queries. Let's look at an example:
# Hits the database.
e = Entry.objects.get(id=5)
# Hits the database again to get the related Blog object.
b = e.blog
In the above code, two queries are issued on the DB side: first it gets the Entry record, and then the blog is fetched from the DB when e.blog is accessed. And here's the select_related lookup:
# Hits the database.
e = Entry.objects.select_related('blog').get(id=5)
# Doesn't hit the database, because e.blog has been prepopulated
# in the previous query.
b = e.blog
You can follow foreign keys in a similar way to querying them. If you have the following models:
from django.db import models

class City(models.Model):
    # ...
    pass

class Person(models.Model):
    # ...
    hometown = models.ForeignKey(
        City,
        on_delete=models.SET_NULL,
        blank=True,
        null=True,
    )

class Book(models.Model):
    # ...
    author = models.ForeignKey(Person, on_delete=models.CASCADE)
Then a call to Book.objects.select_related('author__hometown').get(id=4) will cache the related Person and the related City:
# Hits the database with joins to the author and hometown tables.
b = Book.objects.select_related('author__hometown').get(id=4)
p = b.author # Doesn't hit the database.
c = p.hometown # Doesn't hit the database.
# Without select_related()...
b = Book.objects.get(id=4) # Hits the database.
p = b.author # Hits the database.
c = p.hometown # Hits the database.
#python #django #select_related #join #database #models
Did you know you can use jsonSchema in MongoDB to search for documents?
Let's say you have a customers collection with the data below:
{ "_id" : ObjectId("5f64bd1eca8806f2c04fcbe3"), "customer_id" : 100, "username" : "john" }
{ "_id" : ObjectId("5f64bd1eca8806f2c04fcbe5"), "customer_id" : 206, "username" : "new_customer" }
{ "_id" : ObjectId("60420df441558d6671cf54f2"), "customer_id" : "123", "username" : "Ali" }
Now let's say you want to find all documents that have a customer_id of type string instead of int. In the Mongo shell:
let ms = {required: ["customer_id"], properties: {customer_id: {bsonType: "string"}}}
This schema says: look for documents that have a customer_id field of string type. To search:
> db.customers.find({$jsonSchema: ms})
{ "_id" : ObjectId("60420df441558d6671cf54f2"), "customer_id" : "123", "username" : "Ali" }
Interesting, right? :)
#database #mongodb #jsonSchema #json_schema