To see your current database in MySQL:

mysql> SELECT DATABASE();
+------------+
| DATABASE() |
+------------+
| my_data |
+------------+
1 row in set (0.00 sec)
As shown above, my current database is my_data. I usually forget which database I'm on when I use tmux and my session has been open for weeks. That's why! :)
#mysql #database #current_database #current_db
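A related trick (a sketch, assuming you can edit your client option file): the mysql client can show the current database directly in its prompt, so a long-lived tmux session never leaves you guessing. In ~/.my.cnf:

[mysql]
# \u = user, \h = host, \d = current database (backslashes are doubled in option files)
prompt=(\\u@\\h) [\\d]>\\_

After that, the prompt itself tells you which database you are on.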
If you want to see how much actual data is stored in your MyISAM and InnoDB tables, broken down by storage engine:

SELECT IFNULL(B.engine,'Total') "Storage Engine",
CONCAT(LPAD(REPLACE(FORMAT(B.DSize/POWER(1024,pw),3),',',''),17,' '),' ',
SUBSTR(' KMGTP',pw+1,1),'B') "Data Size", CONCAT(LPAD(REPLACE(
FORMAT(B.ISize/POWER(1024,pw),3),',',''),17,' '),' ',
SUBSTR(' KMGTP',pw+1,1),'B') "Index Size", CONCAT(LPAD(REPLACE(
FORMAT(B.TSize/POWER(1024,pw),3),',',''),17,' '),' ',
SUBSTR(' KMGTP',pw+1,1),'B') "Table Size"
FROM (SELECT engine,SUM(data_length) DSize,SUM(index_length) ISize,
SUM(data_length+index_length) TSize FROM information_schema.tables
WHERE table_schema NOT IN ('mysql','information_schema','performance_schema')
AND engine IS NOT NULL GROUP BY engine WITH ROLLUP) B,
(SELECT 3 pw) A ORDER BY TSize;
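If you only need a quick per-database breakdown rather than per-engine totals, a simpler query over the same information_schema.tables data will do (a sketch; the GB rounding is my own choice):

SELECT table_schema AS "Database",
       ROUND(SUM(data_length + index_length) / POWER(1024, 3), 2) AS "Size (GB)"
FROM information_schema.tables
GROUP BY table_schema
ORDER BY 2 DESC;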
#mysql #myisam #innodb #storage_engine #se
InnoDB file per table: why?
And what to do if it fails to start?
Today was a big day from a technical point of view in MySQL: it saved me a lot of storage and will save a great deal of maintenance in the future.
To better explain the issue, I have to talk a little bit about the fundamental behaviour of the MySQL InnoDB storage engine.
In the past, MySQL used MyISAM as its default storage engine. It didn't support transactions, it was not fault tolerant, and data was not reliable when a power outage occurred or the server got restarted in the middle of MySQL operations. Nowadays MySQL uses InnoDB as its default storage engine, which comes packed with transactions, fault tolerance and more.
In InnoDB, by default, all tables of all databases reside in a single gigantic file called ibdata1. When data grows and you alter your tables, the scar gets worse: ibdata1 grows very fast, and when you alter a table the file does not shrink back. For example, we had a single 120GB file on a server; altering one table with huge data took a long time and so much extra storage that the server ran out of free space.
There is a mechanism in MySQL to configure InnoDB to store each table's data in its own file instead of inside the ibdata1 file. This mechanism has great advantages, such as being able to use OPTIMIZE TABLE to shrink a table's size.
OPTIMIZE TABLE on an InnoDB table locks the table, copies the data into a new clean table (that's why the result is smaller), drops the original table and renames the new table to the original name. That is why you should make sure you have about twice the volume of your table available on disk: if you have a 30GB table, optimizing it needs at least 30GB of free disk space (a quick way to check a table's current size is shown after the list below).
Do not use OPTIMIZE TABLE on a table when you have not configured innodb_file_per_table. Running OPTIMIZE TABLE against an InnoDB table stored in ibdata1 will make things worse, because here is what it does:
- It makes the table's data and indexes contiguous inside ibdata1.
- It makes ibdata1 grow, because the contiguous data is appended to ibdata1.
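Before reaching for OPTIMIZE TABLE at all, it is worth checking whether file-per-table is already enabled and how much reclaimable space a table reports. A minimal check (mydb and mytable are placeholder names):

-- Is file-per-table enabled for newly created tables?
SHOW VARIABLES LIKE 'innodb_file_per_table';

-- Current size and reclaimable space for one table
SELECT table_name,
       ROUND(data_length / POWER(1024, 2), 1) AS data_mb,
       ROUND(index_length / POWER(1024, 2), 1) AS index_mb,
       ROUND(data_free / POWER(1024, 2), 1) AS reclaimable_mb
FROM information_schema.tables
WHERE table_schema = 'mydb' AND table_name = 'mytable';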
You can segregate Table Data and Table Indexes from ibdata1 and manage them independently using innodb_file_per_table. That way, only MVCC and Table MetaData would reside in ibdata1.
In the next post I explain how to do exactly that.
#mysql #innodb #myisam #ibdata1 #database #innodb_file_per_table
When innodb_file_per_table is enabled, InnoDB stores data and indexes for each newly created table in a separate .ibd file instead of the system tablespace.
I have to summarize the steps in order to fit everything in one post:
1- Use mysqldump to export your desired databases (call the dump SQLData.sql; example commands are shown after step 7)
2- Drop all databases (except the mysql schema)
3- service mysql stop
4- Add the following lines to /etc/my.cnf
[mysqld]
innodb_file_per_table
innodb_flush_method=O_DIRECT
innodb_log_file_size=1G
innodb_buffer_pool_size=4G
* Sidenote: Whatever you set for innodb_buffer_pool_size, make sure innodb_log_file_size is 25% of innodb_buffer_pool_size.
5- rm -f /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile*
6- service mysql start
* If mysql does not start, it may be due to insufficient memory. Try reducing innodb_buffer_pool_size and innodb_log_file_size accordingly.
7- Reload SQLData.sql into mysql
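For reference, steps 1 and 7 in concrete form could look like this (a sketch; database names and credentials are placeholders, and the exact mysqldump options depend on what you need to preserve):

# Step 1: export the databases you want to keep
mysqldump -u root -p --routines --triggers --databases db1 db2 > SQLData.sql

# Step 7: reload the dump after MySQL is back up
mysql -u root -p < SQLData.sql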
ibdata1 will grow, but will only contain table metadata. Each InnoDB table will now exist outside of ibdata1.
Suppose you have an InnoDB table named mydb.mytable. If you go into /var/lib/mysql/mydb, you will see two files representing the table:
- mytable.frm (Storage Engine Header)
- mytable.ibd (Home of Table Data and Table Indexes for mydb.mytable)
ibdata1 will never contain InnoDB data and indexes anymore.
With the innodb_file_per_table option in /etc/my.cnf, you can run OPTIMIZE TABLE mydb.mytable; and the file /var/lib/mysql/mydb/mytable.ibd will actually shrink.
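A quick way to watch the effect on disk, using the same placeholder names (a sketch; run it as a user that can read the datadir):

ls -lh /var/lib/mysql/mydb/mytable.ibd    # size before
mysql -u root -p -e "OPTIMIZE TABLE mydb.mytable;"
ls -lh /var/lib/mysql/mydb/mytable.ibd    # size after: the file has shrunk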
#mysql #InnoDB #innodb_file_per_table #optimize_table
Check grants of a specific user on MySQL:

SELECT sql_grants FROM common_schema.sql_show_grants WHERE user='app';
Please make sure that you have the privileges required to read the grants list; otherwise a permission denied error will be returned.
#mysql #grants #sql_grants #database
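Note that common_schema is a third-party schema that has to be installed separately. If you don't have it, the built-in way to inspect a single account is SHOW GRANTS (the host part below is a placeholder):

SHOW GRANTS FOR 'app'@'%';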
OperationalError: (2013, 'Lost connection to MySQL server during query')
Usually it indicates network connectivity trouble and you should check the condition of your network if this error occurs frequently. If the error message includes “during query,” this is probably the case you are experiencing.
Sometimes the “during query” form happens when millions of rows are being sent as part of one or more queries. If you know that this is happening, you should try increasing net_read_timeout from its default of 30 seconds to 60 seconds or longer, sufficient for the data transfer to complete.
More rarely, it can happen when the client is attempting the initial connection to the server. In this case, if your connect_timeout value is set to only a few seconds, you may be able to resolve the problem by increasing it to ten seconds, perhaps more if you have a very long distance or slow connection. You can determine whether you are experiencing this more uncommon cause by using SHOW GLOBAL STATUS LIKE 'Aborted_connects'. It will increase by one for each initial connection attempt that the server aborts. You may see “reading authorization packet” as part of the error message; if so, that also suggests that this is the solution that you need.
If the cause is none of those just described, you may be experiencing a problem with BLOB values that are larger than max_allowed_packet, which can cause this error with some clients. Sometimes you may see an ER_NET_PACKET_TOO_LARGE error, and that confirms that you need to increase max_allowed_packet.
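To put the checks above into practice, these are the relevant status and variable statements; the new values are only examples and should be sized to your own workload:

SHOW GLOBAL STATUS LIKE 'Aborted_connects';
SHOW GLOBAL VARIABLES LIKE 'net_read_timeout';
SHOW GLOBAL VARIABLES LIKE 'connect_timeout';
SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';

SET GLOBAL net_read_timeout = 60;
SET GLOBAL connect_timeout = 10;
SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;
-- max_allowed_packet takes effect for connections opened after the change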
#database #mysql #OperationalError #connection
How to check the MySQL version:
If you have local access to the MySQL server, you can check it with:

mysql --version
If you have remote access to MySQL, you can first log in using mysql -u USERNAME -pPASS -h REMOTE_HOST (note there is no space between -p and the password). Then, after a successful login, issue the command below:

SELECT version();
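Once connected, the server also exposes more detail through its version variables (an optional extra):

SHOW VARIABLES LIKE 'version%';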
#mysql #version
If you have installed the MySQL server and want to install the MySQL-python package, you may encounter the error below, like me:

EnvironmentError: mysql_config not found
The solution is to install libmysqlclient-dev:

sudo apt-get install libmysqlclient-dev
If you have installed MariaDB, then:

sudo apt-get install libmariadbclient-dev
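Once the client development headers are in place, building the Python package should succeed (assuming you are installing it with pip):

pip install MySQL-python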
#mysql #mysql_config #MYSQLpython
If you want to make an exact copy of a table from another database into a target database in MySQL, you can do it like below:

create table new_table like target_database.target_table;
The above command will create a table named new_table, structured like target_table from the target_database database.
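Note that CREATE TABLE ... LIKE copies only the table definition (columns, indexes, defaults), not the rows. If you also want the data, follow it with an INSERT ... SELECT (a sketch using the same names as above):

create table new_table like target_database.target_table;
insert into new_table select * from target_database.target_table;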
#mysql #database #create_table #table #copy_table