InnoDB file per table: why? And what to do if MySQL does not start?
Today was a big day from a technical point of view: a MySQL change saved me a lot of storage and will save a great deal of maintenance in the future.
To explain the issue, I have to talk a little about the fundamental behaviour of the MySQL InnoDB storage engine.
In the past, MySQL used MyISAM as its default storage engine. It did not support transactions, it was not fault tolerant, and data was not reliable when a power outage occurred or the server was restarted in the middle of a write. Nowadays MySQL uses InnoDB as its default storage engine, which comes with transactions, fault tolerance, and more.
In InnoDB, by default, all tables of all databases reside in a single gigantic file called ibdata1. As data grows and you alter your tables, the scar gets worse: ibdata1 grows very fast, and when you alter a table the file does not shrink back. For example, we had a single 120GB file on a server; altering one table with a huge amount of data took a long time and so much extra storage that the server ran out of free space.
There is a mechanism in MySQL to configure InnoDB to store each table's data in its own file instead of inside the ibdata1 file. This mechanism has great advantages, such as being able to use OPTIMIZE TABLE to shrink a table.
With InnoDB tables, OPTIMIZE TABLE locks the table, copies the data into a new clean table (that is why the result is smaller), drops the original table, and renames the new table to the original name. That is why you should make sure you have roughly twice the table's volume available on disk: if you have a 30GB table, optimizing it needs at least 30GB of free disk space.
Do not use OPTIMIZE TABLE on a table when you have not configured innodb_file_per_table. Running OPTIMIZE TABLE against an InnoDB table stored in ibdata1 makes things worse, because here is what it does:
- Makes the table's data and indexes contiguous inside ibdata1.
- It makes ibdata1 grow because the contiguous data is appended to ibdata1.
You can segregate Table Data and Table Indexes from ibdata1 and manage them independently using innodb_file_per_table. That way, only MVCC and Table MetaData would reside in ibdata1.
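As a quick sanity check before reclaiming space this way, you can verify the setting first and only then optimize; a minimal sketch (mydb.mytable is a placeholder table name):
SHOW VARIABLES LIKE 'innodb_file_per_table';
OPTIMIZE TABLE mydb.mytable;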
In the next post I explain how to do exactly that.
#mysql #innodb #myisam #ibdata1 #database #innodb_file_per_table
When innodb_file_per_table is enabled, InnoDB stores data and indexes for each newly created table in a separate .ibd file instead of the system tablespace.
I have to summarize the steps so that everything fits in one post:
1- Use mysqldump to export your desired databases (call it SQLData.sql).
2- Drop all databases (except the mysql schema).
3- service mysql stop
4- Add the following lines to /etc/my.cnf
[mysqld]
innodb_file_per_table
innodb_flush_method=O_DIRECT
innodb_log_file_size=1G
innodb_buffer_pool_size=4G
* Sidenote: Whatever you set for innodb_buffer_pool_size, make sure innodb_log_file_size is 25% of innodb_buffer_pool_size.
5- rm -f /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile*
6- service mysql start
* If mysql does not start, it may be due to insufficient memory. Try reducing innodb_buffer_pool_size and innodb_log_file_size accordingly.
7- Reload SQLData.sql into mysql (example commands below).
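For steps 1 and 7 the commands look roughly like this; a sketch only, assuming a root login and two placeholder databases (db1, db2):
mysqldump -u root -p --databases db1 db2 --routines --events > SQLData.sql
mysql -u root -p < SQLData.sql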
ibdata1 will grow but will only contain table metadata; each InnoDB table will exist outside of ibdata1.
Suppose you have an InnoDB table named mydb.mytable. If you go into /var/lib/mysql/mydb, you will see two files representing the table:
- mytable.frm (Storage Engine Header)
- mytable.ibd (Home of Table Data and Table Indexes for mydb.mytable)
ibdata1 will never contain InnoDB data and indexes anymore.
With the innodb_file_per_table option in /etc/my.cnf, you can run OPTIMIZE TABLE mydb.mytable; and the file /var/lib/mysql/mydb/mytable.ibd will actually shrink.
#mysql #InnoDB #innodb_file_per_table #optimize_table
Check the grants of a specific user on MySQL:
SELECT sql_grants FROM common_schema.sql_show_grants WHERE user='app';
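If common_schema is not installed, the built-in statement gives much the same information; a quick sketch (the host part '%' is just an example):
SHOW GRANTS FOR 'app'@'%';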
Please make sure that you have permission to read the grants list, otherwise a permission denied error will be returned.
#mysql #grants #sql_grants #database
OperationalError: (2013, 'Lost connection to MySQL server during query')
Usually it indicates network connectivity trouble and you should check the condition of your network if this error occurs frequently. If the error message includes “during query,” this is probably the case you are experiencing.
Sometimes the “during query” form happens when millions of rows are being sent as part of one or more queries. If you know that this is happening, you should try increasing net_read_timeout from its default of 30 seconds to 60 seconds or longer, sufficient for the data transfer to complete.
More rarely, it can happen when the client is attempting the initial connection to the server. In this case, if your connect_timeout value is set to only a few seconds, you may be able to resolve the problem by increasing it to ten seconds, perhaps more if you have a very long distance or slow connection. You can determine whether you are experiencing this more uncommon cause by using SHOW GLOBAL STATUS LIKE 'Aborted_connects'. It will increase by one for each initial connection attempt that the server aborts. You may see “reading authorization packet” as part of the error message; if so, that also suggests that this is the solution that you need.
If the cause is none of those just described, you may be experiencing a problem with BLOB values that are larger than max_allowed_packet, which can cause this error with some clients. Sometimes you may see an ER_NET_PACKET_TOO_LARGE error, and that confirms that you need to increase max_allowed_packet.
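These variables can be adjusted at runtime if you have the privileges to set globals; a minimal sketch (the values are only examples, and 67108864 bytes is 64MB):
SET GLOBAL net_read_timeout = 60;
SET GLOBAL connect_timeout = 10;
SET GLOBAL max_allowed_packet = 67108864;
SHOW GLOBAL STATUS LIKE 'Aborted_connects';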
#database #mysql #OperationalError #connection
How to check the MySQL version:
If you have local access to the MySQL server, you can check it with:
mysql --version
If you have remote access to MySQL, you can first log in using mysql -u USERNAME -pPASS -h REMOTE_HOST (note there is no space between -p and the password). Then, after a successful login, issue the command below:
SELECT version();
#mysql #version
If you have installed the MySQL server and want to install the MySQL-python package, you may encounter the error below, like me:
EnvironmentError: mysql_config not found
The solution is to install libmysqlclient-dev:
sudo apt-get install libmysqlclient-dev
If you have installed MariaDB, then:
sudo apt-get install libmariadbclient-dev
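After installing the development headers, re-running the failing install should work; a sketch assuming you were installing the package with pip:
pip install MySQL-python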
#mysql #mysql_config #MYSQLpython
If you want to copy the exact structure of a table from another database into your current database in MySQL, you can do it like below:
CREATE TABLE new_table LIKE target_database.target_table;
The above command creates an empty table named new_table with the same definition (columns and indexes) as target_table from the target_database database.
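If you also need the rows, not just the structure, a follow-up insert does it; a minimal sketch using the same placeholder names:
INSERT INTO new_table SELECT * FROM target_database.target_table;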
#mysql #database #create_table #table #copy_table
Metabase by default uses an H2 database for its internal usage and for storing the questions and dashboards that you have created. When you want to move from one host to another it can be tricky and it may crash; here you will lose your dashboards and your containers. This matters even more when you have loads of data there (let's say hundreds of questions and tens of dashboards).
The safest way for production is to migrate this data to MySQL. It makes Metabase use MySQL as its backend instead of the H2 database. To migrate your data from the current working H2 DB of Metabase, you need to set the following variables first:
export MB_DB_TYPE=mysql
export MB_DB_DBNAME=metabase
export MB_DB_PORT=3306
export MB_DB_USER=<username>
export MB_DB_PASS=<password>
export MB_DB_HOST=localhost
java -jar metabase.jar load-from-h2
The last command will run Metabase and force it to migrate its data to MySQL. If needed, create a metabase database in MySQL with sufficient privileges beforehand. It should come up safely without much headache. After moving the data, remove your *.db Metabase H2 DB file.
#metabase #migration #H2 #mysql
Turn a MySQL table into utf8mb4 to store emojis:
ALTER TABLE YOUR_TABLE CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
Moreover, you also need to change the column character set:
ALTER TABLE YOUR_TABLE CHANGE YOUR_COLUMN_NAME YOUR_COLUMN_NAME VARCHAR(250) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
Be careful: you now also have to do more things, like setting the character set after connection initiation in Python:
import MySQLdb
your_mysql_client = MySQLdb.connect(...)
your_mysql_client.set_character_set('utf8mb4')
Now, before executing your query, you also need to set the character set on the cursor:
my_cursor.execute("SET NAMES utf8mb4;")
my_cursor.execute(YOUR_QUERY)
#database #mysql #character_set #utf8mb4 #cursor #emoji
In Grafana, if you are connected to MySQL, you need to provide three values in your select query: the time, which must be called time_sec; the countable value, which must be called value; and the label displayed on your graph, which must be called metric:
SELECT
UNIX_TIMESTAMP(your_date_field) as time_sec,
count(*) as value,
'your_label' as metric
FROM your_table
WHERE status='success'
GROUP BY your_date_field
ORDER BY your_date_field ASC
To read more about Grafana, head over here:
- http://docs.grafana.org/features/datasources/mysql/#using-mysql-in-grafana
#mongodb #mongo #mysql #grafana #dashboard #chart
Is it safe to delete mysql-bin files? A good discussion on this:
https://dba.stackexchange.com/questions/41050/is-it-safe-to-delete-mysql-bin-files
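The general advice is to let MySQL purge binary logs rather than removing the files by hand; a minimal sketch, with the 7-day retention purely as an example:
PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;
SET GLOBAL expire_logs_days = 7;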
#mysql #mysql_bin #bin #bin_file #purge
There is always a risk, and often a problem, when altering a production MySQL table.
Percona has released a toolkit that contains a command called pt-online-schema-change. It will change a table's schema live on production without downtime.
Installation steps on Debian:
1- wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb
2- sudo dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb
3- sudo apt-get update
4- sudo apt-get install percona-toolkit
Now you have the Percona toolkit on your Debian server. Use the pt-online-schema-change command for your table alterations.
#mysql #percona #schema #alter_table #online_schema_change #percona_toolkit #pt_online_schema_change
In order to do a dry run before the real execution, use --dry-run:
pt-online-schema-change --dry-run h=127.0.0.1,D=YOUR_DB,t=YOUR_TABLE --alter "ADD COLUMN (foobar varchar(30) DEFAULT NULL);"
After the dry run, you can execute the alter command for real:
pt-online-schema-change --execute h=127.0.0.1,D=YOUR_DB,t=YOUR_TABLE --alter "ADD COLUMN (foobar varchar(30) DEFAULT NULL);"
#mysql #percona #schema #alter_table #online_schema_change #percona_toolkit #pt_online_schema_change
If you aborted a pt-online-schema-change run and a new run then fails complaining that the table has triggers, this thread covers it:
https://dba.stackexchange.com/questions/187630/problem-with-aborted-pt-online-schema-change-command
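The usual cleanup is to drop the triggers and the copy table that the aborted run left behind; a hedged sketch only (the trigger and table names below are illustrative, so list the real ones first):
SHOW TRIGGERS FROM YOUR_DB LIKE 'YOUR_TABLE';
DROP TRIGGER IF EXISTS YOUR_DB.pt_osc_your_db_your_table_ins;
DROP TABLE IF EXISTS YOUR_DB._your_table_new;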
#mysql #trigger #percona #online_schema_change
With mysqldump you can export databases. With the --port parameter you can specify which port it should connect to. But if you provide localhost for the --host parameter, MySQL will use a Unix socket and the port will be ignored. So be careful with it!
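To force a TCP connection to a non-default port, use an IP address instead of localhost; a small sketch (host, port, user, and database name are placeholders):
mysqldump --host=127.0.0.1 --port=3307 -u root -p your_db > your_db.sql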
#mysql #mysqldump #port #port_ignorance #3306 #backup #database_backup #sockets #ip_address #localhost
Table compression in MySQL:
https://dev.mysql.com/doc/refman/5.5/en/innodb-compression-usage.html
#database #mysql #compression #innodb
If you have space problems on a server that hosts a MySQL database, it's a good idea to use compression. Make sure you are using the InnoDB storage engine.
In order to compress the data in a table:
ALTER TABLE YOUR_DB_NAME.YOUR_TABLE_NAME ROW_FORMAT=COMPRESSED;
The result in my case squeezed the data 4X. Before compression:
ls -lh users.ibd | awk '{print $5}'
16G
After compression:
ls -lh users.ibd | awk '{print $5}'
3.9G
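To confirm the new row format actually took effect, a quick check like this works; a sketch using the same placeholder names as above:
SELECT TABLE_NAME, ROW_FORMAT FROM information_schema.TABLES WHERE TABLE_SCHEMA='YOUR_DB_NAME' AND TABLE_NAME='YOUR_TABLE_NAME';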
NOTE: you have to use innodb_file_per_table in your configuration. We have previously talked about that procedure step by step.
#mysql #innodb #compression #alter #row_format #compressed