Binary dump is the fastest way to dump a database. Binary dump files are portable across all platforms, regardless of CPU type.
To perform a binary dump:
1) Start a server on the database to be dumped.
Since fewer users are typically on the system during a dump, more memory can be allocated to the database buffer pool with the -B startup parameter.
Example:
$ proserve sports -B 100000
2) Start multiple binary dump sessions.
Since these operations are I/O intensive, 3 to 4 sessions per CPU are recommended. Further improvement can be obtained by dumping the data to different disks.
Example:
$ proutil sports -C dump customer /disk1/temp/data
$ proutil sports -C dump invoice /disk2/temp/data
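The parallel dump sessions above can be generated from a script. A minimal sketch, in which the database name, table list, and two-disk layout are all illustrative; the commands are collected into a variable for review rather than executed:

```shell
#!/bin/sh
# Build one binary dump command per table, alternating target disks so
# concurrent sessions do not compete for the same spindle. On a real
# system, run the printed commands with "&" so the sessions execute in
# parallel, then "wait" for them all to finish.
DB=sports
TABLES="customer invoice order"   # illustrative table list
CMDS=""
i=1
for t in $TABLES; do
  disk=$(( ((i - 1) % 2) + 1 ))   # alternate /disk1 and /disk2
  CMDS="${CMDS}proutil $DB -C dump $t /disk$disk/temp/data &
"
  i=$((i + 1))
done
printf '%s' "$CMDS"
printf 'wait\n'
```

Redirect the output to a file to obtain a runnable dump script.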
To perform a binary load:
1) The binary load is usually the fastest way to load data into the database when the tables contain a large amount of data. When a table holds only a few records, the Data Dictionary dump and load may be faster, because there is some overhead in parsing the header of the binary dump file (.bd); with very few records, that parsing overhead takes longer than loading the table data through the Data Dictionary / Data Administration tool.
2) Start the database multi-user with no integrity (-i). Understand that if any error occurs while the -i flag is in use, the database cannot perform crash recovery to undo the operations; in that case the load must be re-baselined from a backup.
3) Start multiple load sessions, one session per storage area.
When an Enterprise Database License is in use, start a Before-Image Writer (BIW) and 2 to 4 Asynchronous Page Writers (APWs).
The best database block size is 8K, provided the records-per-block setting has been considered.
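Steps 2 and 3 amount to a short startup sequence. A sketch with illustrative parameter values, using a dry-run wrapper that records each command so the sequence can be reviewed before launching real processes:

```shell
#!/bin/sh
# Dry-run wrapper: record each command instead of executing it.
# On a real system, replace the body of run() with: "$@"
run() {
  STARTED="$STARTED$*;"
}

DB=sports
run proserve "$DB" -i -B 100000   # no-integrity load: fast, but any error forces a re-baseline
run probiw "$DB"                  # one before-image writer (Enterprise license)
for n in 1 2 3; do
  run proapw "$DB"                # 2 to 4 asynchronous page writers
done
```

The -B value here is illustrative; size the buffer pool to the memory actually available during the load.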
4) Several articles discuss building scripts for the binary load:
000021664, How to perform a binary dump and load?
000011828, How to generate scripts to run binary dump and load for all tables?
5) When loading to Type I storage areas, binary load tables with the smallest records first and run one binary load per storage area so as not to cause fragmentation during the load.
6) This is less important with the advent of Type II storage areas in OpenEdge 10, which are the preferred and recommended storage area structure. Refer to Article 000022209, Does loading small records first still affect fragmentation in the Type II Storage Area architecture?
7) Use PROUTIL <db> -C TABANALYS to determine the table(s) with the smallest records.
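A saved TABANALYS report can be post-processed to produce a smallest-records-first load order. A sketch, assuming the report was saved with `proutil sports -C tabanalys > sports.tab`; the embedded two-line excerpt and the field positions (table name in column 1, mean record size in column 6) are assumptions to verify against your own report:

```shell
#!/bin/sh
# Fabricated excerpt standing in for a real report; on a real system this
# file comes from: proutil sports -C tabanalys > sports.tab
cat > sports.tab <<'EOF'
PUB.customer     1000    65000    40   120    65.0
PUB.invoice      5000   150000    20    60    30.0
EOF

# Rank tables by mean record size, smallest first. The field positions are
# an assumption about the report layout -- check before relying on this.
ORDER=$(awk '/^PUB\./ { print $6, $1 }' sports.tab | sort -n | awk '{ print $2 }')
printf '%s\n' "$ORDER"
```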
8) This strategy reduces scatter because Progress loads as many records as it can into a given block, moving on to the next block only when either the records-per-block limit or the space in the block is exhausted.
9) If larger records are loaded first, many blocks are likely to be left with enough record slots and block space to fit smaller records. This leads to fragmentation because the small records are scattered throughout the area.
10) If only one table needs to be dumped and loaded, binary load it into a new storage area, also reloading the schema definitions for the table and its indexes. Loading the table back into its original Type I storage area will not improve the scatter factor, because records will mainly be loaded into the space they left behind when deleted.
Fastest way to rebuild the indexes of a Version 8.x or later database:
1) Indexes must be rebuilt after binary loading the data.
2) In Version 9.1, Progress introduced an option to rebuild the index structure during the binary load phase. An offline idxbuild is still faster, particularly since additional tuning parameters were introduced in the IDXBUILD utility in Progress 9.1D07 and later and OpenEdge 10.2B06 and later.
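A sketch of the offline rebuild command. Every tuning value here is illustrative, and the command string is echoed for review rather than executed, since it only runs on a system with OpenEdge installed:

```shell
#!/bin/sh
# Offline index rebuild after the binary load. Tuning values are
# illustrative: -TB (temp block size), -TM (merge number), and -SG (sort
# groups) are among the later-release sort/merge parameters, and -T points
# the temporary sort files at a fast disk.
DB=sports
IDXBUILD="proutil $DB -C idxbuild all -TB 64 -TM 32 -SG 64 -B 1024 -T /disk3/tmp"
echo "$IDXBUILD"   # review, then run against the offline database
```

Verify each parameter against the IDXBUILD documentation for your release before running it.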