Programming Notes ✍️
PSI - Pressure Stall Information https://docs.kernel.org/accounting/psi.html
userspace monitor usage example:
Monitor memory pressure via /proc/pressure/memory: register a trigger with a 1s tracking window and a 150ms stall threshold, then poll() the file descriptor for POLLPRI events.
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <poll.h>
#include <string.h>
#include <unistd.h>

/*
 * Monitor memory partial stall with 1s tracking window size
 * and 150ms threshold.
 */
int main() {
        const char trig[] = "some 150000 1000000";
        struct pollfd fds;
        int n;

        fds.fd = open("/proc/pressure/memory", O_RDWR | O_NONBLOCK);
        if (fds.fd < 0) {
                printf("/proc/pressure/memory open error: %s\n",
                        strerror(errno));
                return 1;
        }
        fds.events = POLLPRI;

        if (write(fds.fd, trig, strlen(trig) + 1) < 0) {
                printf("/proc/pressure/memory write error: %s\n",
                        strerror(errno));
                return 1;
        }
        printf("waiting for events...\n");
        while (1) {
                n = poll(&fds, 1, -1);
                if (n < 0) {
                        printf("poll error: %s\n", strerror(errno));
                        return 1;
                }
                if (fds.revents & POLLERR) {
                        printf("got POLLERR, event source is gone\n");
                        return 0;
                }
                if (fds.revents & POLLPRI) {
                        printf("event triggered!\n");
                } else {
                        printf("unknown event received: 0x%x\n", fds.revents);
                        return 1;
                }
        }
        return 0;
}
Keep the SSH control socket alive in a multiplexed setup: when several connections to the same host are needed, they all reuse one master TCP session (~/.ssh/config):
Host *
ControlMaster auto
ControlPath ~/.ssh/master-socket/%r@%h:%p
#ControlPath /run/user/%i/sshmasterconn-%C
#ControlPath ~/.ssh/%r@%h:%p
ControlPersist 3s
# Proto Recv-Q Send-Q Local Address    Foreign Address   State
# one connection
tcp 0 0 192.168.x.y:58913 192.168.x.z:22 ESTABLISHED
# two multiplexed connections
tcp 0 0 192.168.x.y:58913 192.168.x.z:22 ESTABLISHED
# three multiplexed connections
tcp 0 0 192.168.x.y:58913 192.168.x.z:22 ESTABLISHED
Table 2: SSH Connections, Multiplexed
PGCONF-PITR_Mark_Jones_2015-10-28.pdf
PITR (Point-In-Time Recovery) definition: restore a base backup, then replay archived WAL up to a chosen recovery target (a timestamp, LSN, or named restore point).
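A minimal sketch of the moving parts (archive path and target time are placeholders; the settings themselves are the stock PostgreSQL >= 12 ones):

```ini
# postgresql.conf on the primary: continuously archive completed WAL segments
archive_mode = on
archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'

# After restoring a base backup, create an empty recovery.signal file in the
# data directory, then point recovery at the archive and pick a target:
restore_command = 'cp /mnt/wal_archive/%f %p'
recovery_target_time = '2025-01-01 12:00:00'
```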
tables:
CTE render (materialized):
=$ create table size_comparison (json_val json, jsonb_val jsonb);
=$ with s as materialized ( select 1::int4 as v )
insert into size_comparison select to_json(v), to_jsonb(v) from s;
=$ SELECT
*,
pg_column_size( json_val ) AS json_size,
pg_column_size( jsonb_val ) AS jsonb_size
FROM
size_comparison;
json_val | jsonb_val | json_size | jsonb_size
----------+-----------+-----------+------------
1 | 1 | 2 | 17
(1 row)
=$ create table copy_text_pglz (v text compression pglz );
=$ create table copy_json_pglz (v json compression pglz );
=$ create table copy_jsonb_pglz (v jsonb compression pglz );
=$ create table copy_text_lz4 (v text compression lz4 );
=$ create table copy_json_lz4 (v json compression lz4 );
=$ create table copy_jsonb_lz4 (v jsonb compression lz4 );
=$ SELECT
       test_name,
       length( text_pglz ) AS text_length,
       pg_column_compression( text_pglz ) AS pglz_comp,
       pg_column_size( text_pglz ) AS pglz_size,
       format(
           '%7s %%',
           round( ( 100.0 * pg_column_size( text_pglz ) ) / length( text_pglz ), 2 )
       ) AS pglz_ratio,
       pg_column_compression( text_lz4 ) AS lz4_comp,
       pg_column_size( text_lz4 ) AS lz4_size,
       format(
           '%6s %%',
           round( ( 100.0 * pg_column_size( text_lz4 ) ) / length( text_lz4 ), 2 )
       ) AS lz4_ratio
   FROM
       compression_test
   ORDER BY
       text_length;
test_name | text_length | pglz_comp | pglz_size | pglz_ratio | lz4_comp | lz4_size | lz4_ratio
---------------------------------+-------------+-----------+-----------+-----------+----------+-----------+----------
random:5 | 5 | [null] | 6 | 120.00 % | [null] | 6 | 120.00 %
random:505 | 505 | [null] | 509 | 100.79 % | [null] | 509 | 100.79 %
random:1005 | 1005 | [null] | 1005 | 100.00 % | [null] | 1009 | 100.40 %
random:1505 | 1505 | [null] | 1505 | 100.00 % | [null] | 1509 | 100.27 %
legalnotice.html | 2065 | [null] | 2071 | 100.29 % | lz4 | 1630 | 78.93 %
release-prior.html | 2655 | pglz | 1214 | 45.73 % | lz4 | 1302 | 49.04 %
sql-dropuser.html | 3188 | pglz | 1462 | 45.86 % | lz4 | 1624 | 50.94 %
pltcl-procnames.html | 3670 | pglz | 1811 | 49.35 % | lz4 | 1970 | 53.68 %
spi-spi-getvalue.html | 4216 | pglz | 1771 | 42.01 % | lz4 | 2020 | 47.91 %
catalog-pg-db-role-setting.html | 4715 | pglz | 1811 | 38.41 % | lz4 | 2088 | 44.28 %
supported-platforms.html | 5235 | pglz | 2584 | 49.36 % | lz4 | 2839 | 54.23 %
datatype-money.html | 5749 | pglz | 2814 | 48.95 % | lz4 | 3066 | 53.33 %
sql-dropsubscription.html | 6289 | pglz | 2838 | 45.13 % | lz4 | 3056 | 48.59 %
pgfreespacemap.html | 6779 | pglz | 3134 | 46.23 % | lz4 | 3248 | 47.91 %
(14 rows)
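The pattern in the table (random and very short values stay uncompressed, markup shrinks a lot) can be reproduced locally. This sketch uses zlib purely as a stand-in for pglz/lz4 — it is not PostgreSQL's codec, but it is an LZ-family compressor with the same basic behavior:

```python
import random
import zlib

random.seed(0)

# Random bytes have no repeats for an LZ-family codec to exploit; as with
# pglz/lz4 in the table above, compression only adds overhead here.
random_payload = random.randbytes(505)

# Markup, by contrast, is full of repeated tags and compresses very well.
html_payload = ("<tr><td class='cell'>value</td></tr>\n" * 60).encode()

for name, payload in [("random:505", random_payload), ("html-like", html_payload)]:
    compressed = zlib.compress(payload, level=9)
    ratio = 100.0 * len(compressed) / len(payload)
    print(f"{name:>10}  raw={len(payload):4d}  compressed={len(compressed):4d}  {ratio:6.2f} %")
```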
Recovery Point Objective (RPO): the maximum targeted period in which data might be lost from an IT service due to a major incident.
Recovery Time Objective (RTO): the targeted duration of time and a service level within which a business process must be restored after a disaster (or disruption) in order to avoid unacceptable consequences associated with a breakage in business continuity.
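A worked example of the RPO definition, with hypothetical numbers (the function and schedule are illustrative, not from any tool):

```python
def worst_case_rpo_minutes(backup_interval_min: float, ship_lag_min: float = 0.0) -> float:
    """Worst case: the incident hits just before the next backup would have
    completed, so everything since the last *shipped* backup is lost."""
    return backup_interval_min + ship_lag_min

# Hypothetical schedule: WAL archived every 15 min, up to 5 min to copy off-site,
# so the achievable RPO is 20 min -- this must be <= the agreed RPO.
print(worst_case_rpo_minutes(15, 5))
```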
https://helpcenter.veeam.com/docs/vbr/userguide/backup_copy_gfs.html?ver=13
The long-term or Grandfather-Father-Son (GFS) retention policy allows you to store VM backups for long periods of time — for weeks, months and years. For this purpose, Veeam Backup & Replication creates synthetic or active full backup files and marks them with GFS flags. These GFS flags can be of three types: weekly, monthly or yearly. Depending on which flag is assigned to the full backup, it will be stored for specified number of weeks, months or years.
The GFS retention also helps you to mitigate risks that the short-term retention policy has, such as large number of subsequent incremental backups. Large number of subsequent incremental backups can increase recovery time, because Veeam Backup & Replication has to read data through the whole backup chain. Also, one corrupted increment can make the whole chain useless. When you configure the GFS retention, Veeam Backup & Replication creates weekly/monthly/yearly full backups, so instead of one backup chain consisting of one full backup and incremental backups, you will have several backup chains.
GFS backups are always full backup files that contain data of the whole machine image as of a specific date. GFS is a tiered retention policy and it uses a number of cycles to retain backups for different periods of time:
Weekly backup cycle
Monthly backup cycle
Yearly backup cycle
In the GFS retention policy, weekly backups are known as ‘sons’, monthly backups are known as ‘fathers’ and yearly backups are known as ‘grandfathers’. Weekly, monthly and yearly backups are also called archive backups.
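The three cycles can be illustrated with a toy promotion rule (hypothetical logic, not Veeam's actual scheduler): Sunday fulls are weekly "sons", the first Sunday full of a month is also a monthly "father", and the first Sunday full of the year is also a yearly "grandfather":

```python
from datetime import date, timedelta

def gfs_flags(backup_day: date) -> set[str]:
    """Hypothetical promotion rule: Sunday fulls get 'weekly'; the first
    Sunday of a month also gets 'monthly'; the first Sunday of January
    also gets 'yearly'."""
    flags: set[str] = set()
    if backup_day.weekday() == 6:          # Sunday full backup
        flags.add("weekly")
        if backup_day.day <= 7:            # first Sunday of the month
            flags.add("monthly")
            if backup_day.month == 1:      # first Sunday of the year
                flags.add("yearly")
    return flags

# Walk the Sundays of 2025 and list the fulls promoted beyond 'weekly':
day = date(2025, 1, 5)                     # first Sunday of 2025
while day.year == 2025:
    promoted = gfs_flags(day) - {"weekly"}
    if promoted:
        print(day, sorted(promoted))
    day += timedelta(days=7)
```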