[Nosial/flake] Issue opened: #10 Re-deploy Xpress by glitchkill
For this iteration, Garage will be used.
Options:
- Local-only (e.g. over Tailscale)
- Over clearnet (blocked by #8)
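For the local-only option, a minimal sketch using the NixOS `services.garage` module, binding only to a Tailscale address so nothing is exposed on clearnet. The IP and secret handling are placeholders, and the `garage.toml` option names can differ between Garage versions:

```nix
# Hypothetical single-node Garage bound to the tailnet only.
services.garage = {
  enable = true;
  settings = {
    replication_mode = "none";            # single node, no replication
    rpc_bind_addr = "100.64.0.1:3901";    # placeholder Tailscale IP
    # rpc_secret omitted here; it should come from a secrets mechanism
    s3_api = {
      api_bind_addr = "100.64.0.1:3900";
      s3_region = "garage";
    };
  };
};
```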
[Nosial/flake] Issue opened: #11 Deploy Stalwart by glitchkill
Due to a lack of a publicly-exposable public IP, Stalwart will be deployed on Edge. Data will be stored remotely, on Xpress (blocked by #10).
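A rough sketch of the Edge side, assuming the NixOS `services.stalwart-mail` module; the hostname is a placeholder and the remote-storage wiring is deliberately left open until #10 lands:

```nix
# Hypothetical Stalwart deployment on Edge.
services.stalwart-mail = {
  enable = true;
  settings = {
    server.hostname = "mail.example.org";  # placeholder
    # Remote data storage on Xpress would be configured here once #10
    # is done, e.g. an S3-backed blob store pointed at Garage.
  };
};
```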
[Nosial/fedora-jail:master] 1 new commit
[470936f] chore: force hostname for shell
Ephemeral nodes are automatically deleted within 30 minutes to 48 hours after going offline if they weren't shut down gracefully. By default, if a hostname is not specified, the node "leniently" avoids taking over an existing hostname; forcing the hostname ensures it is taken over anyway. - glitchkill
Package created: fedora-jail:470936f14674ad0beb45569571bbb1fcbad664a5 by glitchkill
👍1
[Nosial/flake] Issue opened: #12 Automate image and nixpkgs updates by glitchkill
[Nosial/flake:master] 1 new commit
[baf215d] chore: bump image version - badPointer
[Nosial/flake] Issue edited: #6 Theme deployed services by glitchkill
[Nosial/flake] Issue opened: #13 Set up backups by glitchkill
Use BorgBackup to complete backups to Xpress (blocked by #10). Use a pattern of 1 full backup and 10 deltas.
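A sketch using the NixOS `services.borgbackup` module. Borg archives are deduplicated chunks rather than literal full+delta pairs, so the "1 full and 10 deltas" pattern is approximated here by keeping the last 11 archives; the repo URL, paths, and job name are placeholders:

```nix
# Hypothetical nightly Borg job targeting Xpress (blocked by #10).
services.borgbackup.jobs.xpress = {
  paths = [ "/srv" ];                       # placeholder paths
  repo = "ssh://borg@xpress/./backups";     # placeholder repo on Xpress
  encryption.mode = "repokey-blake2";       # passphrase sourcing not shown
  compression = "auto,zstd";
  startAt = "daily";
  prune.keep.last = 11;  # roughly "1 full + 10 deltas" worth of history
};
```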
[Nosial/flake] Issue opened: #14 Move to Lanzaboote by glitchkill
Blocked by #9 for Maple. Install Secure Boot and Measured Boot on the systems where they are enabled.
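A minimal sketch assuming the lanzaboote NixOS module (imported from the lanzaboote flake); the PKI bundle path follows the project's examples, and the keys are assumed to have been created beforehand with `sbctl create-keys`:

```nix
# Hypothetical lanzaboote enablement for a Secure Boot-capable node.
{ lib, ... }: {
  boot.lanzaboote = {
    enable = true;
    pkiBundle = "/etc/secureboot";  # placeholder key location
  };
  # lanzaboote installs its own signed stub, so the stock systemd-boot
  # installation must be disabled:
  boot.loader.systemd-boot.enable = lib.mkForce false;
}
```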
[Nosial/flake] Issue opened: #15 Encrypt data partitions with LUKS by glitchkill
Might require merging /nix with /srv, since much of the service configuration and the flake itself are baked into generations. Possibly blocked by #9.
Authentication options:
- Automated (TPM-only, very likely not supported by most deployed nodes)
- Manual (passwords, passkey files, public-private keys)
[Nosial/flake] Issue edited: #15 Encrypt data partitions with LUKS by glitchkill
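The manual unlock option can be sketched via `boot.initrd.luks.devices`; the partition label is a placeholder. The automated option would instead enroll the same volume against the TPM with `systemd-cryptenroll`, which requires the systemd-based initrd:

```nix
# Hypothetical LUKS data volume unlocked interactively at boot.
boot.initrd.luks.devices.data = {
  device = "/dev/disk/by-partlabel/data";  # placeholder device path
  allowDiscards = true;                    # TRIM passthrough for SSDs
  # No keyFile set: falls back to a passphrase prompt in the initrd.
};
```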
[Nosial/flake] Issue opened: #16 Install dm-verity by glitchkill
Likely a white elephant.
Existing implementation: https://github.com/arianvp/server-optimised-nixos
[Nosial/flake] Issue opened: #17 DragonflyDB v. Valkey v. Redis by glitchkill
All services are Redis API-compatible, making them plug-and-play.
- Redis moved back to an open-source license as of 8.x. Even under the previous license, no limitations were imposed on usage by server administrators. If it works, don't fix it.
- Valkey is a fork of Redis that emerged when Redis switched to a non-open-source license. A Linux Foundation project, it claims to be _somewhat_ faster than Redis.
- DragonflyDB is a from-scratch implementation that claims to be ~20x faster than Redis. However, I have yet to see anyone actually use it in production as a "drop-in Redis replacement".
👍1
[Nosial/flake] Issue opened: #18 Microservices v. monolith by glitchkill
This is mainly in relation to databases and such.
Microservices:
Pros:
- Deployment is really simple, no need to keep any variables in mind
- Cleaning up is easy, delete and forget
- Database versioning is versatile
- Complete isolation (a database vulnerability won't give the attacker more data than the service has been allotted)
- Multiple database drivers make no difference for deployment
Cons:
- Optimization is difficult (requires a per-container scope rather than global)
- Database management is difficult
- Multiple instances to keep in mind
- Managing the database manually for one-offs (e.g. for correcting service options) is nigh impossible
- Depends on Podman containers (related to #1)
Monolith (akin to DaaS):
Pros:
- Hardware optimization is feasible (unknown impact)
- Just one instance to track
- Managing the database manually is more feasible (setting up a viewer/root credentials is a possibility)
- Host-agnostic
Cons:
- Requires keeping track of existing databases (e.g. "Have I configured Nextcloud already?")
- Clean up is manual (could be resolved with automation)
- Multiple database drivers must be hosted separately
- Database isn't isolated (e.g. in case of a vulnerability, could allow access to all service databases)
- Database versioning will be limited by the weakest-link-in-chain principle (requires keeping track of maximum supported DB version)
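The microservices option above can be sketched with the NixOS `oci-containers` module on the Podman backend (which is what ties it to #1). Container name, image tag, and credentials are placeholders:

```nix
# Hypothetical per-service database container under the microservices model.
virtualisation.oci-containers = {
  backend = "podman";
  containers.nextcloud-db = {
    image = "docker.io/library/postgres:16";     # placeholder version
    environment.POSTGRES_PASSWORD = "changeme";  # placeholder; use a secret
    volumes = [ "nextcloud-db:/var/lib/postgresql/data" ];
  };
};
```

Cleaning up really is delete-and-forget here: removing the attribute drops the unit, and the named volume can be pruned independently.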
[Nosial/flake] Issue opened: #19 BBRv2/BBRv3 by glitchkill
BBR is a TCP congestion control algorithm created and used by Google for many services, such as YouTube. The mainline Linux kernel only has BBRv1. Since its merge into the kernel tree, Google has developed two more iterations of the protocol. However, it's difficult to say how helpful it would be with Maple's limited bandwidth. Testing results pending.
- BBRv2 (external): Good
- BBRv3 (external): TBD
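Enabling the in-tree BBRv1 is a two-line sysctl change; BBRv2/v3 would additionally require an out-of-tree kernel (e.g. Google's branch) supplied via `boot.kernelPackages`. A sketch:

```nix
# In-tree BBR (v1) as the baseline before testing the external versions.
boot.kernelModules = [ "tcp_bbr" ];
boot.kernel.sysctl = {
  "net.ipv4.tcp_congestion_control" = "bbr";
  "net.core.default_qdisc" = "fq";  # BBR is commonly paired with fq
};
```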
👏1
[Nosial/flake] Issue opened: #20 Redeploy RSS news by glitchkill
Things of interest:
- Arch Linux news
- Debian news
- Phoronix
- ???
[Nosial/flake] Issue edited: #20 Re-deploy RSS news by glitchkill
[Nosial/flake] Issue edited: #20 Re-deploy RSS news by glitchkill
Things of interest:
- Arch Linux news
- Debian news
- Phoronix
- Feisty Duck
- LWN.net
[Nosial/flake] Issue edited: #2 Generation rebuild CI by glitchkill
CI modes:
- Skip (skips rebuild on all nodes for commit)
- Switch (rebuilds on all _affected_ nodes for commit, switches to new generation)
- Boot (rebuilds on all _affected_ nodes for commit, sets new generation as default for next boot)
- Reboot (rebuilds on all _affected_ nodes for commit, sets new generation as default for next boot and reboots)
- Force-switch (rebuilds on both affected and unaffected nodes for commit, switches to new generation)
- Force-boot (rebuilds on both affected and unaffected nodes for commit, sets new generation as default for next boot)
- Force-reboot (rebuilds on both affected and unaffected nodes for commit, sets new generation as default for next boot and reboots)
Affected-node judgment: a node is marked as affected if any of the modules/files it imports are modified.
Rebuild process should be a CI pipeline over SSH (ephemeral Tailscale node?)
[Nosial/flake] Issue opened: #21 Instant generation cleanup by glitchkill
Whilst it's currently hard to say, there doesn't appear to be much reason to keep more than one generation (two during a system rebuild) on the system at any time. These generations should be cleaned up as soon as possible. However, `nix.gc` can only clean up on a fixed schedule, which is wasteful both when there are no generations to clean up and when a dozen generations have accumulated before the cleanup runs.
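One event-driven alternative: a oneshot unit that deletes everything but the current generation, started explicitly (e.g. by the rebuild pipeline from #2 after a successful switch) rather than on a timer. The unit name and trigger are assumptions, not an upstream mechanism:

```nix
# Hypothetical instant-cleanup unit, to be started after each rebuild.
{ pkgs, ... }: {
  systemd.services.instant-gc = {
    description = "Delete old system generations right after a rebuild";
    serviceConfig.Type = "oneshot";
    script = ''
      # Keep only the most recent system generation, then collect garbage.
      ${pkgs.nix}/bin/nix-env --delete-generations +1 \
        --profile /nix/var/nix/profiles/system
      ${pkgs.nix}/bin/nix-collect-garbage
    '';
  };
}
```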