clightning: native database replication
Don't put `clightning.replication` options in `examples/configuration.nix` until it is more "battle-tested."
Parent: 55fc77d72f
Commit: 17507835fc

README.md:

@@ -69,7 +69,8 @@ A [configuration preset](modules/presets/secure-node.nix) for setting up a secure node
NixOS modules ([src](modules/modules.nix))
* Application services
  * [bitcoind](https://github.com/bitcoin/bitcoin)
-  * [clightning](https://github.com/ElementsProject/lightning) with support for announcing an onion service\
+  * [clightning](https://github.com/ElementsProject/lightning) with support for announcing an onion service
+    and [database replication](docs/services.md#setup-clightning-database-replication).\
    Available plugins:
    * [clboss](https://github.com/ZmnSCPxj/clboss): automated C-Lightning Node Manager
    * [commando](https://github.com/lightningd/plugins/tree/master/commando): control your node over lightning

docs/services.md:

@@ -26,6 +26,104 @@ systemctl cat bitcoind
systemctl show bitcoind
```

# clightning database replication

The clightning database can be replicated to a local path
or to a remote SSH target.\
When remote replication is enabled, nix-bitcoin mounts an SSHFS to a local path.\
Optionally, backups can be encrypted via `gocryptfs`.

Note: You should also back up the static file `hsm_secret` (located at
`/var/lib/clightning/bitcoin/hsm_secret` by default), either manually
or via the `services.backups` module.
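
For illustration (not part of the diff above), a minimal sketch of the module route, assuming only its `enable` switch; the backup destination and schedule are configured via the `services.backups` options, which are not shown here:

```nix
# Sketch only: back up nix-bitcoin data (including clightning's hsm_secret)
# via the services.backups module. Check that module's options for the
# backup destination and scheduling settings.
services.backups.enable = true;
```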

## Remote target via SSHFS

1. Add this to your `configuration.nix`:
   ```nix
   services.clightning.replication = {
     enable = true;
     sshfs.destination = "user@hostname:directory";
     # This is optional
     encrypt = true;
   };
   programs.ssh.knownHosts."hostname".publicKey = "<ssh public key from running `ssh-keyscan` on the host>";
   ```
   Leave out the `encrypt` line if you want to store data on your destination
   in plaintext.\
   Adjust `user`, `hostname` and `directory` as necessary.

2. Deploy

3. To allow SSH access from the nix-bitcoin node to the target node, either
   use the remote node config below, or copy the contents of `$secretsDir/clightning-replication-ssh-key.pub`
   to the `authorized_keys` file of `user` (or use `ssh-copy-id`).

4. You can restrict the nix-bitcoin node's capabilities on the SSHFS target
   using OpenSSH's builtin features, as detailed
   [here](https://serverfault.com/questions/354615/allow-sftp-but-disallow-ssh).

   To implement this on NixOS, add the following to the NixOS configuration of
   the SSHFS target node:
   ```nix
   systemd.tmpfiles.rules = [
     # Because this directory is chrooted by sshd, it must only be writable by user/group root
     "d /var/backup/nb-replication 0755 root root - -"
     "d /var/backup/nb-replication/writable 0700 nb-replication - - -"
   ];

   services.openssh = {
     extraConfig = ''
       Match user nb-replication
         ChrootDirectory /var/backup/nb-replication
         AllowTcpForwarding no
         AllowAgentForwarding no
         ForceCommand internal-sftp
         PasswordAuthentication no
         X11Forwarding no
     '';
   };

   users.users.nb-replication = {
     isSystemUser = true;
     group = "nb-replication";
     shell = "${pkgs.coreutils}/bin/false";
     openssh.authorizedKeys.keys = [ "<contents of $secretsDir/clightning-replication-ssh-key.pub>" ];
   };
   users.groups.nb-replication = {};
   ```

   With this setup, the corresponding `sshfs.destination` on the nix-bitcoin
   node is `"nb-replication@hostname:writable"`.
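
For reference (not part of the diff above): the `knownHosts` value used in step 1 takes only the key type and base64 key, without the leading hostname field that `ssh-keyscan` prints. A sketch with a placeholder key:

```nix
# Placeholder value for illustration; use the real key printed by
# `ssh-keyscan -t ed25519 hostname` (obtained over a trusted channel),
# dropping the leading hostname field.
programs.ssh.knownHosts."hostname".publicKey =
  "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIExampleExampleExampleExampleExample";
```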

## Local directory target

1. Add this to your `configuration.nix`:
   ```nix
   services.clightning.replication = {
     enable = true;
     local.directory = "/var/backup/clightning";
     encrypt = true;
   };
   ```
   Leave out the `encrypt` line if you want to store data in
   `local.directory` in plaintext.

2. Deploy

clightning will now replicate database files to `local.directory`. This
can be used to replicate to an external HDD by mounting it at path
`local.directory`.
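
For illustration (not part of the diff above), an external drive can be mounted at `local.directory` with a standard NixOS `fileSystems` entry; the device label below is a placeholder:

```nix
# Placeholder device label; adjust to the actual drive.
fileSystems."/var/backup/clightning" = {
  device = "/dev/disk/by-label/backup-hdd";
  fsType = "ext4";
};

services.clightning.replication = {
  enable = true;
  local.directory = "/var/backup/clightning";
};
```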

## Custom remote destination

Follow the steps in section "Local directory target" above and mount a custom remote
destination (e.g., an NFS or SMB share) to `local.directory`.\
You might want to disable `local.setupDirectory` in order to create the mount directory
yourself with custom permissions.
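
A sketch of such a setup (not part of the diff above; the NFS server address and export path are placeholders):

```nix
# Mount an NFS export at the replication directory.
fileSystems."/var/backup/clightning" = {
  device = "10.0.0.2:/export/clightning-backup";
  fsType = "nfs";
};

services.clightning.replication = {
  enable = true;
  local.directory = "/var/backup/clightning";
  # The mount point is managed by the fileSystems entry above, so skip the
  # module's automatic directory setup and manage permissions yourself.
  local.setupDirectory = false;
};
```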

# Connect to RTL
Normally you would connect to RTL via SSH tunneling with a command like this

@@ -222,7 +320,6 @@ lndconnect-onion --host=mynode.org
5. Edit your deployment tool's configuration and change the node's address to `localhost` and the ssh port to `<random port of your choosing>`.
   If you use krops as described in the [installation tutorial](./install.md), set `target = "localhost:<random port of your choosing>";` in `krops/deploy.nix`.
-

6. After deploying the new configuration, it will connect through the SSH tunnel you established in step iv. This also allows you to do more complex SSH setups that some deployment tools don't support. An example would be authenticating with [Trezor's SSH agent](https://github.com/romanz/trezor-agent), which provides extra security.

# Initialize a Trezor for Bitcoin Core's Hardware Wallet Interface

modules/clightning-replication.nix (new file, 227 lines):

@@ -0,0 +1,227 @@
{ config, lib, pkgs, ... }:

with lib;
let
  options.services.clightning.replication = {
    enable = mkOption {
      type = types.bool;
      default = false;
      description = ''
        Enable live replication of the clightning database.
        This prevents losing off-chain funds when the primary wallet file becomes
        inaccessible.

        For setting the destination, you can either define option `sshfs.destination`
        or `local.directory`.

        When `encrypt` is `false`, file `lightningd.sqlite3` is written to the destination.
        When `encrypt` is `true`, directory `lightningd-db` is written to the destination.
        It includes the encrypted database and gocryptfs metadata.

        See also: https://github.com/ElementsProject/lightning/blob/master/doc/BACKUP.md
      '';
    };
    sshfs = {
      destination = mkOption {
        type = types.nullOr types.str;
        default = null;
        example = "user@10.0.0.1:directory";
        description = ''
          The SSH destination for which an SSHFS will be mounted.
          `directory` is relative to the home of `user`.

          An SSH key is automatically generated and stored in file
          `$secretsDir/clightning-replication-ssh-key`.
          The SSH server must allow logins via this key.
          I.e., the `authorized_keys` file of `user` must contain
          `$secretsDir/clightning-replication-ssh-key.pub`.
        '';
      };
      port = mkOption {
        type = types.port;
        default = 22;
        description = "SSH port of the remote server.";
      };
      sshOptions = mkOption {
        type = with types; listOf str;
        default = [ "reconnect" "ServerAliveInterval=50" ];
        description = "SSH options used for mounting the SSHFS.";
      };
    };
    local = {
      directory = mkOption {
        type = types.nullOr types.path;
        default = null;
        example = "/var/backup/clightning";
        description = ''
          This option can be specified instead of `sshfs.destination` to enable
          replication to a local directory.

          If `local.setupDirectory` is disabled, the directory
          - must already exist when `clightning.service` (or `clightning-replication-mounts.service`
            if `encrypt` is `true`) starts.
          - must have write permissions for the `clightning` user.

          This option is also useful if you want to use a custom remote destination,
          like an NFS or SMB share.
        '';
      };
      setupDirectory = mkOption {
        type = types.bool;
        default = true;
        description = ''
          Create `local.directory` if it doesn't exist and set write permissions
          for the `clightning` user.
        '';
      };
    };
    encrypt = mkOption {
      type = types.bool;
      default = false;
      description = ''
        Whether to encrypt the replicated database with gocryptfs.
        The encryption password is automatically generated and stored
        in file `$secretsDir/clightning-replication-password`.
      '';
    };
  };

  cfg = config.services.clightning.replication;
  inherit (config.services) clightning;

  secretsDir = config.nix-bitcoin.secretsDir;
  network = config.services.bitcoind.makeNetworkName "bitcoin" "regtest";
  user = clightning.user;
  group = clightning.group;

  useSshfs = cfg.sshfs.destination != null;
  useMounts = useSshfs || cfg.encrypt;

  localDir = cfg.local.directory;
  mountsDir = "/var/cache/clightning-replication";
  sshfsDir = "${mountsDir}/sshfs";
  plaintextDir = "${mountsDir}/plaintext";
  destDir =
    if cfg.encrypt then
      plaintextDir
    else if useSshfs then
      sshfsDir
    else
      localDir;
in {
  inherit options;

  config = mkIf cfg.enable {
    assertions = [
      { assertion = useSshfs || (localDir != null);
        message = ''
          services.clightning.replication: One of `sshfs.destination` or
          `local.directory` must be set.
        '';
      }
      { assertion = !useSshfs || (localDir == null);
        message = ''
          services.clightning.replication: Only one of `sshfs.destination` and
          `local.directory` must be set.
        '';
      }
    ];

    environment.systemPackages = optionals cfg.encrypt [ pkgs.gocryptfs ];

    systemd.tmpfiles.rules = optional (localDir != null && cfg.local.setupDirectory)
      "d '${localDir}' 0770 ${user} ${group} - -";

    services.clightning.wallet = let
      mainDB = "${clightning.dataDir}/${network}/lightningd.sqlite3";
      replicaDB = "${destDir}/lightningd.sqlite3";
    in "sqlite3://${mainDB}:${replicaDB}";

    systemd.services.clightning = {
      bindsTo = mkIf useMounts [ "clightning-replication-mounts.service" ];
      serviceConfig.ReadWritePaths = [
        # We can't simply set `destDir` here because it might point to
        # a FUSE mount.
        # FUSE mounts can only be set up as `ReadWritePaths` by systemd when they
        # are accessible by root. This would require FUSE-mounting with option
        # `allow_other`.
        (if useMounts then mountsDir else localDir)
      ];
    };

    systemd.services.clightning-replication-mounts = mkIf useMounts {
      requiredBy = [ "clightning.service" ];
      before = [ "clightning.service" ];
      wants = [ "nix-bitcoin-secrets.target" ];
      after = [ "nix-bitcoin-secrets.target" ];
      path = [
        # Includes
        # - The SUID-wrapped `fusermount` binary which enables FUSE
        #   for non-root users
        # - The SUID-wrapped `mount` binary, used for unmounting
        "/run/wrappers"
      ] ++ optionals cfg.encrypt [
        # Includes `logger`, required by gocryptfs
        pkgs.util-linux
      ];

      script =
        optionalString useSshfs ''
          mkdir -p ${sshfsDir}
          ${pkgs.sshfs}/bin/sshfs ${cfg.sshfs.destination} -p ${toString cfg.sshfs.port} ${sshfsDir} \
            -o ${builtins.concatStringsSep "," ([
              "IdentityFile='${secretsDir}'/clightning-replication-ssh-key"
            ] ++ cfg.sshfs.sshOptions)}
        '' +
        optionalString cfg.encrypt ''
          cipherDir="${if useSshfs then sshfsDir else localDir}/lightningd-db"
          mkdir -p "$cipherDir" ${plaintextDir}
          gocryptfs=(${pkgs.gocryptfs}/bin/gocryptfs -passfile '${secretsDir}/clightning-replication-password')
          # 1. init
          if [[ ! -e $cipherDir/gocryptfs.conf ]]; then
            "''${gocryptfs[@]}" -init "$cipherDir"
          fi
          # 2. mount
          "''${gocryptfs[@]}" "$cipherDir" ${plaintextDir}
        '';

      postStop =
        optionalString cfg.encrypt ''
          umount ${plaintextDir} || true
        '' +
        optionalString useSshfs ''
          umount ${sshfsDir}
        '';

      serviceConfig = {
        StopPropagatedFrom = [ "clightning.service" ];
        CacheDirectory = "clightning-replication";
        CacheDirectoryMode = "770";
        User = user;
        RemainAfterExit = "yes";
        Type = "oneshot";
      };
    };

    nix-bitcoin = mkMerge [
      (mkIf useSshfs {
        secrets.clightning-replication-ssh-key = {
          user = user;
          permissions = "400";
        };
        generateSecretsCmds.clightning-replication-ssh-key = ''
          if [[ ! -f clightning-replication-ssh-key ]]; then
            ${pkgs.openssh}/bin/ssh-keygen -t ed25519 -q -N "" -C "" -f clightning-replication-ssh-key
          fi
        '';
      })

      (mkIf cfg.encrypt {
        secrets.clightning-replication-password.user = user;
        generateSecretsCmds.clightning-replication-password = ''
          makePasswordSecret clightning-replication-password
        '';
      })
    ];
  };
}
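
To illustrate what this module generates (not part of the diff above): with the default clightning data directory, the `bitcoin` network, and an unencrypted local target, the computed `wallet` value is a dual-backend sqlite3 DSN along these lines:

```nix
# Illustrative result only; the module derives this from dataDir, the network
# name, and the replication destination.
services.clightning.wallet =
  "sqlite3:///var/lib/clightning/bitcoin/lightningd.sqlite3:/var/backup/clightning/lightningd.sqlite3";
```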

modules/clightning.nix:

@@ -40,6 +40,15 @@ let
      default = "${cfg.dataDir}/${network}";
      description = "The network data directory.";
    };
+    wallet = mkOption {
+      type = types.nullOr types.str;
+      default = null;
+      example = "sqlite3:///var/lib/clightning/bitcoin/lightningd.sqlite3";
+      description = ''
+        Wallet data scheme (sqlite3 or postgres) and location/connection
+        parameters, as fully qualified data source name.
+      '';
+    };
    extraConfig = mkOption {
      type = types.lines;
      default = "";

@@ -105,6 +114,7 @@ let
    bitcoin-rpcuser=${config.services.bitcoind.rpc.users.public.name}
    rpc-file-mode=0660
    log-timestamps=false
+   ${optionalString (cfg.wallet != null) "wallet=${cfg.wallet}"}
    ${cfg.extraConfig}
  '';


modules/modules.nix:

@@ -13,6 +13,7 @@
  ./clightning.nix
  ./clightning-plugins
  ./clightning-rest.nix
+ ./clightning-replication.nix
  ./spark-wallet.nix
  ./lnd.nix
  ./lightning-loop.nix

@@ -4,7 +4,7 @@ in
# Set default values for use without flakes
{ pkgs ? import <nixpkgs> { config = {}; overlays = []; }
, pkgsUnstable ? import nixpkgsPinned.nixpkgs-unstable {
-    inherit (pkgs) system;
+    inherit (pkgs.stdenv) system;
    config = {};
    overlays = [];
  }

test/clightning-replication.nix (new file, 153 lines):

@@ -0,0 +1,153 @@
# You can run this test via `run-tests.sh -s clightningReplication`

let
  nixpkgs = (import ../pkgs/nixpkgs-pinned.nix).nixpkgs;
in
import "${nixpkgs}/nixos/tests/make-test-python.nix" ({ pkgs, ... }:
with pkgs.lib;
let
  keyDir = "${nixpkgs}/nixos/tests/initrd-network-ssh";
  keys = {
    server = "${keyDir}/ssh_host_ed25519_key";
    client = "${keyDir}/id_ed25519";
    serverPub = readFile "${keys.server}.pub";
    clientPub = readFile "${keys.client}.pub";
  };

  clientBaseConfig = {
    imports = [ ../modules/modules.nix ];

    nix-bitcoin.generateSecrets = true;

    services.clightning = {
      enable = true;
      replication.enable = true;

      # TODO-EXTERNAL:
      # When WAN is disabled, DNS bootstrapping slows down service startup by ~15 s.
      extraConfig = "disable-dns";
    };
  };
in
{
  name = "clightning-replication";

  nodes = let nodes = {
    replicationLocal = {
      imports = [ clientBaseConfig ];
      services.clightning.replication.local.directory = "/var/backup/clightning";
    };

    replicationLocalEncrypted = {
      imports = [ nodes.replicationLocal ];
      services.clightning.replication.encrypt = true;
    };

    replicationRemote = {
      imports = [ clientBaseConfig ];
      nix-bitcoin.generateSecretsCmds.clightning-replication-ssh-key = mkForce ''
        install -m 600 ${keys.client} clightning-replication-ssh-key
      '';
      programs.ssh.knownHosts."server".publicKey = keys.serverPub;
      services.clightning.replication.sshfs.destination = "nb-replication@server:writable";
    };

    replicationRemoteEncrypted = {
      imports = [ nodes.replicationRemote ];
      services.clightning.replication.encrypt = true;
    };

    server = { ... }: {
      environment.etc."ssh-host-key" = {
        source = keys.server;
        mode = "400";
      };

      services.openssh = {
        enable = true;
        extraConfig = ''
          Match user nb-replication
            ChrootDirectory /var/backup/nb-replication
            AllowTcpForwarding no
            AllowAgentForwarding no
            ForceCommand internal-sftp
            PasswordAuthentication no
            X11Forwarding no
        '';
        hostKeys = mkForce [
          {
            path = "/etc/ssh-host-key";
            type = "ed25519";
          }
        ];
      };

      users.users.nb-replication = {
        isSystemUser = true;
        group = "nb-replication";
        shell = "${pkgs.coreutils}/bin/false";
        openssh.authorizedKeys.keys = [ keys.clientPub ];
      };
      users.groups.nb-replication = {};

      systemd.tmpfiles.rules = [
        # Because this directory is chrooted by sshd, it must only be writable by user/group root
        "d /var/backup/nb-replication 0755 root root - -"
        "d /var/backup/nb-replication/writable 0700 nb-replication - - -"
      ];
    };
  }; in nodes;

  testScript = { nodes, ... }: let
    systems = builtins.concatStringsSep ", "
      (mapAttrsToList (name: node: ''"${name}": "${node.config.system.build.toplevel}"'') nodes);
  in ''
    systems = { ${systems} }

    def switch_to_system(system):
        cmd = f"{systems[system]}/bin/switch-to-configuration test >&2"
        client.succeed(cmd)

    client = replicationLocal

    if not "is_interactive" in vars():
        client.start()
        server.start()

    with subtest("local replication"):
        client.wait_for_unit("clightning.service")
        client.succeed("runuser -u clightning -- ls /var/backup/clightning/lightningd.sqlite3")
        # No other user should be able to read the backup directory
        client.fail("runuser -u bitcoin -- ls /var/backup/clightning")

    # If `switch_to_system` succeeds then all services, including clightning,
    # have started successfully
    switch_to_system("replicationLocalEncrypted")
    with subtest("local replication encrypted"):
        replica_db = "/var/cache/clightning-replication/plaintext/lightningd.sqlite3"
        client.succeed(f"runuser -u clightning -- ls {replica_db}")
        # No other user should be able to read the unencrypted files
        client.fail(f"runuser -u bitcoin -- ls {replica_db}")
        # A gocryptfs has been created
        client.succeed("ls /var/backup/clightning/lightningd-db/gocryptfs.conf")

    server.wait_for_unit("sshd.service")
    switch_to_system("replicationRemote")
    with subtest("remote replication"):
        replica_db = "/var/cache/clightning-replication/sshfs/lightningd.sqlite3"
        client.succeed(f"runuser -u clightning -- ls {replica_db}")
        # No other user should be able to read the unencrypted files
        client.fail(f"runuser -u bitcoin -- ls {replica_db}")
        # A clightning db exists on the server
        server.succeed("ls /var/backup/nb-replication/writable/lightningd.sqlite3")

    switch_to_system("replicationRemoteEncrypted")
    with subtest("remote replication encrypted"):
        replica_db = "/var/cache/clightning-replication/plaintext/lightningd.sqlite3"
        client.succeed(f"runuser -u clightning -- ls {replica_db}")
        # No other user should be able to read the unencrypted files
        client.fail(f"runuser -u bitcoin -- ls {replica_db}")
        # A gocryptfs has been created on the server
        server.succeed("ls /var/backup/nb-replication/writable/lightningd-db/gocryptfs.conf")
  '';
})

@@ -55,10 +55,29 @@ name: testConfig:
  container = {
    # The container name has a 11 char length limit
    containers.nb-test = { config, ... }: {
-     config = {
-       extra = config.config.test.container;
-       config = testConfig;
-     };
+     imports = [
+       {
+         config = {
+           extra = config.config.test.container;
+           config = testConfig;
+         };
+       }
+
+       # Enable FUSE inside the container when clightning replication
+       # is enabled.
+       # TODO-EXTERNAL: Remove this when
+       # https://github.com/systemd/systemd/issues/17607
+       # has been resolved. This will also improve security.
+       (
+         let
+           clightning = config.config.services.clightning;
+         in
+           lib.mkIf (clightning.enable && clightning.replication.enable) {
+             bindMounts."/dev/fuse" = { hostPath = "/dev/fuse"; };
+             allowedDevices = [ { node = "/dev/fuse"; modifier = "rw"; } ];
+           }
+       )
+     ];
    };
  };

@@ -295,10 +295,11 @@ basic() {
# All tests that only consist of building a nix derivation.
# Their output is cached in /nix/store.
buildable() {
-  basic
+  basic "$@"
  scenario=full buildTest "$@"
  scenario=regtest buildTest "$@"
  scenario=hardened buildTest "$@"
+  scenario=clightningReplication buildTest "$@"
}

examples() {

@@ -49,6 +49,8 @@ let
  };

  tests.clightning = cfg.clightning.enable;
+  test.data.clightning-replication = cfg.clightning.replication.enable;
+
  # When WAN is disabled, DNS bootstrapping slows down service startup by ~15 s.
  services.clightning.extraConfig = mkIf config.test.noConnections "disable-dns";
  test.data.clightning-plugins = let

@@ -186,6 +188,11 @@ let
  tests.security = true;

  services.clightning.enable = true;
+  services.clightning.replication = {
+    enable = true;
+    encrypt = true;
+    local.directory = "/var/backup/clightning";
+  };
  test.features.clightningPlugins = true;
  services.rtl.enable = true;
  services.spark-wallet.enable = true;

@@ -354,7 +361,12 @@ let
  };
  makeTest' = import ./lib/make-test.nix pkgs;

-  tests = builtins.mapAttrs makeTest allScenarios;
+  tests = builtins.mapAttrs makeTest allScenarios // {
+    clightningReplication.vm = import ./clightning-replication.nix {
+      inherit pkgs;
+      inherit (pkgs.stdenv) system;
+    };
+  };

  getTest = name: tests.${name} or (makeTest name {
    services.${name}.enable = true;

@@ -153,6 +153,14 @@
    # This is a one-shot service, so this command only succeeds if the service succeeds
    succeed("systemctl start clightning-feeadjuster")
+
+    if test_data["clightning-replication"]:
+        replica_db = "/var/cache/clightning-replication/plaintext/lightningd.sqlite3"
+        succeed(f"runuser -u clightning -- ls {replica_db}")
+        # No other user should be able to read the unencrypted files
+        machine.fail(f"runuser -u bitcoin -- ls {replica_db}")
+        # A gocryptfs has been created
+        succeed("ls /var/backup/clightning/lightningd-db/gocryptfs.conf")

@test("lnd")
def _():
    assert_running("lnd")