
#1 2020-09-02 20:33:40

danuan
Member
Registered: 2020-08-30
Posts: 16  

HOWTO: Devuan ROOT on ZFS and MultiBoot

It took me a while to get around to trying this, as I was not sure how much of a hack the whole root-on-ZFS install would be. But apart from the odd install procedure (ZFS cannot ship in the installer due to a license incompatibility), it is on par with doing root on NFS that is then managed through ZFS snapshots and cloning on a server. No custom scripts, patches, or extensive modifications to any part of the system are used in this setup.

Here are my starting-point references for ROOT on ZoL; the first one is what I hoped to achieve.

I wanted my rpool on the whole disk, not split into a bpool and an rpool the way most howtos do it (my understanding is that the split exists because grub can only read a limited set of zpool features; but if we get it running without those features and do not enable them later, all is well?). This is the best lead I could find at the moment: https://unix.stackexchange.com/question … b-can-read

Additional info on BSD and Indiana multibooting

Starting from the Devuan live image was discarded: it is not persistent, so you would have to reinstall and reconfigure ZFS and other things to get your system back up if something goes wrong. I chose a hard-drive installation for a rescue system that is always ready to go and can also boot the systems in the pool from outside if needed. Other options are a USB stick, an mSATA drive, or anything else that Devuan can be installed on and booted from.

An interesting option could be an SSD that serves multiple functions:

  • 1 Rescue system
    2 Swap on the SSD for hibernation, which is not yet supported on a ZFS zvol
    3 And a persistent L2ARC on the same SSD when hibernation is used

But that is a bit of a stretch for ZFS's data-integrity ideals (it moves data, here swap, outside of ZFS's control).

To get started

I am doing a net install of beowulf, with:

  • 300 MB boot partition

  • 10 GB root

  • 5 GB swap

Keep it small and manageable. We will use this as our maintenance/rescue system and as a starting clone for first system on zfs root.

Legacy BIOS (MBR) grub booting only for now

  • no need for X or desktops etc...

  • a minimal advanced install, selecting only "standard system utilities".

Some nice things to help out, but not essential:

  • To help with copying and pasting the instructions, setting up SSH access from another machine on the network is a good idea at this point; it will save time later once we get other clones going, and moving in and out of different installs will be much faster. (For some reason, without --no-install-recommends it tries to pull in everything from icon themes to x11-common, basically half the install of X without the X.)

    apt-get install --no-install-recommends openssh-server  

    And on a networked machine that will access this installation, configure key-based login for SSH (and a desktop launcher to make it really easy).
    Replace user@10.10.50.x to match your install.

    ssh-keygen -t rsa
    ssh-copy-id user@10.10.50.x
  • To help with copy and paste inside the console, if needed:

    apt-get install gpm
  • And if NFS mounts will be needed:

    apt-get install nfs-common
Once the system is up and configured to your liking:
  • Install the headers for your kernel. (The apt-get in the next step seems to pull in the wrong ones, so do it manually, as shown below.)
    Then install ZFS 0.7.12 if you plan on staying with the 4.x kernel.
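    One way to pull in the matching headers manually (a sketch, assuming you are keeping the kernel that is currently running):

    apt-get install linux-headers-$(uname -r)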

    apt-get install zfs-dkms zfsutils-linux
  • Or keep going and install the 0.8.4 version of ZFS: uncomment or add beowulf-backports to /etc/apt/sources.list (see the example line below),
    make sure contrib is there as well, and install.
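    The backports line would look something like this (the mirror URL is an assumption; adjust it to whatever your sources.list already uses), followed by a refresh:

    deb http://deb.devuan.org/merged beowulf-backports main contrib

    apt-get update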

    apt-get install -t beowulf-backports zfs-dkms zfsutils-linux

    Then install the 5.x kernel and headers, as sketched below. (If this is done before installing the backports version of ZFS, I get install errors from it trying to compile the old zfs-dkms against the 5.x kernel.)
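    A minimal sketch of pulling the newer kernel and headers from backports (assuming amd64 and the stock metapackage names):

    apt-get install -t beowulf-backports linux-image-amd64 linux-headers-amd64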

Going from ZFS 0.7 to 0.8 is a big step that is worth it in features and functionality. Lots of things that did not work before started working in 0.8, like hot-spare drive functionality. I tested it by unplugging drives, or by writing garbage with dd if=/dev/urandom of=/dev/sdX to a drive in the zpool.

But you do not have to upgrade the pool version itself while upgrading from 0.7 to 0.8, unless you need the extra feature flags
or functions inside the pool. Without upgrading the pool version you keep it backwards compatible with older Linux
kernels, BSD, and even Solaris/Indiana (citations?).

  • Now clean up: add your favorite aliases in .bashrc, etc. Maybe delete the apt package caches to make the system even smaller, as shown below.
    Every iteration of clones from this point on will start adding up, and with snapshots on top of that you could
    double or triple the sizes if no cleaning is done.
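    For example, dropping the cached .deb files and anything no longer needed:

    apt-get clean
    apt-get autoremove --purge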

If anything from this point on is not clear, there are great ZFS-specific resources out there.

Create a pool

I am not setting altroot (the non-persistent / mountpoint), as we should start using temporary mounts for ZFS
from the get-go; that is what I will use later for cloning and managing. It does the same thing, but with a temporary mount you
know exactly what you mounted and where.

ls -al /dev/disk/by-id/ 

and

lsblk

to make sure you do not grab the system disk by mistake.

Pick your disks and choose raidz1, 2, 3, mirror, stripe, mirror/stripe,
or a mirror of raidz3 stripes with -o copies=5 of data for special occasions!

Setting ashift=12 is highly recommended at pool creation time (please investigate for yourself).

zpool create rpool mirror \
ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31415162 \
ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31415738 \
mirror \
ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31432691 \
ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31376665
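Note that the command above leaves ashift to autodetection; to force 4K alignment explicitly, the same create can be run with the option added (a variant of the command above, not an extra step):

zpool create -o ashift=12 rpool mirror \
ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31415162 \
ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31415738 \
mirror \
ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31432691 \
ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31376665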

#zpool status
  pool: rpool
state: ONLINE
  scan: none requested
config:

    NAME                                           STATE     READ WRITE CKSUM
    rpool                                          ONLINE       0     0     0
      mirror-0                                     ONLINE       0     0     0
        ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31415162  ONLINE       0     0     0
        ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31415738  ONLINE       0     0     0
      mirror-1                                     ONLINE       0     0     0
        ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31432691  ONLINE       0     0     0
        ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31376665  ONLINE       0     0     0

errors: No known data errors

Set some basics that we want to propagate to child datasets (unless overridden locally). This is mostly optional and depends on use case (tuning ZFS). Most things can be set later, but some only take effect for new files unless you recopy the files, or, simpler, send and receive the dataset with the new options (for example recompressing from gzip to lz4). Some options can only be set once, like ashift at pool creation, or casesensitivity on datasets for SMB/CIFS shares.

These can be embedded in the pool-creation command, but I keep them separate.

zfs set mountpoint=none rpool

Also, these can be moved down to rpool/ROOT, as I will have other datasets under rpool and probably do not want them inheriting these options:

zfs set atime=off rpool
zfs set relatime=on rpool
zfs set compression=lz4 rpool
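To double-check which of these properties are now set locally on rpool (as opposed to inherited or defaulted):

zfs get -s local all rpool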

Info on proper zvol swap use: https://openzfs.github.io/openzfs-docs/ … wap-device

zfs create -V 10G -b $(getconf PAGESIZE) \
    -o logbias=throughput -o sync=always \
    -o primarycache=metadata -o compression=off \
    rpool/swap

mkswap -L swap /dev/zvol/rpool/swap
Start making datasets

This sets up the dataset tree that will make managing things easier (I guess the format comes from Solaris).

zfs create -o mountpoint=none rpool/ROOT
zfs create -o mountpoint=/ rpool/ROOT/debian1

Not very clear on this one yet (but let's use it until we know better):

zpool set bootfs=rpool/ROOT/debian1 rpool

Now we need to make a mountpoint for our first system. Usually ZFS does not need mountpoints: it creates them if none exist, and refuses to mount over mountpoints containing files unless overridden. But when using a temporary mount or legacy mountpoints, it will balk if the directory does not exist.

mkdir /mnt/debian1
mount -t zfs -o zfsutil rpool/ROOT/debian1 /mnt/debian1

to make sure rpool/ROOT/debian1 is indeed mounted

df -h 
zfs get all rpool/ROOT/debian1 | grep mount  
Cloning the system into the zpool

Now rpool is ready to accept our first system.

apt-get install rsync

Since this is a new system we are not excluding anything more; on a system that has been in use you might exclude additional things like /media, etc. We could drop the /srv and /mnt excludes too, since -x keeps rsync on one filesystem, but for safety's sake let's not risk copying the destination into itself. (If you do not have a separate /boot partition, disregard the second command.)

rsync -aAHXx / --exclude={"/dev/*","/proc/*","/sys/*","/run/*","/mnt/*","/srv/*"} /mnt/debian1/
rsync -aAHXx /boot/* /mnt/debian1/boot/

Now chroot into the system and do a few tasks:

mount --rbind /dev /mnt/debian1/dev
mount --rbind /proc /mnt/debian1/proc
mount --rbind /sys /mnt/debian1/sys
mount --rbind /run /mnt/debian1/run

chroot /mnt/debian1 /bin/bash --login

The cache file is a config file/DB of sorts; if it does not exist yet, create it.
Some say use it, some say go without, as all the info is on the
pool drives anyway (still need to clear this up).

mkdir -p /etc/zfs
zpool set cachefile=/etc/zfs/zpool.cache rpool

We need this step to make sure ZFS can mount / right after grub. In the management system we are not running root on ZFS, so it can import the pool later in the boot by loading a kernel module, but here we need the initramfs to do it.

apt-get install -t beowulf-backports zfs-initramfs

Test whether grub sees that /boot is on ZFS:

grub-probe /boot
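If grub can read the pool, it should simply print the filesystem name:

zfs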

I do not think this next step is needed, as it creates a double entry for
root=ZFS=rpool in /boot/grub/grub.cfg.

edit /etc/default/grub
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian1"
and uncomment this for more boot-time info:
GRUB_TERMINAL=console

  • This will write the updates to /boot/grub/grub.cfg, which will be read by the grub chain starting from the MBR.
    At the moment it would still boot the initial system unless we do this step:

    update-grub
  • Using cfdisk, change the 9th partition on each zfs pool drive to type BIOS boot (or set the bios_grub flag in gparted).
    ZFS made this partition during pool creation; it is a small 8 MB partition on my system (need a citation for this), but it seems to be there for UEFI boot compatibility.
    If it had been an active part of zfs, it would have balked at me using it during the first zpool scrub or on imports. Here is some info: https://www.reddit.com/r/zfs/comments/g … ved_space/

If the previous step is incorrect or not done, this will happen when installing grub to the MBR (replace X with each drive in the pool):

#grub-install /dev/sdX
Installing for i386-pc platform.
grub-install: warning: this GPT partition label contains no BIOS Boot Partition; embedding won't be possible.
grub-install: error: filesystem `zfs' doesn't support blocklists.

if

grub-install /dev/sdX 

goes without error, repeat it for each drive in the pool. This way, if one drive in the pool fails, it can always boot from another, provided they are set sequentially in the BIOS boot drive order.

Edit /etc/fstab:
comment out everything about the old filesystems, as zfs will handle that for now
(we will return here if we start using legacy mountpoints), and add
/dev/zvol/rpool/swap    none    swap    sw    0    0
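The new swap can be enabled and checked right away from inside the chroot (it will also be picked up from fstab on the next boot):

swapon -a
swapon -s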

Edit /etc/initramfs-tools/conf.d/resume:
ZFS does not support hibernation on a zvol yet, and boot will hang if you leave the old resume drive in there, so set
RESUME=none
(I wonder whether resume could live on a separate non-zvol swap, but I have not tested that yet.)

Update the initramfs so the RESUME=none change takes effect:

update-initramfs -c -k all  

and

exit

from chroot

At this point, reboot and change the BIOS boot drive to one of the drives from the pool. If everything went to plan, the system should be up and running.

Run df -h to see that root is mounted from the rpool/ROOT/systemXXX you expect. Do that every time on every new system you clone, as mistakes in fstab or forgetting update-grub will boot you into the clone's source system.

Some ZFS errors about it being unable to mount / are OK; that is normal when the initrd has already mounted it. I get the same errors on an NFS-root boot.

If it booted and everything is OK, do

zfs snapshot rpool/ROOT/debian1@initial

#zfs list -t all
NAME                         USED  AVAIL  REFER  MOUNTPOINT
rpool                       12.4G   227G    24K  none
rpool/ROOT                  1.18G   227G    24K  none
rpool/ROOT/debian1          1.18G   227G  1005M  /
rpool/ROOT/debian1@initial   5.5M      -  1000M  -
rpool/swap                  10.6G   277G    12K  -

Do not rush into installing anything here yet, as this system is another maintenance system, only now inside the zpool,
and it is the new initial clone source for the next steps; on top of that, the chainloading of all other systems will come from this system's grub.

Last edited by danuan (2020-09-10 23:52:38)


#2 2020-09-02 20:34:44

danuan
Member
Registered: 2020-08-30
Posts: 16  

Re: HOWTO: Devuan ROOT on ZFS and MultiBoot

making clones and multibooting

(I have rerun all these steps to confirm that they work without errors.)

Let's create a standard clone

(Not to be confused with a zfs clone of a snapshot, which can be used as a means of branching a snapshot into a read/write dataset, but
stays linked as a dependent of its parent until steps are taken otherwise.)

This step will be done from the debian1 system on the zpool.

This takes 2-5 minutes at most (once you have the procedure figured out or automated), and a new clone is ready for boot.
(It could be under a minute if done right, even with HDDs, but that is with automation and scripting of the whole procedure.)

(-r also snapshots the children of debian1, if there are any)

zfs snapshot -r rpool/ROOT/debian1@cloning

(-R sends the snapshots of debian1 and its children recursively, up to the selected snapshot)
(zfs receive can use -F to force a rollback of the target before receiving, if needed)
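Before the real send, a dry run (-n) shows what would be transferred and the estimated stream size:

zfs send -Rvn rpool/ROOT/debian1@cloning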

zfs send -Rv rpool/ROOT/debian1@cloning | zfs receive rpool/ROOT/debian1T

But I like to keep the snapshots, and delete or rename them later if needed.

Pick a naming scheme to keep track of things

Here I am adding "T" to debian1 for test,
and I will use:

  • debian1's for beowulf

  • debian2's for chimaera

  • debian3's for ceres.

So a horizontal cloning move adds a letter and a version number; vertical cloning changes the first number after the name.
And if you also keep things in alphanumeric order, (zfs list) or (zfs list -t all) will look like a tree that links descendant clones to parents (a narrowed-down listing is sketched below).
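For a quick look at just that tree of systems, something like this works:

zfs list -r -o name,used,mountpoint rpool/ROOT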

(I just realised it should be devuan, not debian.) It is not too late; all datasets can be renamed and grub and the fstabs updated, but it has to be done with foresight so as not to get locked out. That did happen to me once, when I renamed the initial rpool/ROOT/debian1 from its clone without running update-grub inside the chroot;
I had to break out our management system and fix it from there.

Now rename the snapshot of the cloned system to signify that it was cloned (it sort of becomes the initial snapshot for this clone),
but you can still roll back to the original initial, since we kept the preceding snapshots.

zfs rename rpool/ROOT/debian1T@cloning  rpool/ROOT/debian1T@cloned
(zfs managed mountpoints)

For the first clone I will use zfs-managed mountpoints, as they get automounted based on whichever
ROOT dataset is currently in use. During this, unlike normal zfs behavior, it will not mount
every dataset in the pool that has a mountpoint set and canmount turned on; in fact it will not mount
anything else automatically now, only the current ROOT's children. Even (zfs mount -a) does not
mount anything else, even if there are no conflicts. However, if you were to boot back into
our original non-zpool system, it would mount every non-conflicting mountpoint it could.

So while using zfs-managed mountpoints with / on ZFS:

  • everything has to be a child, and gets automounted

  • or be mounted through a script with (zfs mount rpool/datasetX) (this mounts to whatever mountpoint is set for that dataset)

  • or be temp-mounted with (mount -t zfs -o zfsutil rpool/datasetXXX /mnt/datasetXXX) (the mountpoint needs to exist),

  • or be a legacy mount through fstab

  • or be mounted manually if legacy, with (mount -t zfs rpool/datasetX /mnt/datasetX); notice -o zfsutil is not there for legacy (the mountpoint needs to exist).

I am not sure of the best route yet, as some people use zfs-managed mountpoints, some go with legacy, and some mix and match.
The only downside I have seen is that people report systems (SystemD, possibly non-SystemD too) rushing things on boot: without
extra modifications, with zfs-managed mounts, a system can write to folders before zfs gets to mount the datasets,
and once a directory with files exists, by default zfs will not mount over it.

The other odd thing is that many datasets will have the same mountpoint, which would be very odd outside of root-on-zfs behavior.
(Inside ROOT on zfs it seems to function as intended. Anyone with experience in Solaris/illumos/Indiana/(BSD?), please let me know if this is OK.)

Mount the new clone to a tempmount

mkdir /mnt/debian1T
mount -t zfs  -o zfsutil rpool/ROOT/debian1T /mnt/debian1T

now chroot inside the system

mount --rbind /dev /mnt/debian1T/dev
mount --rbind /proc /mnt/debian1T/proc
mount --rbind /sys /mnt/debian1T/sys
mount --rbind /run /mnt/debian1T/run
chroot /mnt/debian1T/ /bin/bash --login

Make some child datasets. First we need to rename the old folders that these will replace, otherwise zfs will not automount the new datasets at this point.

cd /
mv home home.old
mv var var.old
mv tmp tmp.old

Create the replacement datasets (they should automount as children of the current root at their relative paths):

zfs create rpool/ROOT/debian1T/home
zfs create rpool/ROOT/debian1T/var
zfs create rpool/ROOT/debian1T/tmp

Move the contents to the new replacement datasets (see the note about dotfiles after the commands):

mv home.old/* home/
mv var.old/* var/
mv tmp.old/* tmp/
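Note that the * glob above skips dotfiles (which matters for /home); a hedged alternative that also carries hidden files and preserves attributes, using the rsync we installed earlier:

rsync -aAHX /home.old/ /home/
rsync -aAHX /var.old/ /var/
rsync -aAHX /tmp.old/ /tmp/

Once everything is verified, the *.old directories can be removed.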

Check that attributes/permissions on the new datasets match the *.old versions.

Edit /etc/hostname and change debian1 to debian1T to identify this new clone.
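A one-liner for that from inside the chroot (also check /etc/hosts if it references the old name):

echo debian1T > /etc/hostname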

To update /boot/grub/grub.cfg with the new path of this system in the zpool
(this makes it bootable once the initial grub from ROOT/debian1 chainloads /boot/grub/grub.cfg from this ROOT/debian1T):

update-grub

and

exit

the chroot

Making new clone bootable

In /etc/grub.d/ of the initial system (rpool/ROOT/debian1) we need to create a file for booting this new system.

Clone and edit a new grub startup file.
I started using:

  • 50s for debian1 systems which are beowulf

  • 60s for debian2 chimaera

  • 70s for debian3 ceres

to identify that it is a grub chainload to debian1T:

cp 40_custom 51_chain_debian1T

and add

menuentry "chainload rpool/ROOT/debian1T"  {
        insmod zfs
        echo    'chain Loading rpool/ROOT/debian1T'
        configfile  /ROOT/debian1T@/boot/grub/grub.cfg
}
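update-grub only runs executable files in /etc/grub.d/, so if the new entry does not show up later, check the mode (cp normally carries it over from 40_custom, but it is worth verifying):

chmod +x /etc/grub.d/51_chain_debian1T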

Or, as a second option, directly tell it to boot a specific kernel and initrd with the specified root.
But then you will have to make a new file or menu entry after each kernel upgrade manually
(keep in mind to check for the right initrd and kernel names):

menuentry "/ROOT/debian1T@/boot/vmlinuz-5.7.0-0.bpo.2-amd64"  {
        insmod zfs
        echo    'Loading Linux 5.7.0-0.bpo.2-amd64 ...'
        linux   /ROOT/debian1T@/boot/vmlinuz-5.7.0-0.bpo.2-amd64 root=ZFS=rpool/ROOT/debian1T ro quiet
        echo    'Loading initial ramdisk ...'
        initrd  /ROOT/debian1T@/boot/initrd.img-5.7.0-0.bpo.2-amd64
}

now update /boot/grub/grub.cfg

update-grub

Now all other systems will be booted from here by chainloading the grub.cfg files from the other datasets.

I have not yet figured out an ideal solution in which grub hunts down all of the
installations under rpool/ROOT/ and adds them automatically, like it does for non-zfs drives.

Here is what it can look like after a few updates, snapshots, etc.:

NAME                                   USED  AVAIL     REFER  MOUNTPOINT
rpool                                 50.0G   237G       24K  none
rpool/ROOT                            9.15G   237G       24K  none
rpool/ROOT/debian1                    1.16G   237G     1005M  /
rpool/ROOT/debian1@initial            89.2M      -      743M  -
rpool/ROOT/debian1@kernel5.7          68.9M      -      982M  -
rpool/ROOT/debian1@cloning             532K      -     1005M  -
rpool/ROOT/debian1T                   1.39G   237G      768M  /
rpool/ROOT/debian1T@initial           89.2M      -      743M  -
rpool/ROOT/debian1T@kernel5.7         68.9M      -      982M  -
rpool/ROOT/debian1T@cloned            66.0M      -     1005M  -
rpool/ROOT/debian1T/home                34K   237G       34K  /home
rpool/ROOT/debian1T/tmp                 24K   237G       24K  /tmp
rpool/ROOT/debian1T/var                237M   237G      237M  /var
rpool/swap                            10.6G   247G       12K  -

                     

##### Cloning again and changing to legacy mountpoints #####

This step will be done again from the debian1 system on the zpool, but it could also be done from the system being cloned.
I am naming the snapshot cloning2, since @cloning was used earlier and you might have done some updates since then that you want to propagate; if not, skip this and use the original @cloning snapshot.

zfs snapshot -r rpool/ROOT/debian1T@cloning2

For the next iteration of debian1T, add another number to keep things consistent:

zfs send -Rv rpool/ROOT/debian1T@cloning2 | zfs receive rpool/ROOT/debian1T2
zfs rename rpool/ROOT/debian1T2@cloning2  rpool/ROOT/debian1T2@cloned2
(Changing zfs managed mountpoints to legacy)

For the first clone we used zfs mountpoints; now we switch to legacy to gain back some control from the zfs way of not having editable config files. As an example, I jumped on the "zfs does everything" bus when I started using it, but later realized it might not be ideal from the system-management angle. With NFS shares, SMB shares and such, I started using the zfs wrapper commands, but once I realized there was no config file to edit in the right place, and the only way was to issue shell commands, I pulled it all back out to /etc/exports and /etc/samba/smb.conf. Same situation here: it can be nice for some things to be managed by a single system (various registryDs come to mind), but not at other times, when we want to be in control.

So now, using legacy-managed mountpoints, things revert to the old system patterns.
Root is mounted from grub as before (and does not need the (zfs set mountpoint=whateverX) set now, as no children will depend on it for relative mountpoints), and everything else gets mounted from fstab.

Change dataset mountpoints to legacy.

zfs set mountpoint=legacy rpool/ROOT/debian1T2
zfs set mountpoint=legacy rpool/ROOT/debian1T2/home
zfs set mountpoint=legacy rpool/ROOT/debian1T2/var
zfs set mountpoint=legacy rpool/ROOT/debian1T2/tmp

Now we have to remove -o zfsutil from the tempmount command, as the mountpoint is no longer zfs-managed but legacy:

mkdir /mnt/debian1T2
mount -t zfs  rpool/ROOT/debian1T2 /mnt/debian1T2

now chroot inside the system again

mount --rbind /dev /mnt/debian1T2/dev
mount --rbind /proc /mnt/debian1T2/proc
mount --rbind /sys /mnt/debian1T2/sys
mount --rbind /run /mnt/debian1T2/run
chroot /mnt/debian1T2/ /bin/bash --login

Edit /etc/hostname and change debian1T to debian1T2 to identify this new clone.

Edit /etc/fstab and add the following lines for the new datasets (or "partitions" in oldspeak):
rpool/ROOT/debian1T2/home /home zfs  defaults 0 0
rpool/ROOT/debian1T2/var /var zfs  defaults 0 0
rpool/ROOT/debian1T2/tmp /tmp zfs  defaults 0 0
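The fstab syntax can be sanity-checked without mounting anything by doing a fake mount-all; it should parse the new lines without complaint:

mount -fav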

To update /boot/grub/grub.cfg with the new path of this system in the zpool:

update-grub

and

exit

the chroot

Making new clone bootable

In /etc/grub.d/ of the initial system (rpool/ROOT/debian1) we need to create a file for booting this new system:

cp 40_custom 52_chain_debian1T2

edit /etc/grub.d/52_chain_debian1T2 and add the following

menuentry "chainload rpool/ROOT/debian1T2"  {
        insmod zfs
        echo    'chain Loading rpool/ROOT/debian1T2'
        configfile  /ROOT/debian1T2@/boot/grub/grub.cfg
}

Or, as a second option, directly tell it to boot a specific kernel and initrd with the specified root,
but then you will have to make a new file or menu entry after each kernel upgrade manually:

menuentry "/ROOT/debian1T2@/boot/vmlinuz-5.7.0-0.bpo.2-amd64"  {
        insmod zfs
        echo    'Loading Linux 5.7.0-0.bpo.2-amd64 ...'
        linux   /ROOT/debian1T2@/boot/vmlinuz-5.7.0-0.bpo.2-amd64 root=ZFS=rpool/ROOT/debian1T2 ro quiet
        echo    'Loading initial ramdisk ...'
        initrd  /ROOT/debian1T2@/boot/initrd.img-5.7.0-0.bpo.2-amd64
}

Now update /boot/grub/grub.cfg to incorporate the new system:

update-grub

And now we have three systems; all should boot and can be temp-mounted and chrooted into from each other.

zfs list -t all
NAME                                   USED  AVAIL     REFER  MOUNTPOINT
rpool                                 50.0G   237G       24K  none
rpool/ROOT                            9.15G   237G       24K  none
rpool/ROOT/debian1                    1.16G   237G     1005M  /
rpool/ROOT/debian1@initial            89.2M      -      743M  -
rpool/ROOT/debian1@kernel5.7          68.9M      -      982M  -
rpool/ROOT/debian1@cloning             532K      -     1005M  -
rpool/ROOT/debian1T                   1.39G   237G      768M  /
rpool/ROOT/debian1T@initial           89.2M      -      743M  -
rpool/ROOT/debian1T@kernel5.7         68.9M      -      982M  -
rpool/ROOT/debian1T@cloned            66.0M      -     1005M  -
rpool/ROOT/debian1T@cloning2             0B      -      768M  -
rpool/ROOT/debian1T/home                34K   237G       34K  /home
rpool/ROOT/debian1T/home@cloning2        0B      -       34K  -
rpool/ROOT/debian1T/tmp                 24K   237G       24K  /tmp
rpool/ROOT/debian1T/tmp@cloning2         0B      -       24K  -
rpool/ROOT/debian1T/var                237M   237G      237M  /var
rpool/ROOT/debian1T/var@cloning2         0B      -      237M  -
rpool/ROOT/debian1T2                  1.39G   237G      768M  legacy
rpool/ROOT/debian1T2@initial          89.2M      -      743M  -
rpool/ROOT/debian1T2@kernel5.7        68.9M      -      982M  -
rpool/ROOT/debian1T2@cloned           66.0M      -     1005M  -
rpool/ROOT/debian1T2@cloned2           288K      -      769M  -
rpool/ROOT/debian1T2/home             51.5K   237G       33K  legacy
rpool/ROOT/debian1T2/home@cloning2    18.5K      -       34K  -
rpool/ROOT/debian1T2/tmp                38K   237G       24K  legacy
rpool/ROOT/debian1T2/tmp@cloning2       14K      -       24K  -
rpool/ROOT/debian1T2/var               237M   237G      237M  legacy
rpool/ROOT/debian1T2/var@cloning2      207K      -      237M  -
rpool/swap                            10.6G   247G       12K  -

You can clean up the snapshots that are not needed with (zfs destroy rpool/ROOT/debian1T2/tmp@cloning2). This does not delete any live data, only the possibility of reverting that dataset to that snapshot.
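zfs destroy also takes -r to remove the same snapshot name on all child datasets, and -n with -v for a dry run that shows what would go; a cautious sketch:

zfs destroy -rnv rpool/ROOT/debian1T2@cloning2
zfs destroy -r rpool/ROOT/debian1T2@cloning2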

Last edited by danuan (2020-09-11 09:07:49)


#3 2020-09-02 20:35:48

danuan
Member
Registered: 2020-08-30
Posts: 16  

Re: HOWTO: Devuan ROOT on ZFS and MultiBoot

placeholder for part 3

Last edited by danuan (2020-09-03 03:15:33)

