The officially official Devuan Forum!


#1 2018-11-25 02:04:24

little
Member
Registered: 2017-06-08
Posts: 26  

Installing ZFS on Devuan Mirrored Root

NOTE: Read the whole guide before starting - some warnings come AFTER the commands they apply to. Also read the whole linked github guide this is based on.

This is my log from installing ZFS on the root of Devuan, with a mirrored (two hdd) setup. tl;dr everything works.

This is the main guide
https://github.com/zfsonlinux/zfs/wiki/ … oot-on-ZFS

I see this guide:
https://www.klaus-hartnegg.de/gpo/2017- … evuan.html

but I don't think it's necessary to build it from source on ASCII.
He mentions (in a side box, which is easy to miss):
"Installing zfs-dkms from contrib in Devuan 2 works like described here, but it is old version 0.6.5.9."

I don't mind the old version, as it's likely to be stable. That's why it's in the repos.

I'm using an x86-64 Devuan ASCII minimal live CD. You should be using x86-64 too.

Steps:

First, add the contrib component to your existing APT sources (/etc/apt/sources.list), e.g. change
deb http://pkgmaster.devuan.org/merged ascii main
to
deb http://pkgmaster.devuan.org/merged ascii main contrib
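
If you'd rather do that from the shell than an editor, something like this should work (a rough one-liner that assumes the stock single-line sources.list from the live CD - check the file afterwards), then refresh the package lists:

sed -i 's/ascii main$/ascii main contrib/' /etc/apt/sources.list
apt-get update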

I'm using apt-get not apt. Both should work.
apt-get install debootstrap gdisk dpkg-dev linux-headers-$(uname -r)

Follow the zfsonlinux guide; I'm using a mirrored root.

this command:
sgdisk -a1 -n2:34:2047  -t2:EF02 /dev/disk/by-id/scsi-SATA_disk1

-a sets the sector alignment (1 here, so the partition can start right at sector 34)
-n makes a new partition; here it's partition 2, starting at sector 34 and ending at sector 2047
-t sets the partition type (EF02 = BIOS boot partition)

Note that this command uses a /dev/disk/by-id path; scsi-SATA_disk1 is the guide's placeholder, so substitute your own drive's alias.
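
To find the right by-id aliases for your own drives, just list the directory; the ata-*/scsi-*/wwn-* entries are symlinks to the plain sd* devices:

ls -l /dev/disk/by-id/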

sgdisk is functionally the same as gdisk (just non-interactive), and the hex codes for -t are:

0700 Microsoft basic data  0c01 Microsoft reserved    2700 Windows RE
4100 PowerPC PReP boot     4200 Windows LDM data      4201 Windows LDM metadata
7501 IBM GPFS              7f00 ChromeOS kernel       7f01 ChromeOS root
7f02 ChromeOS reserved     8200 Linux swap            8300 Linux filesystem
8301 Linux reserved        8302 Linux /home           8400 Intel Rapid Start
8e00 Linux LVM             a500 FreeBSD disklabel     a501 FreeBSD boot
a502 FreeBSD swap          a503 FreeBSD UFS           a504 FreeBSD ZFS
a505 FreeBSD Vinum/RAID    a580 Midnight BSD data     a581 Midnight BSD boot
a582 Midnight BSD swap     a583 Midnight BSD UFS      a584 Midnight BSD ZFS
a585 Midnight BSD Vinum    a800 Apple UFS             a901 NetBSD swap
a902 NetBSD FFS            a903 NetBSD LFS            a904 NetBSD concatenated
a905 NetBSD encrypted      a906 NetBSD RAID           ab00 Apple boot
af00 Apple HFS/HFS+        af01 Apple RAID            af02 Apple RAID offline
af03 Apple label           af04 AppleTV recovery      af05 Apple Core Storage
be00 Solaris boot          bf00 Solaris root          bf01 Solaris /usr & Mac Z
bf02 Solaris swap          bf03 Solaris backup        bf04 Solaris /var
bf05 Solaris /home         bf06 Solaris alternate se  bf07 Solaris Reserved 1
bf08 Solaris Reserved 2    bf09 Solaris Reserved 3    bf0a Solaris Reserved 4
bf0b Solaris Reserved 5    c001 HP-UX data            c002 HP-UX service
ea00 Freedesktop $BOOT     eb00 Haiku BFS             ed00 Sony system partitio
ef00 EFI System            ef01 MBR partition scheme  ef02 BIOS boot partition
fb00 VMWare VMFS           fb01 VMWare reserved       fc00 VMWare kcore crash p
fd00 Linux RAID

I'm going to be mirroring root so I'll use this command on both drives.

sgdisk -a1 -n2:34:2047  -t2:EF02 /dev/disk/by-id/ata-WDC_WD10JPVX-5555555_WXA1AA755555
sgdisk -a1 -n2:34:2047  -t2:EF02 /dev/disk/by-id/ata-WDC_WD10JPVX-5555555_WXC1AA7C5555

The guide says to 'always' use the long /dev/disk/by-id aliases, though this is contrary to what Damian Wojsław says in "Introducing ZFS on Linux", so it is open to debate.

I'll be using that book to assist with the github steps.

then this command, once for each HDD (BF01 = Solaris /usr & Mac ZFS, the code conventionally used for ZFS partitions):
sgdisk     -n1:0:0      -t1:BF01 /dev/disk/by-id/scsi-SATA_disk1
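
Concretely, for my two disks that works out to:

sgdisk     -n1:0:0      -t1:BF01 /dev/disk/by-id/ata-WDC_WD10JPVX-5555555_WXA1AA755555
sgdisk     -n1:0:0      -t1:BF01 /dev/disk/by-id/ata-WDC_WD10JPVX-5555555_WXC1AA7C5555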

next command for me:

zpool create   \
      -O atime=off -O canmount=off -O compression=lz4 -O normalization=formD \
      -O mountpoint=/ -R /mnt \
      rpool \
        mirror /dev/disk/by-id/scsi-SATA_disk1-part1 /dev/disk/by-id/scsi-SATA_disk2-part1

NOTE from the guide: Make sure to include the -part1 portion of the drive path. If you forget it, you are specifying the whole disk; ZFS will then repartition it itself and you'll have to start over.

NOTE: I am NOT setting ashift, even though the zfsonlinux guide suggests it. This follows the book mentioned above, whose author recommends not setting it blindly (unless you know what you are doing - I don't). See the book for more details.
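
If you're curious what ashift the pool actually ended up with, you can read it back out of a vdev label (a quick check; the exact zdb output varies between ZFS versions):

zdb -l /dev/disk/by-id/ata-WDC_WD10JPVX-5555555_WXA1AA755555-part1 | grep ashift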

NOTE: I am also not setting xattr=sa (ACL/xattr acceleration). Setting it is probably harmless, but there's a slight chance I will move to BSD or Solaris for fun at some point. Probably not, but I don't need the performance increase anyway.

If you make a mistake: zpool destroy rpool, then start over.
If necessary, you can use gdisk to erase the partitions and start again.
Don't use fdisk, or else you'll have to add the GPT back with gdisk (you'll get an 'invalid partition table' error).
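
A rough wipe-and-retry sequence would be something like this (sgdisk --zap-all destroys the GPT and protective MBR on that disk, so triple-check the target before running it), then the same for the second disk and redo the sgdisk steps above:

zpool destroy rpool
sgdisk --zap-all /dev/disk/by-id/ata-WDC_WD10JPVX-5555555_WXA1AA755555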

check it with # zpool status

root@devuan:/dev/disk/by-id# zpool status
  pool: rpool
state: ONLINE
  scan: none requested
config:

        NAME                                       STATE     READ WRITE CKSUM
        rpool                                      ONLINE       0     0     0
          mirror-0                                 ONLINE       0     0     0
            ata-WDC_WD10JPVX-5555555_WXA1AA755555  ONLINE       0     0     0
            ata-WDC_WD10JPVX-5555555_WXC1AA7C5555  ONLINE       0     0     0

errors: No known data errors

zfs create -o canmount=off -o mountpoint=none rpool/ROOT

I'm going to skip the guide's section on creating separate datasets, as I want a single filesystem for root, so there's only one more create command (cf. https://wiki.archlinux.org/index.php/Installing_Arch_Linux_on_ZFS#Format_the_destination_disk):

zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian

After that, rpool/ROOT/debian is already mounted at /mnt

per a $ mount command.
It does NOT have a lost+found folder, so it's simply blank.

though:

root@devuan:/# df -h
Filesystem         Size  Used Avail Use% Mounted on
udev               1.9G     0  1.9G   0% /dev
tmpfs              386M  628K  385M   1% /run
/dev/sr0           363M  363M     0 100% /lib/live/mount/medium
/dev/loop0         344M  344M     0 100% /lib/live/mount/rootfs/filesystem.squashfs
tmpfs              1.9G     0  1.9G   0% /lib/live/mount/overlay
overlay            1.9G  436M  1.5G  23% /
tmpfs              5.0M  4.0K  5.0M   1% /run/lock
tmpfs              771M     0  771M   0% /run/shm
tmpfs              1.9G  4.0K  1.9G   1% /tmp
rpool/ROOT/debian  899G  128K  899G   1% /mnt

it is listed.
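
You can also confirm the mounts from the ZFS side:

zfs list -o name,mountpoint,canmount,mounted -r rpool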

debootstrap ascii /mnt
notice it's ascii, not stretch (the zfsonlinux guide targets Debian stretch)
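
For reference, the chroot preparation in the zfsonlinux guide is roughly this (condensed - follow the guide itself for the exact order and the hostname, network and locale configuration):

mount --rbind /dev  /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys  /mnt/sys
chroot /mnt /bin/bash --login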

and follow the rest of the steps. I had some trouble at the beginning when I ran zpool create against the whole disk instead of the -part1 partition (oops), but that is resolvable with gdisk (and a reboot).

Worked well for me. It boots into Devuan with a mirrored ZFS root.

So far, so good with ZFS. I'll update if I learn anything else.

Last edited by little (2018-11-25 02:07:03)


give a man an init, he takes an os


#2 2020-10-13 18:33:38

little
Member
Registered: 2017-06-08
Posts: 26  

Re: Installing ZFS on Devuan Mirrored Root

Update: The ZFS computers are still in operation. However, any time there is an update to ZFS (or the kernel), the ZFS kernel module has to be re-compiled. From what I understand, this is because the non-GPL (and even non-BSD) licence used by the ZFS project keeps it out of the mainline kernel, so it ships as a separate module. I think there's a workaround, but it requires manual intervention. On a high-end server this may not be a big deal, but on my old computers it takes a while.
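
For anyone hitting the same thing: when the module hasn't been rebuilt after a kernel update, poking DKMS by hand usually sorts it out (assuming the zfs-dkms packaging; the build itself is what takes so long on old hardware):

dkms status
dkms autoinstall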

So I wouldn't use ZFS again. Next time I would adopt B-tree FS (Btrfs), which isn't trapped by restrictive licensing, or mdadm.

Last edited by little (2020-10-13 18:36:33)


give a man an init, he takes an os


#3 2020-10-13 20:04:46

Head_on_a_Stick
Member
From: London
Registered: 2019-03-24
Posts: 3,125  
Website

Re: Installing ZFS on Devuan Mirrored Root

It may also be worth noting that the ZFS kernel modules won't be signed with Debian's Secure Boot key so that will have to be disabled (or the ZFS modules will have to be signed with a custom Secure Boot key which must then be enrolled).
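
Very roughly, the custom key route looks like this (just a sketch - the sign-file path varies with the kernel/kbuild version, mokutil will ask for a one-time password that has to be confirmed from the firmware menu at the next boot, and every module ZFS ships needs signing, not just zfs.ko):

openssl req -new -x509 -newkey rsa:2048 -nodes -days 36500 -subj "/CN=Local module signing/" -keyout MOK.priv -outform DER -out MOK.der
mokutil --import MOK.der
/usr/lib/linux-kbuild-4.9/scripts/sign-file sha256 MOK.priv MOK.der /lib/modules/$(uname -r)/updates/dkms/zfs.ko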


Brianna Ghey — Rest In Power

