
#1 Re: Desktop and Multimedia » devuan xfce: panel elements move » 2023-10-03 18:39:16

Usually there is a separator right after the window buttons with its "expand" property enabled.
That pushes the rest of the items all the way to the other side, even if there are no windows open
(it makes the window list take up all the space in the middle).
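
If the separator is there but not expanding, its property can be flipped from Panel Preferences → Items, or from a terminal with xfconf-query (the plugin number below is just an example; list the plugins first to find which one is the separator):

xfconf-query -c xfce4-panel -l -v | grep -i -e separator -e expand    # find the separator plugin and its expand value
xfconf-query -c xfce4-panel -p /plugins/plugin-6/expand -s true       # plugin-6 is hypothetical, use the number found above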

#2 Desktop and Multimedia » REPORT: Is GNOME a stable desktop for you on daedalus ? » 2023-09-29 11:13:23

danuan
Replies: 0

Looking for feedback on usability of GNOME as a full desktop install from tasksel.

Developers are asking for user experiences besides mine, to determine
whether GNOME is suitable as a desktop and whether it should be included as one of the default desktops in tasksel.

1) Please no discussions about other/better desktops!
2) Interested in GNOME not GNOME-flashback
3) Full GNOME DESKTOP task install with gdm3.

I personally had too many issues with a mostly stock install
(disabled things like avahi-daemon, cups, saned, etc...).

Went through multiple computers, Wayland and Xorg,
stock nouveau, proprietary NVIDIA, and stock AMD drivers.

X apps do not work at all through xwayland.

All installs had too many freezes/crashes of the desktop (gnome-shell),
even hard computer freezes with no ssh access, forcing a full reset.

User switching on top of that makes it even worse.

(could be just me and my old hardware i was testing on)

So, please report if you have used GNOME as a desktop for some time
(not just install, log in, and open an app or two).

Looking for a genuine desktop experience: install, configure to your liking,
use all the apps you would normally use.

Reports should include general hardware specs, which GPU drivers were used,
and which windowing system (Xorg or Wayland),

and what the issues/non-issues were.
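
For the report, something like this gathers most of it (assuming the usual tools are installed; glxinfo comes from mesa-utils):

echo $XDG_SESSION_TYPE              # x11 or wayland
gnome-shell --version
lspci -nnk | grep -A3 -i vga        # GPU model and the kernel driver in use
glxinfo | grep -i "opengl renderer"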

thanx

dan

#3 Re: Desktop and Multimedia » XFCE4 config on devuan - differences to debian » 2023-08-04 17:49:01

LightDM installs are broken most of the time for me and do not automatically pull in a greeter (lightdm-gtk-greeter).

Make sure the lightdm install grabs that package.
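
Something along these lines usually sorts it out (package names as on chimaera/daedalus; the greeter-session line is only needed if it is not picked up automatically):

apt-get install --reinstall lightdm lightdm-gtk-greeter
# and, if needed, in /etc/lightdm/lightdm.conf under the seat section ([Seat:*] on recent lightdm):
# greeter-session=lightdm-gtk-greeter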

#4 Re: Desktop and Multimedia » wayland on devuan » 2023-08-04 17:40:41

Second this, tried it recently on Daedalus with GNOME on an AMD GPU.

Could not open Firefox, Chromium, LibreOffice, or even xeyes
(no errors in the shell, and no Wayland logs to be found anywhere).

I think this affected any non-native application that required Xwayland compatibility
(xwayland was installed).
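
If someone wants to poke at the same problem, a few quick checks from a terminal inside the GNOME session (nothing fancy, just what I would look at):

echo $XDG_SESSION_TYPE $WAYLAND_DISPLAY $DISPLAY    # should show wayland, a wayland-N socket, and an X display like :0
pgrep -a Xwayland                                   # is an Xwayland server actually running?
DISPLAY=:0 xeyes                                    # force an X client onto Xwayland (assuming :0 is the Xwayland display)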

#5 Re: Installation » zfs on boot causes problems » 2023-07-29 00:12:20

And I see that you have not placed your system folders under syspool/ROOT/devuan.

They have to be there to get auto-mounted with a root-on-ZFS setup.

Use zfs rename to move them, like this:

zfs rename syspool/usr syspool/ROOT/devuan/usr
(you might need the /usr mountpoint created inside the syspool/ROOT/devuan mount, but as an empty dir)

and now make sure there is no /usr/local

(that can exist only in syspool/ROOT/devuan/usr as a mountpoint)
zfs rename syspool/usr/local syspool/ROOT/devuan/usr/local

but ZFS should technically create the last part of a mountpoint path if it does not exist on mounting

etc...

and with that setup you do not have to set -o mountpoint (mountpoints are inherited if the datasets are named after their mountpoints)
zfs inherit mountpoint pool/dataset (to reset a locally set mountpoint back to the inherited one)

or manually set them to legacy and add them to fstab.

Root on ZFS has that behavior of not auto-mounting anything outside of the current root,

and by default there is no over-mounting if the mountpoint already has files/dirs in it
(that is why you have to make sure the mountpoint is created in the right places if doing submounts)
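
Putting the commands above together, a rough sketch using the dataset names from this thread (check with zfs list before and after; child datasets such as usr/local normally follow their parent on a rename):

zfs list -o name,mountpoint,canmount -r syspool         # see what lives where first
zfs rename syspool/usr syspool/ROOT/devuan/usr          # children like syspool/usr/local should move along with their parent
zfs list -o name,mountpoint -r syspool/ROOT/devuan      # mountpoints should now be inherited from the new path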

#6 Re: Installation » zfs on boot causes problems » 2023-07-28 08:35:58

Try my howto (without a separate bpool),
a bit outdated but Devuan-specific:

https://dev1galaxy.org/viewtopic.php?id=3794

No need to use backports now that the zfs version in chimaera is OK.

#7 Re: Installation » Howto install dual-boot Windows Linux best method » 2023-07-28 08:16:08

My simplified install sequence:

I have installed every which way, but I always end up with GRUB finding Windows and being the main loader at the end.

It depends on the install order and BIOS vs. UEFI.

If Windows is installed last, then you need to chroot into your Linux install from a rescue or live disk (or your own USB install)
and run the grub update / grub install sequence (roughly as sketched below).

If Linux is installed last, sometimes Windows bugs out even if GRUB only touched the MBR or UEFI partition
and needs a repair from a Windows install disk; and again, if it was an MBR install, GRUB needs an update/install.
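
Roughly, from a live/rescue system for the BIOS/MBR case (device names are placeholders; a UEFI setup would also need the ESP mounted at /boot/efi inside the chroot):

mount /dev/sdXN /mnt                  # the linux root partition
mount --bind /dev  /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
chroot /mnt /bin/bash
update-grub                           # os-prober (if installed) should pick up the windows install
grub-install /dev/sdX                 # reinstall grub to the disk's MBR
exit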

#8 Installation » zfs-auto-mod working toward v1.01 (snap,backup,prune) scripts » 2023-07-27 11:53:36

danuan
Replies: 0

Just pushed a bunch of updates/fixes I have been sitting on

for the bash-based zfs-auto-mod snap/backup/prune scripts:
https://github.com/dandudikof/zfs-auto-mod

Looking for feedback on my documentation,
as that can be a big hurdle for someone getting started.

(to me it seems OK, since I have been reading these scripts for years)

But I am sure I am over-complicating or over-simplifying things;
I just have no idea anymore.

Maybe splitting README.md into a basic getting-started version and a more advanced one would help.

dan

#9 Re: Off-topic » SSH tunnel from PC's VNC client to a VM's desktop on separate VM Host » 2023-04-12 00:27:52

Try virt-manager: not only does it make the whole setup easier for beginners on the host,

it can also run remotely to manage (over ssh) the VMs of multiple other hosts
and then connect to the remote VM displays/shells over ssh as well.

(on a remote management machine you do not need to install the whole kvm/qemu bundle)

For the client machine that will connect to the server (on chimaera I used):

apt install --no-install-recommends virt-manager ssh-askpass-gnome
apt install spice-client-gtk gir1.2-spiceclientgtk-3.0 gir1.2-spiceclientglib-2.0

A passwordless ssh setup is convenient.

It will also create the libvirt .xml files for your VMs, so you can start them from the virsh interface
without the manual way of starting them through qemu and that mess of a command line.

virt-manager helped me learn kvm/qemu/libvirt with a much gentler learning curve, and then you can transition to
command-line/headless much faster.

PS: using qemu manually is the hardest possible way to do all of this; at the very least
start using libvirt's virsh interface to create/launch/edit virtual machines. virt-manager is the
visual way of doing what the virsh interface does on top of libvirt.
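
For the remote part, the connection URI is the standard libvirt one; user@host and the VM name below are just placeholders:

virsh -c qemu+ssh://user@host/system list --all      # list the remote host's guests
virsh -c qemu+ssh://user@host/system start somevm    # start one of them
virt-manager -c qemu+ssh://user@host/system          # or point virt-manager itself at the remote host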

#10 Packaging for Devuan » Basic Packaging workflow for new package.(new maintainer) » 2022-05-04 02:06:22

danuan
Replies: 3

New to debian packaging and git.

But I finally bit the bullet and learned about basic packaging.
Went through the Debian docs (overwhelming at first, and even a week later)
(but trying to keep it as basic as possible, not a full-time maintainer).

Started with a CMake project I need:

https://github.com/jow-/nlbwmon
which wants extra libraries:
https://git.openwrt.org/project/libubox.git

The workflow so far has been (for nlbwmon itself; libubox is a separate project):

git clone 
git branch -m master upstream
git checkout -b master
dh_make --createorig -p package_00.01.01

clean and edit the debian/ folder (only the minimal needed files for now)
fix debian/package.install
add debian/package.init ( custom init )
add debian/package.links ( link the daemon so it can function as a frontend also)

add missing files/* to the codebase and add them to debian/package.install
1: a modified config file (otherwise the init would have to be based on a non-standard config file)
2: another file that just needs renaming (as I could not figure out how to do that in package.install)

dpkg-source --commit

git commit

dpkg-buildpackage

This works just fine for me; I get packages that I can install with dpkg -i.

But I am starting to wonder: if anyone shows interest in this later, or I decide to upload it
for public consumption or even to Devuan at some later stage,

what would be missing from my workflow that is required for proper (accepted/certified?) packaging?

Do I need to start using git-buildpackage?
Will that let me automatically connect my git commits to Debian patches?

Also, the upstream sources have no tagging or any kind of versioning to tie releases to.
(what do I do in that case? create my own versioning?)

thanx

#12 Re: Documentation » HOWTO: Devuan ROOT on ZFS and MultiBoot » 2020-09-02 20:34:44

Making clones and multibooting

(I have re-run all these steps to confirm that it works without errors)

Let's create a standard clone

(not to be confused with a zfs clone of a snapshot, which can be used as a means of branching a snapshot into a read/write dataset, but which
stays linked as a dependent of its parent until steps are taken otherwise)

This step will be done from the debian1 system on the zpool.

This takes 2-5 min at most (once you have the procedure figured out or automated), and a new clone is ready to boot
(could be under a minute if done right, even with HDD drives, but that is with automation and scripting of the whole procedure).

(-r snapshots the children of debian1, if there are any)

zfs snapshot -r rpool/ROOT/debian1@cloning

(-R sends debian1 and its children recursively, with their snapshots up to the selected one)
(zfs receive can use -F to force-overwrite an existing destination if needed)

zfs send -Rv rpool/ROOT/debian1@cloning | zfs receive rpool/ROOT/debian1T

But I like to keep the snapshots and delete or rename them later if needed.

Pick a naming scheme to keep track of things.

Here I am adding "T" to debian1, for "test",
and I will use:

  • debian1's for beowulf

  • debian2's for chimaera

  • debian3's for ceres.

So a horizontal cloning move adds a letter and a version number, while vertical cloning changes the first number after the name.
And if you also keep things in alphanumeric order, (zfs list) or (zfs list -t all) will look like a tree that links descendant clones to their parents.

(Just realised the names should say devuan.) It is not too late: all datasets can be renamed, and grub and the fstabs updated, but it has to be done with foresight so as not to get locked out. That did happen to me once, when I renamed the initial rpool/ROOT/debian1 from its clone without running update-grub inside the chroot;
I had to break out the management system and fix it from there.

And rename the snapshot on the cloned system to signify that it was cloned (it sort of becomes the "initial" snapshot for this clone),
but you can still roll back to the original initial, since we kept the preceding snapshots.

zfs rename rpool/ROOT/debian1T@cloning rpool/ROOT/debian1T@cloned

(ZFS-managed mountpoints)

For the first clone I will use ZFS mountpoints, as they get automounted based on whichever
ROOT dataset is currently in use. During this, unlike normal ZFS behavior, it will not mount
every dataset in the pool that has a mountpoint set and canmount=on; in fact it will not mount
anything else automatically now, only the current ROOT's children. Even (zfs mount -a) does not
mount anything else, even if there are no conflicts. However, if you were to exit back into
our original non-zpool system, it would mount every non-conflicting mountpoint it could.

So while using ZFS-managed mountpoints with / on ZFS:

  • everything has to be a child, and gets automounted

  • or mounted through a script with (zfs mount rpool/datasetX) (this mounts to whichever mountpoint is set for that dataset)

  • or temp-mounted with (mount -t zfs -o zfsutil rpool/datasetXXX /mnt/datasetXXX) (the mountpoint needs to exist),

  • or as a legacy mount through fstab

  • or manually, if legacy, with (mount -t zfs rpool/datasetX /mnt/datasetX); notice -o zfsutil is not used for legacy (the mountpoint needs to exist).

And I am not sure of the best route yet, as some people use ZFS-managed mountpoints, some go with legacy, and some mix and match.
The only downsides I have seen are reports of systemd (and possibly non-systemd) systems rushing things: without
extra modifications, with ZFS-managed mounts the system can write to folders on boot before ZFS can mount the datasets,
and once a directory with files exists, by default ZFS will not overmount it.

The other odd thing is that many datasets will have the same mountpoints, which would be very odd outside of the root-on-ZFS behavior.
(but inside ROOT on ZFS it seems to function as intended; anyone with experience in Solaris/illumos/Indiana/(BSD?) please let me know if this is OK)
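
A quick way to see what ZFS thinks should be mounted for the current root (read-only, just the properties):

zfs list -o name,mountpoint,canmount,mounted -r rpool/ROOT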

Mount the new clone to a tempmount

mkdir /mnt/debian1T
mount -t zfs  -o zfsutil rpool/ROOT/debian1T /mnt/debian1T

now chroot inside the system

mount --rbind /dev /mnt/debian1T/dev
mount --rbind /proc /mnt/debian1T/proc
mount --rbind /sys /mnt/debian1T/sys
mount --rbind /run /mnt/debian1T/run
chroot /mnt/debian1T/ /bin/bash --login

Make some child datasets. First we need to rename the old folders that these will replace, otherwise ZFS will not automount them at this point.

cd /
mv home home.old
mv var var.old
mv tmp tmp.old

Create the replacement datasets (each should automount as a child of the current root at its relative path):

zfs create rpool/ROOT/debian1T/home
zfs create rpool/ROOT/debian1T/var
zfs create rpool/ROOT/debian1T/tmp

move contents to new replacement datasets

mv home.old/* home/
mv var.old/* var/
mv tmp.old/* tmp/

Check that the attributes/permissions on the new datasets match the *.old versions; a quick way to compare is sketched below.
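
For example (the chmod is only there because a fresh dataset will not carry /tmp's sticky bit):

stat -c '%A %U:%G %n' home.old home var.old var tmp.old tmp
chmod 1777 tmp        # /tmp normally needs mode 1777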

Edit /etc/hostname and change debian1 to debian1T to identify this new clone.

Then update /boot/grub/grub.cfg for the new path of this system in the zpool
(this makes it bootable once the initial grub from ROOT/debian1 chainloads the /boot/grub/grub.cfg from this ROOT/debian1T):

update-grub

and

exit

the chroot.

Making the new clone bootable

In /etc/grub.d/ of the initial system (rpool/ROOT/debian1) we need to create a file for booting this new system.

Copy and edit a new grub startup file.
I started using:

  • 50s for debian1 systems which are beowulf

  • 60s for debian2 chimaera

  • 70s for debian3 ceres

to identify that it is a chainload of grub to debian1T

cp 40_custom 51_chain_debian1T

and add

menuentry "chainload rpool/ROOT/debian1T"  {
        insmod zfs
        echo    'chain Loading rpool/ROOT/debian1T'
        configfile  /ROOT/debian1T@/boot/grub/grub.cfg
}

Or, as a second option, directly tell it to boot a specific kernel and initrd with the specified root,
but then you will have to make a new file or menu entry manually after each kernel upgrade
(keep in mind to check for the right initrd and kernel names):

menuentry "/ROOT/debian1T@/boot/vmlinuz-5.7.0-0.bpo.2-amd64"  {
        insmod zfs
        echo    'Loading Linux 5.7.0-0.bpo.2-amd64 ...'
        linux   /ROOT/debian1T@/boot/vmlinuz-5.7.0-0.bpo.2-amd64 root=ZFS=rpool/ROOT/debian1T ro quiet
        echo    'Loading initial ramdisk ...'
        initrd  /ROOT/debian1T@/boot/initrd.img-5.7.0-0.bpo.2-amd64
}

now update /boot/grub/grub.cfg

update-grub

Now all other systems will be booted from here, by chainloading the grub.cfg files from the other datasets.

I have not yet figured out an ideal solution in which grub will hunt down all of the
installations under rpool/ROOT/ and add them the way it does for non-ZFS drives.

Here is what it can look like after a few updates, snapshots, etc...:

NAME                                   USED  AVAIL     REFER  MOUNTPOINT
rpool                                 50.0G   237G       24K  none
rpool/ROOT                            9.15G   237G       24K  none
rpool/ROOT/debian1                    1.16G   237G     1005M  /
rpool/ROOT/debian1@initial            89.2M      -      743M  -
rpool/ROOT/debian1@kernel5.7          68.9M      -      982M  -
rpool/ROOT/debian1@cloning             532K      -     1005M  -
rpool/ROOT/debian1T                   1.39G   237G      768M  /
rpool/ROOT/debian1T@initial           89.2M      -      743M  -
rpool/ROOT/debian1T@kernel5.7         68.9M      -      982M  -
rpool/ROOT/debian1T@cloned            66.0M      -     1005M  -
rpool/ROOT/debian1T/home                34K   237G       34K  /home
rpool/ROOT/debian1T/tmp                 24K   237G       24K  /tmp
rpool/ROOT/debian1T/var                237M   237G      237M  /var
rpool/swap                            10.6G   247G       12K  -

                     

##### Cloning again and changing to legacy mountpoints #####

This step will again be done from the debian1 system on the zpool, but it could also be done from the system being cloned.
I am naming the snapshot cloning2, since @cloning was used earlier and you might have done some updates that you want to propagate; if not, skip this and use the original @cloning snapshot.

zfs snapshot -r rpool/ROOT/debian1T@cloning2

For the next iteration of debian1T, add another number to keep things consistent:

zfs send -Rv rpool/ROOT/debian1T@cloning2 | zfs receive rpool/ROOT/debian1T2
zfs rename rpool/ROOT/debian1T2@cloning2 rpool/ROOT/debian1T2@cloned2

(Changing zfs managed mountpoints to legacy)

For the first clone we used ZFS mountpoints; now we switch to legacy, to gain back some control from ZFS's behavior of not having editable config files. For example, I jumped on the "ZFS does everything" bus when I started using it, but later realized it might not be ideal from the system-management angle. With NFS shares, SMB shares and such, I started using the ZFS wrapper commands, but once I realized there was no config file to edit in the right place, and the only way was to issue shell commands, I pulled it all back out to /etc/exports and /etc/samba/smb.conf. Same situation here: it might be nice for some things to be managed by a single system (various registry daemons come to mind), but not at other times, when we want to stay in control.

So now, using legacy-managed mountpoints, things revert to the old system patterns.
Root is mounted from grub as before (and does not need (zfs set mountpoint=whateverX) now, as no children will depend on it to get relative mountpoints), and everything else gets mounted from fstab.

Change dataset mountpoints to legacy.

zfs set mountpoint=legacy rpool/ROOT/debian1T2
zfs set mountpoint=legacy rpool/ROOT/debian1T2/home
zfs set mountpoint=legacy rpool/ROOT/debian1T2/var
zfs set mountpoint=legacy rpool/ROOT/debian1T2/tmp

Now we have to remove -o zfsutil from the tempmount command, as the mountpoint is no longer ZFS-managed but legacy:

mkdir /mnt/debian1T2
mount -t zfs  rpool/ROOT/debian1T2 /mnt/debian1T2

now chroot inside the system again

mount --rbind /dev /mnt/debian1T2/dev
mount --rbind /proc /mnt/debian1T2/proc
mount --rbind /sys /mnt/debian1T2/sys
mount --rbind /run /mnt/debian1T2/run
chroot /mnt/debian1T2/ /bin/bash --login

Edit /etc/hostname and change debian1T to debian1T2 to identify this new clone.

Edit /etc/fstab and add the following lines for the new datasets (or "partitions" in oldspeak):

rpool/ROOT/debian1T2/home /home zfs  defaults 0 0
rpool/ROOT/debian1T2/var /var zfs  defaults 0 0
rpool/ROOT/debian1T2/tmp /tmp zfs  defaults 0 0

To update /boot/grub/grub.cfg for the new path of this system in the zpool:

update-grub

and

exit

the chroot.

Making the new clone bootable

In /etc/grub.d/ of the initial system (rpool/ROOT/debian1) we need to create a file for booting this new system:

cp 40_custom 52_chain_debian1T2

edit /etc/grub.d/52_chain_debian1T2 and add the following

menuentry "chainload rpool/ROOT/debian1T2"  {
        insmod zfs
        echo    'chain Loading rpool/ROOT/debian1T2'
        configfile  /ROOT/debian1T2@/boot/grub/grub.cfg
}

Or, as a second option, directly tell it to boot a specific kernel and initrd with the specified root,
but then you will have to make a new file or menu entry manually after each kernel upgrade:

menuentry "/ROOT/debian1T2@/boot/vmlinuz-5.7.0-0.bpo.2-amd64"  {
        insmod zfs
        echo    'Loading Linux 5.7.0-0.bpo.2-amd64 ...'
        linux   /ROOT/debian1T2@/boot/vmlinuz-5.7.0-0.bpo.2-amd64 root=ZFS=rpool/ROOT/debian1T2 ro quiet
        echo    'Loading initial ramdisk ...'
        initrd  /ROOT/debian1T2@/boot/initrd.img-5.7.0-0.bpo.2-amd64
}

now update /boot/grub/grub.cfg to incorporate new system

update-grub

And now we have 3 systems; all should boot, and each can be temp-mounted and chrooted into from the others.

zfs list -t all
NAME                                   USED  AVAIL     REFER  MOUNTPOINT
rpool                                 50.0G   237G       24K  none
rpool/ROOT                            9.15G   237G       24K  none
rpool/ROOT/debian1                    1.16G   237G     1005M  /
rpool/ROOT/debian1@initial            89.2M      -      743M  -
rpool/ROOT/debian1@kernel5.7          68.9M      -      982M  -
rpool/ROOT/debian1@cloning             532K      -     1005M  -
rpool/ROOT/debian1T                   1.39G   237G      768M  /
rpool/ROOT/debian1T@initial           89.2M      -      743M  -
rpool/ROOT/debian1T@kernel5.7         68.9M      -      982M  -
rpool/ROOT/debian1T@cloned            66.0M      -     1005M  -
rpool/ROOT/debian1T@cloning2             0B      -      768M  -
rpool/ROOT/debian1T/home                34K   237G       34K  /home
rpool/ROOT/debian1T/home@cloning2        0B      -       34K  -
rpool/ROOT/debian1T/tmp                 24K   237G       24K  /tmp
rpool/ROOT/debian1T/tmp@cloning2         0B      -       24K  -
rpool/ROOT/debian1T/var                237M   237G      237M  /var
rpool/ROOT/debian1T/var@cloning2         0B      -      237M  -
rpool/ROOT/debian1T2                  1.39G   237G      768M  legacy
rpool/ROOT/debian1T2@initial          89.2M      -      743M  -
rpool/ROOT/debian1T2@kernel5.7        68.9M      -      982M  -
rpool/ROOT/debian1T2@cloned           66.0M      -     1005M  -
rpool/ROOT/debian1T2@cloned2           288K      -      769M  -
rpool/ROOT/debian1T2/home             51.5K   237G       33K  legacy
rpool/ROOT/debian1T2/home@cloning2    18.5K      -       34K  -
rpool/ROOT/debian1T2/tmp                38K   237G       24K  legacy
rpool/ROOT/debian1T2/tmp@cloning2       14K      -       24K  -
rpool/ROOT/debian1T2/var               237M   237G      237M  legacy
rpool/ROOT/debian1T2/var@cloning2      207K      -      237M  -
rpool/swap                            10.6G   247G       12K  -

You can clean up the snapshots that are not needed with (zfs destroy rpool/ROOT/debian1T2/tmp@cloning2); this will not delete any live data, only the possibility of reverting that dataset to that snapshot.
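
If in doubt, a dry run first shows how much would be reclaimed (same snapshot name as above):

zfs destroy -nv rpool/ROOT/debian1T2/tmp@cloning2    # -n = dry run, -v = report what would be freed
zfs destroy rpool/ROOT/debian1T2/tmp@cloning2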

#13 Documentation » HOWTO: Devuan ROOT on ZFS and MultiBoot » 2020-09-02 20:33:40

danuan
Replies: 2

It took me a while to get around to trying this, as I was not sure how much of a hack the whole root-on-ZFS install would be. But apart from the odd way of installing the system (due to a license incompatibility), it is on par with doing root on NFS, which is then managed through ZFS snapshots and cloning on the server. No custom scripts, patches, or even extensive modifications to any part of the system will be used in this setup.

Here is my starting point. References for ROOT on ZoL: the first one is what I hoped to achieve.

I wanted my rpool on the whole disk, and not how most howtos split it up into a bpool and an rpool (my understanding is that this split is due to limitations in grub with some zpool features, but if we get it running and do not enable them later, all is well?). This is the most I could find on it at the moment, but it is a good lead: https://unix.stackexchange.com/question … b-can-read

Additional info on BSD and Indiana multibooting

Starting from the Devuan live image was discarded: it is not persistent, and you would have to reinstall and reconfigure ZFS and other things to get your system back up if something goes wrong. I chose a hard drive installation for a rescue system that is ready to go; it can also boot systems in the pool from outside if needed. Other options are a USB stick, an mSATA drive, anything that Devuan can be installed on and booted from.

An interesting option could be an SSD that serves multiple functions:

  • 1 Rescue system
    2 Swap on the SSD for hibernation, which is not supported on a ZFS zvol yet
    3 And a persistent L2ARC on the same SSD when hibernation is used

But that is a bit of a stretch for the data-integrity ideal of ZFS (moving data, here swap, outside of ZFS's control).

To get started

I am doing a net install of beowulf, with:

  • a 300 MB boot partition

  • a 10 GB root

  • 5 GB swap

Keep it small and manageable. We will use this as our maintenance/rescue system and as the starting clone for the first system on ZFS root.

Legacy (BIOS) grub booting only for now.

  • no need for X or desktops, etc...

  • minimal advanced install, only selecting "standard system utilities".

Some nice things to help out, but not essential:

  • To help with copying and pasting the instructions from another machine, setting up ssh access at this point might be a good idea, as it will save time later once we get other clones going; moving in and out of different installs will be much faster. (and for some reason, without --no-install-recommends it tries to pull in everything from icon themes to x11-common, basically half of an X install without the X)

    apt-get install --no-install-recommends openssh-server

    And on a networked machine that will access this installation (set up key-based login for ssh, and a desktop launcher to make it really easy),
    replace user@machineIP to match your install:

    ssh-keygen -t rsa
    ssh-copy-id user@10.10.50.x
  • to help copy and paste inside the console, if needed

    apt-get install gpm
  • if needed

    apt-get install nfs-common
Once the system is up and configured to your liking:
  • Install the headers for your kernel (apt-get in the next step seems to pull in the wrong ones, so do it manually),
    then install zfs 0.7.12 if you plan on staying with a 4.x kernel.

    apt-get install zfs-dkms zfsutils-linux
  • Or keep going and install the 0.8.4 version of zfs: uncomment or add beowulf-backports in /etc/apt/sources.list,
    make sure contrib is also there, and install.

    apt-get install -t beowulf-backports zfs-dkms zfsutils-linux

    Then install the 5.x kernel and headers (if that is done before installing the backports version of zfs, I get install errors from it trying to compile the old version of zfs-dkms against the 5.x kernel).

Going from zfs 0.7 to 0.8 is a big step that is worth it in features and functionality. Lots of things started working in 0.8
that did not before; for example, hot-spare drive functionality started working in 0.8. I tested it by unplugging drives, or by
writing garbage to a drive in the zpool with dd if=/dev/urandom of=/dev/sdX.

But you do not have to upgrade the pool format itself while upgrading from 0.7 to 0.8 unless you need the extra feature flags
or functions inside the pool; without upgrading the pool format you can keep it backwards compatible with older Linux
kernels, BSD, and even Solaris/Indiana (citation needed?).
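
To see where the pool stands before deciding on a format upgrade (read-only checks; output varies a bit between zfs versions):

zpool upgrade                          # lists pools whose on-disk format is not using all supported features
zpool get all rpool | grep feature@    # shows each feature flag as disabled / enabled / active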

  • Now, clean up, add your favorite aliases in .bashrc, etc... Maybe delete the apt .deb caches to make it even smaller.
    Every iteration of clones from this point on will start adding up, and with snapshots on top of that you could
    double or triple the sizes if no cleaning is done.

If anything from this point on is not clear, there are great ZFS-specific resources to consult.

Create a pool

I am not setting altroot, the non-persistent alternate root mountpoint, as we should start using tempmounts for ZFS
from the get-go; that is what I will use later for cloning and managing. It does the same thing, but with a tempmount you
know exactly what you mounted and where.

ls -al /dev/disk/by-id/ 

and

lsblk

to make sure not to grab the system disk by mistake

Pick your disks; choose raidz1, 2, 3, mirror, stripe, mirror/stripe,
or a mirror of raidz3 stripes with -o copies=5 of data for special occasions!

Setting ashift=12 is highly recommended during creation (please investigate for yourself); the create command below omits it, and a variant with ashift is sketched right after it.

zpool create rpool mirror \
ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31415162 \
ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31415738 \
mirror \
ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31432691 \
ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31376665
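
The command above matches what I ran; with the ashift recommendation from earlier folded in, it would look something like this (same disk layout; ashift=12 assumes 4K-sector drives, and the <disk-by-id> names are placeholders):

zpool create -o ashift=12 rpool \
mirror <disk1-by-id> <disk2-by-id> \
mirror <disk3-by-id> <disk4-by-id>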

#zpool status
  pool: rpool
state: ONLINE
  scan: none requested
config:

    NAME                                           STATE     READ WRITE CKSUM
    rpool                                          ONLINE       0     0     0
      mirror-0                                     ONLINE       0     0     0
        ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31415162  ONLINE       0     0     0
        ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31415738  ONLINE       0     0     0
      mirror-1                                     ONLINE       0     0     0
        ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31432691  ONLINE       0     0     0
        ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31376665  ONLINE       0     0     0

errors: No known data errors

Set some basics that we want to propagate to the child datasets (unless set locally). This is mostly optional and depends on the use case (tuning ZFS): most things can be set later, while others will only take effect for new files unless you recopy the files, or more simply send and receive the dataset with the new options (for example, recompressing from gzip to lz4, etc...). Some options can only be set once, during pool or dataset creation, like ashift, or casesensitivity on datasets for SMB/CIFS shares.

You can embed these settings during pool creation, but I keep them separate:

zfs set mountpoint=none rpool

Also, these can be moved down to rpool/ROOT, as I will have other datasets under rpool and probably do not want them inheriting these options:

zfs set atime=off rpool
zfs set relatime=on rpool
zfs set compression=lz4 rpool

info on proper zvol swap use https://openzfs.github.io/openzfs-docs/ … wap-device

zfs create -V 10G -b $(getconf PAGESIZE) \
    -o logbias=throughput -o sync=always \
    -o primarycache=metadata -o compression=off \
     rpool/swap

mkswap -L swap /dev/zvol/rpool/swap
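
To turn the swap on right away and check that it took (the fstab entry for it comes later in this howto):

swapon /dev/zvol/rpool/swap
swapon --show        # or: cat /proc/swaps
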
Start making datasets

This sets up the dataset tree that will make managing this easier (I guess the format comes from Solaris):

zfs create -o mountpoint=none rpool/ROOT
zfs create -o mountpoint=/ rpool/ROOT/debian1

Not very clear on this one yet (but let's use it, till we know better):

zpool set bootfs=rpool/ROOT/debian1 rpool

Now we need to make a mountpoint for our first system. Usually ZFS does not need pre-existing mountpoint directories: it creates them if none exist, and refuses to mount onto mountpoints that contain files unless overridden. But when using a tempmount or legacy mount, it will balk at not having one.

mkdir /mnt/debian1
mount -t zfs -o zfsutil rpool/ROOT/debian1 /mnt/debian1

To make sure rpool/ROOT/debian1 is indeed mounted:

df -h
zfs get all rpool/ROOT/debian1 | grep mount

Cloning the system into the zpool

Now rpool is ready to accept our first system.

apt-get install rsync

Since this is a new system, we are not excluding more things, like /media etc., that would need excluding when cloning a system that is in use. We could drop /srv and /mnt from the exclude list too, since we are staying on one filesystem, but for safety's sake let's not risk looping into the destination (and if you do not have a separate /boot partition, disregard the second command).

rsync -aAHXx / --exclude={"/dev/*","/proc/*","/sys/*","/run/*","/mnt/*","/srv/*"} /mnt/debian1/
rsync -aAHXx /boot/* /mnt/debian1/boot/

Now chroot to get into the system and do a few tasks:

mount --rbind /dev /mnt/debian1/dev
mount --rbind /proc /mnt/debian1/proc
mount --rbind /sys /mnt/debian1/sys
mount --rbind /run /mnt/debian1/run

chroot /mnt/debian1 /bin/bash --login

A config file/db of sorts; if it does not exist yet, create it.
Some say use it, some say go without, as all the info is within
the pool drives anyway (need to clear this up).

mkdir -p /etc/zfs
zpool set cachefile=/etc/zfs/zpool.cache rpool

We will need this step to make sure ZFS can mount / right after grub. In the management system we are not running root on ZFS, so it can mount the pool later during boot by loading a kernel module, but here we need the initramfs to do that.

apt-get install -t beowulf-backports zfs-initramfs

Test whether grub sees that it is zfs:

grub-probe /boot

I do not think this next step is needed, as it creates a duplicate root=ZFS= entry in /boot/grub/grub.cfg:

edit /etc/default/grub
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/debian1"
and uncomment this for more info:
GRUB_TERMINAL=console

  • This will write updates to /boot/grub/grub.cfg, which will be called by the grub chain starting from the MBR.
    At the moment it would still boot the initial system unless we do this step.

    update-grub
  • Using cfdisk, change the 9th partition on each zfs pool drive to "BIOS boot". ZFS made it during pool creation, a small 8 MB partition on my system (need a citation for this), but it seems to be a small reserved partition kept for compatibility. If it were an active part of ZFS, it would have balked at me using it during the first zpool scrub or on imports. Here is some info: https://www.reddit.com/r/zfs/comments/g … ved_space/
    (or use the bios_grub flag in gparted)

If the previous step is incorrect or not done, this will happen on the grub MBR install (replace X with the drive(s) in the pool):

#grub-install /dev/sdX
Installing for i386-pc platform.
grub-install: warning: this GPT partition label contains no BIOS Boot Partition; embedding won't be possible.
grub-install: error: filesystem `zfs' doesn't support blocklists.

if

grub-install /dev/sdX 

goes without error, repeat it for each drive in the pool. This way, if one drive in the pool fails, it can still boot from another, provided they are set sequentially in the BIOS boot drive order.

edit /etc/fstab:
comment out everything about the old filesystems, as ZFS will handle that for now
(but we will return here if we start using legacy mountpoints), and add
/dev/zvol/rpool/swap    none    swap    sw    0    0

edit /etc/initramfs-tools/conf.d/resume:
ZFS does not support hibernation on a zvol yet, and boot will hang if the old resume drive is left in there, so set
RESUME=none
(and I wonder if resume could point to a separate non-zvol swap, but I have not tested that yet)

Update the initramfs to pick up the RESUME=none change.

update-initramfs -c -k all  

and

exit

from the chroot.

At this point, reboot and change the BIOS boot drive to one of the drives from the pool; if everything went to plan, the system should be up and running.

Run df -h to see that root is mounted from the rpool/ROOT/systemXXX that you expect. Do that every time on every new system you clone, as mistakes in fstab or forgetting update-grub will boot you into the clone's source system.

Some ZFS errors about it being unable to mount / are OK, as that is normal when the initrd has already mounted it; I get the same errors on an NFS-root boot.

If booted and everything ok, do

zfs snapshot rpool/ROOT/debian1@initial

#zfs list -t all
NAME                         USED  AVAIL     REFER  MOUNTPOINT
rpool                       12.4G   227G       24K  none
rpool/ROOT                  1.18G   227G       24K  none
rpool/ROOT/debian1          1.18G   227G     1005M  /
rpool/ROOT/debian1@initial   5.5M      -     1000M  -
rpool/swap                  10.6G   277G       12K  -

Do not rush into installing anything here yet, as this system is another maintenance system, only inside the zpool now,
and a new initial clone source for the next steps; plus the chainloading of all other systems will come from this system's grub.

#14 Re: Forum Feedback » Best place to post a howto_? » 2020-08-31 15:18:19

Head_on_a_Stick

Sure, that guide is one of my references, but it is geared toward systemd and not
Devuan. It also splits ZFS up onto partitions on the disks. I am using
http://www.thecrosseroads.net/2016/02/b … on-debian/
as my starting point, while using grub, adding cloning of multiple systems,
and getting into managing the actual system after the install.

And actually installing, adapting and configuring the system from either of the other guides
and 10 other reference howtos took me 2 days of testing and back-and-forth on Devuan,
even though I have been using ZFS for a while, and it might take another day or two to
get the kinks out of the way.

So in other words, I might save a day or more for those starting on this endeavor.
Does that seem fair?

#15 Forum Feedback » Best place to post a howto_? » 2020-08-31 04:23:52

danuan
Replies: 4

New to the board; I wanted to post a howto or two.

Just got ZFS on root going, starting with beowulf and on through to ceres,
on a whole-drive mirror/stripe pool, with 6 or so clones with various configs,
snapshots, clones, shared datasets, etc...
Maybe I can get some input on things that need ironing out or that I missed.

And I just wanted to see if I can edit my posts later, or if I need to prepare
the whole thing as a single shot.

Move this post if necessary; it is just for testing purposes.
