<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
	<channel>
		<atom:link href="http://dev1galaxy.org/extern.php?action=feed&amp;tid=3794&amp;type=rss" rel="self" type="application/rss+xml" />
		<title><![CDATA[Dev1 Galaxy Forum / HOWTO: Devuan ROOT on ZFS and MultiBoot]]></title>
		<link>http://dev1galaxy.org/viewtopic.php?id=3794</link>
		<description><![CDATA[The most recent posts in HOWTO: Devuan ROOT on ZFS and MultiBoot.]]></description>
		<lastBuildDate>Wed, 02 Sep 2020 20:35:48 +0000</lastBuildDate>
		<generator>FluxBB</generator>
		<item>
			<title><![CDATA[Re: HOWTO: Devuan ROOT on ZFS and MultiBoot]]></title>
			<link>http://dev1galaxy.org/viewtopic.php?pid=24412#p24412</link>
			<description><![CDATA[<p>placeholder for part 3</p>]]></description>
			<author><![CDATA[dummy@example.com (danuan)]]></author>
			<pubDate>Wed, 02 Sep 2020 20:35:48 +0000</pubDate>
			<guid>http://dev1galaxy.org/viewtopic.php?pid=24412#p24412</guid>
		</item>
		<item>
			<title><![CDATA[Re: HOWTO: Devuan ROOT on ZFS and MultiBoot]]></title>
			<link>http://dev1galaxy.org/viewtopic.php?pid=24411#p24411</link>
			<description><![CDATA[<h5>Making clones and multibooting</h5><p>(I have re-run all these steps to confirm that they work without errors.)</p><h5>Let&#039;s create a standard clone</h5><p>(Not to be confused with a zfs clone of a snapshot, which can be used as a means of branching a snapshot into a read/write dataset, but which stays linked as a dependent of its parent until steps are taken otherwise.)</p><p>This step will be done from the debian1 system on the zpool.</p><p>This takes 2-5 minutes at most (once you have the procedure figured out or automated), and a new clone is ready to boot. It could be under a minute even with hdd drives, but that is with the whole procedure automated and scripted.</p><p>(-r also snapshots children of debian1, if there are any)</p><div class="codebox"><pre><code>zfs snapshot -r rpool/ROOT/debian1@cloning</code></pre></div><p>(-R sends the snapshots in debian1 and its children recursively from the selected snapshot)<br />(zfs receive can use -F to collapse all snapshots if needed)</p><div class="codebox"><pre><code>zfs send -Rv rpool/ROOT/debian1@cloning | zfs receive rpool/ROOT/debian1T</code></pre></div><p>But I like to keep the snapshots, and delete or rename them later if needed.</p><h5>Pick a naming scheme to keep track of things</h5><p>Here I am adding &quot;T&quot; to debian1, for test. And I will use</p><ul><li><p>debian1&#039;s for beowulf</p></li><li><p>debian2&#039;s for chimaera</p></li><li><p>debian3&#039;s for ceres.</p></li></ul><p>So a horizontal cloning move adds a letter and a version number, while vertical cloning changes the first number after the name. And if you also keep things in alphanumeric order, (zfs list) or (zfs list -t all) will look like a tree that links descendant clones to parents.</p><p><em>(Just realised it should be devuan.) Not too late: all datasets can be renamed and grub and the fstabs updated, but it has to be done with foresight so as not to get locked out. That did happen to me once, when I renamed the initial rpool/ROOT/debian1 from its clone without running update-grub inside the chroot; had to break out our management system and fix it from there.</em></p><p>Now rename the snapshot on the cloned system to signify that it was cloned (it sort of becomes the initial snapshot for this clone). You can still roll it back to the original initial, since we kept the preceding snapshots.</p><div class="codebox"><pre><code>zfs rename rpool/ROOT/debian1T@cloning  rpool/ROOT/debian1T@cloned</code></pre></div>
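<p>Since these three commands are the same every time apart from the names, the whole procedure is easy to script. A minimal sketch, assuming the dataset names used above (the script name and variables are only illustrative):</p><div class="codebox"><pre><code>#!/bin/sh
# clone-root.sh SRC DST (hypothetical name)
# e.g. clone-root.sh rpool/ROOT/debian1 rpool/ROOT/debian1T
set -e
SRC=$1
DST=$2
# recursive snapshot of the source root dataset and its children
zfs snapshot -r &quot;$SRC@cloning&quot;
# replicate the snapshot (and any children) to the new dataset
zfs send -R &quot;$SRC@cloning&quot; | zfs receive &quot;$DST&quot;
# mark the copy on the new clone as its starting point
zfs rename &quot;$DST@cloning&quot; &quot;$DST@cloned&quot;</code></pre></div>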
<h5>(zfs managed mountpoints)</h5><p>For the first clone I will use zfs mountpoints, as they get automounted based on whichever ROOT dataset is currently in use. During this, unlike normal zfs behavior, it will not mount every dataset in the pool that has a mountpoint and canmount=on; in fact it will not mount anything else automatically now, only the current ROOT&#039;s children. Even (zfs mount -a) does not mount anything else, even if there are no conflicts. However, if you were to exit back into our original non-zpool system, it would mount every non-conflicting mountpoint it could.</p><p><strong>So while using zfs managed mountpoints in / on ZFS</strong></p><ul><li><p>everything has to be a child, and gets automounted</p></li><li><p>or mounted through a script with (zfs mount rpool/datasetX) (this will mount to whichever mountpoint is set for that dataset)</p></li><li><p>or tempmounted with (mount -t zfs -o zfsutil rpool/datasetXXX /mnt/datasetXXX) (the mountpoint needs to exist),</p></li><li><p>or as a legacy mount through fstab</p></li><li><p>or manually, if legacy, with (mount -t zfs rpool/datasetX /mnt/datasetX); notice -o zfsutil is not there for legacy (the mountpoint needs to exist).</p></li></ul><p><em>And I am not sure of the best route yet: some people use zfs managed mountpoints, some go with legacy, some mix and match. The only downside I have seen is reports of systems (SystemD, possibly non-SystemD too) rushing things on boot: without extra modifications, with zfs managed mounts, the system can write to folders on boot before zfs can mount the datasets, and once a directory with files exists, by default zfs will not mount over it.</em></p><p><em>The other odd thing is that many datasets will have the same mountpoints, which would be very odd outside of root-on-zfs behavior. (Inside ROOT on zfs it seems to function as intended; anyone with experience in solaris/illumos/indiana/(bsd?) please let me know if this is ok.)</em></p><p>Mount the new clone to a tempmount</p><div class="codebox"><pre><code>mkdir /mnt/debian1T
mount -t zfs -o zfsutil rpool/ROOT/debian1T /mnt/debian1T</code></pre></div><p><strong>Now chroot inside the system</strong></p><div class="codebox"><pre><code>mount --rbind /dev /mnt/debian1T/dev
mount --rbind /proc /mnt/debian1T/proc
mount --rbind /sys /mnt/debian1T/sys
mount --rbind /run /mnt/debian1T/run
chroot /mnt/debian1T/ /bin/bash --login</code></pre></div><p>Make some child datasets. First we need to rename the old folders that these will replace; otherwise zfs will not automount the new datasets at this point (as noted above, zfs will not mount over a directory with files in it).</p><div class="codebox"><pre><code>cd /
mv home home.old
mv var var.old
mv tmp tmp.old</code></pre></div><p>Create the replacement datasets (they should automount as children of the current root, at their relative paths).</p><div class="codebox"><pre><code>zfs create rpool/ROOT/debian1T/home
zfs create rpool/ROOT/debian1T/var
zfs create rpool/ROOT/debian1T/tmp</code></pre></div><p>Move the contents over to the new replacement datasets.</p><div class="codebox"><pre><code>mv home.old/* home/
mv var.old/* var/
mv tmp.old/* tmp/</code></pre></div><p>Check that attributes/permissions match from the *.old folders to the new versions (datasets); compare with ls -la, and note that mv with * does not move hidden dot-files at the top level, so move those separately if any exist.</p><p><span class="bbc">edit /etc/hostname and change debian1 to identify this new clone as debian1T</span></p><p>Run update-grub to update /boot/grub/grub.cfg for the new path of this system in the zpool (this makes it bootable once the initial grub from ROOT/debian1 chainloads /boot/grub/grub.cfg from this ROOT/debian1T):</p><div class="codebox"><pre><code>update-grub</code></pre></div><p>and</p><div class="codebox"><pre><code>exit</code></pre></div><p>the chroot.</p><h5>Making the new clone bootable</h5><p>In /etc/grub.d/ of the initial system (rpool/ROOT/debian1) we need to create a file for booting this new system.</p><p><span class="bbc">clone and edit a new grub startup file</span><br />I started using</p><ul><li><p>50s for debian1 systems, which are beowulf</p></li><li><p>60s for debian2, chimaera</p></li><li><p>70s for debian3, ceres</p></li></ul><p>to identify that it is a chainload of grub to debian1T:</p><div class="codebox"><pre><code>cp 40_custom 51_chain_debian1T</code></pre></div><p>and add</p><div class="codebox"><pre><code>menuentry &quot;chainload rpool/ROOT/debian1T&quot; {
        insmod zfs
        echo    &#039;chain Loading rpool/ROOT/debian1T&#039;
        configfile  /ROOT/debian1T@/boot/grub/grub.cfg
}</code></pre></div><p>Or, as a second option, directly tell it to boot a specific kernel and initrd with the specified root. But then you will have to make a new file or menu entry after each kernel upgrade, manually (keep in mind to check for the right initrd and kernel names):</p><div class="codebox"><pre><code>menuentry &quot;/ROOT/debian1T@/boot/vmlinuz-5.7.0-0.bpo.2-amd64&quot; {
        insmod zfs
        echo    &#039;Loading Linux 5.7.0-0.bpo.2-amd64 ...&#039;
        linux   /ROOT/debian1T@/boot/vmlinuz-5.7.0-0.bpo.2-amd64 root=ZFS=rpool/ROOT/debian1T ro quiet
        echo    &#039;Loading initial ramdisk ...&#039;
        initrd  /ROOT/debian1T@/boot/initrd.img-5.7.0-0.bpo.2-amd64
}</code></pre></div><p>Now update /boot/grub/grub.cfg:</p><div class="codebox"><pre><code>update-grub</code></pre></div><p>Now all other systems will be booted from here, by chainloading the grub.cfg files from the other datasets.</p><div class="quotebox"><blockquote><div><p>Have not yet figured out an ideal solution in which grub will hunt down all of the installations under rpool/ROOT/ and add them like it does for non-zfs drives.</p></div></blockquote></div>
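<p>In the meantime, something along these lines in /etc/grub.d/ of the initial system could generate a chainload entry for every root dataset under rpool/ROOT/ automatically. An untested sketch only, assuming every system keeps its own /boot/grub/grub.cfg as in this howto (the filename 55_zfs_chain is made up):</p><div class="codebox"><pre><code>#!/bin/sh
# hypothetical /etc/grub.d/55_zfs_chain -- emit a chainload entry for
# every dataset one level under rpool/ROOT (deeper children such as
# rpool/ROOT/debian1T/home are excluded by -d 1)
set -e
for ds in $(zfs list -H -o name -d 1 rpool/ROOT | tail -n +2); do
    # strip the pool name: rpool/ROOT/debian1T -&gt; /ROOT/debian1T
    path=&quot;/${ds#rpool/}&quot;
    cat &lt;&lt;EOF
menuentry &quot;chainload $ds&quot; {
        insmod zfs
        echo &#039;chain loading $ds&#039;
        configfile $path@/boot/grub/grub.cfg
}
EOF
done</code></pre></div>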
<p>Here is what it can look like after a few updates, snapshots, etc...</p><div class="codebox"><pre><code>NAME                                 USED  AVAIL     REFER  MOUNTPOINT
rpool                               50.0G   237G       24K  none
rpool/ROOT                          9.15G   237G       24K  none
rpool/ROOT/debian1                  1.16G   237G     1005M  /
rpool/ROOT/debian1@initial          89.2M      -      743M  -
rpool/ROOT/debian1@kernel5.7        68.9M      -      982M  -
rpool/ROOT/debian1@cloning           532K      -     1005M  -
rpool/ROOT/debian1T                 1.39G   237G      768M  /
rpool/ROOT/debian1T@initial         89.2M      -      743M  -
rpool/ROOT/debian1T@kernel5.7       68.9M      -      982M  -
rpool/ROOT/debian1T@cloned          66.0M      -     1005M  -
rpool/ROOT/debian1T/home              34K   237G       34K  /home
rpool/ROOT/debian1T/tmp               24K   237G       24K  /tmp
rpool/ROOT/debian1T/var              237M   237G      237M  /var
rpool/swap                          10.6G   247G       12K  -</code></pre></div><h5>Cloning again and changing to legacy mountpoints</h5><p>This step will be done again from the debian1 system on the zpool, but it could also be done from the system being cloned. I am naming this snapshot cloning2, as cloning was used earlier and you might have done some updates since then that you want to propagate; if not, skip this and use the original cloning snapshot.</p><div class="codebox"><pre><code>zfs snapshot -r rpool/ROOT/debian1T@cloning2</code></pre></div><p>For the next iteration of debian1T, add another number to keep things consistent:</p><div class="codebox"><pre><code>zfs send -Rv rpool/ROOT/debian1T@cloning2 | zfs receive rpool/ROOT/debian1T2</code></pre></div><div class="codebox"><pre><code>zfs rename rpool/ROOT/debian1T2@cloning2  rpool/ROOT/debian1T2@cloned2</code></pre></div><h5>(Changing zfs managed mountpoints to legacy)</h5><p>For the first clone we used zfs mountpoints; now we switch to legacy, to gain some control back from zfs&#039;s behavior of not having editable config files. An example: I jumped on the &quot;zfs does everything&quot; bus when I started using it, but later realized it might not be ideal from the system-management angle. With NFS shares, SMB shares and such, I started out using the zfs wrapper commands, but once I realized there was no config file to edit in the right place, and the only way was to issue shell commands, I pulled it all back out to /etc/exports and /etc/samba/smb.conf. Same situation here: it might be nice for some things to be managed by a single system (various registryDs come to mind),
but not when we want to be in control.</p><p>So now, using legacy managed mountpoints, things revert to the old system patterns. Root is mounted from grub as before (and does not need a (zfs set mountpoint=whateverX) now, as no children will depend on it for their relative mountpoints), and everything else gets mounted from fstab.</p><p>Change the dataset mountpoints to legacy.</p><div class="codebox"><pre><code>zfs set mountpoint=legacy rpool/ROOT/debian1T2
zfs set mountpoint=legacy rpool/ROOT/debian1T2/home
zfs set mountpoint=legacy rpool/ROOT/debian1T2/var
zfs set mountpoint=legacy rpool/ROOT/debian1T2/tmp</code></pre></div><p>Now we have to remove -o zfsutil from the tempmount command, as the mountpoint is no longer zfs managed but legacy:</p><div class="codebox"><pre><code>mkdir /mnt/debian1T2
mount -t zfs rpool/ROOT/debian1T2 /mnt/debian1T2</code></pre></div><p>Now chroot inside the system again</p><div class="codebox"><pre><code>mount --rbind /dev /mnt/debian1T2/dev
mount --rbind /proc /mnt/debian1T2/proc
mount --rbind /sys /mnt/debian1T2/sys
mount --rbind /run /mnt/debian1T2/run
chroot /mnt/debian1T2/ /bin/bash --login</code></pre></div><p><span class="bbc">edit /etc/hostname and change debian1T to identify this new clone as debian1T2</span></p><p><span class="bbc">edit /etc/fstab and add the following lines for the new datasets (or partitions, in &quot;oldspeak&quot;)</span></p><div class="codebox"><pre><code>rpool/ROOT/debian1T2/home /home zfs  defaults 0 0
rpool/ROOT/debian1T2/var  /var  zfs  defaults 0 0
rpool/ROOT/debian1T2/tmp  /tmp  zfs  defaults 0 0</code></pre></div><p>Run update-grub to update /boot/grub/grub.cfg for the new path of this system in the zpool:</p><div class="codebox"><pre><code>update-grub</code></pre></div><p>and</p><div class="codebox"><pre><code>exit</code></pre></div><p>the chroot.</p><h5>Making the new clone bootable</h5><p>In /etc/grub.d/ of the initial system (rpool/ROOT/debian1) we need to create a file for booting this new system:</p><div class="codebox"><pre><code>cp 40_custom 52_chain_debian1T2</code></pre></div><p><span class="bbc">edit /etc/grub.d/52_chain_debian1T2 and add the following</span></p><div class="codebox"><pre><code>menuentry &quot;chainload rpool/ROOT/debian1T2&quot; {
        insmod zfs
        echo    &#039;chain Loading rpool/ROOT/debian1T2&#039;
        configfile  /ROOT/debian1T2@/boot/grub/grub.cfg
}</code></pre></div><p>Or, as a second option, directly tell it to boot a specific kernel and initrd with the specified root; but then you will have to make a new file or menu entry after each kernel upgrade, manually:</p><div class="codebox"><pre><code>menuentry &quot;/ROOT/debian1T2@/boot/vmlinuz-5.7.0-0.bpo.2-amd64&quot; {
        insmod zfs
        echo    &#039;Loading Linux 5.7.0-0.bpo.2-amd64 ...&#039;
        linux   /ROOT/debian1T2@/boot/vmlinuz-5.7.0-0.bpo.2-amd64 root=ZFS=rpool/ROOT/debian1T2 ro quiet
        echo    &#039;Loading initial ramdisk ...&#039;
        initrd  /ROOT/debian1T2@/boot/initrd.img-5.7.0-0.bpo.2-amd64
}</code></pre></div><p>Now update /boot/grub/grub.cfg to incorporate the new system:</p><div class="codebox"><pre><code>update-grub</code></pre></div><p>And now we have 3 systems; all should boot, and each can be temp-mounted and chrooted into from the others.</p>
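<p>Since this tempmount-and-chroot dance is the same for every clone, it is easy to wrap in a small helper. A rough sketch, assuming the layout from this howto (the script name and arguments are only illustrative; pass zfsutil as the second argument for zfs managed datasets, nothing for legacy ones):</p><div class="codebox"><pre><code>#!/bin/sh
# enter-clone.sh DATASET [zfsutil] (hypothetical name)
# e.g. enter-clone.sh rpool/ROOT/debian1T2
set -e
DS=$1
MNT=/mnt/$(basename &quot;$DS&quot;)
mkdir -p &quot;$MNT&quot;
if [ &quot;$2&quot; = &quot;zfsutil&quot; ]; then
    mount -t zfs -o zfsutil &quot;$DS&quot; &quot;$MNT&quot;   # zfs managed mountpoint
else
    mount -t zfs &quot;$DS&quot; &quot;$MNT&quot;              # legacy mountpoint
fi
# bind the virtual filesystems so the chroot behaves like a booted system
for d in dev proc sys run; do
    mount --rbind &quot;/$d&quot; &quot;$MNT/$d&quot;
done
chroot &quot;$MNT&quot; /bin/bash --login</code></pre></div>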
</p><div class="quotebox"><blockquote><div><p>zfs list -t all<br />NAME&#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160;USED&#160; AVAIL&#160; &#160; &#160;REFER&#160; MOUNTPOINT<br />rpool&#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160;50.0G&#160; &#160;237G&#160; &#160; &#160; &#160;24K&#160; none<br />rpool/ROOT&#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; 9.15G&#160; &#160;237G&#160; &#160; &#160; &#160;24K&#160; none<br />rpool/ROOT/debian1&#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; 1.16G&#160; &#160;237G&#160; &#160; &#160;1005M&#160; /<br />rpool/ROOT/debian1@initial&#160; &#160; &#160; &#160; &#160; &#160; 89.2M&#160; &#160; &#160; -&#160; &#160; &#160; 743M&#160; -<br />rpool/ROOT/debian1@kernel5.7&#160; &#160; &#160; &#160; &#160; 68.9M&#160; &#160; &#160; -&#160; &#160; &#160; 982M&#160; -<br />rpool/ROOT/debian1@cloning&#160; &#160; &#160; &#160; &#160; &#160; &#160;532K&#160; &#160; &#160; -&#160; &#160; &#160;1005M&#160; -<br />rpool/ROOT/debian1T&#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160;1.39G&#160; &#160;237G&#160; &#160; &#160; 768M&#160; /<br />rpool/ROOT/debian1T@initial&#160; &#160; &#160; &#160; &#160; &#160;89.2M&#160; &#160; &#160; -&#160; &#160; &#160; 743M&#160; -<br />rpool/ROOT/debian1T@kernel5.7&#160; &#160; &#160; &#160; &#160;68.9M&#160; &#160; &#160; -&#160; &#160; &#160; 982M&#160; -<br />rpool/ROOT/debian1T@cloned&#160; &#160; &#160; &#160; &#160; &#160; 66.0M&#160; &#160; &#160; -&#160; &#160; &#160;1005M&#160; -<br />rpool/ROOT/debian1T@cloning2&#160; &#160; &#160; &#160; &#160; &#160; &#160;0B&#160; &#160; &#160; -&#160; &#160; &#160; 768M&#160; -<br />rpool/ROOT/debian1T/home&#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; 34K&#160; &#160;237G&#160; &#160; &#160; &#160;34K&#160; /home<br />rpool/ROOT/debian1T/home@cloning2&#160; &#160; &#160; &#160; 0B&#160; &#160; &#160; -&#160; &#160; &#160; &#160;34K&#160; -<br />rpool/ROOT/debian1T/tmp&#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160;24K&#160; &#160;237G&#160; &#160; &#160; &#160;24K&#160; /tmp<br />rpool/ROOT/debian1T/tmp@cloning2&#160; &#160; &#160; &#160; &#160;0B&#160; &#160; &#160; -&#160; &#160; &#160; &#160;24K&#160; -<br />rpool/ROOT/debian1T/var&#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; 237M&#160; &#160;237G&#160; &#160; &#160; 237M&#160; /var<br />rpool/ROOT/debian1T/var@cloning2&#160; &#160; &#160; &#160; &#160;0B&#160; &#160; &#160; -&#160; &#160; &#160; 237M&#160; -<br />rpool/ROOT/debian1T2&#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; &#160; 1.39G&#160; &#160;237G&#160; &#160; &#160; 768M&#160; legacy<br />rpool/ROOT/debian1T2@initial&#160; &#160; &#160; &#160; &#160; 89.2M&#160; &#160; &#160; -&#160; &#160; &#160; 743M&#160; -<br />rpool/ROOT/debian1T2@kernel5.7&#160; &#160; &#160; &#160; 68.9M&#160; &#160; &#160; -&#160; &#160; &#160; 982M&#160; -<br />rpool/ROOT/debian1T2@cloned&#160; &#160; &#160; &#160; &#160; &#160;66.0M&#160; &#160; &#160; -&#160; &#160; &#160;1005M&#160; -<br />rpool/ROOT/debian1T2@cloned2&#160; &#160; &#160; &#160; &#160; &#160;288K&#160; &#160; &#160; -&#160; &#160; &#160; 769M&#160; -<br />rpool/ROOT/debian1T2/home&#160; &#160; &#160; &#160; &#160; &#160; &#160;51.5K&#160; &#160;237G&#160; &#160; &#160; &#160;33K&#160; legacy<br />rpool/ROOT/debian1T2/home@cloning2&#160; 
]]></description>
			<author><![CDATA[dummy@example.com (danuan)]]></author>
			<pubDate>Wed, 02 Sep 2020 20:34:44 +0000</pubDate>
			<guid>http://dev1galaxy.org/viewtopic.php?pid=24411#p24411</guid>
		</item>
		<item>
			<title><![CDATA[HOWTO: Devuan ROOT on ZFS and MultiBoot]]></title>
			<link>http://dev1galaxy.org/viewtopic.php?pid=24410#p24410</link>
			<description><![CDATA[<p>Took me a while to get around to trying this, as I was not sure how much of a hack the whole root-on-zfs install would be. But apart from the odd install part of the system (due to some license incompatibility), it is on par with doing root on NFS that is then managed through zfs snapshots and cloning on the server. <em>No custom scripts, patches, or even extensive modifications to any part of the system will be used in this setup.</em></p><p><strong>Here is my starting point. References for ROOT on ZOL; the first one is what I hoped to achieve.</strong></p><ul><li><p><a href="http://www.thecrosseroads.net/2016/02/booting-a-zfs-root-via-uefi-on-debian/" rel="nofollow">http://www.thecrosseroads.net/2016/02/b … on-debian/</a><br /><a href="https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Buster%20Root%20on%20ZFS.html" rel="nofollow">https://openzfs.github.io/openzfs-docs/ … 20ZFS.html</a><br /><a href="https://www.funtoo.org/ZFS_as_Root_Filesystem" rel="nofollow">https://www.funtoo.org/ZFS_as_Root_Filesystem</a><br /><a href="https://www.combustible.me/blog/linux-mint-zfs-root-full-disk-encryption-hibernation-encrypted-swap.html" rel="nofollow">https://www.combustible.me/blog/linux-m … -swap.html</a><br /><a href="https://wiki.archlinux.org/index.php/Install_Arch_Linux_on_ZFS#Install_and_configure_the_bootloader" rel="nofollow">https://wiki.archlinux.org/index.php/In … bootloader</a></p></li></ul><p>I wanted my rpool on the whole disk, not split into bpool and rpool as most howtos do (my understanding is that the split is due to limitations in grub&#039;s support for some zpool features; but if we get it running and do not enable those features later, all is well?). This is the most I could find on the subject at the moment, but a good lead: <a href="https://unix.stackexchange.com/questions/447960/how-to-create-a-zfs-zpool-that-grub-can-read" rel="nofollow">https://unix.stackexchange.com/question … b-can-read</a></p><p><strong>Additional info on BSD and Indiana multibooting</strong></p><ul><li><p><a href="https://blog.karpador.xyz/articles/one-pool-to-rule-them-all/" rel="nofollow">https://blog.karpador.xyz/articles/one- … -them-all/</a><br /><a href="https://ericmccorkleblog.wordpress.com/2016/11/15/cohabiting-freebsd-and-gentoo-linux-on-a-common-zfs-volume/" rel="nofollow">https://ericmccorkleblog.wordpress.com/ … fs-volume/</a></p></li></ul><p>Starting with the Devuan live image was discarded: it is not persistent, so you would have to reinstall and reconfigure zfs and other things to get your system back up if something goes wrong. I chose a hard drive installation for a rescue system that is ready to go, and that can also boot the systems in the pool from outside if needed. Other options could be a usb stick, an msata drive, anything that Devuan can be installed on and booted from.</p><p>An interesting option could be an ssd drive serving multiple functions:</p><ul><li><p>a rescue system</p></li><li><p>swap on ssd for hibernation, which is not supported on a zfs zvol yet</p></li><li><p>and a persistent l2arc on the same ssd, when hibernation is used.</p></li></ul><p>But that is a bit of a stretch for the data-integrity ideal of zfs (moving data, here swap, outside of zfs control).</p><h5>To get started</h5><p>I am doing a net install of beowulf:</p><ul><li><p>300 meg boot partition</p></li><li><p>10 gig root</p></li><li><p>5 gig swap</p></li></ul><p>Keep it small and manageable.
We will use this as our maintenance/rescue system, and as the starting clone for the first system on the zfs root.</p><p><strong>Legacy (BIOS) grub booting only for now</strong></p><ul><li><p>no need for X or desktops etc...</p></li><li><p>minimal advanced install, only selecting &quot;standard system utilities&quot;.</p></li></ul><p><strong>Some nice things to help out, but not essential</strong></p><ul><li><p>To help with copy-pasting the instructions from another machine, a networked system might be a good idea at this point, as it will save time later once we get other clones going; moving in and out of the different installs will be much faster. (And for some reason, without --no-install-recommends it tries to pull in everything from icon themes to x11-common, basically half the install of X without the X.)</p><div class="codebox"><pre><code>apt-get install --no-install-recommends openssh-server  </code></pre></div><p>And on the networked machine that will access this installation, configure key-based autologin for ssh (and a desktop launcher, to make it really easy); replace the user and machine IP to match your install.</p><div class="codebox"><pre><code>ssh-keygen -t rsa
ssh-copy-id user@10.10.50.x</code></pre></div></li><li><p>to help copy and paste inside the console, if needed</p><div class="codebox"><pre><code>apt-get install gpm</code></pre></div></li><li><p>if needed</p><div class="codebox"><pre><code>apt-get install nfs-common</code></pre></div></li></ul><h5>Once the system is up and configured to your liking</h5><ul><li><p>Install the headers for your kernel (apt-get in the next step seems to pull in the wrong ones, so do it manually), then install zfs 0.7.12 if you plan on staying with a 4.X kernel.</p><div class="codebox"><pre><code>apt-get install zfs-dkms zfsutils-linux</code></pre></div></li></ul><ul><li><p>Or keep going and install the 0.8.4 version of zfs: uncomment or add beowulf-backports in /etc/apt/sources.list, make sure contrib is also there, and install.</p><div class="codebox"><pre><code>apt-get install -t beowulf-backports zfs-dkms zfsutils-linux</code></pre></div><p>Then install the 5.X kernel and headers. (If that is done before installing the backports version of zfs, I get install errors from it trying to compile the old version of zfs-dkms against the 5.x kernel.)</p></li></ul><p>Going from zfs .7 to .8 is a big step that is worth it in features and functionality. Lots of things started working in .8 that did not before; for example, hot-spare drive functionality started working in .8. I tested it by unplugging drives, or by dd if=/dev/urandom of=/dev/sdX garbage to a drive in the zpool.</p><p>But you do not have to upgrade the pool version itself while upgrading from .7 to .8, unless you need the extra feature flags or functions inside the pool. Without upgrading the pool version you can keep it backwards compatible with older linux kernels or BSD, and even Solaris/Indiana (citations?).</p><ul><li><p>Now clean up, add your favorite aliases in .bashrc, etc. Maybe delete the apt deb caches to make it even smaller. Every iteration of clones from this point will start adding up; then throw in snapshots on top of that, and you could double or triple the sizes if no cleanups are done.</p></li></ul><p><strong>If anything from this point on is not clear, here are great ZFS-specific resources</strong></p><ul><li><p><a href="https://pthree.org/2012/04/17/install-zfs-on-debian-gnulinux/" rel="nofollow">https://pthree.org/2012/04/17/install-z … -gnulinux/</a><br /><a href="https://wiki.archlinux.org/index.php/ZFS" rel="nofollow">https://wiki.archlinux.org/index.php/ZFS</a><br /><a href="https://wiki.debian.org/ZFS" rel="nofollow">https://wiki.debian.org/ZFS</a><br /><a href="https://docs.oracle.com/cd/E36784_01/html/E36835/docinfo.html" rel="nofollow">https://docs.oracle.com/cd/E36784_01/ht … cinfo.html</a><br /><a href="https://www.freebsd.org/doc/handbook/zfs.html" rel="nofollow">https://www.freebsd.org/doc/handbook/zfs.html</a></p></li></ul><h5>Create a pool</h5><p>I am not setting altroot, the non-persistent / mountpoint, as we should start using tempmounts for zfs from the get-go; that is what I will use later for cloning and managing. It does the same thing, but with a tempmount you know exactly what you mounted and where.</p><div class="codebox"><pre><code>ls -al /dev/disk/by-id/ </code></pre></div><p>and</p><div class="codebox"><pre><code>lsblk</code></pre></div><p>to make sure not to grab the system disk by mistake.</p><p>Pick your disks; choose raidz1, 2, 3, mirror, stripe, mirror/stripe, or a mirror of raidz3 stripes with -o copies=5 of data, for special occasions!
</p><p>Setting ashift=12 is highly recommended during creation (please investigate for yourself):</p><div class="codebox"><pre><code>zpool create -o ashift=12 rpool mirror \
ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31415162 \
ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31415738 \
mirror \
ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31432691 \
ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31376665</code></pre></div><div class="codebox"><pre><code># zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

    NAME                                           STATE     READ WRITE CKSUM
    rpool                                          ONLINE       0     0     0
      mirror-0                                     ONLINE       0     0     0
        ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31415162  ONLINE       0     0     0
        ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31415738  ONLINE       0     0     0
      mirror-1                                     ONLINE       0     0     0
        ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31432691  ONLINE       0     0     0
        ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31376665  ONLINE       0     0     0

errors: No known data errors</code></pre></div><p>Set some basics that we want to propagate to the child datasets (unless overridden locally). This is mostly optional and depends on the use case (tuning zfs). Most things can be set later, though some only take effect for new files, unless you recopy the files or, simpler, send and receive the dataset with the new options (to recompress from gzip to lz4, for example). Some options can only be set once, during pool creation, like ashift, or casesensitivity on datasets for smb/cifs shares.</p><p>These could be embedded during pool creation, but I keep them separate.</p><div class="codebox"><pre><code>zfs set mountpoint=none rpool</code></pre></div><p>Also, these could be moved down to rpool/ROOT, as I will have other datasets under rpool and probably do not want them inheriting these options:</p><div class="codebox"><pre><code>zfs set atime=off rpool
zfs set relatime=on rpool
zfs set compression=lz4 rpool</code></pre></div>
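<p>To double-check what the children will inherit, zfs can report where each property value comes from. A quick look at just the properties set above:</p><div class="codebox"><pre><code># the source column shows &quot;local&quot; for the properties we just set
zfs get -o name,property,value,source atime,relatime,compression,mountpoint rpool</code></pre></div>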
<p>Info on proper zvol swap use: <a href="https://openzfs.github.io/openzfs-docs/Project%20and%20Community/FAQ.html#using-a-zvol-for-a-swap-device" rel="nofollow">https://openzfs.github.io/openzfs-docs/ … wap-device</a></p><div class="codebox"><pre><code>zfs create -V 10G -b $(getconf PAGESIZE) \
    -o logbias=throughput -o sync=always \
    -o primarycache=metadata -o compression=off \
    rpool/swap

mkswap -L swap /dev/zvol/rpool/swap</code></pre></div>
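<p>The new swap can be tried right away; it gets its fstab entry later in this howto, so this is only a quick check:</p><div class="codebox"><pre><code>swapon /dev/zvol/rpool/swap
# confirm it is active, then release it again
swapon --show
swapoff /dev/zvol/rpool/swap</code></pre></div>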
<h5>Start making datasets</h5><p>This sets up the dataset tree that will make managing all this easier (I guess the format comes from solaris).</p><div class="codebox"><pre><code>zfs create -o mountpoint=none rpool/ROOT
zfs create -o mountpoint=/ rpool/ROOT/debian1</code></pre></div><p>Not very clear on this one yet (but let&#039;s use it till we know better); bootfs records which dataset the pool considers its default bootable root:</p><div class="codebox"><pre><code>zpool set bootfs=rpool/ROOT/debian1 rpool</code></pre></div>
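<p>If in doubt, you can at least confirm that the property was recorded:</p><div class="codebox"><pre><code>zpool get bootfs rpool</code></pre></div>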
<p>Now we need to make a mountpoint for our first system. Usually zfs does not need mountpoints; it creates them if none exist, and refuses to mount over mountpoints with files in them unless overridden. But when using a tempmount or legacy mount, it will balk at not having one.</p><div class="codebox"><pre><code>mkdir /mnt/debian1
mount -t zfs -o zfsutil rpool/ROOT/debian1 /mnt/debian1</code></pre></div><p>To make sure rpool/ROOT/debian1 is indeed mounted:</p><div class="codebox"><pre><code>df -h 
zfs get all rpool/ROOT/debian1 | grep mount  </code></pre></div><h5>Cloning the system into the zpool</h5><p>Now rpool is ready to accept our first system.</p><div class="codebox"><pre><code>apt-get install rsync</code></pre></div><p>Since this is a new system, there is not much more to exclude that would need to be on the other systems for cloning, like /media etc. We could take out /srv too, and /mnt; we are staying on one filesystem, but for safety&#039;s sake let&#039;s not loop it. (And if you do not have a separate /boot partition, disregard the second command.)</p><div class="codebox"><pre><code>rsync -aAHXx / --exclude={&quot;/dev/*&quot;,&quot;/proc/*&quot;,&quot;/sys/*&quot;,&quot;/run/*&quot;,&quot;/mnt/*&quot;,&quot;/srv/*&quot;} /mnt/debian1/
rsync -aAHXx /boot/* /mnt/debian1/boot/</code></pre></div><p>Now chroot to get into the system and do a few tasks:</p><div class="codebox"><pre><code>mount --rbind /dev /mnt/debian1/dev
mount --rbind /proc /mnt/debian1/proc
mount --rbind /sys /mnt/debian1/sys
mount --rbind /run /mnt/debian1/run

chroot /mnt/debian1 /bin/bash --login</code></pre></div><p>The zpool cachefile is a config file/db of sorts; if it does not exist yet, make it. Some say use it, some say go without, as all the info is within the pool drives anyway (need to clear this up).</p><div class="codebox"><pre><code>mkdir -p /etc/zfs
zpool set cachefile=/etc/zfs/zpool.cache rpool</code></pre></div><p>We will need this next step to make sure zfs can mount / right after grub. On the management system we are not running root on zfs, so it can mount the pool later during boot by loading a kernel module; but here we will need the initramfs to do it.</p><div class="codebox"><pre><code>apt-get install -t beowulf-backports zfs-initramfs</code></pre></div><p>Test whether grub sees that it is zfs:</p><div class="codebox"><pre><code>grub-probe /boot</code></pre></div><div class="quotebox"><blockquote><div><p>I do not think this next step is needed, as it creates a double entry for root=ZFS=rpool in /boot/grub/grub.cfg.</p><p><span class="bbc">edit /etc/default/grub</span><br />GRUB_CMDLINE_LINUX=&quot;root=ZFS=rpool/ROOT/debian1&quot;<br />and uncomment, for more info,<br />GRUB_TERMINAL=console</p></div></blockquote></div><ul><li><p>This will write updates to /boot/grub/grub.cfg, which will be called by the grub chain starting from the mbr. At the moment it would still boot the initial system unless we do this step.</p><div class="codebox"><pre><code>update-grub</code></pre></div></li></ul><ul><li><p><strong>Using cfdisk, change the 9th partition on each zfs pool drive to BIOS boot</strong> (or set the bios_grub flag in gparted). zfs made this small partition during pool creation, 8 meg on my system (need citation for this), but it seems to be there for boot compatibility. And if it were in an active part of zfs, it would have balked at me using it during the first zpool scrub or on imports. Here is some info: <a href="https://www.reddit.com/r/zfs/comments/g1rtca/automatic_vs_manual_creation_of_reserved_space/" rel="nofollow">https://www.reddit.com/r/zfs/comments/g … ved_space/</a></p></li></ul><p>If the previous step is incorrect or not done, this will happen on the grub mbr install (replace X with each drive in the pool):</p><div class="quotebox"><blockquote><div><p>#grub-install /dev/sdX<br />Installing for i386-pc platform.<br />grub-install: warning: this GPT partition label contains no BIOS Boot Partition; embedding won&#039;t be possible.<br />grub-install: error: filesystem `zfs&#039; doesn&#039;t support blocklists.</p></div></blockquote></div><p>If</p><div class="codebox"><pre><code>grub-install /dev/sdX </code></pre></div><p>goes without error, repeat it for each drive in the pool. This way, if one drive in the pool fails, it can always boot from another, provided they are set sequentially in the bios boot drive order.</p>
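<p>A small shell loop makes that less tedious; a sketch, assuming the same by-id disk names used for the pool above:</p><div class="codebox"><pre><code># install the mbr boot code on every drive in the pool
for d in ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31415162 \
         ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31415738 \
         ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31432691 \
         ata-WDC_WD1600AAJS-75M0A0_WD-WMAV31376665; do
    grub-install /dev/disk/by-id/$d
done</code></pre></div>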
<p><span class="bbc">edit /etc/fstab</span><br />Comment out everything about the old filesystems, as zfs will handle that for now (we will return here if we start using legacy mountpoints), and add<br /><span class="bbc">/dev/zvol/rpool/swap    none    swap    sw    0    0</span></p><p><span class="bbc">edit /etc/initramfs-tools/conf.d/resume</span><br />as zfs does not support hibernation on a zvol yet, and will hang if you leave the old resume drive in there:<br /><span class="bbc">RESUME=none</span><br />(And I wonder if resume could be kept separate from the zvol swap, but I have not tested that yet.)</p><p>Update the initramfs to pick up the resume change:</p><div class="codebox"><pre><code>update-initramfs -c -k all  </code></pre></div><p>and</p><div class="codebox"><pre><code>exit</code></pre></div><p>from the chroot.</p><p>At this point, reboot and change the bios boot drive to one of the drives from the pool; if everything went to plan, the system should be up and running.</p><p>Run df -h to see that root is mounted from the rpool/ROOT/systemXXX that you expect. Do that every time on each new system you clone, as mistakes in fstab or a forgotten update-grub will boot you into the clone&#039;s source system.</p><p>Some zfs errors about being unable to mount / are ok; that is normal when the initrd has already mounted it. I get the same errors on an NFS root boot.</p><p>If it booted and everything is ok, do</p><div class="codebox"><pre><code>zfs snapshot rpool/ROOT/debian1@initial</code></pre></div><div class="codebox"><pre><code># zfs list -t all
NAME                          USED  AVAIL     REFER  MOUNTPOINT
rpool                        12.4G   227G       24K  none
rpool/ROOT                   1.18G   227G       24K  none
rpool/ROOT/debian1           1.18G   227G     1005M  /
rpool/ROOT/debian1@initial    5.5M      -     1000M  -
rpool/swap                   10.6G   277G       12K  -</code></pre></div><p>Do not rush into installing anything here yet, as this system is another maintenance system, only inside the zpool now, and a new initial clone source for the next steps; plus the chainloading of all the other systems will come from this system&#039;s grub.</p>]]></description>
			<author><![CDATA[dummy@example.com (danuan)]]></author>
			<pubDate>Wed, 02 Sep 2020 20:33:40 +0000</pubDate>
			<guid>http://dev1galaxy.org/viewtopic.php?pid=24410#p24410</guid>
		</item>
	</channel>
</rss>
