<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
	<channel>
		<atom:link href="http://dev1galaxy.org/extern.php?action=feed&amp;tid=5689&amp;type=rss" rel="self" type="application/rss+xml" />
		<title><![CDATA[Dev1 Galaxy Forum / zfs on boot causes problems]]></title>
		<link>http://dev1galaxy.org/viewtopic.php?id=5689</link>
		<description><![CDATA[The most recent posts in zfs on boot causes problems.]]></description>
		<lastBuildDate>Sat, 29 Jul 2023 00:12:20 +0000</lastBuildDate>
		<generator>FluxBB</generator>
		<item>
			<title><![CDATA[Re: zfs on boot causes problems]]></title>
			<link>http://dev1galaxy.org/viewtopic.php?pid=43111#p43111</link>
			<description><![CDATA[<p>And I see that you have not placed your system datasets under syspool/ROOT/devuan.</p><p>They have to be there to get auto-mounted with a root-on-zfs setup.</p><p>Move them with zfs rename, like this:</p><p>zfs rename syspool/usr syspool/ROOT/devuan/usr<br />(you may need to create /usr as an empty directory in the syspool/ROOT/devuan mount first)</p><p>Now make sure there is no stray /usr/local</p><p>(it should exist only inside syspool/ROOT/devuan/usr, as a mountpoint):<br />zfs rename syspool/usr/local syspool/ROOT/devuan/usr/local</p><p>Though zfs should create the last component of a mountpoint path if it does not exist at mount time.</p><p>etc...</p><p>With that setup you do not have to set -o mountpoint (mountpoints are inherited if datasets are named after them):<br />zfs inherit mountpoint pool/dataset (to reset an explicit one)</p><p>Or set them to legacy manually and add them to fstab.</p><p>Root on zfs has that behavior of not auto-mounting anything outside of the current root,</p><p>and by default it does not mount over a mountpoint that already has files/dirs in it<br />(that is why you have to make sure the mountpoints are created in the right places when doing submounts).</p>]]></description>
			<author><![CDATA[dummy@example.com (danuan)]]></author>
			<pubDate>Sat, 29 Jul 2023 00:12:20 +0000</pubDate>
			<guid>http://dev1galaxy.org/viewtopic.php?pid=43111#p43111</guid>
		</item>
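The rename steps described in the post above can be sketched as a shell session. This is a sketch only, assuming the pool and dataset names from this thread; note that zfs rename moves child datasets along with their parent, so syspool/usr/local does not need a separate rename:

```shell
# Sketch: move system datasets under the root dataset so a root-on-zfs
# setup auto-mounts them. Assumes syspool/ROOT/devuan is the root dataset.

# Make sure an empty /usr mountpoint exists inside the root dataset first.
mkdir -p /usr

# Renaming a dataset moves its children with it, so syspool/usr/local
# becomes syspool/ROOT/devuan/usr/local in the same step.
zfs rename syspool/usr syspool/ROOT/devuan/usr

# Reset any explicitly set mountpoints so they are derived from the new
# dataset paths (inherited from the parent's mountpoint).
zfs inherit mountpoint syspool/ROOT/devuan/usr
zfs inherit mountpoint syspool/ROOT/devuan/usr/local

# Verify the resulting layout and mountpoints.
zfs list -o name,canmount,mounted,mountpoint -r syspool/ROOT/devuan
```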
		<item>
			<title><![CDATA[Re: zfs on boot causes problems]]></title>
			<link>http://dev1galaxy.org/viewtopic.php?pid=43086#p43086</link>
			<description><![CDATA[<p>Try my howto (without a separate bpool) -<br />a bit outdated, but Devuan-specific:</p><p><a href="https://dev1galaxy.org/viewtopic.php?id=3794" rel="nofollow">https://dev1galaxy.org/viewtopic.php?id=3794</a></p><p>There is no need to use backports now that the zfs version in chimaera is OK.</p>]]></description>
			<author><![CDATA[dummy@example.com (danuan)]]></author>
			<pubDate>Fri, 28 Jul 2023 08:35:58 +0000</pubDate>
			<guid>http://dev1galaxy.org/viewtopic.php?pid=43086#p43086</guid>
		</item>
		<item>
			<title><![CDATA[zfs on boot causes problems]]></title>
			<link>http://dev1galaxy.org/viewtopic.php?pid=42091#p42091</link>
			<description><![CDATA[<p>Hello!</p><p>I am using KVM to try to install zfs-on-boot with chimaera.<br />I have done this for years with debian, using bios-boot<br />and separate partitions for the bpool and rpool (bootpool<br />and syspool - my names).<br />I followed the instructions for debian(!) bullseye, knowing<br />that the missing systemd part may leave &quot;a hole&quot; that I<br />cannot account for with sysvinit (neither OpenRC nor runit installed).<br />At first glance everything looks OK, but a red message<br />at shutdown/reboot indicates an (un)mount problem.<br />Network shares (cifs) do NOT mount at boot as usual<br />(I have hardware with chimaera to compare), which is a strong<br />requirement for me.<br />Additionally, if your last command was &#039;shutdown -r now&#039;<br />and you log in afterwards and press the up-arrow key, exactly<br />this command should come back - but it does not. Something with the<br />root account&#039;s storage not being flushed/handled properly before shutdown.<br />The message at shutdown is nowhere to be found (syslog/daemon).<br />I (re-)installed bootlogd (I do not remember whether it was installed<br />by default) in the hope of having fewer logs to search (systemd<br />is unbeatable in this regard).<br />bootlogd shows mount errors at boot time:</p><div class="codebox"><pre><code>mount: /home/shared: mount point does not exist.
mount: /ops/tools: mount point does not exist.
mount: /ops/install/global: mount point does not exist.
mount: /home/shared: mount point does not exist.
mount: /ops/tools: mount point does not exist.
mount: /ops/install/global: mount point does not exist.</code></pre></div><p>This means something looked for the mountpoints before<br />the syspool had been imported?<br />The log line above it says:</p><div class="codebox"><pre><code>Configuring network interfaces...if-up.d/mountnfs[eth1]: waiting for interface eth0 before doing NFS mounts ... (warning).</code></pre></div><p>Additionally, the file /etc/network/if-up.d/mountnfs is not executed<br />(I added logger statements).</p><p>My syspool is as follows:</p><div class="codebox"><pre><code>zfs list -o name,canmount,mounted,overlay,mountpoint
NAME                               CANMOUNT  MOUNTED  OVERLAY  MOUNTPOINT
bootpool                           off       no       on       /boot
bootpool/BOOT                      off       no       on       none
bootpool/BOOT/devuan               on        yes      on       /boot
syspool                            off       no       on       /
syspool/ROOT                       off       no       on       none
syspool/ROOT/devuan                noauto    yes      on       /
syspool/home                       on        yes      on       /home
syspool/home/root                  on        yes      on       /root
syspool/home/shared                on        yes      on       /home/shared
syspool/ops                        on        yes      on       /ops
syspool/ops/install                on        yes      on       /ops/install
syspool/ops/install/global         on        yes      on       /ops/install/global
syspool/ops/install/local          on        yes      on       /ops/install/local
syspool/ops/install/local/zfsdone  on        yes      on       /ops/install/local/zfsdone
syspool/ops/tools                  on        yes      on       /ops/tools
syspool/usr                        off       no       on       /usr
syspool/usr/local                  on        yes      on       /usr/local
syspool/var                        off       no       on       /var
syspool/var/lib                    off       no       on       /var/lib
syspool/var/log                    on        yes      on       /var/log
syspool/var/mail                   on        yes      on       /var/mail
syspool/var/spool                  on        yes      on       /var/spool</code></pre></div><p>As in the ZOL docs for bullseye-on-root, some filesystems<br />(syspool/var, syspool/var/lib, syspool/usr) have been created like this:</p><div class="codebox"><pre><code>zfs create -o canmount=off  syspool/var/lib</code></pre></div><p>(as can be seen in the CANMOUNT column).</p><p>At the initramfs prompt the pool can be listed, but nothing has been mounted so far.<br />This could mean some sysvinit services come into the game, and that is where<br />I fail. I remember the times of debian-squeeze, where the zfs mountpoints<br />were set to &quot;legacy&quot; and fstab contained mounts for them (at the moment I am<br />not able to find my old notes); <ins>currently there are no zfs mounts in fstab</ins>.<br />What I see is a wrong mount order - zfs, being considered local, must be mounted<br />first!<br />Astounding: &#039;service networking --full-restart&#039; invokes the net mounts<br />(due to my inserted logging statements I can see it) and it succeeds.<br />BTW, I am looking for a way to increase the logging level!</p><p>NB: There is another problem with the network (not part of my question) which<br />looks like a KVM problem: the interfaces come up in a different order, so the<br />static network configuration does not work. Most of the time another boot fixes it.<br />I found many discussions about this (the VM gets an SR-IOV VF as NIC), without a<br />final conclusion (bug or feature).<br /><ins><br />All the things noted above were observed when the network was up properly.<br /></ins><br />Some help would be great, and I can send full logs.</p><p>Thanks,<br />Manfred</p>]]></description>
			<author><![CDATA[dummy@example.com (webman)]]></author>
			<pubDate>Mon, 01 May 2023 14:29:34 +0000</pubDate>
			<guid>http://dev1galaxy.org/viewtopic.php?pid=42091#p42091</guid>
		</item>
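The legacy-mountpoint approach the poster remembers from debian-squeeze can be sketched like this. A sketch only: dataset names are taken from the zfs list output in the post above, and the fstab lines are illustrative, not from the original thread:

```shell
# Sketch: opt individual datasets out of zfs automounting and hand them
# to /etc/fstab, so sysvinit orders them together with other local mounts.
zfs set mountpoint=legacy syspool/home/shared
zfs set mountpoint=legacy syspool/ops/tools

# Corresponding illustrative /etc/fstab entries (mounted via mount -t zfs):
#   syspool/home/shared  /home/shared  zfs  defaults  0  0
#   syspool/ops/tools    /ops/tools    zfs  defaults  0  0

# The mountpoint directories must exist before boot-time mounting runs.
mkdir -p /home/shared /ops/tools
```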
	</channel>
</rss>
