Hello:
The OEM* HDD in my U24 box went bad and had to be binned.
After recovering the platters, neodymium magnets and SS screws, of course.
*A Seagate 250Gb ST32500NS which I had never had any trust in so it did not hold anything important.
After cleaning up/shuffling data between other HDDs I put a retired IBM-ESXS SAS 78Gb on-line to hold what was left.
Now I had a Hitachi Ultrastar SAS 300Gb 15K drive to use for cloning my system.
The idea being to see how to permanently extirpate XFCE and replace it with some other desktop environment or a WM.
Not sure yet what I'll do, this is just a test bed.
I cloned the system's Kingston 120Gb SSD (~29K hours, no problems) via clonezilla and tested by taking the SSD off-line to boot from the cloned drive. No issues to report.
So far so good.
Now, both HDDs have the same UUIDs.
As a result, they cannot live in the same box unless one of the drives gets new UUIDs.
I think (?) that is the case even if they are not in each other's /etc/fstab file.
Many years ago, I would install different versions (W2000/SPSP3) on different HDDs and choose which one to boot from via the box's BIOS.
Later, in Linux, I would install different distributions on two or three HDDs (one at a time, main one off-line) and GRUB on the main system drive would find them and give me the options to boot. Tricky but workable.
But now I have a clone which I want to boot from by choosing it from the GRUB screen.
Going into the setup and changing the drive priority would be a (cumbersome) option, but then the duplicate UUIDs on all the partitions would surely cause problems.
So to start off, UUIDs have to be changed and that can be done with gparted but I have only done it on HDDs with a single partition.
Any pointers would be appreciated.
Best,
A.
Offline
Boot from the clone drive, become root, then this command:
blkid /dev/sd* >> /etc/fstab

This lists all disks and every partition on each disk, with their respective UUIDs, at the end of /etc/fstab. If you type in the command by hand, make sure you have the double arrow (>>)! Otherwise your fstab file is gone :-\
Next, open /etc/fstab in a graphical text editor like Pluma, so you can easily copy/paste the UUIDs into the correct entries. Once you're certain everything is in the right place, either remove the lines added by the earlier command or, better, comment them out so you have something to fall back on in case something went haywire :-{ Save the fstab file and reboot.
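The warning about the double arrow can be checked safely on a throwaway file; a minimal sketch (nothing here touches the real fstab, the entry text is invented):

```shell
# '>' truncates the target file, '>>' appends to it; only the latter is
# safe when the target is /etc/fstab. Demonstrated on a temp file.
f=$(mktemp)
echo 'UUID=aaaa-bbbb / ext4 defaults 0 1' > "$f"   # pre-existing entry
echo '# blkid output would land here'    >> "$f"   # append keeps it
wc -l "$f"
```

With a single `>` the first line would have been wiped before the new one was written.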
Online
So to start off, UUIDs have to be changed and that can be done with gparted but I have only done it on HDDs with a single partition.
Any pointers would be appreciated.
tune2fs -U e3eb81d0-bc67-492b-b57d-94eb1f892d9e /dev/sdb3

Do it for each and every partition whose UUID you need to change, using a different UUID for each one. For a FAT32 partition used for the EFI:
mkfs.vfat -F32 -n EFI -i 0x11111111 /dev/sd??

Gives UUID="1111-1111" for the FAT32 EFI partition when you format it; the only way I have ever seen to do it is by a fresh formatting.
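Not from the poster, but the answers above can be rolled into a dry-run sketch that only prints the commands it would run; /dev/sdb and the partition numbers are assumptions matching the layout discussed later in the thread, so check them against your own blkid output first:

```shell
#!/bin/sh
# Dry-run sketch: print, rather than execute, the re-UUID commands for a
# clone assumed to sit at /dev/sdb (partition numbers are assumptions).
reuuid_commands() {
    # ext4 filesystems: tune2fs writes a fresh random UUID in place.
    for part in /dev/sdb1 /dev/sdb5 /dev/sdb6; do
        echo "tune2fs -U random $part"
    done
    # swap: no tune2fs here; mkswap re-creates it with a new UUID.
    echo "mkswap /dev/sdb3"
}
reuuid_commands
```

Whatever UUIDs come out must then be copied into the clone's /etc/fstab, as described above.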
Online
Hello:
Thanks for the prompt reply.
Boot from the clone drive, become root, then this command:
blkid /dev/sd* >> /etc/fstab
Right ...
... lists all disks and every partition on that disk with their respective UUID's in /etc/fstab.
Right.
$ sudo blkid /dev/sda* ## source HDD - edited for clarity ##
/dev/sda: PTUUID="0004a8f4" PTTYPE="dos"
/dev/sda1: UUID="d6841f29-e39b-4c87-9c52-3a9c3bafe2d3" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="0004a8f4-01"
/dev/sda2: PTUUID="bfb4d548" PTTYPE="dos" PARTUUID="0004a8f4-02"
/dev/sda3: UUID="f0187ff0-be52-4bbc-9461-40f744554b85" TYPE="swap" PARTUUID="0004a8f4-03"
/dev/sda5: UUID="c22304ec-0b30-428a-a6ac-500785614702" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="0004a8f4-05"
/dev/sda6: UUID="807e1ce7-72b4-48a3-8f34-65947ea9fd70" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="0004a8f4-06"
$

Yes, but ...
Doesn't the clone have the same UUIDs (filesystem/partition) as the source drive?
And as a result that data is already in each drive's identical /etc/fstab file?
From what I understand (?) the cloned drive needs a new set of UUIDs so that it is seen by the system as a different drive.
Not too sure but I expect that it would be like this:
clone /dev/sda <- needs new PTUUID
clone /dev/sda1 <- needs new UUID + PARTUUID
clone /dev/sda2 <- needs new PTUUID + PARTUUID
clone /dev/sda3 <- needs new UUID + PARTUUID
clone /dev/sda5 <- needs new UUID + PARTUUID
clone /dev/sda6 <- needs new UUID + PARTUUID

The UUIDs of the other five drives need to remain exactly the same in both /etc/fstab files.
Thanks for your input.
Best,
A.
Offline
From what I understand (?) the cloned drive needs a new set of UUIDs so that it is seen by the system as a different drive.
Not too sure but I expect that it would be like this:
You understand some of it correctly. The filesystem UUID needs to be changed; the partition UUID (PARTUUID) does not matter, it is never used. The whole-drive UUID (PTUUID) does not need to be changed either, again it is never used. And I notice you have a swap partition with a UUID, so you would need to re-make the swap and change its UUID in the fstab on the drive you are doing all the changes on, or the system will see identical UUIDs and get confused.
Online
Hello:
Thanks for the prompt reply.
Yes, I have seen that you can use tune2fs to change the UUID of a partition to a random/specific one, e.g.:
# tune2fs -U random /dev/sdb1

But what about those PTUUID and PARTUUID numbers?
They don't change in a source HDD / clone HDD scenario?
Thanks for your input.
Best,
A.
Offline
they cannot live in the same box unless one of the drives gets new UUIDs.
If you're referencing them by UUID. If you're using something else (e.g. a filesystem label) it won't matter... until you inevitably forget about this and it bites you in the arse years later.
I think (?) that is the case even if they are not in each other's /etc/fstab file.
Whatever you use to differentiate the disks needs to be unique, otherwise you will get random (or more accurately, timing/initialisation order sensitive) behaviour.
The most entertaining effect is where you load the kernel and initramfs from one drive, but it mounts the root partition on the other. Unless you have set things up differently, grub, initramfs and fstab will all be using filesystem uuids.
I have only done it on HDDs with a single partition.
What does number of partitions per drive have to do with anything? Partitions and filesystems should all have unique ids if you want them to be uniquely identifiable, that is all.
Partition UUIDs can be changed with gdisk (GPT) or fdisk (MBR), filesystem UUIDs (assuming ext[2,3,4]) with tune2fs. The latter is the only one that matters.
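For completeness, a dry-run sketch of the partition-table side (printed, not executed; /dev/sdb is an assumed clone target, and these IDs only matter if something actually references them):

```shell
# Print the commands that would randomize partition-table-level IDs.
pt_id_commands() {
    # GPT: sgdisk -G gives the disk a new GUID and every partition a
    # new random PARTUUID in one go.
    echo "sgdisk -G /dev/sdb"
    # MBR: fdisk expert mode (x), then command 'i', writes a new disk
    # identifier, from which the dos-style PARTUUIDs are derived.
    echo "fdisk /dev/sdb"
}
pt_id_commands
```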
Last edited by steve_v (2026-01-04 23:44:01)
Once is happenstance. Twice is coincidence. Three times is enemy action. Four times is Official GNOME Policy.
Offline
But what about those PTUUID and PARTUUID numbers?
They don't change in a source HDD / clone HDD scenario?
As I said in my previous post, they are never used by you when booting, so the system has nothing to be confused by if they remain the same. Only the UUID is checked by the system when the UUID= is used in the fstab.
Online
Only the UUID is checked by the system when the UUID= is used in the fstab.
Filesystem UUIDs will end up in grub.cfg if you use grub-mkconfig / update-grub and haven't set GRUB_DISABLE_LINUX_UUID=true, and GRUB will pass them to the kernel and initramfs before fstab is read.
Partition and disk uuids are, as you say, irrelevant (unless you specifically choose to use partuuid as a search hint for grub)
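For reference, the switch mentioned above lives in /etc/default/grub; a fragment like this (illustration only, not a recommendation) makes grub-mkconfig emit device paths instead of filesystem UUIDs:

```shell
# /etc/default/grub (fragment)
# true = grub-mkconfig writes root=/dev/sdXN instead of root=UUID=...
GRUB_DISABLE_LINUX_UUID=true
```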
Last edited by steve_v (2026-01-04 23:51:34)
Once is happenstance. Twice is coincidence. Three times is enemy action. Four times is Official GNOME Policy.
Offline
Hello:
... referencing them by UUID.
... disks needs to be unique ...
Thought as much.
If I recall correctly, I've been doing it that way since [ascii].
What does number of partitions per drive have to do ...
Directly related to my ignorance in the matter?
But now I know that only UUIDs matter. 8^)
Thanks for your input.
Best,
A.
Offline
Hello:
Only the UUID is checked by the system when the UUID= is used in the fstab.
Filesystem UUIDs will end up in grub.cfg if you use grub-mkconfig / update-grub ...
Partition and disk uuids are, as you say, irrelevant ...
Good, got that straightened out.
Thank to both for your input.
Best,
A.
Offline
You are welcome. And now I see GRUB comes up again: both the grub.cfg in the EFI partition's debian directory and the changed drive's /boot/grub/grub.cfg need to be switched to the new UUID, or it will boot the old drive's configuration. The EFI one is really easy, only one instance to change in that file; the /boot one has at least a dozen or more. What I use in a script for keeping it all straight when cloning to my various machines and drives is the sed one-liner below.
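A scratch-file rehearsal of the same substitution before pointing the one-liner below at a real grub.cfg; OLD here is the source drive's root filesystem UUID from the blkid listing earlier in the thread, NEW is a made-up stand-in:

```shell
# Rehearse the UUID swap on a temp file, never on the live grub.cfg.
OLD='d6841f29-e39b-4c87-9c52-3a9c3bafe2d3'   # source root fs (from blkid)
NEW='11111111-2222-3333-4444-555555555555'   # invented replacement
cfg=$(mktemp)
cat > "$cfg" <<EOF
search --no-floppy --fs-uuid --set=root $OLD
linux /vmlinuz root=UUID=$OLD ro quiet
EOF
sed -i "s/$OLD/$NEW/g" "$cfg"
grep -c "$NEW" "$cfg"   # prints 2: both lines now carry the new UUID
```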
sed -i "s/OLD_UUID/NEW_CHANGED_UUID/g" /path/to/new/changed/boot/grub/grub.cfg

Online
Hello:
... in the grub.cfg in the EFI partition ...
Well ...
Fortunately for me, my Sun U24 WS is a BIOS boot (only) rig.
As I have no near-future plans to change this hardware, none of that UEFI crap for me.
The deed is done:
I have cloned my 120Gb SATA SSD system drive to a 300Gb SAS HDD and both have their own 'unique' UUIDs.
I can now run tests and experiment on getting rid of XFCE.
update-grub has been executed but it has not picked up the new system present in the cloned HDD.
I still have to check on that, I seem to recall that it stopped being the default action some time ago.
Thanks for your input.
Best,
A.
Last edited by Altoid (2026-01-05 18:29:22)
Offline
update-grub has been executed but it has not picked up the new system present in the cloned HDD.
BIOS? Have not used one of them systems in a good decade now. I think a grub-install --recheck /dev/sd? might be in order. Ah, and os-prober is probably not enabled either; check in /etc/default/grub. That would prevent update-grub from checking for other OSs to add to the list, now I think of it.
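The edit itself can be rehearsed on a scratch copy first; a sketch (the temp file stands in for the real /etc/default/grub):

```shell
# Uncomment GRUB_DISABLE_OS_PROBER=false so update-grub runs os-prober.
# Practised on a temp file; repeat the sed on the real file when happy.
g=$(mktemp)
echo '#GRUB_DISABLE_OS_PROBER=false' > "$g"
sed -i 's/^#\(GRUB_DISABLE_OS_PROBER=false\)/\1/' "$g"
grep PROBER "$g"   # shows the now-active line
```

After editing the real file, run update-grub again so os-prober can pick up the clone.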
root@9600k:~# cat /etc/default/grub | grep -i probe
#GRUB_DISABLE_OS_PROBER=false

Online
Hello:
... not used one of them systems in good decade ...
Have not had to deal with the UEFI crap.
I expect this box to last me at least another five/seven years without having to do much.
Who knows what I will be up to by then.
[OT]
My first SCSI drives ever were pulls from decommissioned IBM servers destined to be cut in pieces with a blowtorch.
A lot of eight 9.1Gb 68pin U160 HDDs purchased for a song from a usual suspect: the bloke in charge of the blowtorch.
On testing, only one was faulty.
Around six years later, when I upgraded box+HDDs to SAS, five (IIRC) were in perfect working order, no defects.
I actually made a profit selling them to a chap who ran a CT scanner service.
Good hardware is good hardware, there's no two ways about it.
[/OT]
... OS-prober is probably not enabled ...
... prevent the update-grub from checking for other OSs ...
Yes, that was it.
Problem solved.
Had forgotten to disable timeshift and backintime on the cloned drive so I'll have to check if and what was done.
Noticed that I had also neglected to change the display background so as not to forget which system I was working on. 8^°
Now to see about the XFCE surgery ...
Thank you for your input.
Best,
A.
Last edited by Altoid (2026-01-06 10:28:39)
Offline
You are welcome, good to read you got it solved.
Online