When you use one of those 'whole partition' transfer utilities to copy a system from an old HD to an SSD, does the OS notice the change in hardware? I seem to recall there are differences in how an OS should behave on one vs the other, but I don't know if that is something fixed at install time, or if it should be manually changed, or if it's automatically recognized.
Offline
I don't have any experience with "partition transfer utilities", but I have done these at one time or another:
- Whole disk cloning. Something like "dd if=/dev/sdx of=/dev/sdy".
- Copy the OS files from one partition to another. I think it was something like "cp -ax /path/to/old_drive_partition/. /path/to/new_drive_partition" (using /. rather than /* so hidden files come along too). This required either changing the new partition's UUID to match the old one, or modifying /etc/fstab to use the new partition's UUID. There's a rough sketch of this below the list.
- Copy the OS files from a single disk to a RAID array. A RAID array is multiple storage drives working together to behave like a single, bigger storage drive. This required modifying a few config files so that the RAID array was assembled early in the boot process, before the system tried to mount it.
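To be concrete about the second method, here's roughly what I mean. The device names (/dev/sda2 for the old root partition, /dev/sdb2 for the new one) and mount points are just placeholders, so adjust them for your own setup:

    # mount the new partition and copy everything, preserving permissions
    # and staying on one filesystem (-a -x)
    mount /dev/sdb2 /mnt/new
    cp -ax /path/to/old_drive_partition/. /mnt/new/

    # option 1: point the copied fstab at the new partition's UUID
    blkid -s UUID -o value /dev/sdb2       # print the new UUID
    nano /mnt/new/etc/fstab                # edit the root entry to use it

    # option 2: give the new partition the old one's UUID instead
    # (ext2/3/4 only, filesystem must be unmounted; only safe if the old
    # drive is removed afterwards, otherwise two filesystems share a UUID)
    umount /mnt/new
    tune2fs -U "$(blkid -s UUID -o value /dev/sda2)" /dev/sdb2

Either way, you'll probably also need to reinstall or update the boot loader so it points at the new drive.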
In all cases, Linux didn't care about the change in the storage drive. I think all Linux cares about is finding the file system's UUID, and successfully mounting it based on the file system type (example: ext4).
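That UUID-plus-filesystem-type lookup is basically what a root entry in /etc/fstab encodes (the UUID here is made up):

    # <filesystem>                              <mount point>  <type>  <options>  <dump>  <pass>
    UUID=1b2f7c10-3b4d-4a3c-9e55-9c1f2a7d8e60   /              ext4    defaults   0       1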
One reason I can think of why the OS might behave differently is if an SSD loaded files so quickly during boot that it exposed a race condition, where some apps start before their dependencies have fully initialized. A slower HDD would hide these issues because the apps load more slowly, giving the dependencies more time to initialize properly.
The only time I can remember seeing a problem with cloning hard drives is on my Win7 work laptop. That's because one of the software licensing apps used the storage drive name or whatever to generate its license key. When I replaced the laptop HDD with a cloned SSD, the license app said my license was no longer valid. I just requested and installed a new license, and that was all.
Offline
Well, my first thought was logging, which is supposed to be handled differently on Windows when the system drive is an SSD rather than an HD: there is supposed to be much less of it, to reduce wear on the SSD. Logging everything has been so automatic in *nix for so long that I wondered whether there was any concession to hardware wear. And then there is the automatic disk optimization (defragmentation) that Windows schedules, which Linux doesn't do at all. There may be other operational differences too, so I was just wondering.
Offline
I don't know anything about the logging system. But your post reminded me of the early 2010s, when SSDs were getting cheap enough for not-so-rich computer geeks to buy. I remember there were Linux tweaks for reducing unnecessary disk writes, like specifying the "relatime" or "noatime" mount options to reduce or stop updating a file's "access time" every time it's read. Back then, those tweaks had to be done manually.
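As far as I remember, it was just a matter of adding the option to the relevant entries in /etc/fstab, something like this (the UUID is made up):

    # before
    UUID=0f3a6c2e-1d4b-4c8a-8e21-7b5d9f0a2c34  /  ext4  defaults          0  1
    # after: stop recording access times on every read
    UUID=0f3a6c2e-1d4b-4c8a-8e21-7b5d9f0a2c34  /  ext4  defaults,noatime  0  1

    # apply to the running system without rebooting
    mount -o remount,noatime /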
If I had to speculate about today, I would suspect you'd still have to make those tweaks manually. My reasoning is that whatever logging system an app or Linux is using has no idea what type of storage device the log files are being written to. It could be an HDD, an SSD, a RAID array that's a MIX of HDD+SSD (weird, but possible), a RAM disk, or even a network shared folder on another PC. Since there's no way to know (or at least, no EASY way to know), I conclude that it's up to the sys admin to make those disk optimization tweaks manually.
Of course, a real sys admin can tell us for sure.
Offline
Don't worry about wear on a modern SSD. They have such a large TBW (total bytes written) rating that normal use on a workstation won't kill the drive or even noticeably limit its lifetime. More than a decade ago, when SSDs were new, I used to put /var and /tmp onto an HDD. I stopped doing that a long time ago.
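If you're curious how much you've actually written, smartmontools can report it; the attribute names vary by drive, so these are just the typical ones:

    # SATA SSD: look for an attribute like Total_LBAs_Written
    smartctl -A /dev/sda

    # NVMe SSD: look for "Data Units Written" in the health log
    smartctl -A /dev/nvme0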
Offline