Hello,
pondering installing a new operating system,
so I did a test to see how the 120 GB SATA SSD would fare;
it has the expected read speed of 450-500 MB/s,
however, the write speed is only about 120 MB/s with the ext4 file system;
if I format the disk with NTFS in GParted instead, the write speed is more in line with the drive's capabilities, probably around 400 MB/s.
Now my question: is this a fault of Devuan/Debian that it is so slow, or what could I do about it?
When I use GParted, the options I am getting are exfat, ext2/3/4, fat16/32, ntfs, minix, lvm2 and linux-swap.
It is my understanding that Debian/Devuan uses the ext4 file system by default;
thank you for the help and eventual insight.
EDIT: sorry, I forgot: Devuan itself lives as an encrypted install (the automatic LVM-with-encryption install for beginners) on a 120 GB SATA3 SSD, but the tested SATA SSD is also internal and has no data, freshly formatted inside Devuan.
Last edited by kapqa (2026-02-13 16:32:42)
it has the expected read speed of 450-500 MB/s
however, the write speed is only about 120 MB/s with the ext4 file system;
How are you running these tests? The method may be giving skewed results, and/or there may be a bottleneck somewhere other than the drive itself.
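One way to cross-check a GUI benchmark like KDiskMark is a plain dd run from the terminal. This is only a rough sketch: the `TARGET=/tmp` default and the 256 MB size are arbitrary placeholders, not anything from this thread, and you'd point `TARGET` at a mount on the SSD under test.

```shell
# Rough sequential-write test with dd. conv=fdatasync forces the data to
# disk before dd reports a speed, so the RAM page cache can't inflate the
# number. TARGET=/tmp is only a placeholder; set it to a mount point on
# the SSD you actually want to test.
TARGET="${TARGET:-/tmp}"
dd if=/dev/zero of="$TARGET/bench.img" bs=1M count=256 conv=fdatasync
rm -f "$TARGET/bench.img"
```

One caveat for this thread in particular: writing zeros through dm-crypt still costs full encryption work, but some benchmarks and drives treat zero-fill specially, so a tool like fio with random data is more trustworthy if the numbers look odd.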
With the tool KDiskMark; do you think it would prefer NTFS?
Last edited by kapqa (2026-02-13 17:41:55)
I use a lot of 128 GB SSDs (ext4) in my computers/laptops and I don't find them slow, but I'm just a regular user; usual things: internet, music, videos, spreadsheets, etc.
Last edited by Camtaf (2026-02-13 18:14:03)
With the tool KDiskMark; do you think it would prefer NTFS?
It's very unlikely that the filesystem is the culprit, though if you want to try ntfs as an experiment there's no harm.
What is the model of the drive? We may find the manufacturer's rated read and write speeds and see how far off your results are.
"EDIT" in post #1 is the explanation.
"encrypted" has to be slower (and LVM makes it worse). How much depends on the encryption method andcomputer hardware.
Encrypted zeros are no longer zeros, except the encryption sucks.
Or in short: No, you're doing it wrong.
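As a quick sanity check on the hardware side of this: dm-crypt write throughput is largely CPU-bound, so it's worth seeing whether the CPU advertises hardware AES at all. This sketch assumes an x86 machine, where the flag is literally `aes` in /proc/cpuinfo.

```shell
# dm-crypt throughput is CPU-bound, so check whether the CPU advertises
# hardware AES; without the aes flag, LUKS falls back to much slower
# software encryption. (The flag name is x86-specific.)
if grep -q '\baes\b' /proc/cpuinfo; then
    echo "hardware AES present"
else
    echo "no hardware AES - expect slow dm-crypt writes"
fi
```

`cryptsetup benchmark` (run as root) gives the same answer in MB/s terms: the fastest cipher line there is roughly the ceiling for writes to the encrypted volume.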
1. The best ssd accelerator for Linux:
https://github.com/firelzrd/adios
2.
encrypted install
If it's not AES then it's too slow.
Wait, you guys are getting 120 MB/s write speed? Dayum... almost makes me want to try an SSD.
https://sourceforge.net/projects/vuu-do/ New Vuu-do isos uploaded December 2025!
Vuu-do GNU/Linux, minimal Devuan-based Openbox and Mate systems to build on. Also a max version for OB.
Devuan 5 mate-mini iso, pure Devuan, 100% no-vuu-do.
Devuan 6 version also available for testing.
Please donate to support Devuan and init freedom! https://devuan.org/os/donate
Wait, you guys are getting 120 MB/s write speed?
Eh? I get ~900MiB/s read/write over the network to my NAS (network limited), and that's mostly 10+ year old gear. Local root filesystem is 3.5GiB/s read, 1.6GiB/s write (real workloads as opposed to the silly marketing numbers, and yes, it's ext4), and that's pretty much the cheapest DRAM-less flash (that wasn't complete trash) I could find at the time.
almost makes me want to try an SSD
"An SSD for the OS is the biggest upgrade you can make for interactive workloads" was a true 15 years ago. These days it's almost impossible to find a system that doesn't do that.
If you think "I still boot from a single bargain-basement mechanical drive from 2009" is some kind of brag (outside the vintage scene, and half of that is using flash these days anyway), you do you.
As for the OP, benchmark better. What you are testing is the throughput of encrypted LVM, not ext4. In that context CPU performance, memory bandwidth and choice of encryption algorithm will completely mask any differences in filesystem performance.
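To see exactly which layers sit under a given mount point (and therefore what a benchmark there is actually measuring), one option is lsblk; the output below depends entirely on your own setup, so treat this as a generic example rather than what the OP's machine will show.

```shell
# Show the block-device stack under each mount point; on an encrypted
# LVM install the chain is partition -> crypt -> lvm -> filesystem, and
# a benchmark on / measures all of those layers, not just ext4.
lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT
```

If the test SSD shows up as a plain partition with ext4 directly on it, while / sits under a `crypt` and `lvm` entry, that alone explains why the two give very different numbers.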
The best ssd accelerator for Linux
Benchmark numbers or it didn't happen.
Fiddling with exotic schedulers is very workload dependent, and most modern SSD firmware does well enough for general-desktop use that the best choice is either none or deadline, with anything more complicated just adding overhead for no real benefit.
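For anyone who wants to check what they're actually running before fiddling: the active scheduler per device is visible in sysfs. This is just a read-only sketch; switching is an `echo` into the same file as root, and the device names are whatever your system has.

```shell
# Print the I/O scheduler options for every block device; the active one
# is shown in [brackets]. Switching is a one-line echo into the same
# sysfs file as root, so it's easy to A/B test with real benchmarks.
grep -H . /sys/block/*/queue/scheduler 2>/dev/null || echo "no block devices visible"
```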
Last edited by steve_v (2026-02-14 08:00:53)
Once is happenstance. Twice is coincidence. Three times is enemy action. Four times is Official GNOME Policy.
Benchmark numbers or it didn't happen.
You can do it on your own.
I got the effect.
Yes, the best. There are no competitors.
I like it; you can ignore it. It works. No overhead, just better I/O processing.
You can do it on your own.
I got the effect.
"Just take my word for it, it feels faster (totally not confirmation bias, trust me bro)".
FTFY.
Thanks, but no thanks. In my testing on low-latency NVMe RAID the best I/O scheduler is consistently [none]. I run Gentoo, not Arch, so my ricing is data-driven.
steve_v, I see all you need is to argue for no reason, just because you like to argue.
ADIOS is just the Adaptive Deadline I/O Scheduler. That means it's a scheduler, just another (better) scheduler; not the default Linux scheduler but a new, better one. It is designed to optimize I/O operations in Linux by providing low latency through adaptive latency control and dynamic deadline adjustments based on past performance. It also effectively prioritizes requests and groups them.
You are not forced to use it, OK?
I share here some of my findings that are good, interesting or better than the default. You can just note that it is not for you.
Last edited by Devarch (2026-02-14 14:57:04)
I see all you need is to drop promotions for your "best" thing for no reason, just because you like to promote.
This thread had nothing to do with i/o schedulers, nobody asked which one is the best.
When told out of left field that something is "the best" (without an "in my opinion"), asking for some proof isn't unreasonable.
Last edited by steve_v (2026-02-14 15:12:49)
I see all you need is to drop promotions for your "best" thing for no reason, just because you like to promote.
This thread had nothing to do with i/o schedulers, nobody asked which one is the best.
When told out of left field that something is "the best" (without an "in my opinion"), asking for some proof isn't unreasonable.
You don't want to get the idea:
I share here some of my findings that are good, interesting or better than the default. You can just note that it is not for you.
I'M NOT A SELLER. You can use it or not, read the docs or not, test it or not; I don't care.
If you think "I still boot from a single bargain-basement mechanical drive from 2009" is some kind of brag (outside the vintage scene, and half of that is using flash these days anyway), you do you.
2012 actually, don't hate. How many SSDs last 14 years?
how many SSDs last 14 years?
I don't know how many, but my Transcend 32GB SLC SATA SSD from 2011 is still going strong, no errors, with an average erase count of 29315 right now, according to smartctl.
I paid around 100 Euros for it at the time.
“Either the users control the program – or the program controls the users” Richard Stallman
how many SSDs last 14 years?
IME, most of them. I still have the first SSD I bought (OCZ Agility 3), also from 2011, and it still works perfectly.
I have a couple of earlier models (2010 IIRC) I bought used, and they work perfectly as well. In fact, I've never actually had an SSD "wear out", the vast majority of failures are sudden and just outside the warranty period, much as with spinning-rust.
Last edited by steve_v (Today 04:45:42)
I have a 750 GB Western Digital Scorpio Black 2.5" HDD from 2010, still going strong as a portable drive for backups.
Odd size too; I don't think HDDs come in that size anymore, do they?
Lol, if we're playing the "oldest working drive" game... Conner CP-210 (1984) no issues, no bad sectors. Not much use nowadays being 42MB, but still occasionally boots DOS in one of my vintage boxes.
Most of this is really just survivorship bias of course. To get any real idea of HDD vs SSD reliability you need a much bigger sample size than any of us have.
IMO trying to gauge how many SSDs last 14 years is a bit silly at this point in time anyway, since most SSDs available back then have long been retired... Not because they failed, but because they were miserably small. A HDD from 2012 is probably a size that's still useful today, but an SSD from the same period is eclipsed by commodity SD cards and USB flash drives at a fraction of the cost.
i dont think hdd's come in that size anymore do they?
375 GB(ish) platters were state of the art in 2010, and two platters is about all you can do in a 2.5×3/8-inch laptop drive. These days areal density is a fair bit higher, so the multiples are different.
Last edited by steve_v (Today 06:52:31)
I've got nothing earlier than 2010.
On the drive itself it says "Nov 2010, made in Thailand"; I think I bought it around 2011 too.
I vaguely remember the size of the first HDD I had (not the model) in late 1998, for an IBM desktop; I think it was a whopping 4 GB.
I prefer SSDs nowadays though, much faster, and so far I've not had any corruption yet, "touches wood".
SSDs for general use, HDDs for infrequently accessed bulk storage where capacity>performance...
And hybrid ZFS pools for both at the same time. Nothing quite beats an array of large mechanical drives with a TB or so of high-IOPS SSD as cache and 100 GB or so of RAM dedicated to caching the cache.
thanks, I could test with another SSD, same brand, 120 GB;
it seems ext4 and NTFS don't differ so much in speed after all;
must have been an error devil on the loose.
the only difference made this time with GParted was to create a new partition table from the start, before formatting;
the SSD was mounted under /media/user/5EA--etcet-etcet-etcet-etcet from the get-go for both formats;
if I am not mistaken, the other SSD once showed up as /dev/something, but I will have to test again to make sure there is not something else going on with that SSD.
EDIT:
maybe a bit "solved" too early:
it seems the two SSDs of the same brand, same denomination/size (SA400S3), behave differently;
the earlier tested one still shows very slow performance with ext4 and "normal" speeds with NTFS.
the other SA400S3 shows good write speed with both ext4 AND NTFS -- on the same computer, with the same cables, in the same SATA slot (SATA3-capable).
so I am wondering if that SSD is not somewhat defective, or if some other mystery is involved.
ext4 speed screenshots for the originally tested SSD:
https://ibb.co/23RdpwvR
https://ibb.co/N2BhwwG2
NTFS speed screenshot:
https://ibb.co/6JqDJGb6
Last edited by kapqa (Today 15:18:48)