
Should I use an SSD cache?

maverickronin

Major Contributor
Forum Donor
Joined
Jul 19, 2018
Messages
2,527
Likes
3,311
Location
Midwest, USA
RAID is an availability technology. So if you are prepared for data to be unavailable whilst hardware is replaced and the data restored, then you don't need RAID (or the associated expense or loss of capacity).

RAID is mostly just plain fault tolerance. You can use that for availability, but it's also great for making sure that your backup device doesn't crap out while you're restoring from it.
 

Chr1

Addicted to Fun and Learning
Forum Donor
Joined
Jul 21, 2018
Messages
844
Likes
643
Slightly off topic but...

I am curious what advantages a NAS has over a low-power PC with USB-connected drives, RAID/backups aside?
I still use a couple of NASs in JBOD configuration for storing my movies, but I use my PC as a music server, as it actually uses less power...
 

chelgrian

Senior Member
Joined
Jul 27, 2020
Messages
340
Likes
367
Why is that?
Flash prices are at an all-time low currently because Samsung got their demand forecasts massively wrong and overproduced. They are now having to shift that overproduction through the channel, which necessarily means discounted prices.

These things have a habit of overcorrecting, so once the inventory is cleared, flash prices will end up higher than they were before for a time, until another manufacturer decides to compete on price or gets their forecasts wrong.
 

Berwhale

Major Contributor
Forum Donor
Joined
Aug 29, 2019
Messages
3,962
Likes
4,964
Location
UK
RAID is mostly just plain fault tolerance. You can use that for availability, but it's also great for making sure that your backup device doesn't crap out while you're restoring from it.

Fault tolerance and availability are synonymous. I think the chance of me losing both a primary and corresponding backup disk in two different NAS devices in two different buildings is pretty low. Given that I don't have infinite funds to spend on hardware, I have to balance cost with capacity, redundancy, availability and operational overhead. I am happy with my solution in this respect.

With respect to RAID - some years ago, I experienced a URE (Unrecoverable Read Error) at home whilst rebuilding a RAID5 array after a drive failure - I lost the whole array. There was a roughly 3% chance of this happening given the RAID level, the number & size of drives and the read error rate of the drives. I knew there was a low probability of this happening, and there was nothing on there that I couldn't afford to lose or get back another way, but it still gave me pause, and I stopped using RAID at home on the next iteration of my home server a year or two later.
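
For illustration, those odds can be estimated from the drive spec sheet. A minimal sketch of the arithmetic, assuming illustrative figures (a quoted URE rate per bit, and a rebuild that reads every surviving drive end to end):

```python
# Estimate the probability of hitting at least one unrecoverable read
# error (URE) while rebuilding a RAID5 array. Illustrative numbers only;
# substitute the figures from your own drives' data sheets.
def rebuild_failure_probability(surviving_drives, drive_tb, ure_rate=1e-14):
    """P(at least one URE) when every surviving drive is read end to end.

    surviving_drives: drives that must be read in full during the rebuild
    drive_tb: capacity of each drive in terabytes
    ure_rate: unrecoverable-read-error rate per bit (spec sheets often
              quote 1 in 1e14 for consumer drives, 1 in 1e15 for enterprise)
    """
    bits_read = surviving_drives * drive_tb * 1e12 * 8  # TB -> bits
    return 1 - (1 - ure_rate) ** bits_read

# e.g. rebuilding a 4-drive RAID5 of 2 TB disks (3 survivors read fully):
print(f"{rebuild_failure_probability(3, 2.0):.1%}")          # ~38% at 1e-14
print(f"{rebuild_failure_probability(3, 2.0, 1e-15):.1%}")   # ~4.7% at 1e-15
```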
 

Berwhale

Major Contributor
Forum Donor
Joined
Aug 29, 2019
Messages
3,962
Likes
4,964
Location
UK
Slightly off topic but...

I am curious what advantages a NAS has over a low-power PC with USB-connected drives, RAID/backups aside?
I still use a couple of NASs in JBOD configuration for storing my movies, but I use my PC as a music server, as it actually uses less power...

For me, the Synology DSM software is feature rich, easy to use and excellent at what it does. The hardware is also well put together, reliable and is supported for a long time. Basically, it just sits there and does what it needs to do and I don't have to worry about it. I keep the config as simple as possible and don't run many apps (just Plex really) to keep it this way.

For the other applications that I want running all the time, I have a Dell OptiPlex 7060 micro form factor PC with a 35W 8th-gen i3 CPU running VMware ESXi. Most of my stuff runs on an Ubuntu Server VM.

Prior to this, I ran a selection of homebrew and commercial servers at home (HP ProLiant and MicroServer boxes, a Dell T20, etc.) running Windows Server/Home Server and various iterations of ESXi. These days, I prefer the smaller size, lower noise and lower electricity bills of my current setup.
 
OP

Digby

Major Contributor
Joined
Mar 12, 2021
Messages
1,632
Likes
1,561
Since my drives are often spun down (especially the ones in the secondary), I'm not so concerned with the number or capacity of drives. It would be wasteful to replace the 4 & 6TB drives before a reasonable amount of use (5 or 6 years).
Ah yes, good point, I see.

A related question - Windows is pretty crap at spinning drives down and keeping them spun down; it always seems to be spinning them up for no apparent reason. Is Linux better at this, in that it will only spin up the drives you actually need to access?
 

maverickronin

Major Contributor
Forum Donor
Joined
Jul 19, 2018
Messages
2,527
Likes
3,311
Location
Midwest, USA
Fault tolerance and availability are synonymous.
Availability is a subset of the reasons to employ a fault-tolerant system. Availability requires fault tolerance, but not everything that benefits from fault tolerance requires high availability.

I think the chance of me losing both a primary and corresponding backup disk in two different NAS devices in two different buildings is pretty low. Given that I don't have infinite funds to spend on hardware, I have to balance cost with capacity, redundancy, availability and operational overhead. I am happy with my solution in this respect.

Besides budget, it depends on your personal level of paranoia and data-hoarding tendencies.

With respect to RAID - some years ago, I experienced a URE (Unrecoverable Read Error) at home whilst rebuilding a RAID5 array after a drive failure - I lost the whole array. There was a roughly 3% chance of this happening given the RAID level, the number & size of drives and the read error rate of the drives. I knew there was a low probability of this happening, and there was nothing on there that I couldn't afford to lose or get back another way, but it still gave me pause, and I stopped using RAID at home on the next iteration of my home server a year or two later.

Yeah. I've seen too many overly large RAID5s fail too.
 

Berwhale

Major Contributor
Forum Donor
Joined
Aug 29, 2019
Messages
3,962
Likes
4,964
Location
UK
Availability is a subset of the reasons to employ a fault-tolerant system. Availability requires fault tolerance, but not everything that benefits from fault tolerance requires high availability.

An FT system is designed to provide zero interruption to service. An HA system is designed to minimize (but not completely remove) service interruptions. Both approaches employ redundant hardware, but FT usually costs more, both financially and from a performance perspective, as there is usually some sort of penalty for keeping redundant nodes fully active (or fully synchronized with the primary).
 

Berwhale

Major Contributor
Forum Donor
Joined
Aug 29, 2019
Messages
3,962
Likes
4,964
Location
UK
Ah yes, good point, I see.

A related question - Windows is pretty crap at spinning drives down and keeping them spun down; it always seems to be spinning them up for no apparent reason. Is Linux better at this, in that it will only spin up the drives you actually need to access?

It would be an overgeneralisation to say one is better than the other. The OS is accessing the disk because something is asking it to. The question is: what is asking to access the disk, and why? If the disk contains parts of the OS or system components such as swap files, then it's going to get spun up whether you like it or not.

Windows has typically included more background services and applications 'out of the box' (whether these are wanted or unwanted depends on your point of view), so many people's perception is that Windows is worse in this respect, and they are probably correct. However, Microsoft have been paring back the basic installation for some years, especially in the server versions of the OS (the desktop versions are being integrated with Microsoft cloud services and have gone the opposite way in some respects), so the situation has improved, at least for servers.

Ultimately, Linux is much more configurable than Windows, so it should be possible to build a 'cleaner' system with it. It would be interesting to compare a minimal installation of a Linux server distro with Windows Nano Server in this respect.
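
On the practical side, Linux at least lets you inspect and set the spin-down policy per drive with the standard hdparm tool. A quick sketch via Python's subprocess, assuming /dev/sdb is the data disk you want to manage (run as root):

```python
# Inspect and set a drive's spin-down behaviour on Linux using hdparm.
# Assumes hdparm is installed; /dev/sdb is a hypothetical device node.
import subprocess

DISK = "/dev/sdb"  # adjust for your own system

# -C: report the current power state (active/idle, standby or sleeping)
state = subprocess.run(["hdparm", "-C", DISK],
                       capture_output=True, text=True)
print(state.stdout)

# -S: set the standby (spin-down) timeout. Values 1-240 are multiples
# of 5 seconds; 241-251 mean 1-11 units of 30 minutes, so 242 = 1 hour.
subprocess.run(["hdparm", "-S", "242", DISK], check=True)

# -y: force the drive into standby immediately
subprocess.run(["hdparm", "-y", DISK], check=True)
```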
 

kysa

Member
Joined
Jan 22, 2023
Messages
77
Likes
58
It would be an overgeneralisation to say one is better than the other. The OS is accessing the disk because something is asking it to. The question is: what is asking to access the disk, and why? If the disk contains parts of the OS or system components such as swap files, then it's going to get spun up whether you like it or not.

Windows has typically included more background services and applications 'out of the box' (whether these are wanted or unwanted depends on your point of view), so many people's perception is that Windows is worse in this respect, and they are probably correct. However, Microsoft have been paring back the basic installation for some years, especially in the server versions of the OS (the desktop versions are being integrated with Microsoft cloud services and have gone the opposite way in some respects), so the situation has improved, at least for servers.

Ultimately, Linux is much more configurable than Windows, so it should be possible to build a 'cleaner' system with it. It would be interesting to compare a minimal installation of a Linux server distro with Windows Nano Server in this respect.

Such an OS choice ultimately boils down to documentation. Both systemd-based distros such as Arch and Debian and OpenRC-based ones such as Gentoo are well documented and easy to configure, since you don't have to deal with every preinstalled Windows service.
 

digitalfrost

Major Contributor
Joined
Jul 22, 2018
Messages
1,541
Likes
3,172
Location
Palatinate, Germany
I have been using PrimoCache for many years.


It can use RAM as a read/write cache, and you can also add an L2 (SSD) cache for hard drives. What is cached can be selected for each volume independently.

[attached screenshot: PrimoCache's per-volume cache settings]



It works really well. The first read might be slow, but things that you use often will be in the cache and accelerated. It can also pre-fetch often-used files after boot so they will be there before you need them.
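
For intuition, the core idea behind this kind of block cache is an LRU map from block addresses to data that is consulted before touching the slow device. A toy sketch of that idea (PrimoCache itself is proprietary, so this is not its actual implementation):

```python
# Toy block-level read cache with LRU eviction - a sketch of the idea
# behind tools like PrimoCache, not their actual implementation.
from collections import OrderedDict

class BlockCache:
    def __init__(self, backing, capacity_blocks):
        self.backing = backing        # any object with a read_block(addr)
        self.capacity = capacity_blocks
        self.cache = OrderedDict()    # addr -> block data, in LRU order

    def read_block(self, addr):
        if addr in self.cache:
            self.cache.move_to_end(addr)      # mark as recently used
            return self.cache[addr]           # fast path: served from RAM
        data = self.backing.read_block(addr)  # slow path: hit the disk
        self.cache[addr] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used
        return data
```

A write-back variant adds a dirty-block list that is flushed to the backing device later, which is where the crash-safety trade-off mentioned later in the thread comes from.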
 

Berwhale

Major Contributor
Forum Donor
Joined
Aug 29, 2019
Messages
3,962
Likes
4,964
Location
UK
I have been using PrimoCache for many years.


It can use RAM as a read/write cache, and you can also add an L2 (SSD) cache for hard drives. What is cached can be selected for each volume independently.

[attached screenshot: PrimoCache's per-volume cache settings]


It works really well. The first read might be slow, but things that you use often will be in the cache and accelerated. It can also pre-fetch often-used files after boot so they will be there before you need them.

Is RAM caching still relevant now that NVMe drives are so fast and cheap?

From PrimoCache's website (note that they are using a really old copy of CrystalDiskMark)...

[attached: CrystalDiskMark results from PrimoCache's website]


From the 2TB Kingston KC3000 NVMe PCIe Gen4 drive in my PC (just over £100 in the UK at the moment)...

[attached: CrystalDiskMark results for the KC3000]
 

digitalfrost

Major Contributor
Joined
Jul 22, 2018
Messages
1,541
Likes
3,172
Location
Palatinate, Germany
It doesn't matter much unless you want to extend SSD lifespan by deferring writes - but that brings its own problems if the PC crashes (I don't use that anymore). For NVMe SSDs it's probably not necessary, but while I still had games on hard drives I liked it very much (including the SSD cache). I also have 64GB of RAM and I don't know what to do with it ¯\_(ツ)_/¯
 

chelgrian

Senior Member
Joined
Jul 27, 2020
Messages
340
Likes
367
Is RAM caching still relevant now that NVMe drives are so fast and cheap?

From PrimoCache's website (note that they are using a really old copy of CrystalDiskMark)...

[attached: CrystalDiskMark results from PrimoCache's website]

From the 2TB Kingston KC3000 NVMe PCIe Gen4 drive in my PC (just over £100 in the UK at the moment)...

[attached: CrystalDiskMark results for the KC3000]
Any modern OS already allocates all unused RAM as disk cache.

SSD caches in front of rotating rust have been a thing for a while; examples include Apple's short-lived 'Fusion Drive' concept and Intel's 'Smart Response'.

You don't actually need third-party software to use an SSD as a cache for rotating drives under Windows; it can be done using tiered Storage Spaces, even in client versions of Windows. However, there is no GUI for it in the client versions, so you have to set it up using PowerShell.
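
The first point is easy to see for yourself on Linux, where the kernel reports the page cache size in /proc/meminfo. A small sketch that prints how much RAM is currently acting as disk cache (Linux-only; field names as documented in proc(5)):

```python
# Show how much RAM the Linux kernel is currently using as disk (page)
# cache, by parsing /proc/meminfo.
def meminfo_kb(field):
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(field + ":"):
                return int(line.split()[1])  # values are reported in kB
    raise KeyError(field)

total = meminfo_kb("MemTotal")
cached = meminfo_kb("Cached")
print(f"Page cache: {cached / 1024:.0f} MiB "
      f"({100 * cached / total:.0f}% of {total / 1024:.0f} MiB RAM)")
```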
 

Berwhale

Major Contributor
Forum Donor
Joined
Aug 29, 2019
Messages
3,962
Likes
4,964
Location
UK
It doesn't matter much unless you want to extend SSD lifespan by deferring writes - but that brings its own problems if the PC crashes (I don't use it anymore). For NVMe SSDs it's probably not necessary, but while I still had games on harddrives I liked to use it very much (including the SSD cache). I also have 64Gbytes of RAM and I don't know what to do with it ¯\_(ツ)_/¯

LOL, I managed to restrict myself to 32GB of RAM in my new build. I'd had 32GB for 5 years in the previous build and it was plenty. Also, the servers I buy in my day job have 3TB of RAM, so 64GB doesn't seem so impressive these days :)

Actually, until Intel ditched Optane technology, we were going with 1TB RAM + 4TB Optane 'memory' in each server. This would have worked in a similar way to PrimoCache, but for memory: the RAM is used as a transparent cache for the much cheaper Optane memory (1TB RAM + 4TB Optane is around half the cost of 4TB RAM), and the system 'sees' 4TB of RAM (as long as you run an OS/hypervisor that supports it, like vSphere 7).
 

Berwhale

Major Contributor
Forum Donor
Joined
Aug 29, 2019
Messages
3,962
Likes
4,964
Location
UK
Any modern OS already allocates all unused RAM as disk cache.

SSD caches in front of rotating rust have been a thing for a while; examples include Apple's short-lived 'Fusion Drive' concept and Intel's 'Smart Response'.

You don't actually need third-party software to use an SSD as a cache for rotating drives under Windows; it can be done using tiered Storage Spaces, even in client versions of Windows. However, there is no GUI for it in the client versions, so you have to set it up using PowerShell.

Yes, I'm well aware of that. SmartDrv was in DOS 4.01.

You could also say that SSDs using TLC or QLC NAND with DRAM or SLC NAND caching are doing it all in hardware.
 
OP

Digby

Major Contributor
Joined
Mar 12, 2021
Messages
1,632
Likes
1,561
Right, the simple answer was the correct one in this case: turning off the Windows sleep setting worked a treat. I don't know why, but Windows seemed like it may have been trying to put drives to sleep while I was playing audio/video... maybe it needs a more significant amount of throughput before it recognises that the drive is in use? Either way, sending drives like this to sleep hardly saves any power, so no real loss.

Interesting conversation about SSD caching all the same.

Ultimately, Linux is much more configurable than Windows, so it should be possible to build a 'cleaner' system with it. It would be interesting to compare a minimal installation of a Linux server distro with Windows Nano Server in this respect.
I have a fair bit of rarely used data and plenty of (noisy) 3.5" drives to store it on. I wonder, is there any OS that can keep a PC in sleep mode and, on awakening, not automatically spin up the disks, so that they are only spun up as needed/requested? That would enable me to use a lot of the storage I have for data that only needs occasional access, without excess noise/power consumption in the mix.
 

Berwhale

Major Contributor
Forum Donor
Joined
Aug 29, 2019
Messages
3,962
Likes
4,964
Location
UK
I have a fair bit of rarely used data and plenty of (noisy) 3.5" drives to store it on. I wonder, is there any OS that can keep a PC in sleep mode and, on awakening, not automatically spin up the disks, so that they are only spun up as needed/requested? That would enable me to use a lot of the storage I have for data that only needs occasional access, without excess noise/power consumption in the mix.

This is why I moved my storage from a PC server (Dell T20) to a Synology NAS. The Synology draws under 20W; the T20 was more like 150W. You could also build a Raspberry Pi NAS if you wanted something with even lower power draw.
 

ThatM1key

Major Contributor
Joined
Mar 27, 2020
Messages
1,056
Likes
894
Location
USA
Yes, you should keep your music on an SSD. SSDs don't mind small files as much as HDDs do.

However, not all SSDs are good. Some have terrible write lifespans and, if bad enough, worse performance than an HDD (depending on the workload and whether the drive has a buffer/cache). I can get a Kingston 1TB NV2 for ~$40 USD, but its write lifespan would be half that of a Samsung NVMe SSD. For $10 USD more, I can get a Samsung 970 EVO Plus with a good, long write lifespan. Although you are going to be reading most of the time, a quality-built SSD has fewer failure points (e.g. quality controllers, capacitors, etc.).

I would stay away from Samsung QVO drives unless you need lots of TB in one drive; Samsung EVO drives are usually priced the same or even cheaper. When it comes to NVMe SSDs, stay away from the Samsung 980 (not the Pro models); it costs more than a 970 EVO Plus and has no DRAM cache.

The only fun thing about computers right now is, ironically, storage (for me at least). SSD and HDD prices have fallen so much. I paid $120 for my 1TB Samsung 970 EVO Plus, and years before that I paid $100 for a 4TB WD Blue HDD.
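
To put write lifespans in numbers: SSD endurance is usually quoted as TBW (terabytes written). A quick sketch of the arithmetic, using commonly quoted ratings (roughly 320 TBW for a 1TB NV2 versus 600 TBW for a 1TB 970 EVO Plus; check the official data sheets before relying on them):

```python
# Rough SSD endurance arithmetic: years until the rated TBW is consumed
# at a given daily write volume. TBW figures are illustrative; verify
# them against the manufacturers' data sheets.
def endurance_years(tbw_rating, gb_written_per_day):
    return tbw_rating * 1000 / gb_written_per_day / 365

for name, tbw in [("Kingston NV2 1TB", 320),
                  ("Samsung 970 EVO Plus 1TB", 600)]:
    # A music library is read-heavy; 20 GB/day of writes is generous.
    print(f"{name}: ~{endurance_years(tbw, 20):.0f} years at 20 GB/day")
```

At realistic write rates for a music library, either drive's rated endurance outlasts the hardware, which supports the point about reads dominating in the reply below.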
 

Berwhale

Major Contributor
Forum Donor
Joined
Aug 29, 2019
Messages
3,962
Likes
4,964
Location
UK
Yes, you should keep your music on an SSD. SSDs don't mind small files as much as HDDs do.

However, not all SSDs are good. Some have terrible write lifespans and, if bad enough, worse performance than an HDD (depending on the workload and whether the drive has a buffer/cache). I can get a Kingston 1TB NV2 for ~$40 USD, but its write lifespan would be half that of a Samsung NVMe SSD. For $10 USD more, I can get a Samsung 970 EVO Plus with a good, long write lifespan. Although you are going to be reading most of the time, a quality-built SSD has fewer failure points (e.g. quality controllers, capacitors, etc.).

I would stay away from Samsung QVO drives unless you need lots of TB in one drive; Samsung EVO drives are usually priced the same or even cheaper. When it comes to NVMe SSDs, stay away from the Samsung 980 (not the Pro models); it costs more than a 970 EVO Plus and has no DRAM cache.

The only fun thing about computers right now is, ironically, storage (for me at least). SSD and HDD prices have fallen so much. I paid $120 for my 1TB Samsung 970 EVO Plus, and years before that I paid $100 for a 4TB WD Blue HDD.

Unless you are making your own music, the read/write profile for music storage is going to be heavily weighted towards reads. The durability of NAND flash is therefore unlikely to be much of a concern, even for the QLC NAND in the QVO.
 