Solid State Drives

Unread postby tackywoolhat » 04 Jan 2010 04:35

Continued from here: http://culture.vg/forum/topic?p=12112#p12112

ChaosAngelZero wrote:A friend of mine at another forum told me that it'd be a better idea to get a solid state drive for storing the operating systems and leave the big HDDs for data storage. I guess there shouldn't be any performance bottlenecks if you install the games in the HDDs too, in case the SSD ends up running out of free space, but you can always replace the drives with bigger, faster ones in the future, anyways.


That's not that great of an idea. SSDs are not nearly fast or cheap or even large enough yet. You'd probably need a 60 GB drive even if all you wanted to do was put Windows 7 on it, because between the install, the swap/hibernation files, system restore, and the fact that the current crop of SSDs require 20% of the drive to be free for top performance, you'd run out of space fast. And running out of space on an SSD kills write times, dropping them well below what you'd experience with a mechanical drive.
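To put rough numbers on that space crunch, here is a back-of-the-envelope budget for a hypothetical 60 GB boot drive. Every figure below is an illustrative assumption, not a measurement:

```python
# Rough space budget for a 60 GB boot SSD (all figures assumed, in GB).
drive = 60
windows7_install = 20        # typical Win7 x64 footprint after updates (assumed)
pagefile = 8                 # swap file, often sized to match RAM (assumed)
hiberfil = 8                 # hibernation file, also roughly RAM-sized (assumed)
system_restore = 6           # restore-point allocation (assumed)
free_for_performance = 0.20 * drive  # ~20% kept empty for write performance

committed = (windows7_install + pagefile + hiberfil
             + system_restore + free_for_performance)
usable = drive - committed
print(f"Space left for applications: {usable:.0f} GB")  # prints 6 GB
```

Under these assumptions a 60 GB drive leaves only a handful of gigabytes for applications, which is the point being made above.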

Even at their best, SSDs are only about 25% faster than mechanical drives anyway -- and that's only for reading data. Some SSDs are even worse than mechanical drives at writing. Either wait until things improve (until you can find an SSD that reliably implements TRIM and is large enough) or go with mechanical drives. You can get outrageously large drives for chump change these days.
tackywoolhat
 
Joined: 14 Nov 2009 00:16

Unread postby austere » 05 Jan 2010 21:45

I'm not sure where you got that 25% figure for an SSD's best case. There's almost no access time on an SSD, so its best case is going to be orders of magnitude better than even Western Digital's VelociRaptor. As long as you get an SSD with a decently designed controller, you'll gain a huge advantage on random reads. It would be a totally worthwhile boot drive, if one could afford it.

That said, everything you wrote about write performance and capacity per cost is true. There is also performance degradation over time from both reads and writes. Most tech reporters discuss the latter, but there is a problem called read-disturb which NAND Flash manufacturers have quietly swept under the rug.

Even just reading from a block will attract electrons away from their floating gates. Over time this creates an error, and the entire block has to be erased and rewritten using the built-in error-correcting code. This problem will only get worse, since manufacturers are now using single gates to store multiple bits. I can write more about SSDs, but it's a tad off topic so I will stop here.

If icy gets an SSD, he should use it for the Windows 7 boot drive, since Windows 7 is the only OS that supports TRIM at the moment. I would disable the pagefile though, or it'll be such a waste! As long as these two conditions are met, you can get a long lifetime out of your SSD with very little degradation in performance.

I really want to find out how games perform under the OSs icy listed, though. My gut feeling tells me that the overhead from all the additional code since Windows XP will result in lower frame rates for many games. That is, if those games would actually run under Vista/Windows 7 at all.
austere
 
Joined: 07 Dec 2009 22:50

Unread postby tackywoolhat » 05 Jan 2010 22:56

KarlPopper wrote:I'm not sure where you got that 25% figure for a SSD's best case. There's almost no access time for SSDs, so their best case is going to be magnitudes better than even Western Digital's VelociRaptor. As long as you get an SSD with a decently designed controller, you'll gain a huge advantage for random reads. It would be a totally worthwhile boot drive, if one could afford it.

I didn't mean strict access time, I meant total loading time of a large amount of data (specifically a game level: here) from SSD into memory. Random reads of small amounts of data are much quicker under an SSD, but I'm not sure how important that is, considering how most of what you need in an application like a game is loaded into memory and then used from there. If you've got enough memory to hold what an application is dealing with, then the only time the SSD's lack of access time comes into play at all is when you start the application or load something gigantic into it. Is this the wrong way to think about it?

I'd be interested in hearing more about the single-gate stuff, seeing as how we're now on-topic :p
tackywoolhat
 
Joined: 14 Nov 2009 00:16

Unread postby austere » 07 Jan 2010 14:24

That was a good recent benchmark, though I wish they would put all of it on a single page. I noted the following passage:

Oh, and rather than testing SSDs in factory-fresh form, we first put them into a simulated used state devoid of empty flash pages. We think this better represents long-term SSD performance.


While this is a good idea, they also state that they used XP. TRIM would circumvent the issue of empty flash pages as long as you keep a good amount of space empty. I suppose that it couldn't be helped since Windows 7 was in its beta testing phase at the time.

I see that you got the 25% figure by inverting the DOOM 3 loading times and comparing the fastest SSD to the VelociRaptor. In my opinion, you're folding a lot of computation, in addition to the drive's performance, into that number. As the CPU was quite dated, I'm willing to bet the loading times were significantly influenced by the CPU, not the drive. The loading time for Far Cry is about 19.2% faster, which probably means that game spends more of its loading time precomputing the level's rendering.

All that said, I agree with you overall. The most you'd get out of an SSD is modestly improved load times and lower-latency OS operations. The latter is probably not very important for a gaming machine, as you'd run one application exclusively. It's probably not worth the money, but that doesn't stop someone who wants the latest and greatest from obtaining it. :) If I wanted a compromise, I would consider purchasing the VelociRaptor as well as a standard 7,200 rpm drive for its large capacity.

As you're interested, I'll write more about SSDs -- but first I've gotta get some work done!
austere
 
Joined: 07 Dec 2009 22:50

Unread postby austere » 10 Jan 2010 21:13

As promised, I will now write more about Flash memory issues. For the sake of completeness, I'll briefly go through the advancements that got us in the position we're in today.

In the old days, EEPROMs were the only non-volatile memory architecture available. The acronym stands for Electrically-Erasable Programmable ROMs, the latter part being a bit of a misnomer. In order to erase a bit stored on a transistor, a large voltage is applied to a floating gate. This process of erasing gates was rather slow. To speed things up, an EEPROM was designed with a special grouping of transistors in blocks. This was coined "Flash" by some Japanese guy since the large voltage across a block was a bit like lightning on a chip-scale.

Over time, many clever tricks were employed to get the best performance for the lowest price, and each time the complexity of the overall system increased. Initially, the simple "NOR" arrangement of floating gate transistors (herein "cells") was used, enabling random access to individual words. Unfortunately, this arrangement requires a metal line between each transistor, making the cells rather large. An alternate "NAND" arrangement was adopted instead, sacrificing random access for higher memory density. Now, reads and writes have to be done 512 bytes at a time. This value may have increased since I last studied this field.

You may have noticed that Flash memories have gotten a lot cheaper recently. There's a good reason for this, and it isn't just process scaling. To make memory cheaper, write endurance and speed were sacrificed for density: each floating gate now stores multiple logic levels, not just 0s and 1s. SanDisk recently introduced a 16-level (4-bit) MLC (Multi-Level Cell) flash memory chip.
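A rough sketch of why packing more bits per cell hurts: each extra bit doubles the number of charge levels that must fit inside the same threshold-voltage window, shrinking the margin between adjacent levels that a disturbance has to erode before a read goes wrong. A toy calculation (the 4 V window is an assumed figure, purely for illustration):

```python
# Simplified model: 2**bits charge levels share a fixed voltage window;
# the spacing between adjacent levels is the sensing margin.
def level_margin(bits, window_volts=4.0):
    levels = 2 ** bits
    return window_volts / (levels - 1)  # volts between adjacent levels

for bits in (1, 2, 3, 4):
    print(f"{bits}-bit cell: {2**bits:2d} levels, "
          f"~{level_margin(bits):.2f} V between levels")
```

Going from SLC (1-bit) to 4-bit MLC shrinks the margin by roughly a factor of fifteen in this model, which is why speed, endurance, and sensing-circuit simplicity all suffer.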

The earliest work I found using this technique dates back to 1995, published at the IEEE International Solid-State Circuits Conference, so apparently it took a long time to commercialise the technology. Each time the number of bits per cell is increased, roughly another factor of two in speed and endurance is sacrificed. I'm not sure we'll see 5-bit MLC Flash in the future though -- the sensing circuits are becoming more complex** and larger, eroding the gains made by the technique.

As I wrote previously, when you read from a NAND flash, you activate a whole line of floating gate transistors, and so charge will move between them. With higher-density MLCs, less leaked charge* is enough to corrupt a stored value. Each time you read a block, this happens to some extent. After a certain margin is eroded, a bit flips and the block is trashed. Early Flash memory controllers didn't account for this problem; today you may still encounter one in a cheap MP3 player, which will fail suddenly even though you haven't written to it for a while. When implemented properly, though, the error is detected and corrected -- at the cost of a rewrite. Thus, just by reading your drive enough, you will wear it out as surely as by writing to it.
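Here is a deliberately simplified toy model of that wear mechanism -- not a datasheet model, and every constant is invented for illustration. Each read erodes a block's noise margin; once the margin is gone, the controller's ECC catches the flipped bit and rewrites the block, spending a program/erase cycle:

```python
# Toy read-disturb model: reads leak charge until ECC must rewrite the block.
MARGIN = 10_000          # abstract margin units per freshly written block (invented)
LEAK_PER_READ = 1        # margin lost per read; higher for denser MLC (invented)

def reads_until_rewrite(margin=MARGIN, leak=LEAK_PER_READ):
    reads = 0
    while margin > 0:    # each read erodes the margin a little
        margin -= leak
        reads += 1
    return reads         # block must now be rewritten: one P/E cycle spent

slc_reads = reads_until_rewrite(leak=1)  # wide margin: many reads per rewrite
mlc_reads = reads_until_rewrite(leak=4)  # denser cell: margin erodes 4x faster
print(slc_reads, mlc_reads)  # prints 10000 2500
```

The point of the sketch: a denser cell tolerates proportionally fewer reads between forced rewrites, so read-heavy workloads consume endurance too.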

If my knowledge is recent enough, SSDs do not go beyond 2-bit MLC at the time of writing. I would expect them to have awful performance if some cheap bastards decide to use 3-bit MLC modules. If this happens, it will be a race to the bottom, since most consumers are completely ignorant of all this information. Just like they play crap "games", they'll buy crap SSDs. :( After that, 2-bit MLC and standard Single-Level Cell (SLC) parts would exist only in industrial or "enterprise" equipment. With most foundries using 32 nm processes for Flash memory, this scenario might just happen. After all, optical lithography will only work down to about 12 nm, and the alternatives are not yet economically viable -- or reliable enough for memory chips, for that matter.

Caveat: If any of the information I provided goes out of date in the future, I'll hopefully post in this thread to correct it. You just can't predict what kind of special process or trick they will use to alleviate existing issues.

* Today each cell uses a few thousand electrons to make up its levels. Each time you scale down, this number decreases; and each time you raise the bit density per cell, fewer electrons are needed to disturb a stored value.

** R. Jacob Baker asserts that process variability makes >4-bit MLCs impossible using current sensing circuits. He proposed a 5-bit MLC using an insane modulation scheme which sacrifices even more speed and sensing circuit area. If you're interested, he writes about it here but it requires some microelectronics background: http://cmosedu.com/jbaker/papers/Flash_mass_storage.pdf.
austere
 
Joined: 07 Dec 2009 22:50

Unread postby zinger » 25 Jan 2010 12:00

I'm getting an external hard drive for storage and I'd like some help choosing one. I'm not looking to spend a fortune on the best drive available, just a reliable and portable drive that is as fast (and preferably cheap) as possible. FireWire? USB 2.0? eSATA (I just noticed this connector on my laptop)? Anything else to keep in mind, or should I just pick whatever 1TB Western Digital or Kingston drive I can find?
zinger
 
Joined: 22 Oct 2007 16:32
Location: Sweden

Unread postby icycalm » 28 Jan 2010 19:42

I think eSATA is the fastest of the three connections you mentioned (because I doubt your laptop has the latest USB version, USB 3.0), but since you are not going to be doing any serious gaming with this hard drive (and by serious I mean Crysis- or Supreme Commander-serious; emulation and stuff like Braid do not count as serious) you might as well pick up the drive with the best GB/euro ratio you can find. And again, as far as brands are concerned, I wouldn't worry about it. They are all pretty much equivalent.

KarlPopper wrote:I would disable the pagefile though or it'll be such a waste!


Can you elaborate on this? I've no idea what a "pagefile" is.

KarlPopper wrote:It's probably not worth the money, but that doesn't stop someone who wants the latest and greatest from obtaining it. :)


I've answered this comment here: http://forum.insomnia.ac/viewtopic.php?t=3146, so the only thing I want to point out is that, in the context of this thread, we are really only interested in establishing the extent of the performance boost, and at most also the cost effectiveness of this boost compared to other upgrades that could be made for a similar amount of money. Apart from that, whether the upgrade, any upgrade, is "worth the money", depends on the financial situation of the person who contemplates it.

I'll be back soon with some more thoughts on the state of the hard drive market.
icycalm
Hyperborean
 
Joined: 28 Mar 2006 00:08
Location: Tenerife, Canary Islands

Unread postby austere » 29 Jan 2010 00:00

The pagefile is an area on the hard drive that the Windows memory manager will copy memory in and out of. Memory that hasn't been used in a while will be committed to the pagefile and only placed back into physical memory once it has been referenced again. This is called "page swapping" and it's the way Windows implements virtual memory to extend your total available memory beyond your system's physical memory. It has other uses as well, since it's a large store that the OS can use to keep application memory contiguous. Unfortunately, some applications are written under the assumption that a pagefile exists and thus won't run if you disable it.
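For reference, disabling the pagefile on Windows 7 can be done from an elevated command prompt with the standard wmic tool. The pagefile path below assumes the default C: location, so adjust it for your setup, and reboot afterwards:

```shell
:: Turn off automatic pagefile management, then delete the pagefile itself.
:: Run from an elevated (Administrator) command prompt; reboot to apply.
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
wmic pagefileset where name="C:\\pagefile.sys" delete
```

As noted above, some applications assume a pagefile exists and will refuse to run without one, so keep that in mind before committing to this.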

About my stupid comment: I was trying to find a way to counter tackywoolhat's statement that "SSDs are not nearly fast or cheap or even large enough yet". Now that I read it again, it looks like I was doing the opposite. I should have just linked to your article, but I wasn't sure if it applied to hardware without exception. The fact that obsolete technology can often be more expensive than current technology had me confused, but with your above post it's now clear to me that this is a red herring.
austere
 
Joined: 07 Dec 2009 22:50

Unread postby zinger » 02 Feb 2010 10:51

icycalm wrote:I think eSATA is the fastest of the three connections you mentioned (because I doubt your laptop has the latest USB version, USB 3.0), but since you are not going to be doing any serious gaming with this hard drive (and by serious I mean Crysis- or Supreme Commander-serious; emulation and stuff like Braid do not count as serious) you might as well pick up the drive with the best GB/euro ratio you can find. And again, as far as brands are concerned, I wouldn't worry about it. They are all pretty much equivalent.


Thanks. True, not much serious gaming, but some other serious stuff. I ended up getting this one, 1TB eSATA/USB 2.0, less than 100 euro:

http://www.verbatim-europe.co.uk/en_1/p ... 14030.html
zinger
 
Joined: 22 Oct 2007 16:32
Location: Sweden

Unread postby icycalm » 18 Jun 2013 19:09

So, an update on the world of hard drives, on the occasion of placing an order for my first SSD.

First up, what I got: the Samsung 840 Pro 512GB drive, for $468 off Amazon US. It appears to be universally accepted as the top-performing and possibly also the most reliable SSD on the market (enterprise-level drives are more reliable but worse-performing, so they are no good for gaming, on top of being far more expensive).

c26-SS840SSDPRO-512hero-l.jpg


Note that it's a 2.5" drive like most (all?) SSDs, so I also got a 3.5"-to-2.5" bay converter to fit it in my case. Since my case is a Thermaltake, I went for this:

Thermaltake 3.5 Inch to 2.5 Inch SSD HDD Bay Drives Converter Kit AC0014 (Black)
http://www.amazon.com/gp/product/B003WO ... UTF8&psc=1

(I wonder if newer cases have 2.5" bays in them, considering SSDs are getting so prevalent...)

And here's a ranking of SSDs, with the Samsung Pro at the top spot:

http://www.fastestssd.com/featured/ssd- ... te-drives/

512GB seems to be the maximum capacity at this time for most companies. Indeed, some companies only go up to 480GB, which makes me glad Samsung is among those who give you the extra 32, because at that capacity range you really can use every extra GB you can get. How much space does Windows 7 take? That extra 32 probably covers the size of the operating system.

Having said that, AnandTech had a review last month of the Crucial/Micron M500 range which goes up to 960GB lol:

http://www.anandtech.com/show/6884/cruc ... 40gb-120gb

DSC_0075-2_678x452.jpg


The dude was ecstatic to see an SSD at that capacity, and especially for the ridiculously low price of $599 (there WERE a few other 960GB-1TB drives before this, but they cost upwards of $1,000), and for a while I considered going for this one. Unfortunately, the only one I could see on Amazon was going for $750 from a scalper:

http://www.amazon.com/Crucial-2-5-Inch- ... 709&sr=8-1

on top of the fact that the AnandTech review says the drive sacrifices performance to achieve the higher capacity:

Anand Lal Shimpi wrote:For SSDs to become more cost effective they need to implement higher density NAND, which is often at odds with performance, endurance or both. Samsung chose the endurance side of the equation, but kept performance largely intact with the vanilla 840. Given that most client workloads aren't write heavy, the tradeoff made a lot of sense. With the M500, Crucial came at the problem from the performance angle. Keep endurance the same, but sacrifice performance in order to hit the right cost target. In the long run I suspect it'll need to be a combination of both approaches, but for now that leaves us in a unique position with the M500.

The M500's performance is by no means bad, but it's definitely slower than the competition. Crucial targeted Samsung's SSD 840, but in most cases the TLC based 840 is faster than the M500. There's probably some room for improvement in the M500's firmware, but there's no escaping the fact that read, program and erase latencies are all higher as a result of the move to larger pages/blocks with the drive's 128Gbit NAND die. The benefit to all of this should be cost, but we'll have to wait and see just how competitive the smaller capacities of the M500 are on cost.


Worth noting also is that what tackywoolhat mentions in his OP still applies: top performance requires leaving 20% of the drive free:

Anand Lal Shimpi wrote:If you have the luxury of keeping around 20% of your drive free, Samsung maintains its performance advantage. If, on the other hand, you plan on using almost all of your drive's capacity - the M500 does have better behavior than even the 840 Pro.


Another thing to note for this update is the recent appearance of PCIe SSDs. Apparently, these are now the fastest storage solution available (though I wonder how they fare against RAM disks...) Here's a short article from 2010 that explains the basics:

http://www.computerweekly.com/feature/P ... can-use-it

Danny Bradbury wrote:The biggest benefit of PCIe-based SSD drives is increased performance. With other server-based SSD types, customers were able to forego the mechanical considerations of conventional hard disk drives (HDDs) -- suddenly rpm measurements became irrelevant because there were no moving parts. But with those types of SSD, the SATA-based interface limits the capacity of the bus that transfers data from the SSDs to the processor.


Prices for these have been apparently coming down recently, and they are now becoming competitive against regular SSDs. However, I am not going to look further into them at this time, because I only have two PCIe x16 slots on my motherboard, and they are both taken by my GTX 690 graphics cards. I do have two free x8 slots, but wouldn't those bottleneck the SSD, making the whole investment pointless? That's one thing I haven't been able to ascertain after a quick Google search: what's the maximum bus speed these SSDs can utilize? Do they go up to PCIe 3.0? In which case again I am not interested in them at the moment, since my motherboard is PCIe 2.0. (Another consideration is that, since my graphics subsystem puts out A LOT of heat, I am not sure I'd want the hard drive that carries my operating system right in the middle of it...)
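For what it's worth, the x8 bottleneck question can be sanity-checked with nominal per-lane PCIe figures. The numbers below are approximate throughputs after encoding overhead, and the drive speeds in the comment are ballpark values for the era, not measurements of any particular product:

```python
# Approximate usable throughput per PCIe lane, in MB/s, after encoding overhead:
# PCIe 2.0: 5.0 GT/s with 8b/10b encoding   -> ~500 MB/s per lane
# PCIe 3.0: 8.0 GT/s with 128b/130b encoding -> ~985 MB/s per lane
MB_PER_LANE = {"2.0": 500, "3.0": 985}

def link_bandwidth(gen, lanes):
    return MB_PER_LANE[gen] * lanes

x8_gen2 = link_bandwidth("2.0", 8)
print(f"PCIe 2.0 x8: ~{x8_gen2} MB/s")  # prints ~4000 MB/s
# SATA SSDs of the time topped out around 550 MB/s, and even the fast PCIe
# SSDs were in the ~1500 MB/s range, so a gen-2 x8 slot would have plenty
# of headroom -- the heat concern is probably the more serious objection.
```

So under these nominal figures, an x8 slot on a PCIe 2.0 board would not bottleneck the SSDs of the day.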

And a final note for this update as regards mechanical drives: prices continue to fall, with a 4TB Seagate drive going for $183.43 on Amazon, for example:

http://www.amazon.com/gp/product/B00B99 ... PDKIKX0DER

No idea how fast/reliable/etc. this one in particular is; I am just giving an example. Eventually I'll want to get a few drives of this capacity, but I am currently once more constrained by my motherboard, which it seems can only take two SATA 3.0 drives, one of which will be the SSD I just ordered. So I'll have to wait until I build a new computer for the 4x4TB RAID array that I am dreaming of. Damn motherboard! It was the best thing on the market when I bought it, and it has lasted for almost three years now, so I can't really complain, but it's getting on and dragging my system choices down with it. Anyway, enough of my whining: these are the latest news from the hard drive world, and if you know anything of significance I missed, feel free to let me know about it.
icycalm
Hyperborean
 
Joined: 28 Mar 2006 00:08
Location: Tenerife, Canary Islands

Unread postby recoil » 02 Mar 2019 23:51

https://www.pcper.com/news/Storage/ADAT ... SX8200-Pro

Jeremy Hellstrom wrote:ADATA hits new highs and lows with the XPG SX8200 Pro

Last year ADATA launched their XPG SX8200 NVMe SSD, which offered impressive speed without a high cost, currently you can grab 1TB for just under $200. This year they followed up with the XPG SX8200 Pro, using Silicon Motion's new SM2262EN controller, paired with the same 64-layer Micron TLC flash as used on the original. The Tech Report tested it out and found it to be almost a chart topper, surpassing many other more famous brands, and the best news is it is a mere $10 more than the previous version.

If you are looking for a PCIe 4x M.2 NVMe drive, this one should be on your list!

recoil
 
Joined: 26 Feb 2010 22:35
Location: California, USA

