
The OS and application data are continually becoming easier to fit on today's platters - why not move it to NAND? - Image courtesy Samsung
Do you need a solid-state drive? Samsung says you do, and here's why

DailyTech recently had the opportunity to sit down with Don Barnetson, Samsung's director of flash marketing, to chat about the future of NAND devices.  Specifically, we picked Barnetson's brain about solid-state drives and future NAND storage.

Over the past few months, we've seen dozens of announcements about solid-state hard drives.  PQI has already announced a 64GB flash drive (which, coincidentally, is based on Samsung NAND), while ASUS, Fujitsu, Samsung and SanDisk have all announced products based on solid-state hard drives. Given that the hard drive has been the bottleneck of PC performance for years, the question has to be asked: is solid-state technology ready to take us out of the dark ages of storage?

In the 90s, the largest advocate of more storage was Microsoft.  The company insisted we have larger hard drives for Windows 95, then Windows 98.  Then the largest proponent of more storage became the application designers, pleading with users to get larger hard drives for image manipulation or games.  But today, I can fit Vista, Outlook (and all of those 2GB PST files) and even a few games in less than one-tenth of my 250GB hard drive.  The other 100-odd gigabytes is mainly MP3s and a few DVD rips.  I am the prime candidate for a solid-state hard drive.

Most business users use only a fraction of the hard drive space provided for them, especially considering most unique data gets written to a network anyway.  The operating system and applications can all fit in less than 10GB of space, which is well within the sizes of solid-state hard drives today.  Barnetson's group has calculated that, over a typical 8-hour workday, the average hard drive:
  • Has about a 1% chance of failure per year
  • Consumes 9W
  • Loses about 7 to 15 minutes per day in productivity
The fact that we lose so much time to hard drive spin-ups and seeks is appalling on its own, but decreased power consumption is what is driving solid-state adoption today. A NAND device uses less than 200 milliwatts during reads and writes, and essentially zero watts when not being accessed.  On the desktop this is relatively unimportant, but on a notebook the hard drive accounts for 10% of the total power draw.  Cutting this number to less than 1% means an extra 12 minutes of usage on my 2-hour battery.
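The battery arithmetic above checks out. A rough sketch, using the article's own round numbers (HDD at 10% of notebook power draw, NAND at under 1%, a 2-hour battery):

```python
# Back-of-the-envelope battery math from the article's figures.
hdd_share = 0.10          # hard drive's fraction of notebook power draw
ssd_share = 0.01          # the article's "less than 1%" NAND figure
battery_minutes = 120     # a 2-hour battery

# Swapping HDD for NAND cuts total draw by ~9%
saved = hdd_share - ssd_share

# The same battery energy then lasts 1/(1 - 0.09) times as long
extended = battery_minutes / (1 - saved)
extra = extended - battery_minutes
print(f"extra runtime: {extra:.0f} minutes")   # ~12 minutes
```

This matches the "extra 12 minutes" figure quoted above.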

When asked about the reliability of NAND-based hard drives, Barnetson had no problem shrugging off fears of write corruption or failure.  "Samsung's solid-state devices have a MTBF of approximately 1 to 2 million hours."  Typical disk-based hard drives have a mean time between failures of approximately 100,000 to 200,000 hours.  Since there are no moving parts, the only real points of failure are a component coming unsoldered or a problem with a physical bit during a write.

Obviously, write errors are a huge concern for those who have used flash products in the past.  Only a few years ago, even the highest-end flash media was usable for only 1,000 or so writes.  At that point the physical bits would "burn out" and could no longer be flipped. Today's single-level cell (SLC) flash, memory that stores one bit per cell, is rated in excess of 100,000 writes before burnout.  Multi-level cell (MLC) flash, memory that stores multiple bits per cell, is significantly cheaper, but even it is rated at over 10,000 writes before burnout.

Is 10,000 writes enough?  Absolutely, assures Barnetson.  Samsung memory uses a technique called "wear leveling" to distribute the writes on a medium across as many groups of cells as possible. The idea behind wear leveling is that all of the cells receive approximately the same number of writes, maximizing the life of the device.  Consider a typical computer that writes 120 megabytes per hour to the hard drive.  On a 32GB solid-state NAND drive, wear leveling would distribute this data over the entire drive -- it would take 267 hours to fill the device once. Even on a multi-level cell device, at this rate it would take no less than 150 years to burn out all the bits on the SSD.  Single-level cell drives are capable of ten times as many writes.
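It's worth redoing the arithmetic, since the numbers are striking. Under the article's assumptions (ideal wear leveling, a constant 120MB/hour write load, and continuous 24/7 operation), the result actually comes out to roughly 300 years for MLC, so the "no less than 150 years" claim is conservative:

```python
# Reproducing the article's wear-leveling lifetime estimate with its
# own round (decimal) numbers.
drive_mb = 32_000            # 32 GB SSD
write_rate_mb_per_hour = 120 # typical desktop write load, per the article
mlc_cycles = 10_000          # MLC erase/write endurance rating
slc_cycles = 100_000         # SLC rating (ten times as many writes)

# Hours to write the whole drive once, assuming ideal wear leveling
hours_per_pass = drive_mb / write_rate_mb_per_hour      # ~267 hours

# Lifetime under continuous writing, every hour of every day
mlc_years = hours_per_pass * mlc_cycles / (24 * 365)
print(f"{hours_per_pass:.0f} h per pass, MLC lifetime ~{mlc_years:.0f} years")
```

Even if real-world write amplification halved that figure, the drive would still comfortably clear the 150-year mark.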

Even so, Samsung's initial solid-state drives are all single-level cell designs.  This first generation of SSDs is prohibitively expensive for most, but Samsung's SSD roadmap already has plans for multi-level cell drives as early as next year, which should bring the cost down considerably.  Additionally, Samsung anticipates announcing drives in capacities of up to 128GB in early 2008.

Solid-state memory will not entirely replace disk drives.  The fact is, media is more prevalent every day.  Five years ago, a fringe enthusiast may have had as much as 1GB of MP3s on his hard drive.  Today even an average user may have 100GB of nothing but Lost episodes.  As an intermediate step, hybrid hard drives (hard drives with multi-gigabyte NAND caches) will provide the 2007 stopgap before really big SSDs get cheap.  These drives can load the entire operating system, some applications and even a little user data (like Outlook PST files) onto the NAND.

Our insatiable appetite for media cannot be even remotely matched by the production of NAND memory right now, but for games and operating systems, solid-state devices are here and ready to go.


RE: Stupid maths
By IGx89 on 11/15/2006 7:12:13 PM , Rating: 5
I'm sure they'll have error correction algorithms to ensure data isn't lost; you won't have to worry about that. Most likely flash hard drives would just gradually decrease in capacity over time, similar to floppy disks (when you used ScanDisk on them). Believe it or not, CDs devote a huge share of their physical bits to error correction!

RE: Stupid maths
By hadifa on 11/15/2006 10:27:26 PM , Rating: 2
I am still not convinced that we can trust solid state drives.

How effective is the wear leveling, and what, if any, are its limitations?

There is some data that gets updated a lot. For example, consider data in a database table. How will wear leveling work for it? If I need to update a bit in a table, does the drive write the whole memory block or page to another location? That is the only way (I think) to achieve 100% wear leveling, but then what about the performance?

What about the FAT? Will wear leveling apply to the FAT as well? If it does not, then the location where the FAT is written will wear out much faster. If it does, then it will be interesting to see how.

These and many other issues may have solutions already, but I do not know of any that makes flash memory suitable for everyday use.

Generally it is not easy to update flash memory data. You can see that in its everyday usage: we can copy new data and the wear leveling mechanism will work nicely and spread the data throughout the medium, but update?

Note: Flash memories CANNOT update a single bit or page in place. To update something you need to erase it first. This may not be a problem until you consider that you can only erase a complete block at a time. So to update a bit you need to write a whole page (e.g., 512 bytes), and to do that you need to erase a block (e.g., 16KB). Current computer applications are not designed with these limitations in mind. Does Samsung claim that their hard drive will be OK no matter the application?
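The read-modify-erase-write cycle this comment describes can be sketched as a toy model. The geometry below follows the comment's example numbers (512-byte pages, 16KB blocks, so 32 pages per block); real NAND geometries vary, and this simulates none of the controller's actual firmware:

```python
# Toy model of updating one byte in NAND flash: the whole block must
# be read out, erased, and reprogrammed. Sizes follow the comment's
# illustrative figures (512 B pages, 16 KB blocks).
PAGE_SIZE = 512
PAGES_PER_BLOCK = 32   # 32 * 512 B = 16 KB per erase block

def update_byte(block, page_idx, offset, value):
    """Change one byte at the cost of one full block erase cycle."""
    # 1. Read every page of the block into RAM
    copy = [bytearray(p) for p in block]
    # 2. Apply the one-byte change in the RAM copy
    copy[page_idx][offset] = value
    # 3. Erase the block: all cells return to 0xFF (one erase cycle used)
    erased = [bytearray(b"\xff" * PAGE_SIZE) for _ in block]
    # 4. Program every page back into the erased block
    for i, page in enumerate(copy):
        erased[i][:] = page
    return erased

block = [bytearray(PAGE_SIZE) for _ in range(PAGES_PER_BLOCK)]
block = update_byte(block, page_idx=3, offset=10, value=0x42)
print(block[3][10])  # 66
```

One changed byte costs a 16KB erase plus a 16KB rewrite, which is exactly why wear leveling and write coalescing matter so much for flash.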

RE: Stupid maths
By Trisped on 11/16/2006 11:08:10 AM , Rating: 2
From what I have read, they only change the bits that actually need to be changed, which doubles the life of the bits (since half the time a bit is going to stay the same anyway).

RE: Stupid maths
By OCedHrt on 11/16/2006 12:44:35 PM , Rating: 2
I doubt the location of files matters. Wear leveling should be transparent to the user. Just because the flash memory has reassigned cells to different addresses, it doesn't mean the address used to access them has changed.
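This transparency is the key idea: the host keeps using the same logical address while the drive remaps the physical location on every write. A minimal sketch of such a remapping layer (all names hypothetical, and far simpler than any real flash translation layer):

```python
# Minimal sketch of logical-to-physical remapping for wear leveling.
# The host's logical address never changes; the physical page does.
class TinyFTL:
    def __init__(self, n_physical):
        self.mapping = {}                    # logical addr -> physical page
        self.free = list(range(n_physical))  # free physical pages
        self.erase_counts = [0] * n_physical
        self.store = [None] * n_physical

    def write(self, logical, data):
        # Wear leveling: pick the least-worn free physical page
        phys = min(self.free, key=lambda p: self.erase_counts[p])
        self.free.remove(phys)
        old = self.mapping.get(logical)
        if old is not None:                  # retire the stale copy
            self.erase_counts[old] += 1
            self.store[old] = None
            self.free.append(old)
        self.store[phys] = data
        self.mapping[logical] = phys         # logical address stays stable

    def read(self, logical):
        return self.store[self.mapping[logical]]

ftl = TinyFTL(n_physical=8)
ftl.write(0, b"v1")
first = ftl.mapping[0]
ftl.write(0, b"v2")          # same logical address, new physical location
print(ftl.read(0), ftl.mapping[0] != first)
```

The second write lands on a different physical page, yet `read(0)` still returns the latest data: the remapping is invisible to anything above the drive.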

Additionally, think of all the defragmenting that won't need to be done!

RE: Stupid maths
By Sulphademus on 11/16/2006 3:51:22 PM , Rating: 2
Let's say you are saving an existing file and have made significant changes: to spread out the wear, the drive writes the file to a different physical location. But to prevent having a duplicate copy of the same file, would it not have to erase (or at least flag) the original, thus dramatically lowering the 'spread ratio'?

Also, what happens when you do a defrag?

RE: Stupid maths
By hadifa on 11/16/2006 6:56:14 PM , Rating: 2
what happens when you do a defrag?

Since it is electronic memory with no mechanical parts involved, why would you need to defrag? You do not need to defragment flash memory.

Furthermore, because of the existence of the wear-leveling mechanism that adds an additional layer of abstraction, defragmenting is meaningless.

Note: I hate to contradict myself or confuse anybody, but defragmenting can be helpful in some cases even on flash devices, though that is more the exception than the norm.

RE: Stupid maths
By semo on 11/19/2006 12:02:01 PM , Rating: 2
a big reason to do defrag on an hdd is to reduce the number of seeks required to read/write, i.e., reduce the wear and tear.

incidentally that also improves performance on an hdd. it won't improve performance that much on a ssd because seek times are insignificant because of the slow transfer rate.

there are some utility apps that defrag ram so i'd imagine there are advantages.

RE: Stupid maths
By Zirconium on 11/20/2006 10:48:05 PM , Rating: 2
[Defragmenting] won't improve performance that much on a ssd because seek times are insignificant because of the slow transfer rate.

WHAT SEEK TIMES? Solid state disks/drives do not "seek." There is no armature moving over a platter like in a usual hard drive.

RE: Stupid maths
By glennpratt on 11/22/2006 5:19:06 PM , Rating: 2
I don't know of programs that really "defrag" RAM; they did have memory optimizers back in the Win 9x days, but that was mostly hogwash or making up for poor memory management.

RE: Stupid maths
By hadifa on 11/16/2006 6:38:48 PM , Rating: 2
You are right about the existence of error-correcting algorithms, but they can only correct data if there is a single corrupted bit in a block, not more.

The error-correcting and detecting checksum will typically correct an error where one bit in the block is incorrect

(It is not clear whether by "block" the article means a flash BLOCK or a PAGE, since the previous paragraph uses "block" for pages.)
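The single-bit-correction behavior described here is characteristic of simple Hamming-style codes, which early NAND controllers commonly used (the article does not specify Samsung's actual ECC, so this classic Hamming(7,4) code is purely illustrative):

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits; any single
# flipped bit in the 7-bit codeword can be located and corrected.
def hamming74_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    c = list(c)
    # The syndrome is the 1-based position of a single flipped bit
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 * 1 + s2 * 2 + s3 * 4
    if syndrome:
        c[syndrome - 1] ^= 1   # correct the single bad bit
    return [c[2], c[4], c[5], c[6]]

code = hamming74_encode([1, 0, 1, 1])
code[4] ^= 1                   # simulate one bit burning out in flash
print(hamming74_decode(code))  # [1, 0, 1, 1] -- data recovered
```

A second flipped bit in the same codeword would defeat this scheme, which is exactly the limitation the comment above points out.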
