



The OS and application data are continually becoming easier to fit on today's platters - why not move them to NAND? - Image courtesy Samsung
Do you need a solid-state drive? Samsung says you do, and here's why

DailyTech recently had the opportunity to sit down with Don Barnetson, Samsung's director of flash marketing, to chat about the future of NAND devices.  Specifically, we picked Barnetson's brain about solid-state drives and future NAND storage.

Over the past few months, we've seen dozens of announcements about solid-state hard drives.  PQI has already announced a 64GB flash drive (which, coincidentally, is based on Samsung NAND), while ASUS, Fujitsu, Samsung and SanDisk have all announced products based on solid-state hard drives. Given that the hard drive has been the bottleneck on PC performance for years, the question has to be asked: is solid-state technology ready to take us out of the dark ages of storage?

In the 90s, the largest advocate of more storage was Microsoft.  The company insisted we have larger hard drives for Windows 95, then Windows 98.  The next largest proponent of more storage became the application designers, pleading with users to get larger hard drives for image manipulation or games.  But today, I can fit Vista, Outlook (and all of those 2GB PST files) and even a few games in less than 1/10th of my 250GB hard drive.  The other 100-odd gigabytes are mainly MP3s and a few DVD rips.  I am the prime candidate for a solid-state hard drive.

Most business users use only a fraction of the hard drive space provided to them, especially considering most unique data gets written to a network anyway.  The operating system and applications can all fit in less than 10GB of space, which is well within the sizes of solid-state hard drives today.  Barnetson's group has calculated that during an 8-hour day the average hard drive:
  • Has about a 1% chance of failure per year
  • Consumes 9W
  • Loses about 7 to 15 minutes per day in productivity
That we lose so much time to hard drive spin-ups and seeks is appalling on its own, but decreased power consumption is what is driving solid-state adoption today. A NAND device uses less than 200 milliwatts during reads and writes, and 0 watts when not being accessed.  On the desktop this is relatively unimportant, but on a notebook the hard drive accounts for 10% of the total power draw.  Cutting this number down to less than 1% means an extra 12 minutes of usage on my 2-hour battery.
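The battery arithmetic above can be checked with a quick back-of-the-envelope calculation. This uses only the figures stated in the article (10% HDD share of power, roughly 1% after the swap, a 2-hour baseline); the assumption that runtime scales inversely with power draw is mine, not Samsung's:

```python
# Back-of-the-envelope battery math using the article's figures.
# Assumed model: battery energy is fixed, so runtime scales inversely
# with total power draw.

baseline_minutes = 2 * 60      # 2-hour battery
hdd_share = 0.10               # HDD fraction of total notebook power (article)
ssd_share = 0.01               # fraction after swapping in NAND (article)

power_ratio = 1 - hdd_share + ssd_share        # total draw falls to 0.91x
new_minutes = baseline_minutes / power_ratio   # ~131.9 minutes
extra = new_minutes - baseline_minutes

print(round(extra, 1))  # ~11.9 extra minutes, matching the ~12 quoted
```

The same figures explain the thread below: a simpler straight-percentage estimate (120 min x 10%) also lands at about 12 minutes.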

When asked about the reliability of NAND-based hard drives, Barnetson had no problem shrugging off fears of write corruption or failure.  "Samsung's solid-state devices have a MTBF of approximately 1 to 2 million hours."  Typical disk-based hard drives have a mean time between failures of approximately 100,000 to 200,000 hours.  Since there are no moving parts, the only real points of failure are a component coming unsoldered or a problem with a physical bit during a write.

Obviously, write errors are a huge concern for those who have used flash products in the past.  Only a few years ago the highest-end flash media was only usable for 1,000 or so writes.  At that point the physical bits would "burn out" and could no longer be flipped. Today's single-level cell flash (SLC, memory that stores one bit per cell) is rated in excess of 100,000 writes before burnout.  Multi-level cell (MLC) flash, memory that stores multiple bits per cell, is significantly cheaper, but even then is still rated at over 10,000 writes before burnout.

Is 10,000 writes enough?  Absolutely, assures Barnetson.  Samsung memory uses a technique called "wear leveling" to distribute writes across as many groups of cells as possible. The idea behind wear leveling is that all of the cells receive approximately the same number of writes, maximizing the life of the device.  Consider a typical computer that writes 120 megabytes per hour to the hard drive.  On a 32GB solid-state NAND drive, wear leveling would distribute this data over the entire drive -- it would take 267 hours to fill the device once. Even on a multi-level cell device, at this rate it would take no less than 150 years to burn out all the bits on the SSD.  Single-level cell drives are capable of ten times as many writes.
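The endurance claim can be reproduced from the article's own numbers. This is a rough sketch that assumes perfectly even wear leveling and a constant 120 MB/hour write rate; real workloads and real wear leveling are less ideal:

```python
# Rough endurance estimate, assuming ideal wear leveling and the
# article's figures: a 32GB drive, 120 MB written per hour, and an
# MLC rating of 10,000 write/erase cycles.

drive_mb = 32_000          # 32 GB in decimal megabytes, as the article counts
write_rate = 120           # MB written per hour by a "typical computer"
mlc_cycles = 10_000        # rated cycles for multi-level cell flash

hours_per_fill = drive_mb / write_rate                       # ~267 hours
years_to_wear_out = hours_per_fill * mlc_cycles / (24 * 365)

print(round(hours_per_fill))     # 267, matching the article
print(round(years_to_wear_out))  # ~304 -- comfortably above the quoted 150 years
```

So the "no less than 150 years" figure is conservative under these assumptions; SLC's 100,000-cycle rating would multiply the result by ten again.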

Even so, Samsung's initial solid-state drives are all single-level cell designs.  This first generation of SSDs is prohibitively expensive for most, but Samsung's SSD roadmap already has plans for multi-level cell drives as early as next year, which should bring the cost down considerably.  Additionally, Samsung anticipates announcing drives in capacities of up to 128GB in early 2008.

Solid-state memory will not entirely replace disk drives.  The fact is, media becomes more prevalent each day.  Five years ago, a fringe enthusiast may have had as much as 1GB of MP3s on his hard drive.  Today even the average user may have 100GB of just Lost episodes on their hard drive.  As an intermediate step, hybrid hard drives (hard drives with multi-gigabyte NAND caches) will provide the 2007 stopgap before really big SSDs get cheap.  These drives can load the entire operating system, some applications and even a little bit of user data (like Outlook PST files) onto the NAND.

Our insatiable appetite for media cannot be even remotely matched by the production of NAND memory right now, but for games and operating systems, solid-state devices are here and ready to go.





Stupid maths
By Lonyo on 11/15/2006 6:55:18 PM , Rating: 3
quote:
Is 10,000 writes enough? Absolutely, assures Barnetson. Samsung memory uses a technique called "wear leveling" to distribute the writes on a media through as many groups of cells as possible. Consider a typical computer that writes 120 megabytes per hour to the hard drive. On a 32GB solid-state NAND drive, wear leveling would distribute this data over the entire drive -- it would take 267 hours to fill the device once. Even on a multi-cell flash device, at this rate it would take no less than 150 years to burnout all the bits on the SSD. Single-cell drives are capable of ten times as many writes.


I don't care how long it takes for ALL the bits to burn-out, I want to know how long it takes for one bit to burn out.
One bad bit = corruption = possible lost data. So, it may be 150 years of use before I lose ALL my data, but that's not something I care about, since I may lose the most important stuff first.

What a pointless use of mathematics to inflate numbers in a positive manner.





RE: Stupid maths
By IGx89 on 11/15/2006 7:12:13 PM , Rating: 5
I'm sure they'll have error correction algorithms to ensure data isn't lost; you won't have to worry about that. Most likely flash hard drives would just gradually decrease in capacity over time, similar to floppy disks (when you used Scandisk on them). Believe it or not, CDs devote an entire half of their physical bits to error-correcting bits!


RE: Stupid maths
By hadifa on 11/15/2006 10:27:26 PM , Rating: 2
I am still not convinced that we can trust solid state drives.

How effective is the wear leveling, and what, if any, are its limitations?

There is some data that gets updated a lot. For example, consider some data in a database table. How will wear leveling work for it? If I need to update a bit in a table, does it write the whole memory block or page to another location? That is the only way (I think) to achieve 100% wear leveling, but then what about the performance?

What about the FAT? Will it apply wear leveling to the FAT as well? If it does not, then the location where the FAT is written will wear out much faster. If it does, then it will be interesting to see how.

These and many other issues have solutions already, but I do not know of any that makes flash memory suitable for everyday usage.

Generally it is not easy to update flash memory data. You can see that in its everyday usage. We can copy new data and the wear leveling mechanism will work nicely and spread the data throughout the medium, but update?

Note: Flash memories CANNOT update a single bit or page. To update something you need to erase it first. This may not be a problem until you consider that you can only erase a complete block at a time. So to update a bit you need to write a whole page (i.e. 512 bytes), and to do that you need to erase a block (i.e. 16KB). Current computer applications are not designed with these limitations in mind. Does Samsung claim that their hard drive will be OK no matter the application?
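The erase-before-write limitation the note describes can be sketched as a toy model. The 512-byte page and 16KB block sizes are the ones quoted in the comment; the class and its methods are invented purely for illustration, not any real controller's API:

```python
# Toy model of the update limitation described above: flash can program
# pages, but erasing happens only in whole blocks, so changing one byte
# means buffering, erasing, and reprogramming its entire block.

PAGE = 512           # bytes per programmable page (quoted above)
BLOCK = 16 * 1024    # bytes per erasable block (quoted above; 32 pages)

class ToyFlash:
    def __init__(self, blocks=4):
        self.mem = [bytearray(BLOCK) for _ in range(blocks)]

    def erase_block(self, b):
        self.mem[b] = bytearray(b'\xff' * BLOCK)  # erase sets all bits to 1

    def program_page(self, b, page, data):
        # programming can only clear bits (1 -> 0), never set them back
        start = page * PAGE
        for i, byte in enumerate(data):
            self.mem[b][start + i] &= byte

    def update_byte(self, b, offset, value):
        # to change ONE byte: buffer the block, erase it, rewrite every page
        buf = bytearray(self.mem[b])
        buf[offset] = value
        self.erase_block(b)
        for p in range(BLOCK // PAGE):
            self.program_page(b, p, buf[p * PAGE:(p + 1) * PAGE])

flash = ToyFlash()
flash.erase_block(0)
flash.update_byte(0, 100, 0x42)   # a one-byte update rewrites all 16KB
print(hex(flash.mem[0][100]))     # 0x42
```

This is the write amplification the comment is worried about: one logical byte changed, 16KB physically erased and rewritten.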


RE: Stupid maths
By Trisped on 11/16/2006 11:08:10 AM , Rating: 2
From what I have read they only change the bits that need to be changed, which doubles the life of the bits (since half the time the bits are going to be the same).


RE: Stupid maths
By OCedHrt on 11/16/2006 12:44:35 PM , Rating: 2
I doubt the location of files matter. Wear-leveling should be transparent to the user. Just because the flash memory has reassigned cells to different addresses, it doesn't mean the address used to access them has changed.

Additionally, think of all the defragmenting that won't need to be done!
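The transparency described here comes from a logical-to-physical mapping table inside the drive: the host keeps using the same address while the controller moves data around. A minimal sketch, where the table layout and "least-worn free block" policy are invented for illustration and are not Samsung's actual algorithm:

```python
# Minimal sketch of wear-leveling indirection: the host always uses the
# same logical block address (LBA); the controller silently relocates
# each rewrite to the least-worn free physical block.

class WearLevelingFTL:
    def __init__(self, physical_blocks=8):
        self.wear = [0] * physical_blocks   # erase/program count per block
        self.l2p = {}                       # logical -> physical mapping
        self.data = {}                      # physical block -> payload
        self.free = set(range(physical_blocks))

    def write(self, lba, payload):
        target = min(self.free, key=lambda b: self.wear[b])
        self.free.remove(target)
        if lba in self.l2p:                 # retire the old physical copy
            old = self.l2p[lba]
            del self.data[old]
            self.free.add(old)
        self.wear[target] += 1
        self.l2p[lba] = target
        self.data[target] = payload

    def read(self, lba):
        return self.data[self.l2p[lba]]     # host never sees the relocation

ftl = WearLevelingFTL()
for i in range(100):            # hammer the SAME logical block 100 times
    ftl.write(lba=0, payload=i)
print(ftl.read(0))                          # 99 -- same LBA throughout
print(max(ftl.wear) - min(ftl.wear) <= 1)   # True: wear spread evenly
```

One hundred rewrites of a single address end up spread across all eight physical blocks, which is exactly why file location doesn't matter to the host.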


RE: Stupid maths
By Sulphademus on 11/16/2006 3:51:22 PM , Rating: 2
Lets say you are saving an existing file and have made significant changes: To spread out the wear it writes the file in a different physical location. But to prevent having a duplicate copy of the same file, would it not have to erase (or at least flag) the original file, thus dramatically lowering the 'spread ratio'?

Also, what happens when you do a defrag?


RE: Stupid maths
By hadifa on 11/16/2006 6:56:14 PM , Rating: 2
quote:
what happens when you do a defrag?


Since it is an electronic memory and there is no mechanical part involved, why would you need to defrag? You do not need to defragment a flash memory.

Furthermore, because of the existence of the wear-leveling mechanism that adds an additional layer of abstraction, defragmenting is meaningless.

Note: Hate to contradict myself or confuse anybody, but defragmenting can be helpful in some cases on flash devices, though that is more the exception than the norm.


RE: Stupid maths
By semo on 11/19/2006 12:02:01 PM , Rating: 2
a big reason to do defrag on an hdd is to reduce the number of seeks required to read/write, i.e., reduce the wear and tear.

incidentally that also improves performance on an hdd. it won't improve performance that much on an ssd because seek times are insignificant compared to the transfer rate.

there are some utility apps that defrag ram so i'd imagine there are advantages.


RE: Stupid maths
By Zirconium on 11/20/2006 10:48:05 PM , Rating: 2
quote:
[Defragmenting] won't improve performance that much on a ssd because seek times are insignificant because of the slow transfer rate.

WHAT SEEK TIMES? Solid state disks/drives do not "seek." There is no armature moving over a platter like in a usual hard drive.


RE: Stupid maths
By glennpratt on 11/22/2006 5:19:06 PM , Rating: 2
I don't know of programs that really "defrag" RAM. They did have memory optimizers back in the Win 9x days, but that was mostly hogwash, or making up for poor memory management.


RE: Stupid maths
By hadifa on 11/16/2006 6:38:48 PM , Rating: 2
You are right about the existence of error-correcting algorithm but it can correct data if there is only one corrupted bit in a block and not more.

quote:
The error-correcting and detecting checksum will typically correct an error where one bit in the block is incorrect


http://en.wikipedia.org/wiki/Flash_memory

(Not clear if by "block" the article means a flash BLOCK or a PAGE, since in the previous paragraph it uses "block" for pages.)
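For the curious, the one-flipped-bit correction described in that excerpt is the behavior of a Hamming-style code. Here is a minimal Hamming(7,4) sketch of the mechanism; it is illustrative only, since real NAND controllers apply stronger codes over much larger spans than 4 data bits:

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits. Any single
# flipped bit in the 7-bit codeword can be located and corrected.

def hamming74_encode(d):
    # d: list of 4 data bits -> 7-bit codeword, parity at positions 1,2,4
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    # recompute parities; the syndrome is the 1-based position of the error
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1          # flip the offending bit back
    return [c[2], c[4], c[5], c[6]]   # extract the data bits

cw = hamming74_encode([1, 0, 1, 1])
cw[2] ^= 1                        # simulate one worn-out bit in "storage"
print(hamming74_correct(cw))      # [1, 0, 1, 1] -- recovered intact
```

Two flipped bits in the same codeword defeat it, which is the "only one corrupted bit" limitation the comment points out.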


RE: Stupid maths
By bobsmith1492 on 11/15/2006 7:12:41 PM , Rating: 2
It's called "wear leveling"...

It doesn't write and re-write each bit if it's used more; it spreads the writing around. Theoretically, no single bit will burn out before any of the others since they are all written to an equal number of times.


RE: Stupid maths
By mindless1 on 11/22/2006 12:59:27 AM , Rating: 2
No, in theory and in practice some bits will burn out first. It is not a hypothetical model, unless that model concedes the issues of being a physical device prone to imperfections.

It would be silly to think that right at 1 million cycles (or whichever applies) every bit just fails in succession.

Further, wear leveling cannot come close to what everyone thinks it can do. It would have to keep track of all these millions of writes to do so; instead it will inevitably wear out non-static areas (files) far faster.


RE: Stupid maths
By Golgatha777 on 11/15/2006 11:40:09 PM , Rating: 2
quote:
On the desktop this is relatively unimportant, but on a notebook the hard drive accounts for 10% of the total power draw. Cutting this number down to less than 1% means an extra 20 minutes of usage on my 2 hour battery.

60min*0.10*2 = 12min


RE: Stupid maths
By theapparition on 11/16/2006 7:43:14 AM , Rating: 2
Your math is correct, but not how you applied it. To do a proper calculation, you'd have to know how many watts were being saved, and how many watt-hours the battery held. Then you could do a proper comparison.


RE: Stupid maths
By MAIA on 11/16/2006 8:09:25 AM , Rating: 3
It's an estimate for gods sake ...


RE: Stupid maths
By sxr7171 on 11/16/2006 11:24:00 AM , Rating: 2
Simple. Subtract 1.2 and get 10.8 minutes.


BTW anyone with a laptop that gives only 2 hours battery life should just give up on using it without the AC adapter - it's pointless.



RE: Stupid maths
By KristopherKubicki (blog) on 11/16/2006 11:31:09 AM , Rating: 2
The flight from Chicago to LA lacks those pesky AC adaptors


RE: Stupid maths
By sxr7171 on 11/16/2006 8:54:15 PM , Rating: 2
Well what I was really saying was that if you have a laptop that gives you 2 hours battery life you should leave it at home. If you need a real laptop meant to be used portably, a Thinkpad X60 with up to 8 hours battery life is what should be carried by frequent travelers.


RE: Stupid maths
By Zirconium on 11/20/2006 10:52:01 PM , Rating: 2
Do you own IBM stock?


RE: Stupid maths
By kkwst2 on 11/21/2006 1:43:16 AM , Rating: 2
You mean Lenovo stock? You don't need to own stock in the company to know that they're the best business notebooks available.


RE: Stupid maths
By kkwst2 on 11/21/2006 1:36:33 AM , Rating: 2
Not if you're using it to run CAD or CFD analysis. I'll stick with the T-series, thanks. The X graphics and screen just doesn't cut it for what I need.


RE: Stupid maths
By MAIA on 11/16/2006 8:15:36 AM , Rating: 2
quote:
What a pointless use of mathematics to inflate numbers in a positive manner.


I don't think so. In fact, if anything gets inflated here, it is such uninformed speculation. It's not just wear leveling, but RS-ECC threshold tuning, flash memory scrubbing and fully associative caching, among other features. Please, don't let your ignorance make you look bad...


RE: Stupid maths
By Wwhat on 11/16/06, Rating: -1
RE: Stupid maths
By Wwhat on 11/16/06, Rating: -1
RE: Stupid maths
By MAIA on 11/17/06, Rating: -1
RE: Stupid maths
By Wwhat on 11/17/06, Rating: -1
RE: Stupid maths
By MAIA on 11/17/06, Rating: -1
RE: Stupid maths
By Wwhat on 11/17/06, Rating: -1
RE: Stupid maths
By rushfan2006 on 11/17/2006 2:31:08 PM , Rating: 1
quote:
Oh dear, I was hoping that my post, although annoying to some because it veered off topic even more, would point out that your behavior was transparent to us and it might help you change before you got too entrenched and became one of those types, but if you indeed have "15 years experience" then I guess I'm too late.


Dude Wwhat the hell are you talking about? I don't get your beef? You going off the deepend because the other poster made a perfectly fine post bringing up some interesting points that were on topic to the article at hand?

Who gives a shit if he didn't define them; I read and re-read his post...not a whiff of him trying to offer anything but some insight on the matter.

You are coming off looking like the prick not him....



RE: Stupid maths
By Wwhat on 11/17/2006 2:52:08 PM , Rating: 1
His whole post was a bashing of the previous poster; he just threw in some technical terms to be a wiseass, but did not so much as hint at what he meant by them, because the only purpose of the technical terms was to be a vehicle to bash the poster, not to inform.
The only purpose of his post was to be unpleasant to the poster he replied to.
I fail to see how that is not abundantly clear. Normally I would let the thing pass, because I see types like him making posts like that all the time, but this time I thought it would do some good to point out a few things for a change.

And I'm not off the deep end at all I think, I'm calm and relaxed.

But thanks for your honest post and attempt to understand it and re-reading his post for that purpose and giving me that respect, I appreciate it.


RE: Stupid maths
By sxr7171 on 11/16/2006 11:14:27 AM , Rating: 1
Thank you. Most people just take it without thinking. Someone even downrated you for being more intelligent than them.


RE: Stupid maths
By MAIA on 11/17/06, Rating: -1
RE: Stupid maths
By PandaBear on 11/16/2006 2:42:31 PM , Rating: 4
This is a lie. I worked for another flash memory company and the write/erase cycle of SLC is not close to 100,000 cycles. What I know is around 10,000 erase cycles, and you have to program one page at a time. For the cheaper MLC, which most likely will be what the cheap, large drives are made of, the cycle count is closer to 3,000. Samsung in particular has horrible quality in their MLC and I wouldn't think they have this under control yet as of mid-March this year.

The 100,000-cycle read figure is read-disturb failure, meaning that reading 100,000 times will corrupt the data. Over-programming caused by write/erase is non-recoverable damage, like a bad sector on an HD, except it is a bad page/block instead.

Wear leveling, however, does work very well, and I wouldn't worry about that. Basically the LBA we read/write to is not directly mapped to one block, so wear does get spread over all blocks evenly (i.e. in linked-list fashion).



RE: Stupid maths
By GTMan on 11/18/2006 5:20:39 PM , Rating: 2
What a pointless post! Once a bit is close to burning out it would be marked as bad and not used. Since the bits wear out evenly it would take over 100 years before the first bit wore out. Read the article first next time!

The chances of using the same drive for even 10 years are pretty slim. Something better will come along way before even one bit fails.

Plus if there was an unexpected failure the error correction capabilities would move the data to a good location and mark the bad bit as unusable.


RE: Stupid maths
By blwest on 11/19/2006 12:06:45 AM , Rating: 2
so in 11 days of writing at 2 megs/second your drive is toast. Sounds something like 3 months worth of use.


Let me be the first ..
By ViperROhb34 on 11/15/2006 6:12:44 PM , Rating: 5
To say the obvious..

This is a good idea ! Maybe not for everyone, but a step in the right direction.. paving the path for better, bigger solid state drives..

Imagine, a quieter PC.. with no rotating platters in the HD.. runs cooler.. uses less power.. more reliable.. and.. the big one.. faster access times for data!!




RE: Let me be the first ..
By Aeros on 11/15/2006 6:22:31 PM , Rating: 2
My only concern is I/O speeds... Any mention of improvements in that area?


RE: Let me be the first ..
By Fnoob on 11/15/2006 7:02:05 PM , Rating: 1
That was my concern as well. I work in digital imaging solutions, and I have yet to see a flash memory card of any flavor that comes close to even a decent 7200RPM IDE drive. With even SCSI320 15,000RPM drives still being a bottleneck in today's systems, I don't see flash drives replacing HDDs for a long while to come.


RE: Let me be the first ..
By ydgmdlu on 11/16/2006 12:34:20 AM , Rating: 3
Actually, flash memory is significantly faster than magnetic storage (i.e. current HDDs). The limitations that you're experiencing are due to the interface with the memory, not the memory itself.


RE: Let me be the first ..
By GoatMonkey on 11/16/2006 8:57:11 AM , Rating: 4
It should be obvious that a solid state drive can be made to perform much better than any traditional hard drive. You're expecting the performance to be the same as what you get from your SD memory card or thumb drive. This is not the same thing.

Think more along the lines of the Gigabyte RAM Drive that was released a while back. That's not exactly the same either, since it's using regular RAM with a battery backup, but that type of drive is capable of filling up the entire SATA bandwidth, while a regular hard drive can't come close to that.

Of course, I have no information about the actual performance of these Samsung drives, but you can bet that they won't be slower than a regular hard drive; that would just defeat the purpose.


RE: Let me be the first ..
By caater on 11/15/2006 8:26:30 PM , Rating: 3
THG reviewed this samsung SSD a few months ago here - http://www.tomshardware.com/2006/09/20/conventiona...
it has an ata/66 interface and in tests was limited by the interface only.
but even that 50MB/s all through the "disc" is very impressive.
and i can see no reason why they wouldn't produce a unit limited only by the SATA or even SATA-II interface.


RE: Let me be the first ..
By mindless1 on 11/24/2006 3:34:44 AM , Rating: 2
False. ATA66 max realized throughput is higher than 50MB/s. That is a device (SSD) limitation, not the interface.

The reason why they wouldn't is fairly easy to see. There's no new technology here to have a SSD. It's only a matter of packaging, to take an existing CF3.0 class controller, put this all on a larger card in a plastic shell with IDE spaced pinout.

To go with SATA or SATA-II would have minimal to no benefit, and they would have to develop a new controller. It's already obscenely expensive as it is; there is no real justification for these to cost more per GB than a CF card, particularly when buying so many GB at a time from a memory manufacturer.


RE: Let me be the first ..
By leidegre on 11/16/2006 3:21:20 AM , Rating: 2
I've been using a kind of odd setup in my computer where I have a RAID0 array + a 320GB drive (with 16MB cache). I usually run the OS and performance applications, such as games, off the RAID0, and that has proven to be very efficient.

Still, this solid-state stuff looks interesting, because it would not only save power, but could be a performance device as well. There are some enterprise server storage solutions based on SSD, and they claim that they outperform any other solution by years. And if that is true, which I do not doubt, then SSD will certainly be able to ramp up I/O performance. Also, hard disks make a lot of noise and produce heat; is that the case with SSD? (I'm not sure, but I think it's considerably less.)


the age of consumer ssd
By fc1204 on 11/16/2006 9:38:36 AM , Rating: 4
I am very excited to see all the comments about this article. Yet I am a bit disappointed that so many of the HDD fans have not looked into the current applications of SSD and how an advanced technology is so poorly understood.
If you piloted a current US Air Force jet... your data recorder is at least one SSD; the USAF does not want their pilots losing the most important information - and probably the pilot and the plane - due to HDD failure. Boeing and Airbus, the planes that most people in the western world fly in, use SSDs as the data recorder. Why? Because they fail less.
In fact, the only reason why your notebook PC does not have SSD inside instead of your HDD, is because most people can't afford the SSD technology.
The endurance rating of 100,000 write/erase cycles for SLC flash is per bit. The MTBF and FIT information can be found on some of the industrial SSD makers' websites - SiliconSystems has some decent white papers.
There really has to be a reason why industrial PCs have been using SSDs as well. I mean, it's really all about how much safer data is in a solid-state drive than in a fragile mechanical device. Where do you think the most important data in your computer - the BIOS - is stored? A relative of flash memory, not a relative of the HDD. As a matter of fact, Intel is expanding that idea with Robson.
Don't get me wrong, I use HDDs; they store massive amounts of information, but I normally do not store it all in my notebook. The spinning of the HDD kills my usage time, which is one reason you probably don't own a 1" or smaller microdrive for your MP3 player, DSC or cell phone - the drain will eat up your battery's mAh.
Taking SSDs seriously is the only way for those of you who always wanted a laptop to last more than 3 hours free of the power cord - that, or you use the third-world laptop with the crank.
I sincerely hope all of you will get excited about this consumer SSD idea, since the only way I will ever get to buy a laptop that can be both mobile and productive depends on all of you.




RE: the age of consumer ssd
By Xietsu on 11/16/06, Rating: -1
RE: the age of consumer ssd
By sxr7171 on 11/16/2006 11:39:09 AM , Rating: 2
No, he's right. You simply cannot see beyond your nose.


Hybrid drives and Robson technology is really the way of the future especially for ultraportable laptop users like myself. I want fast loading and the ability to use the laptop without HDD spinup for hours while I am doing basic editing of Office files. That battery will last for 10-12 hours and perform well without any lag.


RE: the age of consumer ssd
By sxr7171 on 11/16/2006 11:39:52 AM , Rating: 2
I mean "are."


RE: the age of consumer ssd
By mindless1 on 11/24/2006 3:44:58 AM , Rating: 2
You are deluded and spewing nonsense.

If you had some, oh, facts, you'd realize the HDD is already consuming far less power than you seem to think it is, and if you are doing basic editing and have set your HDD to continue spinning, you are your own cause for the problem you pretend SSD drives would solve.

10-12 hours runtime has nothing to do with a HDD, except as a minor consumer of power in an entire platform. Consumers could have chosen the least power hungry CPUs, chipsets, etc, and smaller screens, but did they? Performance drives the market and causes your battery to last only 4 hours or so, not the hard drive.


RE: the age of consumer ssd
By hadifa on 11/16/2006 7:42:42 PM , Rating: 2
You are right and wrong. There are applications where SSDs are much better. It depends on the nature of the application. Storing songs, recording data where there is motion and disturbance, etc. are great uses for SSDs, but this does not translate to every application in every scenario. The everyday usage of an HDD can be a world apart from those applications. If we needed HDDs just to record data with minimal updating - similar to your examples - then SSDs would be very reliable, and they can easily exceed HDDs when there is movement involved as well.

There are many applications that need to change the previously written data. In these cases the SSDs will perform very poorly. You cannot generalize based on your examples.


RE: the age of consumer ssd
By fc1204 on 11/17/2006 12:02:53 AM , Rating: 2
Well, that's one of the reasons why people need to read up on the technology available.

There are SSDs based on DRAM and SSDs based on flash. And then flash can be broken down into NOR, NAND, and AND/AG-AND. So this is similar in part to the HDD's coming of age in the early 80s, when there were 20-30 makers.

I think that every company that has access to DRAM and Flash will be coming out with their version of the SSD- based on their own vision of the future and it is the consumers that will ultimately determine which companies come out with the best solution.

The more educated we are, the better choices we make.


Random Thoughts
By Trisped on 11/16/2006 11:33:02 AM , Rating: 3
Forward compatibility: we buy bigger hard drives so we don't have to go out and buy another one till we get a new PC. When I bought my 120 I didn't have more than 10GB of data to put on it. Now I have 2 120s with about 120+GB of data across the two of them. The drives are about 3 years old and should last another 2.
The data reported is not specific to the gaming population, as games often take up a major section of our hard drives. If they are saying that we only need solid-state storage for a small number of things, and standard HDDs for the rest, I agree, but gamers need more than most solid-state devices will provide. Cost is also prohibitive, and space will be a major limiting factor for home users a few years after the drive is purchased.

I have XP, Vista, MS Office 2007 and 10 games taking up 120+ GB. While I don't have to have all the games installed at the same time, it is nice, when someone asks me to play, if I don't have to uninstall a game and install a new one to do so.

"Even on a multi-cell flash device, at this rate it would take no less than 150 years to burnout all the bits on the SSD. Single-cell drives are capable of ten times as many writes."
But, if you have 1 bad bit in a sector (or whatever you call the memory groupings) then the whole thing is unusable because that one bit will not be correct. So you don’t have to wait for every bit, but for a few, like 1% of them to go bad. Then you would have lots of good, but unusable bits, and no remaining storage. This will still probably give you 5-10 years of use before it goes bad, but not 150.




RE: Random Thoughts
By Schrag4 on 11/16/2006 1:51:18 PM , Rating: 2
I tend to agree that the math doesn't quite work out for the 150 year figure they give. Hopefully it would last long enough to at least rival what we have today, which may last long sometimes, but overall is very unpredictable.

So do bits only fail when they're being written to? And if so, would the 'drive' be able to detect the failure and write to a different byte/block/whatever? If that's the case, I see this as a HUGE step up from our mechanical drives which typically go down pretty hard when they do go down. It would mean that you won't actually lose any data when a bit wears out. Rather, you'll just lose unused space on the drive. Am I way off here?

As far as read/write speeds, can't they RAID the chips like crazy? I know that's already been speculated. It seems like it would be easy to accomplish...

FYI, I find this very exciting...as soon as it becomes affordable.


RE: Random Thoughts
By hadifa on 11/16/2006 7:20:48 PM , Rating: 2
quote:

So do bits only fail when they're being written to?

yes
quote:

would the 'drive' be able to detect the failure and write to a different byte/block/whatever?

Yes
quote:

I see this as a HUGE step up from our mechanical drives

Is it? If it is, then FDDs were a huge step ahead of HDDs in terms of reliability.

quote:

It would mean that you won't actually lose any data when a bit wears out. Rather, you'll just lose unused space on the drive. Am I way off here?


Hmmm, because of the correcting algorithm used, you won't lose any data if only ONE bit wears out. The block will be marked as bad and the data will be transferred to another block.

Note that because of the wear-leveling mechanism used, after the first bit corrupts, it is very likely that others will follow fast (in future writes, of course) and it is downhill from there.


RE: Random Thoughts
By zsouthboy on 11/16/2006 2:23:29 PM , Rating: 2
Please do not read his last paragraph. It is uninformed.


What part of bit-remapping don't you understand? You don't "lose" one bit in the middle and lose the entire "sector", as you say.

The device also knows how many writes it has taken. More than likely, long BEFORE you ever lose data, you would get a warning that cells are near the end of their rated life.


RE: Random Thoughts
By Larso on 11/16/2006 3:26:25 PM , Rating: 2
quote:
But, if you have 1 bad bit in a sector (or whatever you call the memory groupings) then the whole thing is unusable because that one bit will not be correct. So you don’t have to wait for every bit, but for a few, like 1% of them to go bad. Then you would have lots of good, but unusable bits, and no remaining storage. This will

So, if you make just one nano scratch on a CD to hurt a bit, a complete sector on the CD is unreadable? You guys really need to learn something about error correcting codes. Read on: http://en.wikipedia.org/wiki/Error-correcting_code


RE: Random Thoughts
By hadifa on 11/16/2006 7:32:03 PM , Rating: 2
quote:

quote:

But, if you have 1 bad bit in a sector (or whatever you call the memory groupings) then the whole thing is unusable because that one bit will not be correct. So you don’t have to wait for every bit, but for a few, like 1% of them to go bad. Then you would have lots of good, but unusable bits, and no remaining storage. This will

So, if you make just one nano scratch on a CD to hurt a bit, a complete sector on the CD is unreadable? You guys really need to learn something about error correcting codes. Read on: http://en.wikipedia.org/wiki/Error-correcting_code

We are talking about NAND flash memory and not the CD. Apart from some of the terms, he is right about the concept.

http://en.wikipedia.org/wiki/Flash_memory

Quote from the article:
quote:

The error-correcting and detecting checksum will typically correct an error where one bit in the block is incorrect. When this happens, the block is marked bad in a logical block allocation table, and its (still undamaged) contents are copied to a new block and the logical block allocation table is altered accordingly.


What we need...
By Aikouka on 11/16/2006 8:59:34 AM , Rating: 1
What we really need is a lifetime graph. The graph would show bits available (preferably in GB with 3 decimal points or so) against time. Now, this would have to be based on some sort of average use, but what it could show is that even over the course of say 10 years, a 64GB SSD drive would only deteriorate into say... a 42GB SSD drive (note these are just purely numbers I made up, I have no idea what it would be). I think this would really show people what the power and usability of the drives are.




RE: What we need...
By KristopherKubicki (blog) on 11/16/2006 10:58:27 AM , Rating: 3
Due to wear leveling, all of the bits would fail at the same time.

If you continuously wrote 1MB to a 1GB stick, wear leveling would distribute the bits that are written so that every bit would be written to once before one bit is written twice.

If you wrote 1MB to a 1GB stick every second, with 10,000 writes per bit before burnout, you would be able to write for about 1,000 * 10,000 seconds, or 115 days continuously. Obviously if you're using higher quality SLC media, that figure goes up by a factor of 10.

Kristopher
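Kristopher's figures can be checked with a quick back-of-envelope script (the 10,000-cycle endurance is the MLC-class assumption from the thread, not a measured number):

```python
# Lifetime under perfect wear leveling: writing 1 MB/s to a 1 GB device
# means every cell is written once per ~1024 seconds, and each cell
# survives about 10,000 program/erase cycles.

capacity_mb = 1024       # 1 GB stick
write_rate_mb_s = 1      # continuous 1 MB/s of writes
endurance = 10_000       # cycles per cell (assumed MLC-class figure)

seconds_per_full_pass = capacity_mb / write_rate_mb_s
lifetime_days = seconds_per_full_pass * endurance / 86_400

print(f"{lifetime_days:.1f} days")   # about 118.5 days; ~115 with 1000 MB/GB
```

With the factor-of-10 SLC endurance, the same sum gives roughly three years of continuous 1 MB/s writes.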


RE: What we need...
By Aikouka on 11/16/2006 12:22:55 PM , Rating: 2
Not necessarily. Your reasoning is flawed in that it assumes no data will ever remain the same; or, you could say, it only looks at free space. I look at these SSDs and think "operating system". Operating system files see a decent amount of read access, but they don't tend to change too often (save viruses and such ;)).

Looking at it this way, you'll see that a lot of files may end up "blocking" the wear leveling algorithm and only allowing it to use a certain amount of space. Therefore, there's less time until these bits would be used over again and you'd begin to see more bits fail depending on how much space is available.

Here's a little ASCII diagram:

[1-1-1-1-1-1-5-5-5-4-4-4]

This is very small in comparison (obviously) to a real SSD, but it shows how the first group has only ever been written once and never changed, but this causes the later section to become the more volatile part. At this rate, you'd possibly end up losing 25-50% of your drive at some point. Of course I don't expect a system file to never be altered, but I'd expect it to be altered significantly less than say... the contents of your Temporary Internet Files folder.

There is no real method around this blocking issue unless you rewrite the bits that're being "blocked" to the same value, but why would you create an algorithm to cause the drive to last a shorter duration just for uniformity?
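The "blocking" concern above can be put in numbers with a small sketch (all figures are made up for illustration, and the function name is mine):

```python
# If static files pin down part of the device and are never relocated,
# only the free region absorbs write cycles. Compare that against a
# leveler that can move cold data and spread wear over the whole drive.

def region_lifetime_years(leveled_gb, writes_per_day_gb, endurance=10_000):
    """Years until the cells in the leveled region hit their cycle limit."""
    fills_per_day = writes_per_day_gb / leveled_gb
    return endurance / fills_per_day / 365.0

# 64 GB device, 32 GB pinned by static OS/app files, 5 GB written per day:
pinned = region_lifetime_years(64 - 32, 5)  # leveling over free space only
whole = region_lifetime_years(64, 5)        # leveler also relocates cold data
# whole / pinned == 2.0: pinning half the drive halves its write lifetime
```

Either way the absolute numbers are large at this write rate; the point is the ratio, which is why a leveler may eventually relocate even static ("cold") data.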


RE: What we need...
By zsouthboy on 11/16/2006 2:17:25 PM , Rating: 3
Just because the data doesn't change doesn't mean the memory can't move it around at will.

Enough with the FUD for flash, everyone (not picking on you).

Hilarious that people are worried about the reliability of their data; we are all using hard drives right now, right? Do you *know* how many times your drive has gone, "Eh, this data has had to be corrected a few times. I'll mark it off and move the data elsewhere."? Because that is what's happening, right now, transparently.


RE: What we need...
By Aikouka on 11/17/2006 1:16:24 AM , Rating: 1
Already thought of that, and then I realized this:

Why the hell would an algorithm be programmed to waste more of the lifetime (i.e. unnecessarily moving a file to another location just to even out the lifespan)? That's just stupid, illogical and wasteful.


RE: What we need...
By mindless1 on 11/24/2006 3:37:09 AM , Rating: 2
Yes, it certainly does mean it can't move it around, because that would halve the performance.


RAIDed SSDs?? droool....
By ninjit on 11/16/2006 1:21:35 AM , Rating: 2
I used to think that the main drawback of flash-based storage over magnetic hard disks was sustained/sequential read/write speed, and that its advantage was random read/write access.

But many of the above comments say that's not the case anymore, and looking around the web, I gather that's because of parallelized operations within the SSD itself - i.e. if you wrote 64 kB to the device, maybe 1 kB would be written to 64 components simultaneously - internal RAIDing.

Imagine then taking 4 of these things and putting them into your own RAID 5 system - this would be great for video editing, you could jump around to any frame within a multi-GB file almost instantly.




RE: RAIDed SSDs?? droool....
By tsukasa on 11/16/2006 2:46:19 AM , Rating: 2
Did they say that the newer single cell types got 100k hrs?
that would provide a pretty decent lifetime for those types. 11-12 Yrs?



RE: RAIDed SSDs?? droool....
By fc1204 on 11/16/2006 9:46:11 AM , Rating: 3
FYI, the PQI 32GB/64GB drives utilize a RAID controller to get the increased performance and capacity. It's a bit expensive to buy one and bust it open to see for yourself, so you should just trust me on that one.


digital multi track recording
By kevinkreiser on 11/15/2006 7:09:44 PM , Rating: 3
I use my PC for digital recording because I'm in a band. Oftentimes I experience a lot of jitter when I play back a song with multiple tracks, because the hard drive has to simultaneously stream data from each track. I think the faster access times of solid-state drives will improve this latency and make it much easier to have songs with 16 or more tracks playing at once. So yeah, this would be great for digital recording.




RE: digital multi track recording
By saratoga on 11/16/2006 8:34:58 PM , Rating: 3
Let's assume you're recording 192 kHz / 24-bit PCM times 32 mono channels. That's about 18 MB/s. Let's also say you can afford to buffer 10 MB of data for each stream, and we'll further assume you've recorded your streams as stereo channel pairs on the hard disk. That means you'll need to read in a new stream every 18 seconds to avoid buffer underruns. Let's also assume your HD can sustain 50 MB/s reads and has a seek time of 15 ms; neither figure is all that great. Reading in one channel pair to fill its buffers takes 15 ms + 20 MB / (50 MB/s) = 415 ms. Times 16 stereo channels, that means 6.6 seconds to read in 18 seconds of audio. In other words, your slowish HD is fast enough to handle more than 96 channels.

Furthermore, while reading, seeks occupy only 4% of your disk time, so eliminating them would only let you add 3 channels worth of audio or so.

If you're getting jitter at 16 tracks, it's not the hard disk. My first guess would be lack of RAM. The instant you hit the page file, you're going to queue up reads and drain your buffers; make sure that doesn't happen. My next guess would be a crappy IDE controller that can't handle multiple concurrent requests very well. In this case a high-end controller, or even SCSI, would help.
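saratoga's arithmetic checks out; here it is as a script so the assumptions are explicit (the buffer size, seek time, and transfer rate are his hypotheticals, not measurements):

```python
# 32 mono channels of 192 kHz / 24-bit PCM, streamed in stereo pairs
# from a disk with 50 MB/s sustained reads and 15 ms seeks.

MB = 1_000_000
channels = 32
bytes_per_channel_s = 192_000 * 3            # 24-bit mono at 192 kHz
pair_buffer = 2 * 10 * MB                    # 10 MB buffered per mono stream

buffer_secs = pair_buffer / (2 * bytes_per_channel_s)  # ~17.4 s per refill
refill_secs = 0.015 + pair_buffer / (50 * MB)          # seek + transfer, ~0.415 s
disk_busy = (channels // 2) * refill_secs              # ~6.6 s of disk work

# The disk needs ~6.6 s of work per ~17.4 s window, so 16 stereo pairs
# leave plenty of headroom, and seeks are a small share of the busy time.
```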


RE: digital multi track recording
By mindless1 on 11/22/2006 1:04:57 AM , Rating: 2
Let's assume your arbitrary idea that everything was done as you described, is untrue.

Why assume it? Because the problem itself is evidence.


By sabrewulf on 11/15/2006 7:16:24 PM , Rating: 1
As soon as I saw that chart title: "How is HDD storage used?" I thought sure there'd be a pr0n reference.




By sxr7171 on 11/16/2006 11:30:34 AM , Rating: 2
On a cheap useless laptop perhaps 2 hours is the norm, but on a decent ultra portable you can get 7-9 hours and this would raise that by 38-49 minutes.


By Spivonious on 11/16/2006 1:08:41 PM , Rating: 4
Read the graph again:

The OS takes up 1/3 of the used space on hard drives, not 1/3 of the total space. The used space is less than 10% of the total drive space. So on a 250GB drive, 25GB are used, with about 8GB devoted to the OS, 3GB devoted to apps, and 14GB devoted to user files.


A long way off
By Dragen on 11/15/2006 7:15:55 PM , Rating: 2
It pleases me greatly to see us moving in a direction where we leave magnetic storage behind. Technical issues aside, the general public (not just tech freaks) demands more space than this new solid state has to offer. It's not so much about the operating system vs. hardware (like it was back in '95); it has shifted to being able to store, index, search, and play massive media libraries.

iTunes, Media Center, mp3's, videos, rendering, are all things the general public uses and demands. And while this Samsung tech is way cool, what do you say about Enterprise Database Servers?

The day I see a company running a database on a solid state device, reliably, I will sit happy. But until then, completely abolishing magnetic storage is a pipe dream.

Still cool to think about nonetheless..




RE: A long way off
By Regs on 11/16/2006 7:55:15 AM , Rating: 2
Completely agree. The hard drive has been the only "real" bottleneck for the past 5 or so years. Faster load times, faster data prefetching, and a whole slew of other possibilities are what it will bring in the future. The computer revolves around the hard drive.


RE: A long way off
By sxr7171 on 11/16/2006 11:34:16 AM , Rating: 2
Hybrid is the way to go. Maybe about 16-32GB on NAND and 200GB on magnetic disk all in a 2.5" form factor. OS and frequently used data on the NAND (with a backup on the HDD for peace of mind) and all the rest of the data on the HDD. In fact having a backup on the HDD should allow for simultaneous read of OS files from both sources for faster loading.


Does size really matter?
By Fnoob on 11/15/2006 9:29:57 PM , Rating: 2
Considering that we can now fit ~64GB or more on a tiny flash card... how much could be stored in a traditional 3.5" form factor drive? Grins.




RE: Does size really matter?
By Tamale on 11/15/2006 10:55:51 PM , Rating: 2
This is what I want to know. 64GB in a CompactFlash size should be equivalent to at least 2TB in a standard 3.5" hard drive size, by my rough (and pretty generous) estimate of cramming 36 CompactFlash cards into the same space (4x3x3).
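The estimate is easy to check (the 4x3x3 packing is Tamale's guess, not a real product layout):

```python
# 36 CompactFlash-sized 64 GB cards crammed into a 3.5" drive volume.
cards = 4 * 3 * 3
total_gb = cards * 64
print(total_gb)   # 2304 GB, i.e. a bit over 2 TB
```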


RE: Does size really matter?
By saratoga on 11/16/2006 8:19:45 PM , Rating: 2
The memory itself has a thickness measured in microns. If you didn't care about cost, you'd probably be able to fit thousands of flash chips in that kind of volume since they're so thin.


Maths...
By oTAL on 11/15/2006 11:38:30 PM , Rating: 2
quote:
on a notebook the hard drive accounts for 10% of the total power draw. Cutting this number down to less than 1% means an extra 20 minutes of usage on my 2 hour battery.


Sure... cause 2hours * 0.10 = 0.2 hours = 20 minutes...

..duh...




RE: Maths...
By KristopherKubicki (blog) on 11/16/2006 12:22:12 AM , Rating: 1
It seems as though my state-funded math degree has failed me :( Let the lashings commence.


RE: Maths...
By Aikouka on 11/16/2006 8:45:36 AM , Rating: 1
Bah, it's a well-known fact that as you progress into higher maths, you tend to "forget" your lower maths. You wouldn't believe how many Computer Science students I went to college with who couldn't add correctly but could do derivatives in their head :P. It seems we defragged our brains and put the simple math all the way in the back ;).


RE: Maths...
By CascadingDarkness on 11/16/2006 4:37:16 PM , Rating: 2
It's true, I know a guy who works with a team designing formula one engines, but he can't do simple math without a calculator.


So how fast are these SSD going to be?
By Byte on 11/16/2006 7:48:41 PM , Rating: 2
Transfer rates seem to peak at about 15MBps and 30 when you dual channel it, which isn't too bad, but current HDD have some pretty insane sustained rates. How are access times?




RE: So how fast are these SSD going to be?
By saratoga on 11/16/2006 8:35:49 PM , Rating: 2
A little slower than main memory.


By mindless1 on 11/22/2006 1:00:52 AM , Rating: 2
A small fraction of main memory, well under 25%


Page file
By Great Googly Moogly on 11/17/2006 4:17:38 AM , Rating: 2
Did everyone just forget the page file here or what? Page file writes will most certainly be the bane of these drives.

And the ultra conservative 120 MB/hour figure quoted here (for ebaying/emailing parents and the elderly, or kids who only play games) ignores any kind of writing to the page file.




RE: Page file
By Aikouka on 11/17/2006 8:47:30 AM , Rating: 1
Nope... didn't forget, but you can change where the Pagefile resides with ease :). Set it to a normal hdd and have a decent amount of ram so you'll typically see less access (it'll pretty much never be none since Windows loves the pagefile).


RE: Page file
By Great Googly Moogly on 11/20/2006 4:55:11 AM , Rating: 2
I know this.

But if you're just going to have the SSD as a complement to a regular magneto-mechanical HDD, then what's the bloody point? The fact that you can't have a page file on the drive nullifies most pro-SSD arguments mentioned in this newsblurb and the comments.


i know why they think we need it
By Quiksel on 11/15/2006 6:27:20 PM , Rating: 4
quote:
Do you need a solid-state drive? Samsung says you do, and here's why

It costs a lot of money to get one! That's one reason!

Personally, I can't wait to get one at a decent price. While the tech is there, the pricing ISN'T. The tech is ready for primetime, but the economics ARE NOT.

They are going to be FABULOUS for notebooks and corporate computers. Speed, power usage, etc., are very desirable. But not at current prices.




SSD VS HDD
By hadifa on 11/16/2006 8:22:46 PM , Rating: 3
Looking at the comments, I thought it would be good to have a comparison between HDDs and SSDs, so I compiled this list, trying to include the main points most relevant to the topic and ignore the smaller differences. I am not claiming this is a complete list, but I hope some people find it useful.

SSD VS HDD
PROs:
1-Easier to parallelize
2-Better performance (potentially)
3-Less power consumption
4-No mechanical parts
No noise
No mechanical part failure
No data corruption because of movement
5-Can provide more flexible physical shape (The chips can be made into custom boards)

CONs:
1-More expensive per GB.
2-Worse performance (currently).
3-Difficulty with data updates.
4-Wearing out based on number of writes.
5-For usage in PCs, it is not as mature as HDDs! OSs, file systems and many applications are optimized for HDDs.

NAND characteristics:
Read granularity of a page (e.g. 512 bytes)
Write granularity of a page
Erase granularity of a block (e.g. 32 pages)
No write or overwrite can be performed without an erase.
Limited to about 100,000 writes.

Note: For SSD I had the nand flash memory in mind.
Note: There are many different NANDs available and some of them have some slight twists to them, which might blur the validity of some of the points.
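The page/block constraints in that list can be captured in a tiny model (the page and block sizes are the example values from the list, and the class is illustrative):

```python
# NAND constraints: program at page granularity, erase at block
# granularity, and never program a page twice without an erase.

PAGE_SIZE, PAGES_PER_BLOCK = 512, 32
ERASED = b"\xff" * PAGE_SIZE     # erased NAND reads back as all ones

class NandBlock:
    def __init__(self):
        self.pages = [ERASED] * PAGES_PER_BLOCK
        self.programmed = [False] * PAGES_PER_BLOCK
        self.erase_count = 0     # this is the count that wears the block out

    def program(self, page_no, data):
        if self.programmed[page_no]:
            raise RuntimeError("overwrite without erase is not allowed")
        if len(data) != PAGE_SIZE:
            raise ValueError("writes happen a full page at a time")
        self.pages[page_no] = data
        self.programmed[page_no] = True

    def erase(self):
        # The whole 32-page block is wiped at once, even to update one page.
        self.pages = [ERASED] * PAGES_PER_BLOCK
        self.programmed = [False] * PAGES_PER_BLOCK
        self.erase_count += 1
```

The erase-a-whole-block-to-change-one-page rule is the "difficulty with data update" in the CONs list, and erase_count is what the ~100,000-cycle limit applies to.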




By darrenf on 11/18/2006 8:52:11 PM , Rating: 2
SSD and KNOPPIX: it's read-only (well, this could be altered to 99%, 98%...).

Everything else is in RAM until the end of my session, where I can accept the updates or not.




By AndyFlysMicrolites on 11/19/2006 8:31:29 PM , Rating: 2
Bit confused about wear leveling. If the wear-leveling mechanism changes which physical cell an address maps to, where does the actual translation map sit? Is it also held in flash cells, and how does it get wear-leveled? Sort of a chicken-or-egg thing. It would seem that for this leveling to work there must be at least some constant cells that provide the pointer to the translation table. Wouldn't those constant cells be more likely to wear out first?
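A sketch of the indirection in question (an illustrative toy, not any vendor's actual firmware): host addresses are logical, and a translation table points at whichever physical block currently holds the data. Real controllers typically avoid the chicken-and-egg problem by storing per-block metadata alongside the data in flash and rebuilding the table in RAM at power-up, rather than keeping the map in fixed cells.

```python
# Logical-to-physical remapping with a least-worn-first allocation rule.

class WearLeveler:
    def __init__(self, n_logical, n_physical):
        assert n_physical > n_logical          # need spare blocks to rotate
        self.map = {i: i for i in range(n_logical)}   # logical -> physical
        self.free = list(range(n_logical, n_physical))
        self.writes = [0] * n_physical

    def write(self, lba, store, data):
        # Write to the least-worn free block, then recycle the old one.
        new = min(self.free, key=lambda b: self.writes[b])
        self.free.remove(new)
        old = self.map[lba]
        store[new] = data
        self.writes[new] += 1
        self.map[lba] = new
        self.free.append(old)    # the stale copy becomes free space
```

Repeated writes to one logical address rotate over the free pool instead of hammering one block; the map itself lives in RAM here, which is exactly the part real firmware has to persist via in-flash metadata.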




Built for speed?
By ceefka on 11/21/2006 5:53:07 AM , Rating: 2
If the NAND flash SSD is able to use the full 1.5/3 Gb/s spec of the SATA port, then I'd be interested.

Imagine a 128GB drive in the hot-swap drive bay of your HD camera.

Playing sampled pianos and drums would be great with a 64GB SSD. You don't have to write that much; just put it on there and you're good to go, FAST.




New Math & Defrag
By mindless1 on 11/22/2006 1:39:19 AM , Rating: 2
The key to getting this wear leveling to work well enough not to exceed the expected write life of the cells is to spend several times as much on your SSD so that it is mostly free space. If you have a 36GB drive that is 30GB full, obviously the leveling can only occur on the remaining 6GB.

It cannot be true wear leveling that writes each cell the same number of times; it would have to keep track of all writes to do this, since a filesystem is dynamic in which data is retained - we aren't wiping the drive and starting over clean when actually using it. I'll ignore this factor, though, as I don't have the ability to do an equation that would approximate the toll it might take; of course it would vary per use, user, data, etc. - too many variables.

Some power users can easily write 512MB/hr. so the remaining space had one cycle _minimum_ in 12 hours.

NOW, if it takes 150 years to burn out all the cells at 267 hours per fill but we're filling the remaining space in 12 hours, let's do the math.

267 / 12 = 22.25
150 years / 22.25 ≈ 6.74 years (to burn out ALL of those).

To those dismissing defragmenting their SSD: we have a reason! That reason is so the wear leveling can write to those lesser-used cells too, instead of constantly to the ones that were free. Even when defragmenting, since the leveling isn't perfect, some files will be rewritten onto areas that formerly held static files, so the new free space will still contain some well-exercised cells. You'll have to defrag more often to significantly change the odds there.
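The sums above, reproduced as a script (all inputs are mindless1's assumptions: a 36 GB drive with 6 GB free, 512 MB/hr of writes, and a 150-year life if the whole drive leveled evenly at one fill per 267 hours):

```python
# Free space shrinking from the whole drive to 6 GB multiplies the
# cycle rate on the remaining cells and divides the lifetime to match.

full_drive_years = 150.0
hours_per_full_fill = 267.0
hours_to_fill_free = 6 * 1024 / 512      # 6 GB free at 512 MB/hr = 12 hours

speedup = hours_per_full_fill / hours_to_fill_free    # 22.25x the cycle rate
free_space_years = full_drive_years / speedup         # about 6.7 years
```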




Copyright 2014 DailyTech LLC.