
Fewer reasons remaining against SSD adoption

Limited write endurance is one of the factors that detractors bring up with regard to solid state drives (SSDs). Most NAND flash chips using multi-level cell (MLC) technology in SSDs have a write endurance of around 10,000 cycles. Wear-leveling technology mitigates the problem in larger SSDs (120GB and up), but smaller drives have fewer cells to spread writes across and will reach the upper limit much sooner.
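To put those cycle counts in perspective, here is a quick back-of-envelope sketch. The workload and write-amplification figures below are illustrative assumptions, not vendor specifications:

```python
# Back-of-envelope sketch: years until a drive's rated program/erase cycles
# are exhausted, assuming ideal wear leveling spreads writes evenly.

def ssd_lifetime_years(capacity_gb, pe_cycles, writes_gb_per_day, write_amplification=1.0):
    total_writable_gb = capacity_gb * pe_cycles / write_amplification
    return total_writable_gb / writes_gb_per_day / 365

# A 60 GB MLC drive (10,000 cycles) vs. a 60 GB SLC drive (100,000 cycles),
# both written at an assumed 20 GB/day with a pessimistic 3x write amplification:
print(round(ssd_lifetime_years(60, 10_000, 20, 3.0)))   # → 27 years
print(round(ssd_lifetime_years(60, 100_000, 20, 3.0)))  # → 274 years
```

Even with the pessimistic assumptions, the rated cycles outlast the useful life of the machine; the gap only matters under enterprise write loads far heavier than 20 GB/day.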

That issue is why almost all SSDs aimed at the corporate and enterprise market use single-level cell (SLC) flash chips, which typically have a write endurance around the 100,000-cycle mark. These include Intel's X25-E, OCZ's Vertex EX and Agility EX series, and Super Talent's MasterDrive RX series.

Micron Technology is one of the key partners, along with Intel Corporation, in IM Flash Technologies (IMFT). IMFT produces the 34nm NAND flash used in Intel's second-generation X25-M SSDs, which use 2-bit-per-cell MLC chips. Micron and IMFT have been working on improving the write endurance of their NAND chips, and they have now reached a breakthrough.

“By leveraging our mature 34nm NAND process, Micron has developed Enterprise NAND products that support customers’ high-endurance requirements. These products ensure that enterprise organizations have a highly reliable NAND flash solution – be it MLC or SLC – for design into the broader enterprise storage platform,” said Brian Shirley, Vice President of Micron’s memory group.

The company’s new 32Gb MLC Enterprise NAND devices achieve an impressive 30,000 write cycles. Micron is also introducing a 16Gb SLC Enterprise NAND device that achieves 300,000 write cycles. The new chips also support the ONFI 2.1 synchronous interface, making them easier to integrate into new products.

Both of these new chips are built on the 34nm process which IMFT introduced last year, and can be configured into multi-die, single packages supporting densities of up to 32GB for MLC NAND and 16GB for SLC NAND.

Micron is now sampling its Enterprise NAND products with customers and controller manufacturers, and is expected to enter volume production at the beginning of 2010.


By JuPO5b4REqAYbSPUlMcP on 10/19/2009 12:42:01 PM , Rating: 5
The company I am working for is converting all user workstations to SSD (60G or more). Completion should be done by Q2 2010. We are seeing dramatic differences in performance, especially with boot and application start times, typically items that directly impact user "wait" time.

By semo on 10/19/2009 12:51:26 PM , Rating: 2
wish my company was doing that. instead, they are buying top of the range c2 processors, cheap peg (when the integrated is just fine) and 4gb ram for admin staff. everyone thinks the pc is an upgrade and no one cares to realize that it is no faster than a prescott for its intended use

By Randomblame on 10/19/2009 12:52:03 PM , Rating: 5
I have to keep a fleet of pentium 3's running. They have 10gb 5 platter hard drives - I would love to work for a company with a budget.

By Mr Perfect on 10/19/2009 1:28:43 PM , Rating: 2
You should see about picking up other companies' leftovers. They'd have to pay a fee to get them properly recycled (assuming they pay attention to such things), so if you offer to remove them for free you might score some P4s for nothing.

Barring that, a rubber mallet is only a couple bucks at Lowes.

By nvalhalla on 10/19/2009 2:51:49 PM , Rating: 2
Yep. I'm looking at a stack of Dell GX260s and 240s we are trying to figure out how to get rid of.

By Sparke on 10/19/2009 4:27:53 PM , Rating: 2
The school I work for just got rid of around 200 GX240's, 260's and Dimension 4100's as well as 150 CRT monitors. We donated them to a non-profit group that fixes them up and uses them as thin clients for charities and underprivileged schools. Also, Staples offers free recycling of Dell-branded computers and monitors. Try giving them a call to schedule a large drop-off.

By MatthiasF on 10/19/09, Rating: -1
By PrezWeezy on 10/19/2009 5:26:53 PM , Rating: 3
Actually, we just decided to upgrade all of our users to SSD's. We noticed that boot time alone, from putting in a password to having Outlook open, went from 10-15 minutes to 10-15 seconds. When you are a CPA billing $150 an hour, saving 20 minutes a day makes a big difference. Not to mention any time we have to work on their laptops and do updates or install new software, we spend half the time; so instead of 30 minutes per PC to install CCH software it takes closer to 15. That adds up too.
We found the payback on the SSD's ended up being less than 6 months.

By MatthiasF on 10/19/2009 9:26:27 PM , Rating: 1
Why is this guy being rated down? He makes some good points.

Anyway, who really gets right to work in the morning? Most people turn their computer on and get some coffee. Some firms have their computers go to standby on log-off (for speed turning back on and backups/over-night updates).

I also doubt it took 10-15 minutes for the computer to boot up. The last time I had a computer take that long was Pentium 3 days, which I hope you're not using with SSDs. In reality, it's probably closer to 6-8 minutes and it's now probably taking 1-3 minutes with the SSD. It's an improvement, but not as wide a margin as you make out.

You could have got the same improvement setting the workstations to go to standby instead of full power off. Updates could be applied overnight, eliminating that part of the argument too.

The only thing left is an increase in speed installing programs and that's really only limited to the speed of the optical drive or network.

So, what's the real reason you bought them? Cause they're "cool" right?

By SAnderson on 10/20/2009 3:49:41 PM , Rating: 4
Network-administered PCs take much longer to boot than normal home-use PCs. The one I am on right now takes 10+ minutes to boot due to all of the network crap. Take it off the network and it's probably much quicker. And yes, it is the HDD that's the limiting factor in boot time.

By PrezWeezy on 10/21/2009 12:59:20 PM , Rating: 2
Actually it DID take about 10 to 15 minutes. And I timed it taking 13 seconds now. You obviously have never worked on a CPA's network. The amount of software it has to load is incredible. Add in the 5+ GB of email, the Document add-on, the Engagement add-on, e-Tools and a host of scanning "junk" it takes a while.

I defined being booted as being in windows, outlook open, and you can click on the start menu with it popping up instead of grinding. To hit the power button and get to the logon screen took noticeably longer, but maybe in the 45 second neighborhood. Although that time tends to not be important because the last thing we have people do at night when they leave is restart so it is at Ctrl+Alt+Del when they come in the next morning.

The fact is that the only real limiting factor in most PC's today is getting the information from the hard drive to the memory. You can argue that perhaps I could have made them slightly faster by optimizing the fragmentation using JKdefrag, or that perhaps I could have simply used bigger drives with higher platter density. But all of that messing around and time spent would have resulted in a few percent difference. Not worth it. The SSD's are the biggest difference you can make in a PC today, simply because they remove the single slowest bottleneck.

As a side note, yes, it really does make a difference in time when installing software. We use large SAS arrays with gigabit Ethernet to all of our PC's, so installs were then limited by how fast data could be copied to the drive.

You can disagree with me on some specific point or two but don't go down the road of being supreme knowledge in the universe. You aren't involved in our network, you don't know the layout, you don't know the software or the requirements. If you say that SSD's don't make a difference in your setup that's fine. I tend to think they would, but you can make that decision because I don't know your topology. For us, they made a huge difference.

By PublixE on 10/19/2009 5:32:31 PM , Rating: 1
From a previous poster I thought this quote would suffice...

Mr Thickety Thickhead from Thicksville Thicksylvania


Or did you guys just see a fancy new tech and jumped on it without thinking?

You probably have no idea what they do anyways. Instead of jumping to conclusions (without thinking...) and insulting him/her, you could have at least made a post worth reading.

Believe it or not - there are some people who require EVERY bit of speed - whether it be ten SSD's in RAID 0 or a Core i7 975 or both. Whatever it may be - some people require every bit of performance they can get (think Pixar with their movies or, in another field, advanced scientific calculations).

Yes - we really should keep our Quantum BigFoot Hard drives - screw progress, or any other new fangled device, right?

/ Rant

"Micron is now sampling its Enterprise NAND products with customers and controller manufacturers, and is expected to enter volume production at the beginning of 2010."

Hopefully these will trickle to consumer SSD's just as quickly!

By MatthiasF on 10/19/09, Rating: 0
By FaaR on 10/20/2009 6:28:07 AM , Rating: 4
Whatever points you're trying to make get drowned in the noise of your needless and unprovoked hostility and rudeness. If you'd behave like a normal person, more people would listen to what you have to say and judge your words more on their own merits, rather than the way you say them.

Also, most files may be larger than 128kb, but any person who actually knows anything about disk performance knows that access time is indeed the vastly more important figure in almost every situation. Why else do you think enterprise drive arrays are comprised of relatively low-capacity, high spindle speed drives? It's to cut down on read/write latency, of course.

Most disk accesses are on the order of 32kb-ish per I/O request, not many megabytes, and a HDD reads or writes that amount in (for argument's sake) a microsecond. Problem is, it takes a thousand times longer or more to actually seek to that sector and wait for the heads to settle... That's the HDD's achilles heel.

If all you do is read or write the occasional large or small file, then neither access time nor transfer speed is really all that important, because even a large file will be transferred quickly with any reasonably modern hard drive (within one or a few seconds at most, typically), and this very slight delay is tiny compared to the full workday.

However, if the system experiences heavy disk activity, then the picture changes noticeably, with access time dominating hugely unless all you do is simple linear reading/writing (such as video editing, for example). I can just use myself as an example: my main rig starts up roughly 15 apps on bootup (and a multitude of background services and whatnot). With a standard HDD, it took about half a minute if not more of waiting before disk activity settled down to a level where the PC became responsive. When I switched to an Intel SLC SSD, the PC is fully responsive about 2-3 seconds after typing in the password. Basically, I can use the system normally by the time I've moved my hand from the keyboard to the mouse upon logging in!

Hybrid harddrives get ignored mostly because they're neither fish nor fowl, but rather a half-assed half-measure that does not fully share the advantages of either tech they try to combine. It's not nearly as fast as a dedicated SSD, and at the same time not as cheap as a standard harddrive. Add the complexities of managing the flash cache as best as possible, wear leveling complications, more parts that can fail and so on and then the difficulties of trying to explain the benefits of your new hybrid harddrive to a mostly ignorant herd of customers (consumers as well as enterprise), and I think you too will see how HDD makers find the extra expense in hybrid R&D not really worth the bother...
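The access-time argument above can be checked with simple arithmetic. All figures here are assumed round numbers, not measurements: roughly 10 ms average seek for an HDD, roughly 0.1 ms for an SSD, and a 32 KB random I/O request:

```python
# Illustrative sketch: for small random I/O, seek latency dwarfs transfer time,
# so the drive with the lower access time wins regardless of raw throughput.

def ms_per_request(io_kb, latency_ms, throughput_mb_s):
    transfer_ms = io_kb / 1024 / throughput_mb_s * 1000  # time moving the bytes
    return latency_ms + transfer_ms                       # plus time finding them

hdd = ms_per_request(32, latency_ms=10.0, throughput_mb_s=80)
ssd = ms_per_request(32, latency_ms=0.1, throughput_mb_s=200)
print(f"HDD: {hdd:.2f} ms/request, SSD: {ssd:.2f} ms/request")
```

Under these assumptions the HDD spends over 95% of each request just seeking, which is why heavy random workloads feel an order of magnitude faster on an SSD even when sequential throughput is similar.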

Cycle counting
By martinw on 10/19/2009 12:57:36 PM , Rating: 2
Anyone know what the metric really means? Does it mean that after 30k cycles 50% of all such devices will have failed? And what is the distribution of failures? Is the profile Gaussian? If so, what is the standard deviation?

RE: Cycle counting
By ilkhan on 10/19/2009 1:49:04 PM , Rating: 2
AFAIK it's the number of writes per cell before failure. It's a range of course, but probably an average with a fairly low SD.

Regardless, by the time my G2 wears out I'll have moved to a new drive.

RE: Cycle counting
By menting on 10/19/2009 6:46:57 PM , Rating: 2
The standard deviation is probably a trade secret, but the 30k cycles is probably the average, and it's probably not Gaussian.
30k cycles is quite a bit in an SSD with a "good" wear-leveling algorithm. Basically it means that if you write your whole drive over once every day, it'll still take about 30k days on average before failures come in. And the SSDs probably have extra NAND chips to replace those cells marked as bad.

RE: Cycle counting
By PrezWeezy on 10/19/2009 8:31:44 PM , Rating: 2
The way Intel does wear leveling is by actually putting 80 GB on the drive, then telling the OS it has ~74 GB. It's the old 1000 vs. 1024 standard that they use to calculate the amount of extra chips. I.e., if you had a rotational "80 GB" drive you would end up with ~74 GB as calculated by the OS. So the Intel SSD tells the OS it has 74 GB usable, then uses its wear-leveling algorithm to decide where to write the data in the most efficient way.
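The ~74 GB figure mentioned above is exactly the decimal-versus-binary unit gap, which can be checked in one line (my sketch, not Intel's spec):

```python
# Drive makers count a gigabyte as 10^9 bytes, while the OS reports
# binary gibibytes of 2^30 bytes, so an "80 GB" drive shows as ~74.5.
advertised_bytes = 80 * 10**9
reported = advertised_bytes / 2**30
print(f"{reported:.1f} GB")  # → 74.5 GB
```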

RE: Cycle counting
By jordanclock on 10/20/2009 3:02:16 AM , Rating: 2
Not quite. What SSD manufacturers do for wear leveling is basically intentionally fragment the drive. Since random access time is the same as sequential, fragmentation doesn't matter much. So with that in mind, they spread out the writes to all parts of the drive. This means that no one area is likely to fail before the rest.

As for the 80GB drive formatting to 74GBs, yes. It's the classic issue of different ideas of kilo/mega/giga. Also, SSDs have sort of "extra" cells. They aren't normally available, but as cells fail, they are activated for use by the firmware. This means storage capacity remains about the same as cells fail for a bit, instead of dropping.
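The wear-leveling idea described above can be sketched in a few lines. This is a toy model, not any vendor's actual algorithm: always direct the next write to the least-worn erase block, so erase counts stay roughly even across the drive.

```python
# Toy wear-leveling sketch: a min-heap keyed on erase count always hands out
# the least-worn physical block for the next write.
import heapq
from collections import Counter

class WearLeveler:
    def __init__(self, num_blocks):
        # min-heap of (erase_count, block_id), all blocks starting fresh
        self.heap = [(0, b) for b in range(num_blocks)]
        heapq.heapify(self.heap)

    def write(self):
        count, block = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (count + 1, block))
        return block  # physical block chosen for this write

wl = WearLeveler(4)
writes = [wl.write() for _ in range(100)]
print(Counter(writes))  # every block written exactly 25 times
```

Real controllers also remap logical addresses and juggle erase-block granularity, but the effect is the same: no single cell absorbs all the writes, so the drive's lifetime is governed by the average cycle count rather than the worst case.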

RE: Cycle counting
By SAnderson on 10/20/2009 3:59:42 PM , Rating: 2
Correct. How else do you get 80/160GB from 16/32Gb chips? It's one of the ways Intel designs the controller to help deal with the write algorithm and the pesky way flash deletes data. Extra space is available for the controller to use, but only 80GB is exposed. And yes, once some cells fail, you will still have 80GB of good cells to use, thus GREATLY increasing the average cycling per cell. Say you have 90GB of cells' worth of cycling but only 80GB of data spread across it.

RE: Cycle counting
By PrezWeezy on 10/21/2009 1:23:37 PM , Rating: 2
I think if you use 5x16GB chips you end up with 80 GB...And if I'm not mistaken 10x16 still equals 160.

RE: Cycle counting
By PrezWeezy on 10/21/2009 1:25:19 PM , Rating: 2
Go read The SSD Relapse at AnandTech. He explains very well how the wear leveling works. It is, in fact, an 80 GB drive. It does report 74 GB to the OS and uses the other 6 GB for its wear leveling and the "extra" cells you are talking about.

RE: Cycle counting
By motigez1 on 10/20/2009 3:27:22 PM , Rating: 2
It is very simple.
10K cycles means you can fill the media 10K times. I.e., on a 64GB SSD you may write the full capacity 10,000 times, or 64GB * 10,000 = 640 terabytes. Let's assume efficiency is not perfect, so you can write only a third, ~200TB.
How much do you need?
An average client user with a Windows OS will write 4-5GB/day. Assume 365 working days, so in an entire year you will write 5 * 365 = 1.8TB.
Which means you can use the drive for 200TB / 1.8TB ~ 100 years!!! You would probably replace your notebook in less than 5 years.
Now, you should tell me if 500 cycles are not enough!
Micron does not enable 30K for us, the end users, but for data center storage. We should be safe with only 500 cycles! (Assuming a decent SSD design.)
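That back-of-envelope math restates cleanly as a script; every input below is the commenter's assumption, not a measured figure:

```python
# motigez1's endurance arithmetic, restated with his assumed inputs.
capacity_tb = 0.064                   # 64 GB drive
cycles = 10_000                       # rated write cycles
raw_tb = capacity_tb * cycles         # 640 TB of raw write budget
usable_tb = raw_tb / 3               # assume only a third is usable, ~213 TB
yearly_tb = 5 * 365 / 1000            # 5 GB/day for a year, ~1.8 TB
print(round(usable_tb / yearly_tb))   # → 117 years (his rounder figures give ~100)
```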

RE: Cycle counting
By SAnderson on 10/20/2009 4:02:43 PM , Rating: 2
These will not be priced for normal users, just the normal 1k/5k cycle chips. It's basically the same product with a few small process differences to earn a larger margin. The server market makes the big bucks for chip manufacturers.

Call me a pedantic bollocks....
By Amiga500 on 10/20/2009 3:37:30 AM , Rating: 2
But surely you don't 'reach' a breakthrough...

They have 'made' a breakthrough.



Copyright 2016 DailyTech LLC.