
SSDs may be the key to snappy performance on laptops and desktops, but they also create security risks due to their inability to be fully wiped with present technology. Better encrypt that data!  (Source: Gear Diary)
Revelation could prove a nightmare to careless businesses and individuals

Businesses and government offices are constantly replacing computers and buying new hardware.  Typically when this is done, data on the hard drives of the defunct machines is wiped, lest it fall into the wrong hands.

However, an intriguing study [press release] by researchers at the University of California, San Diego (UCSD) reveals that businesses that think they've wiped NAND thumb drives or NAND solid-state drives (SSDs) may be in for a surprise.

Every time you write to a drive -- be it a magnetic disk or NAND flash -- you make semi-permanent changes that persist until that block of memory is overwritten.  When you delete files on your computer, you are typically only deleting the filesystem's index entries for them.  The actual data persists on the drive until it is overwritten.

Over a dozen methods have been devised to fully overwrite data on a magnetic hard drive and permanently erase any trace of the drive's original contents.  The researchers tried those methods on flash drives and discovered that even the best of them left about 10 MB of every 100 MB file intact.
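A multi-pass overwrite tool of the kind tested works roughly like this minimal sketch (the function name and parameters are illustrative, not taken from the study).  On a magnetic disk these writes land on the original sectors; on an SSD, the flash translation layer may silently redirect each write to a fresh physical block, leaving the old data untouched -- exactly the failure mode the UCSD team observed:

```python
import os
import secrets

def multipass_overwrite(path: str, passes: int = 3, block_size: int = 4096) -> None:
    """Overwrite a file in place with random data, several times.

    Effective on a magnetic disk, where writes hit the original sectors;
    unreliable on an SSD, where the controller may remap each write to a
    different physical block and leave the old pages readable.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                chunk = min(block_size, remaining)
                f.write(secrets.token_bytes(chunk))
                remaining -= chunk
            f.flush()
            os.fsync(f.fileno())  # push this pass out to the device
```

Note that the logical file size and location never change; only the physical placement of the data is out of the tool's control on flash media.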

To study how successful the data destruction was, the researchers took apart an SSD.  Rather than check the Flash Translation Layer (FTL), which would merely show data as indexed by the drive, they actually sliced out the physical chips and queried them via their pins.  This allowed them to test the data status at the lowest level.

The findings might shock some, but they came as little surprise to the researchers, who expected magnetic-drive techniques to work poorly on SSDs.

Some of the techniques attempted, such as the Gutmann 35-pass method and the Schneier seven-pass method, erased as much as 90 percent of the data.  But other techniques, like overwriting the chip with pseudorandom data or following the British HMG IS5 baseline standard, left virtually the entire file intact.

Researchers Laura Grupp and Michael Wei comment, "Our results show that naïvely applying techniques designed for sanitizing hard drives on SSDs, such as overwriting and using built-in secure erase commands is unreliable and sometimes results in all the data remaining intact. Furthermore, our results also show that sanitizing single files on an SSD is much more difficult than on a traditional hard drive."

Of course, if you encrypt all the data on the SSD from the start, you make it far harder to recover.  The researchers note this and suggest that to completely prevent disclosure, users then destroy their keys and use new techniques to directly overwrite all of the drive's pages.
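This "crypto-erase" idea can be sketched with a toy stream cipher -- here a SHA-256 counter-mode keystream, which is purely illustrative and not production cryptography (real deployments should use AES through a vetted library).  Once the key is destroyed, the ciphertext left on the flash pages is useless even if those pages can never be physically overwritten:

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Illustrative SHA-256 counter-mode keystream (NOT production crypto)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt(key: bytes, data: bytes) -> bytes:
    """XOR with the keystream; applying it twice with the same key decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Crypto-erase: encrypt everything before it ever touches the drive, then
# destroy the key when retiring the hardware.  Whatever survives on the
# un-erasable flash pages is unreadable ciphertext.
key = secrets.token_bytes(32)
ciphertext = encrypt(key, b"medical records")
key = None  # "destroying" the key renders the remnants useless
```

The security of the scheme then rests entirely on the key never having been written to the drive itself.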

Chester Wisniewski, a senior security advisor for Sophos Canada, blogged about the study, praising its accuracy.  He writes, "To properly secure data and take advantage of the performance benefits that SSDs offer, you should always encrypt the entire disk and do so as soon as the operating system is installed... [S]ecurely erasing SSDs after they have been used unencrypted is very difficult, and may be impossible in some cases."

These results are troubling not only for business and government users, but for home users as well.  You have plenty of things to worry about falling into the wrong hands -- personal emails, credit card records, medical records, and other private information.  At present, you can't be 100 percent sure you can securely dispose of SSDs holding this kind of information, but by using encryption you can reduce the likelihood of someone getting your information to almost zero.

According to a recent iSuppli report, only 2 percent of laptops currently carry SSDs.  However, iSuppli predicts that by 2014, that total will rise to 8 percent.


RE: Blindingly Obvious
By Azsen on 2/22/2011 8:21:31 PM , Rating: 2
I think the problem is that the erasing programs overwrite a block of data and then immediately mark the overwritten data as writable again. While the program is still running, the wear-leveling algorithms in the SSD may cause the program to write over that same block again instead of going through all the consecutive blocks.

You could probably fix it by modifying the program to fill up the drive with random data completely, then marking it all to be free space afterwards. Then do multiple passes with the same technique.

RE: Blindingly Obvious
By mindless1 on 2/23/2011 12:06:14 AM , Rating: 2
Thank you. Finally someone who "gets" it. Their methodology was flawed because they were trying to overwrite specific FILES instead of FILLING the SSD with random data - in the latter case it doesn't matter where the controller decides to put the data, because once the drive runs out of free space, every block has been overwritten.

Ironically enough this topic is silly. You don't try to hunt and find individual files to multipass overwrite, you overwrite the entire HDD, so the distinction between HDD and SSD isn't really relevant except the technique that does the writing (query free space, create random data file that is that size).

RE: Blindingly Obvious
By mathew7 on 2/23/2011 9:42:30 AM , Rating: 2
I really don't think they were overwriting only the files (although I have not read the links). I think they used current procedures to securely erase, but I wonder how many passes they made. If they only made one (one write per LBA), then the 10% of data remaining roughly matches the drive's spare blocks.
Also, the ATA secure-erase command, if implemented by the drive, could be just erasing the LBA-to-flash-block translation. So the data itself could still be there if you bypass the controller (which is what they actually did).

RE: Blindingly Obvious
By mathew7 on 2/23/2011 9:54:35 AM , Rating: 2
I have just read the press release, and indeed they also talk about "sanitizing single files". So the recovery of 900 MB of a 1000 MB file is not surprising.

I was thinking that maybe large corporations/governments have some "push-button-erase" devices, which would work on the whole drive. But it seems the graph relates to single files.

PS: Even on multiple overwriting of whole drive, the wear algorithm may still skip over heavily-written blocks.

"Young lady, in this house we obey the laws of thermodynamics!" -- Homer Simpson

Copyright 2016 DailyTech LLC.