



Samsung wants to sell its SSDs through partners instead of in the channel

Corsair is a company best known for its DRAM products, targeted mostly at the enthusiast market. In January, it entered the SSD market with a less-than-stellar product, but it was an important first step.

Things are improving this week for Corsair Storage Solutions, though. The company is shipping a high-performance 256GB MLC SSD using a Samsung controller and 128MB of cache. The hardware is very similar to Samsung's own PB22-J SSD, which Samsung currently ships only to OEMs. Samsung has no plans to sell the PB22-J directly to the channel, according to an email from Samsung SSD Product Manager Brian Beard.

The Corsair P256 delivers read speeds of up to 220 MB/sec and write speeds of up to 200 MB/sec. Random read and write speeds were not immediately available.

Anticipation has been high for OCZ Technology's own high-end SSD using Samsung's controller. The company had a preliminary version working in their labs in February, and we were told that the company was targeting a late April launch for its Summit series. Our latest indications are that the Summit series will launch by the end of May, but the company may decide to show it off at Computex in June instead.

However, OCZ isn't too worried, as its best-selling Vertex series is doing very well. So well, in fact, that Intel has had to lower prices repeatedly in order to avoid losing too many sales to the upstart. OCZ recently launched a high capacity SLC Vertex EX series to compete directly against Intel in the enterprise server and workstation markets.

The Corsair Storage Solutions P256 SSD is available immediately from Corsair’s authorized distributors and resellers worldwide with a two-year limited warranty. The street price at most e-tailers at launch is around the $700 mark, under part number CMFSSD-256GBG2D.






General question
By Spivonious on 5/14/2009 1:09:45 PM , Rating: 2
Why do you need cache on an SSD? I thought access times were <0.1ms.




RE: General question
By surt on 5/14/2009 1:15:49 PM , Rating: 4
Best case access times. If you invoke a lot of random writes you get into much longer times. By putting a cache in front of those writes they can group them together for better performance.
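The grouping surt describes can be sketched in a few lines. This is a purely illustrative model (the function name and page size are assumptions, not any real firmware's design): pending random writes are bucketed by the flash page they land in, so each page is programmed once instead of once per write.

```python
# Hypothetical sketch of write coalescing in a DRAM cache.
# Names and sizes are illustrative, not from any real SSD controller.

def coalesce(writes, page_size=4096):
    """Group pending (offset, data) writes by the flash page they hit,
    so each page is programmed once instead of once per write."""
    pages = {}
    for offset, data in writes:
        page = offset // page_size
        pages.setdefault(page, []).append((offset % page_size, data))
    return pages

# Four scattered 512-byte writes that all land in page 0 become one flush:
writes = [(0, b"a" * 512), (512, b"b" * 512),
          (1024, b"c" * 512), (1536, b"d" * 512)]
grouped = coalesce(writes)
print(len(grouped))  # 1 page program instead of 4
```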


RE: General question
By dragonbif on 5/14/2009 1:16:07 PM , Rating: 2
That is the access time, but writes are way slower and can be painful. I would think the cache would help speed it up, but I don't have one to test, so I am uninformed and this is just a guess.


RE: General question
By AnnihilatorX on 5/14/2009 1:55:28 PM , Rating: 4
SSDs get slower the more you use them because of a flaw in how blocks and pages are managed in an SSD. It's better explained in the following article:

http://www.anandtech.com/storage/showdoc.aspx?i=35...

This problem plagues even the Intel X25-M SSD.
A larger cache would help write speed in this case.

However, I worry that having such a large cache will pose a problem with data loss during power outages.


RE: General question
By walk2k on 5/14/09, Rating: -1
RE: General question
By strikeback03 on 5/14/2009 3:52:27 PM , Rating: 2
Plenty of these will likely go in desktops, same as past SSDs.


RE: General question
By mindless1 on 5/14/2009 11:36:01 PM , Rating: 2
That's an often-cited and usually irrelevant concern. If a system is vulnerable to power outages (and the data loss would matter), it should be on a suitable UPS.

Otherwise, a power outage also loses any data in main memory, which tends to cache more than the memory on the drive does. It would make sense, though, to at least splurge on a UPS that may cost less than 10% of what the SSDs in the system cost.


RE: General question
By AnnihilatorX on 5/15/2009 10:41:25 AM , Rating: 2
Most consumers won't be on a UPS.
Main memory is usually not a concern because it's the user's fault if they don't save their work to disk often.

With cache memory, however, the user has clicked save, but the work happens to still be in the drive's cache rather than on the drive itself, and then the work is lost. In that case you can't blame the user.


RE: General question
By mindless1 on 5/15/2009 3:13:47 PM , Rating: 4
When something is saved it doesn't just sit in the cache; it's written to the drive immediately, just as it would be with a mechanical drive, except even faster. This makes the argument irrelevant; in any case the SSD is less prone to power-outage data loss than a traditional mechanical HDD.

Most consumers won't pay this kind of money for an SSD either, nor do they have data so critical, or power outages frequent enough, that whatever was lost from an HDD cache would be significant.

Main memory is in fact much more of a concern because it is constantly holding data; nobody is writing it out in realtime every few milliseconds.

I would argue that anyone who is sane and really cares about data loss will have a UPS, and that you are totally wrong about the HDD cache, because in most cases it can be written to disk between the time the power starts to go out and the time the capacitors are drained too much. Keep in mind the total cache size is 128MB shared between reads and writes, and the drive writes at 200MB+/s. In other words, the odds of data loss from the cache are extremely low while the loss from main memory is extremely high.

To put it another way: people have power outages, yet you don't hear them going on and on about this problem even with mechanical drives, which had a similar ratio of write speed to cache size but higher power requirements to write, needed to keep the drive energized given platter speed and write-arm movement.


RE: General question
By surt on 5/15/2009 3:09:27 PM , Rating: 2
Why wouldn't you have the same worry with conventional hard drives with their 32MB caches?


RE: General question
By mindless1 on 5/15/2009 3:15:43 PM , Rating: 3
You have not only the same worry; the conventional hard drive is more, not less, prone to data loss, since it uses more power (needs more during a rapid loss of electricity), writes slower, and its smaller cache causes more of the data to stay in volatile main system memory. The prior poster simply doesn't understand much about computer I/O electronics or electricity.


RE: General question
By Natfly on 5/14/2009 1:55:53 PM , Rating: 2
To avoid write stuttering, to enable native command queuing, and to allow the controller to process multiple commands: reading from and writing to different NAND blocks at the same time.


RE: General question
By mathew7 on 5/15/2009 1:09:26 AM , Rating: 2
The problem is that there is no standard way of knowing the block/page size. Example:
NTFS default cluster size = 4K
SSD page size (typically) = 4K
but the OS accesses 512-byte chunks (interface compatibility with HDDs), so there's a high chance that a 4K cluster will actually straddle two half-pages. So when you write 4K, the controller has to read two 4K pages, keep half of the data from each, and then write them back.
I don't know how it's implemented in Vista/Win7, but XP is surely too old to know about this.
This could be prevented by careful partitioning, but even here there are things to be aware of, like the offset of the first cluster from the start of the partition, and interruptions in the cluster sequence (I'm talking about something like ext2/3 inodes vs. metadata blocks).
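The alignment point above can be checked with simple arithmetic. This is an illustrative sketch assuming a 4K flash page size; the function name and the sample partition offsets (the legacy 63-sector start vs. a 1 MiB-aligned start) are chosen for demonstration.

```python
# Illustrative arithmetic: if the partition does not start on a page
# boundary, every 4K cluster straddles two 4K flash pages.

SECTOR = 512      # bytes, the unit the OS addresses
PAGE = 4096       # assumed flash page size
CLUSTER = 4096    # NTFS default cluster size

def pages_touched(partition_start_sector, cluster_index):
    """How many flash pages a single cluster write hits."""
    start = partition_start_sector * SECTOR + cluster_index * CLUSTER
    end = start + CLUSTER - 1
    return end // PAGE - start // PAGE + 1

print(pages_touched(2048, 0))  # 1 MiB-aligned start: 1 page per cluster
print(pages_touched(63, 0))    # legacy XP-era start: 2 pages (read-modify-write)
```

With the old 63-sector offset every cluster write turns into the two-page read-modify-write described above; aligning the partition start fixes it.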


RE: General question
By highlandsun on 5/14/2009 2:13:03 PM , Rating: 2
Because DRAM is always faster. Putting cache on a drive makes it possible to support bursts at full interface speed - 300MB/sec - thus freeing up the interface sooner or allowing more requests to be queued with less overhead.


RE: General question
By winterspan on 5/14/2009 2:38:42 PM , Rating: 2
You are correct that flash has a very fast READ access time, but remember that when you are WRITING to flash, if there is data in the sector you are writing to, the whole block has to be erased first, and this erase procedure takes orders of magnitude longer than simply reading the data. This is particularly true for MLC flash.

The first generation of "affordable" SSDs had rotten performance with JMicron controllers and no cache. Thankfully, newer drives from Intel (X25), OCZ, Samsung, et al are using good controllers and have DRAM cache.

The bottom line is this:

64MB or more DRAM cache is VERY important for high performance SSDs. Do not buy one without it -- particularly an MLC model! The random write performance will be terrible.
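The erase-before-write penalty described above can be put into a rough cost model. The latency constants below are ballpark illustrations of NAND behavior, not datasheet values, and the function is a deliberately simplified sketch (a real controller would remap rather than rewrite in place):

```python
# Rough cost model: overwriting flash requires erasing the whole block
# first, which dominates the time. Latency numbers are illustrative only.

READ_US, PROGRAM_US, ERASE_US = 25, 250, 2000  # per-page read/program, per-block erase
PAGES_PER_BLOCK = 64

def overwrite_cost_us(dirty):
    """Cost of writing one page into a block that already holds data."""
    if not dirty:
        return PROGRAM_US  # free page: just program it
    # Dirty block: read the surviving pages, erase the block,
    # then program them all back plus the new page.
    return (PAGES_PER_BLOCK - 1) * READ_US + ERASE_US + PAGES_PER_BLOCK * PROGRAM_US

print(overwrite_cost_us(False))  # 250 us
print(overwrite_cost_us(True))   # 19575 us, roughly 80x slower
```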


RE: General question
By Motley on 5/14/2009 8:16:09 PM , Rating: 2
Because if a write request is in the DRAM cache and, before it is flushed to the flash, another write request comes in to overwrite it, you have saved the drive from having to make the first write, which increases both the performance and the longevity of the drive.

Such operations aren't as uncommon as you might think, especially for the housekeeping portions of the disk: for FAT that would be the VTOC and directories; for NTFS, the MFT, etc. Also think about the pagefile, log files, and so on.
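The overwrite-absorption effect is easy to demonstrate. A minimal sketch, with an invented class name and a plain dict standing in for the DRAM cache; only the last version of each block ever reaches the flash:

```python
# Minimal sketch of a write-back cache absorbing repeated writes to the
# same block. Illustrative only, not a real controller design.

class WriteBackCache:
    def __init__(self):
        self.pending = {}  # block number -> latest data, held in DRAM
        self.flashed = 0   # count of actual flash programs performed

    def write(self, block, data):
        self.pending[block] = data  # overwrite in DRAM, no flash access yet

    def flush(self):
        self.flashed += len(self.pending)
        self.pending.clear()

cache = WriteBackCache()
for i in range(100):  # e.g. 100 rapid updates to the same MFT block
    cache.write(5, f"version {i}".encode())
cache.flush()
print(cache.flashed)  # 1 flash program instead of 100
```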


RE: General question
By emboss on 5/14/2009 9:43:18 PM , Rating: 1
quote:
Why do you need cache on an SSD?


You don't. And the memory on current SSDs isn't used as a cache. It's used to store page mapping tables and other information that the controller uses to decide where to put a particular chunk of written data (or where to get it from, if it's a read). The marketing departments of the manufacturers just call it cache because it's a word that sounds familiar.

As a side note, the memory on hard drives isn't used as a cache either. It's mostly used as a write buffer and a read-ahead buffer.


RE: General question
By mathew7 on 5/15/2009 1:21:44 AM , Rating: 2
quote:
It's used to store page mapping tables and other information that the controller uses to decide where to put a particular chunk of written data


I really don't agree with you. You see, the info you are talking about is the most important in the flash world. But writing 128MB would take a loooong time, and this information CANNOT be lost. So while they COULD use it to temporarily store flash mapping tables, I really don't think that's more than 1/4 of it. But they can use it as a cache and it still makes sense: why write half of a page when in 2 seconds a new write request could come for the 2nd half?

quote:
As a side note, the memory on hard drives isn't used as a cache either. It's mostly used as a write buffer and a read-ahead buffer.


A cache IS a read/write buffer. The only difference I can think of is size: buffers are small (1-10 blocks) while caches are large (1000+ blocks). But on the hardware side, I really can't see the difference.


RE: General question
By emboss on 5/15/2009 5:20:56 AM , Rating: 2
quote:
I really don't agree with you. You see, the info you are talking about is the most important in flash world. But to write 128MB would take a loooong time. And this information CANNOT be lost. So while they COULD use it to temporary store flash mapping tables, I really don't think it's more that 1/4 of it.


Actually, you'll find that keeping the entire mapping table available takes up more than the amount of memory on the drive for the larger drives. Consider the Intel X25-M: 80 hdd-manufacturer-GB at 4 KB per logical block gives you 20,000,000 logical blocks that need to be kept track of. With 64 MB of RAM, that's just 26.8 bits of memory per logical block. There are ten 8 GB flash chips, so a total of 81920 MB of flash space, or 20,971,520 physical pages to be able to number uniquely. This takes 25 bits of space per logical block, leaving not much spare for anything else. In total memory usage terms, using 25 bits for 20 million blocks comes out to 59.6 MB of space needed. And that's in a form that's not really useful for the controller, nor allowing any space for reverse mappings.
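The arithmetic in that paragraph checks out; here it is as a back-of-envelope script using the same figures quoted in the post (80 manufacturer-GB, roughly 20,000,000 logical blocks, 64 MB of DRAM, ten 8 GB chips), not measured controller data:

```python
# Back-of-envelope check of the mapping-table memory math.
import math

logical_blocks = 20_000_000                       # ~80e9 bytes / 4096, as quoted
dram_bits = 64 * 2**20 * 8                        # 64 MB of DRAM, in bits
print(round(dram_bits / logical_blocks, 1))       # 26.8 bits per logical block

physical_pages = 10 * 8 * 2**30 // 4096           # 20,971,520 physical pages
addr_bits = math.ceil(math.log2(physical_pages))  # bits to name one page uniquely
print(addr_bits)                                  # 25

table_mb = logical_blocks * addr_bits / 8 / 2**20
print(round(table_mb, 1))                         # 59.6 MB for the bare table
```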

The tables as stored in DRAM are in a different format to those stored in the flash. For example, the DRAM versions are optimised for fast searching, and contain a reverse mapping table so that the controller can find out what logical block a particular physical block is associated with. The flash versions only contain logical->physical mappings and are structured to work well with the standard limitations of NAND flash.

During the drive controller boot phase, the controller reads the mapping information off the drive and constructs the fast-access version in DRAM. For the larger drives, only a subset of the mapping table can be in memory at once, so there's a bit of complication there that I won't deal with. Think of the in-memory version being a literal table: element zero contains the physical block of logical block zero, etc.

The on-disk format is a bit more complicated. Essentially it's made up of a partial simple table (like the DRAM format) and a journal.

When a logical block write is complete, it appends a "logical block x was written to physical block y" entry to the end of the journal, updating the DRAM-like table on the flash only every so often. The in-DRAM version is obviously updated instantly. When the controller starts up, it reads in the table, and applies any updates that are left in the journal.

This is all much easier to explain with a diagram :)
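In lieu of a diagram, the snapshot-plus-journal scheme described above can be sketched as a toy rebuild function. Everything here is illustrative (the function name, the dict-as-table representation), not any vendor's actual on-flash format:

```python
# Toy version of the scheme: a periodic mapping-table snapshot plus an
# append-only journal; at boot, the controller rebuilds the DRAM table
# by replaying journal entries on top of the snapshot.

def rebuild_mapping(snapshot, journal):
    """snapshot: dict of logical->physical, saved 'every so often';
    journal: list of (logical, physical) writes made since the snapshot."""
    table = dict(snapshot)
    for logical, physical in journal:  # replay in write order
        table[logical] = physical      # later entries win
    return table

snapshot = {0: 100, 1: 101, 2: 102}
journal = [(1, 250), (3, 251), (1, 252)]  # logical block 1 rewritten twice
table = rebuild_mapping(snapshot, journal)
print(table)  # {0: 100, 1: 252, 2: 102, 3: 251}
```

Appending one small journal entry per write is cheap, which is why the full table only needs to be rewritten "every so often", as the post says.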

quote:
But they can use it as cache and it still makes sense:why write half of a page when in 2 seconds a new write request could come for the 2nd half.


That's using it as a buffer, not a cache. And the answer to your question is that power may be lost in 1.5 seconds. If the drive isn't going flat out, there's no harm in doing the write ASAP; this is unlike a mechanical drive, which would potentially incur a significant seek time. On the other hand, if the drive is going flat out, you'd have to have a massive write buffer to make any difference. Also, any halfway recent OS will combine multiple writes to adjacent sectors made within a short period of time.

The typical buffering method for SSDs is to only buffer the current logical block being written to (sorta; again, it's hard to describe without diagrams and it varies between vendors). As soon as writes start going to a new logical block, the old one is flushed. This can be seen by running two alternating sequential write streams with variously sized chunks. As long as the chunk size is above the logical block size of the SSD, performance will be the same as a simple sequential write. Once you drop below the logical block size, performance drops off significantly. On a drive with real write buffering, there would be no drop. Both the Intel and Indilinx drives show the expected drop-off with the alternating streams.

quote:
Cache IS a read/write buffer.


As much as I hate posts that point to Wikipedia, there is a good page on the subject that explains the differences: http://en.wikipedia.org/wiki/Cache

quote:
The difference I can think of is size


The difference is in what they are used for. Caches hold information that has been recently accessed and that may be accessed again (or is frequently accessed). Buffers hold information that you haven't quite got around to doing something with yet.


RE: General question
By Calin on 5/15/2009 4:24:10 AM , Rating: 2
Intel uses the cache RAM only for internal purposes, but others use the so-called cache RAM for storing user data too.


warranty
By dastruch on 5/14/2009 3:48:54 PM , Rating: 3
Why only a 2-year limited warranty?




RE: warranty
By kmmatney on 5/14/2009 5:31:24 PM , Rating: 4
Especially when it's $700! It should have a 5-year warranty, and by that time prices will be a lot lower, so it will be cheaper to replace if needed. The 2-year warranty is very weak.


RE: warranty
By TSS on 5/14/2009 7:53:01 PM , Rating: 2
Unless the drive lasts only 2 years. Then it's genius!

Seriously though, I'm willing to bet that if you blow 700 bucks on 256GB of space with mediocre performance improvements over a HDD, you'll replace this drive before the warranty is over.

Because in 2 years the new $700 drives will be so much better....


By Steve73 on 5/14/2009 9:46:01 PM , Rating: 2
I want this HD! However, it's still not in the right price range. Hopefully this drive will drop to $400 by Christmas. If it does, I'm there with money in hand.


















Copyright 2014 DailyTech LLC.