


OCZ enters yet another market, jostling with Fusion-IO and Super Talent for enterprise and enthusiast dollars

The promise of fast access speeds has lured many enthusiasts over to SSDs already. Maximum capacity is doubling every year, and costs are dropping due to new process technologies being introduced.

One of the most important target markets for SSD manufacturers is enterprise customers. They are demanding the fastest access speeds possible, whatever the cost. SSDs are often used in a tiered storage scenario, replacing short-stroked 15k RPM mechanical hard disk drives. Even though SSDs are expensive in terms of cost per gigabyte, they offer the greatest performance return for servers due to their fast access times and read/write rates. Power and cooling requirements are also greatly reduced.

OCZ recently launched its Vertex EX series of SSDs in order to compete in this lucrative market, but SSDs are already starting to be limited by the SATA interface. Companies like Fusion-IO, which counts Steve Wozniak on its Board of Directors, have addressed the problem by using the PCI-Express interface, which is available in 1-, 4-, 8-, and 16-lane slots on most motherboards.

Super Talent recently announced its RAIDDrive SSD with up to 2TB of storage, but it won't be available until June. It uses an x8 PCI-E slot to achieve read speeds of up to 1.2 GB/s, far exceeding the 300 MB/s design limit of the SATA 2.0 specification.
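
As a rough sanity check on those figures, the lane arithmetic is straightforward. This is a back-of-the-envelope sketch: it assumes the 8b/10b line coding used by SATA and by PCIe 1.x/2.0, ignores protocol overhead, and the RAIDDrive's exact link configuration is an assumption here.

    # Usable payload bandwidth after 8b/10b encoding: 10 line bits per data byte.
    def usable_mb_per_s(line_rate_gbit_s, lanes=1):
        return line_rate_gbit_s * 1000 / 10 * lanes

    print("SATA 2.0, 3 Gbit/s, 1 lane :", usable_mb_per_s(3.0))           # ~300 MB/s
    print("PCIe 1.x x8, 2.5 GT/s/lane :", usable_mb_per_s(2.5, lanes=8))  # ~2000 MB/s
    print("PCIe 2.0 x4, 5 GT/s/lane   :", usable_mb_per_s(5.0, lanes=4))  # ~2000 MB/s

Either PCIe configuration leaves plenty of headroom above the RAIDDrive's claimed 1.2 GB/s, while SATA 2.0 tops out at roughly a quarter of that figure.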

OCZ will compete against the RAIDDrive with its own Z-Drive SSD using a PCI-E 2.0 x4 slot. It will feature a combined 256MB cache managed with an onboard RAID controller. Capacities of 250GB, 500GB, and 1TB will be offered. Maximum read and write speeds vary for each model in the series, although the maximum sustained write speed will be limited to 200 MB/s for all Z-Drives. Random read and write speeds were not made available.

Weighing only 500 grams, the Z-Drive will also save space for power users who would otherwise RAID several Vertex drives. It has an MTBF (Mean Time Between Failures) of 900,000 hours along with a 2-year warranty.

 “It is our goal to deliver tailored SSD solutions for the complete spectrum of high performance applications,” said Eugene Chang, Vice President of Product Management for the OCZ Technology Group.
 
“Designed for ultra high performance consumers, the Z-Drive takes the SATA bottleneck out of the equation by employing the ultra fast PCI-Express architecture with a RAID controller and four Vertex controllers configured in four-way RAID 0 within an all-in-one product, making this solution ideal for applications that put a premium on both storage performance and maximum capacity.”
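
For readers wondering what "four-way RAID 0 within an all-in-one product" means in practice, the sketch below shows how a striping controller maps logical addresses across four member drives. The 64KB stripe size is purely illustrative; OCZ has not published the Z-Drive's actual stripe size.

    STRIPE_KB = 64   # hypothetical stripe size, not an OCZ-published figure
    MEMBERS = 4      # four Vertex controllers in RAID 0, per OCZ's description

    def raid0_target(logical_kb):
        """Map a logical offset (in KB) to (member drive, offset on that drive)."""
        stripe_index = logical_kb // STRIPE_KB
        member = stripe_index % MEMBERS
        offset = (stripe_index // MEMBERS) * STRIPE_KB + logical_kb % STRIPE_KB
        return member, offset

    for kb in (0, 64, 128, 192, 256):
        member, offset = raid0_target(kb)
        print(f"logical {kb:>3} KB -> drive {member}, offset {offset} KB")

Consecutive stripes land on different members, which is why sequential transfers can approach the combined throughput of the individual drives until the PCIe link or the RAID controller itself becomes the ceiling.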

Pricing and shipping dates have not yet been announced. However, based on the current cost of Vertex drives, pricing around the $800, $1400, and $3000 marks for the 250GB, 500GB, and 1TB models respectively can be inferred.
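
The inference is simple per-gigabyte scaling. A minimal sketch, assuming a hypothetical 2009 street price of roughly $3 per gigabyte for the standalone Vertex line (the real figure moves with the market and is not a quoted OCZ price):

    ASSUMED_VERTEX_USD_PER_GB = 3.00  # hypothetical per-GB price, for illustration only

    for capacity_gb in (250, 500, 1000):
        estimate = capacity_gb * ASSUMED_VERTEX_USD_PER_GB
        print(f"{capacity_gb:>4} GB Z-Drive: ~${estimate:,.0f} before any controller premium")

That lands in the same neighborhood as the figures above once a premium for the integrated RAID controller and packaging is added.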

Part Number             Size      Maximum Read / Write Speed
OCZSSDPCIE-1ZDRV250G    250GB     450 / 300 MB/s
OCZSSDPCIE-1ZDRV500G    500GB     510 / 480 MB/s
OCZSSDPCIE-1ZDRV1T      1000GB    500 / 470 MB/s

 



Comments



Is This Nothing More than SSD-RAID???
By Cobra Commander on 4/27/2009 2:11:41 PM , Rating: 2
I don't understand - isn't this just integrating some SSDs onto a RAID Controller???




RE: Is This Nothing More than SSD-RAID???
By Ratinator on 4/27/2009 2:29:33 PM , Rating: 3
Raid would imply 2 or more drives, wouldn't it? This is a single stand alone drive.


By wuZheng on 4/27/2009 2:53:23 PM , Rating: 3
In the sense of traditional hard disks, yes. However, with flash memory it's easier and more practical to make logical divisions of the flash and access them as if they were separate physical drives. So the RAID concept still applies.


RE: Is This Nothing More than SSD-RAID???
By VooDooAddict on 4/27/2009 2:30:40 PM , Rating: 3
In essence yes.

However, many inexpensive RAID cards or onboard RAID controllers aren't built to handle the sustained transfer rates that more than 2 SSDs can put out.

I'd love to make one of these a boot drive, then I'd have more drive bays open for 1TB or 1.5TB drives for mass storage.


RE: Is This Nothing More than SSD-RAID???
By semo on 4/27/09, Rating: -1
RE: Is This Nothing More than SSD-RAID???
By leexgx on 4/27/2009 8:32:07 PM , Rating: 1
your thinking of raid card Probeing the hard drives,

this SSD has no config, you plug it in and it work (maybe drivers) it may add at more then 30-50 secs off the boot time due to the Speed of it unless your replaceing it from SSD in the first place then it may add 10 secs to boot time but i realy think it boot faster, one of these vertex drives are fast on there own and 4 of them are just insane (vertex Z-Drive)

if you get an real RAID card (not compareing to the above) them things do take there time to get going as thay norm have stagered start up on for the hard disk so not to over load the PSU (peak load all disks starting up same time uses alot of power)


RE: Is This Nothing More than SSD-RAID???
By seamonkey79 on 4/28/2009 3:11:00 PM , Rating: 3
What?

?His? thinking of raid card probing the hard drives...

the rest made even less sense.

English is NOT that hard people, figure it out.


By leexgx on 4/28/2009 11:38:49 PM , Rating: 1
ok

there are more things that more Pricey RAID cards do then what on board do, some do take up to an min to do what ever thay do norm its sending hard disk spin up command to each SATA port waits then sees if something is pluged in (probe part norm waits 5-15 secs per port) this is more prone on Hardware SAS/SCSI RAID cards then cheap raid cards/onboard, as thay norm just ask all drives at once norm only adding 5 secs to the boot up time, but can make alot of load on the PSU, if power on spin up hold jumper is not used on the hard disks (only seen them on WD consumer drives)

some one used 2 raid cards and on board with 24 SSDs , somthing likey that may take time to start but as ssds have no spin up time not likey as thay respond to commands soon as power is applyed
http://www.youtube.com/watch?v=96dWOEa4Djs funny vid 24 SSDs 2GB/s speeds (total)

did not want to take it to far with the tech side guess i should of before


By ipay on 5/2/2009 3:45:09 AM , Rating: 2
What was that??

What language are you speaking in cause this technobabble didn't make any sense!!!


RE: Is This Nothing More than SSD-RAID???
By HrilL on 4/27/2009 2:39:43 PM , Rating: 2
Yeah, that is exactly what it is. But it's a lot faster as it avoids the bottleneck of SATA 2. SATA 3 will be out soon enough, though 10Gb/s might not be enough bandwidth to not be a bottleneck for long.


RE: Is This Nothing More than SSD-RAID???
By therealnickdanger on 4/27/2009 3:58:42 PM , Rating: 3
"SATA 3" is officially being called "SATA 6 Gbit/s" at the moment and does not reach speeds of 10Gbit/s, but rather 6Gbit/s, making is still much slower than PCIe. Even at its best, SATA 6Gbit still won't match PCIe. I think we are going to see an upset in storage interfaces in the coming years. As companies fight for faster SSDs, they will be FORCED to use PCIe in order to compete with drives like the Z-Drive (as well as similar products coming soon from Patriot and G.Skill).

I don't think "SATA 3" has much a future...


RE: Is This Nothing More than SSD-RAID???
By therealnickdanger on 4/27/2009 4:02:19 PM , Rating: 2
What I really meant to say is "I don't think 'SATA 3' has much of a future as a performance interface." Chances are good that if PCIe storage devices take off, we'll see PCIe controller cards with SATA 3Gbps and eventually SATA 6Gbps ports. So SATA 6Gbps will still have a place, but perhaps only as an internal connection method for PCIe controllers.

That would be awesome if it happened like that...


RE: Is This Nothing More than SSD-RAID???
By Targon on 4/28/2009 7:57:29 AM , Rating: 2
The problem is that using a controller designed for hard drives, including spin-up times, tolerances, and so on, will slow down and limit the speeds of SSDs that do not have these issues. What is really needed is a new type of interface that is designed with SSDs in mind, including the higher transfer rates.

So, how do you deal with this situation if you want to make a high performing SSD other than a dedicated controller card that goes into a slot? You could make a controller with a proprietary connection method, but without standards, who would buy such a thing other than corporations that buy into the idea?


By therealnickdanger on 4/28/2009 7:42:55 PM , Rating: 2
quote:
The problem is that using a controller designed for hard drives

I don't think this is a traditional HDD RAID controller. I have seen no statements or evidence to say one way or the other, but even if it was, it clearly isn't posing a problem for these drives to hit incredible speeds.


RE: Is This Nothing More than SSD-RAID???
By Cobra Commander on 4/27/2009 4:14:53 PM , Rating: 2
Enterprise doesn't use SATA to begin with so that's a non-issue.


By Bytre on 4/27/2009 6:22:11 PM , Rating: 2
There's also 6GB SAS.


RE: Is This Nothing More than SSD-RAID???
By leexgx on 4/27/2009 8:34:58 PM , Rating: 2
Also, it's gigabit not gigabyte, so its max speed without overhead is likely to be 500-550MB/s; that's half the speed the OCZ Vertex Z-Drive can offer.

Damn thing is very costly though (I'd prefer to just get a 250GB normal Vertex + 1TB HDD).


By therealnickdanger on 4/27/2009 9:44:01 PM , Rating: 2
Gbit/s = gigabits per second.


By MrPoletski on 4/28/2009 3:29:59 AM , Rating: 2
what about a hard disk drive with multiple sata lanes?


RE: Is This Nothing More than SSD-RAID???
By luceri on 4/28/2009 10:10:47 AM , Rating: 2
I agree. We'll probably start seeing higher end motherboards containing 3-4-5x pci-e x16 slots, then the mainstream and budget grade have SATA-3 only with 1-2x pci-e x16 slots.

The thing is though, you can only fit so many pci-e slots on a motherboard. the things take up a lot of space and aren't cost-effective. Something better than SATA3 needs to come out and likely will quickly.


By luceri on 4/28/2009 10:12:29 AM , Rating: 2
Another thought: perhaps for SSDs we'll soon be seeing something like the common 4x RAM slots that lock the modules into place, right on the motherboard. I don't know, it's just a thought and would make things simple.


By emboss on 4/27/2009 3:15:39 PM , Rating: 3
Probably not even that. I'm willing to bet that if you pop the cover you'll find an essentially standard RocketRAID card and 4 SSDs (sans casing) with a bit of cabling. So integrating it all into a single package, but probably not a single PCB.


not bad
By Chernobyl68 on 4/27/2009 1:13:39 PM , Rating: 3
Now if those two-slot video cards would stop making the other PCI slots useless...




RE: not bad
By dflynchimp on 4/27/2009 1:39:38 PM , Rating: 1
Motherboards with more than one PCI-E slot also need to come down in price, and frankly I'd like to see a complete phasing out of the traditional PCI slot. These board makers need to see PCI-E as more than just a platform for multi-GPU; many other devices could benefit from the higher bandwidth of PCI-E.


RE: not bad
By tastyratz on 4/27/09, Rating: 0
RE: not bad
By dflynchimp on 4/27/2009 3:55:02 PM , Rating: 3
I suppose I should clarify myself.

What I was hoping for is motherboards that come equipped only with PCI-E x16 slots. No PCI, no PCI-E x1, just x16; could be 2.0 or 1.0. The point is that a unified standard would give us much more flexibility in placement and card selection.


RE: not bad
By Motley on 4/27/2009 11:04:48 PM , Rating: 4
You know that you can put any x1, x4, or x8 PCI-e card into a PCI-e x16 slot, right?

Any higher slot will accept lower cards, so if your motherboard has 3 x16 slots, a x4 and a x1 slot, you can put in two nvidia 295 cards for 4-way SLI, put one of these in the 3rd x16 slot, a soundcard in the x1 slot, and still have a x4 slot free for a raid card of some sort.


RE: not bad
By tastyratz on 4/29/2009 11:11:27 AM , Rating: 2
Yes, this is true.

The issue is that these slots come at a price. People want their cake and to eat it too; I would love only x16 slots all down the line in a row on every motherboard, but people won't want to pay the 600 bucks it costs to build a motherboard that can handle it.

Yes, all those full-speed slots can negotiate down, but the reality is anyone requiring something close to that would be an EXTREME minority, so with little market demand comes little penetration.

What I would like to see is standardization of only full-length slots on ATX-sized motherboards, even if they are electrically only x1 or x4 speed. Obviously they would need to be marked so people understood it was only a x1 slot, but it would allow more flexibility in card choice/placement/etc.
I run a server 4x RAID card with a complex array attached, so I feel the pain on motherboard selection. I can only buy a crossfire/SLI mobo to use it and can't run SLI without a 3-way SLI mobo in place.


RE: not bad
By mixpix on 4/27/2009 2:17:38 PM , Rating: 2
I totally agree. It's about time they just started sticking more PCIe slots in.

One problem exists with the location of PCI and PCIe slots. Once you start putting multiple video cards into a board it pretty much shoots all of your other expansion slots dead because of room issues. I think they also need to develop a new layout or start using risers.


RE: not bad
By grath on 4/27/2009 7:08:17 PM , Rating: 2
One word... ePCIe

(yes i know its not a word, come up with a better one)

I think the simplest and most effective solution would be to offer an external PCI-Express connection. For the most part, the devices that we need to plug into our blocked slots (sound and tuners) need only one PCIe lane, and don't really need to be inside the case anyway. The currently available (or affordable) external options for these using USB or FireWire are generally agreed to be inferior to a PCI/e solution.

So why not offer a PCIe interface to these devices that is not physically restricted to the slot location on a board? It doesn't even need to be truly external; even the ability to connect a cable to those blocked x1 slots and run it to another location inside the case would be useful. An empty 5.25" drive bay looks just about big enough to fit two reasonably long cards and the required small backplane. Gonna sprint to the patent office now, bye


RE: not bad
By drank12quartsstrohsbeer on 4/27/2009 7:20:22 PM , Rating: 2
That already exists, at least the ePCI-e standard does. Haven't seen anyone make any products yet.


RE: not bad
By Visual on 4/28/2009 5:16:15 AM , Rating: 2
ExpressCard slots on laptops are essentially external PCI-Express x1 slots. They are mostly used with small devices that fit completely inside the slot, but there's nothing stopping manufacturers from making bigger products that sit in a separate external case and just plug into the ExpressCard slot with a cable.

There actually are existing solutions to use any pci-express card in an external box with a cable plugged in either an expresscard slot or a special internal pci-express adapter card.
http://www.magma.com/products/pciexpress/expressbo...
http://www.magma.com/products/pciexpress/expressbo...
Just don't look at the price if you don't want to give up all hope of living ;)

But the limitation of just a single pci-express lane when using expresscard is disappointing. We need a standard with more lanes.

There is the new ATI XGP "standard", which uses a 2-lane external pci-express port, but it is far from standardized yet. I think only one model of a Fujitsu Amilo notebook has it, and I am not sure if it can be used for any generic device - currently it is only used with a HD3870 card.

Asus was also working on some variant of external graphics which possibly might involve a generic external pci-express solution, but I don't think anything's out yet and I don't know any details.


RE: not bad
By Amiga128 on 4/27/2009 8:26:42 PM , Rating: 2
I was thinking about the same thing but also for CPU's.

Take an NVIDIA Ion PC, remove the sound, add a DDR2/DDR3 memory stick, and put it in a box the same size as a DVD drive that fits in the 5.25" drive bay at the front, and add an ePCIe connection to the motherboard.

Adding an extra CPU would be just like adding a DVD drive. Pop into bay, connect power and ePCIe and you now have an extra CPU. The ram on card can be used as cache if needed. The CPU can be any chip you want from Intel, AMD, Cell etc.

The same could be used for graphics, but the 5.25" drive bays would be at the back once the PCI/PCI Express expansion slots go.

The main changes would be that PCI Express should be optical fibre and all power should come from the power supply, as the amount of space taken up by PCI Express is getting too big. Adding a graphics card should be just as easy as adding a DVD drive, with the connector only taking up a small space on the motherboard. Motherboards would be cheaper and smaller as most of the power for the devices would come from the power supply, not the motherboard.

I would also have an optical connection from the power supply to the motherboard so you can monitor how much power is being used by the power supply for all the devices.


RE: not bad
By Jacerie on 4/28/2009 1:41:11 AM , Rating: 3
The last thing we need is more cables hanging out of a PSU. The modular PSUs are nice, but there is only so much cable management you can do.
The ideal solution would be to simply have one cable from the PSU to the mobo and all power to cards and drives redirected through the mobo, but I doubt it's feasible with the power draw of some devices.


RE: not bad
By Amiga128 on 4/28/2009 5:46:46 PM , Rating: 2
The only change would be PCI express.

Some cards already have 2 PCI express power connectors anyway.

If all cards got their power from the PSU then all cards would only have 1 PCI Express connector from the PSU instead of 1 from the motherboard and 1/2 from the PSU.


RE: not bad
By kevinkreiser on 4/27/2009 2:34:37 PM , Rating: 2
In order to phase out pci we need to start seeing reliable pci-e wireless cards, tv tuners, and sound cards (professional recording). Until companies who make these cards start providing them, we are stuck with pci. Granted some do exist, but most of them suck. I do hope it starts happening though.


RE: not bad
By cfaalm on 4/27/2009 3:16:22 PM , Rating: 2
I agree on the tuners and such. Let's agree to not develop any more PCI cards, but only PCIe.

I know a couple of professional PCIe soundcards that definitely don't suck. It is also a matter of protecting your investment. If you have heavily invested in PCI add-on cards like DSPs and such, you would want them to operate for as long as possible.

I think we need more PCIe lanes in the future and start equipping/augmenting chipsets accordingly. Is it even viable to develop a new SATA standard when you can connect to the PCIe bus this way? As for the real estate on a motherboard, we could use connectors on the side and connect them still like regular drives.

If you want to say goodbye to PCI today, buy a Mac.


RE: not bad
By xti on 4/27/2009 3:34:02 PM , Rating: 2
Another vote for better tuners... maybe it's the software... but everything seems so sluggish nowadays in comparison to PCI cards of the past.

We are just running out of PCI cards that work with today's boards/OS, or had to abandon PCI cards b/c of room issues as stated elsewhere here.


RE: not bad
By Cheesew1z69 on 4/27/2009 4:29:30 PM , Rating: 2
Perhaps we don't have many since PCI is an old standard and is slowly being phased out? Has nothing to do with room...


RE: not bad
By emboss on 4/29/2009 3:51:51 PM , Rating: 3
The main "problem" with PCIe is that the upfront costs for a PCIe-capable chip are much higher than a standard PCI chip.

This is for two reasons. First, PCI IP cores are relatively cheap - this is because they're relatively easy to design, and have been around for a long time. PCIe, on the other hand, requires knowledge of high-speed analog design, and is overall much more complicated to design. Also, it's much more recent, so PCIe IP cores are generally much more expensive to license.

Secondly, you can't really do PCIe on anything larger than 130 nm, due to the high speed requirements. PCI you can do on pretty much anything. For high-volume parts this isn't a huge issue, since smaller processes are generally cheaper for producing large quantities. However, the upfront cost for a 130 nm mask set is close to a million dollars. In contrast, a similar mask set for 250 nm can be got for under 50 grand.

Basically, you need a chunk of venture capitalist coin before you can release the Next Big Thing as a native PCIe card. If you go for standard PCI, you can do it by partially remortgaging your house. An alternative is to use an external PCIe<->PCI bridge, but this approach has its own problems.


limit?
By Alphafox78 on 4/27/09, Rating: 0
RE: limit?
By mjcutri on 4/27/2009 1:24:44 PM , Rating: 2
You needed to start your quote one word earlier:
quote:
Maximum read and write speeds vary for each model in the series, although the maximum sustained write speed will be limited to 200 MB/s for all Z-Drives.


Reading is fundamental. There is a big difference between MAXIMUM and SUSTAINED...


RE: limit?
By Alphafox78 on 4/27/2009 1:26:11 PM , Rating: 1
quote:
Maximum Read Speed/ Write Speed


The graph doesn't differentiate.


RE: limit?
By Ratinator on 4/27/2009 2:09:43 PM , Rating: 3
But it does, Maximum Read/Write Speed is just what it is.

Did you want them to say Maximum Non Sustained Read/Write Speed?


RE: limit?
By Alphafox78 on 4/27/09, Rating: 0
RE: limit?
By leexgx on 4/27/2009 8:39:45 PM , Rating: 2
They come with 256MB of cache (64MB per SSD, I guess), so the max speed will likely come from that; that's a lot of cache just for a drive, you know.

So burst speed is likely the max they can do; sustained is 200MB/s write when being written to constantly (read is not affected as it can always read at full speed).

Tbh that is one fast thing there, can't wait until someone reviews it (come on AnandTech, get your hands on one).
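
One way to see how the quoted maximums and the 200 MB/s sustained limit can coexist is to work out how long a 256MB write cache could absorb a burst above the sustained rate. A rough sketch, using the 250GB model's quoted maximum write speed as the burst figure and ignoring cache-management overhead:

    CACHE_MB = 256          # combined onboard cache, per the article
    SUSTAINED_WRITE = 200   # MB/s, stated sustained write limit for all Z-Drives
    BURST_WRITE = 300       # MB/s, quoted maximum write speed of the 250GB model

    # The cache fills at the difference between the incoming and drained rates.
    seconds_at_burst = CACHE_MB / (BURST_WRITE - SUSTAINED_WRITE)
    print(f"A full-speed write burst can be absorbed for about {seconds_at_burst:.1f} s")  # ~2.6 s

After that, writes would fall back toward what the flash behind the cache can sustain; reads aren't limited in the same way.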


RE: limit?
By ipay on 5/2/2009 3:51:25 AM , Rating: 2
I thought that SSDs shouldn't have that whole sustained/average speed difference due to almost no seek times? Or is it because the MLC drives are practically RAIDed by design?


Oh!
By frombauer on 4/27/2009 1:18:18 PM , Rating: 4
Imagine these babies in 3-way SLI... lol




RE: Oh!
By RandallMoore on 4/27/2009 3:43:39 PM , Rating: 3
wow... that sounded waaaay too sexual lol


RE: Oh!
By grath on 4/27/2009 7:15:07 PM , Rating: 2
a fast hard drive is a fast sports car is a fast woman


Prices (taken from Froogle.com) are actually...
By Inspector2211 on 4/27/2009 1:26:02 PM , Rating: 2
...$1200 and $1900 for the two smaller drives, respectively, and no price for the Terabyte drive is available yet.

Oh well.

Back to my fallback plan of two Vertex drives in a RAID0 configuration...




RE: Prices (taken from Froogle.com) are actually...
By leexgx on 4/27/2009 8:46:35 PM , Rating: 2
OS on first Vertex {60GB}
Games on second Vertex {250GB}
1TB disk for storage

Best option for SSDs really; RAID can add latency to them and make them slower. Better to just put in 2 drives: C: SSD 64GB, D: SSD, and E: HDD.

So then the OS can do things without bothering you.

But really, one OCZ Vertex 250GB is very fast and should be fine for OS, games, and other stuff; 7 seconds to load most big games, and that's most SSDs as well as the crappy dual-JMicron hack SSDs.

Also, you're less likely to lose data when RAID fails.


RE: Prices (taken from Froogle.com) are actually...
By Targon on 4/28/2009 8:00:19 AM , Rating: 2
RAID can add some latency if the drives are garbage and the controller isn't designed well. You go with four drives and a fast RAID controller and your latency goes out the window. 2ms seek times should be fast enough for most people to handle.


By leexgx on 4/28/2009 11:45:59 PM , Rating: 2
I agree, access time (no seek time on SSDs) is just insane on SSDs (barring JMicron SSDs, but that's more due to their random write problems).

I'd use RAID with two G.Skill Titan 250GB SSDs (maybe smaller) as they are quite cheap now; the RAID part of the system should hide the write delays that JMicron-type disks are prone to, and the performance should be very good.

Really, any SSD should be far better than a hard disk if you're lucky and your motherboard hides the write latency.


Why?
By icanhascpu on 4/27/2009 7:03:42 PM , Rating: 2
Why does this need to be so FING HUGE?




RE: Why?
By jarman on 4/27/2009 7:39:59 PM , Rating: 2
Good question. Also, I'm getting a bad feeling looking at the size of that drive and the exhaust ports on the bracket...

I wonder if they're going to have to put a loud fan on that thing to cool the controller(s).

Time will tell.


RE: Why?
By Dribble on 4/28/2009 4:27:53 AM , Rating: 2
Stick 4 SSDs side by side, add a bit of extra space for the RAID controller, and that's how big it is.


RE: Why?
By techsup1 on 4/28/2009 10:54:03 AM , Rating: 2
The card format seems like a logical interface for something that is essentially just a collection of chips. The only reason I can see that drives exist in bays is that they contain spinning, motor-driven media, which must be safely secured in brackets. Future motherboards should be made to contain controllers on the main board that can interface with these cards; that would seem to eliminate the problem of these cards being too large and hot because of their integrated controllers.


Bootable?
By Mr Perfect on 4/27/2009 1:22:47 PM , Rating: 2
Can you boot off of them? Or can they only be used as secondary drives?




RE: Bootable?
By Adul on 4/27/2009 1:29:29 PM , Rating: 2
You should be able to; think of them as an add-in RAID card. The only difference is the RAID card includes the drives on board.


RE: Bootable?
By StraightPipe on 4/27/2009 6:04:44 PM , Rating: 2
I had the same question.

I know the Fusion IO drive CANNOT be used as a boot drive... so it's not going to be the IO-slinging OS boot disk of the future... severely limiting its uses.


"Nowadays you can buy a CPU cheaper than the CPU fan." -- Unnamed AMD executive














botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki