
OCZ enters yet another market, jostling with Fusion-IO and Super Talent for enterprise and enthusiast dollars

The promise of fast access times has already lured many enthusiasts over to SSDs. Maximum capacities are doubling every year, and costs are dropping as new process technologies are introduced.

One of the most important target markets for SSD manufacturers is the enterprise, where customers demand the fastest access times possible, whatever the cost. SSDs are often used in tiered storage scenarios, replacing short-stroked 15k RPM mechanical hard disk drives. Even though SSDs are expensive in terms of cost per gigabyte, they offer the greatest performance return for servers thanks to their fast access times and read/write rates. Power and cooling requirements are also greatly reduced.

OCZ recently launched its Vertex EX series of SSDs to compete in this lucrative market, but SSDs are already starting to be limited by the SATA interface. Companies like Fusion-IO, which counts Steve Wozniak on its Board of Directors, have tackled the problem by using the PCI-Express interface, which is available in x1, x4, x8, and x16 slots on most motherboards.

Super Talent recently announced its RAIDDrive SSD with up to 2TB of storage, but it won't be available until June. It uses a PCI-E x8 slot to achieve read speeds of up to 1.2 GB/s, far exceeding the 300 MB/s design limit of the SATA 2.0 specification.
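For rough context on those figures: after 8b/10b encoding overhead, PCI-E 1.x carries about 250 MB/s per lane in each direction and PCI-E 2.0 about 500 MB/s, while SATA 2.0 tops out at roughly 300 MB/s. A minimal sketch of that arithmetic follows; the per-lane numbers are theoretical link rates, not measured drive throughput.

# Theoretical bandwidth comparison: PCI-Express links vs. the SATA 2.0 limit.
# Per-lane figures are usable rates after 8b/10b encoding overhead.
PCIE_1_PER_LANE_MBPS = 250   # PCI-E 1.x: 2.5 GT/s * 8/10 / 8 bits per byte
PCIE_2_PER_LANE_MBPS = 500   # PCI-E 2.0: 5.0 GT/s * 8/10 / 8 bits per byte
SATA_2_MBPS = 300            # SATA 2.0: 3.0 Gb/s * 8/10 / 8 bits per byte

def pcie_bandwidth(lanes, per_lane_mbps):
    """Theoretical one-direction bandwidth of a PCI-E link in MB/s."""
    return lanes * per_lane_mbps

for lanes in (1, 4, 8, 16):
    bw = pcie_bandwidth(lanes, PCIE_2_PER_LANE_MBPS)
    print(f"PCI-E 2.0 x{lanes}: {bw} MB/s "
          f"({bw / SATA_2_MBPS:.1f}x the SATA 2.0 limit)")

# The RAIDDrive's claimed 1.2 GB/s fits comfortably within a PCI-E x8 link
# (2,000 MB/s at 1.x rates, 4,000 MB/s at 2.0 rates), but far exceeds SATA 2.0.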

OCZ will compete against the RAIDDrive with its own Z-Drive SSD, which uses a PCI-E 2.0 x4 slot. It will feature a combined 256MB cache managed by an onboard RAID controller. Capacities of 250GB, 500GB, and 1TB will be offered. Maximum read and write speeds vary by model, although the maximum sustained write speed will be limited to 200 MB/s for all Z-Drives. Random read and write speeds were not made available.

Weighing only 500 grams, the Z-Drive will also save space for power users who would otherwise RAID multiple Vertex drives. It has an MTBF (Mean Time Between Failures) of 900,000 hours along with a two-year warranty.

 “It is our goal to deliver tailored SSD solutions for the complete spectrum of high performance applications,” said Eugene Chang, Vice President of Product Management for the OCZ Technology Group.
 
“Designed for ultra high performance consumers, the Z-Drive takes the SATA bottleneck out of the equation by employing the ultra fast PCI-Express architecture with a RAID controller and four Vertex controllers configured in four-way RAID 0 within an all-in-one product, making this solution ideal for applications that put a premium on both storage performance and maximum capacity.”
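The four-way RAID 0 arrangement described above simply stripes data across the four Vertex controllers, so large sequential transfers are serviced by all of them at once. Below is a minimal sketch of the striping idea; the 64KB stripe size is a hypothetical value for illustration, since OCZ has not published the Z-Drive's actual stripe size or firmware behavior.

# Minimal illustration of RAID 0 striping across four members.
# The stripe size here is a hypothetical 64 KB, used only for illustration.
from collections import Counter

STRIPE_SIZE = 64 * 1024
MEMBERS = 4

def map_chunk(offset):
    """Map a byte offset on the array to (member_index, member_offset)."""
    stripe_index = offset // STRIPE_SIZE
    member = stripe_index % MEMBERS
    member_offset = (stripe_index // MEMBERS) * STRIPE_SIZE + offset % STRIPE_SIZE
    return member, member_offset

# A 1 MB sequential read touches all four members equally, which is why
# sequential throughput scales roughly with the member count.
touched = Counter(map_chunk(off)[0] for off in range(0, 1024 * 1024, STRIPE_SIZE))
print(touched)  # Counter({0: 4, 1: 4, 2: 4, 3: 4})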

Pricing and shipping dates have not yet been announced. However, based on the current cost of Vertex drives, pricing around $800, $1,400, and $3,000 for the 250GB, 500GB, and 1TB models, respectively, can be inferred.
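That inference is simple per-gigabyte arithmetic. Here is a minimal sketch of it, assuming roughly $3 per gigabyte for Vertex-class flash in early 2009 plus a fixed premium for the RAID controller and PCI-E board; both constants are assumptions for illustration, not OCZ figures.

# Back-of-the-envelope price estimate for the Z-Drive line.
# Both constants below are assumptions for illustration, not OCZ figures.
ASSUMED_PRICE_PER_GB = 3.00     # rough Vertex-class street price, early 2009
ASSUMED_BOARD_PREMIUM = 100.00  # RAID controller + PCI-E board + cache

for capacity_gb in (250, 500, 1000):
    estimate = capacity_gb * ASSUMED_PRICE_PER_GB + ASSUMED_BOARD_PREMIUM
    print(f"{capacity_gb} GB: ~${estimate:,.0f}")

# Prints roughly $850, $1,600, and $3,100 -- in the same ballpark as the
# $800 / $1,400 / $3,000 marks inferred above.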

Part Number             Size      Maximum Read / Write Speed
OCZSSDPCIE-1ZDRV250G    250GB     450 / 300 MB/sec
OCZSSDPCIE-1ZDRV500G    500GB     510 / 480 MB/sec
OCZSSDPCIE-1ZDRV1T      1000GB    500 / 470 MB/sec

 



Comments



not bad
By Chernobyl68 on 4/27/2009 1:13:39 PM , Rating: 3
Now if those two-slot video cards would stop making the other PCI slots useless...




RE: not bad
By dflynchimp on 4/27/2009 1:39:38 PM , Rating: 1
Motherboards with more than one PCI-E slot also need to come down in price, and frankly I'd like to see a complete phasing out of the traditional PCI slot. These board makers need to see PCI-E as more than just a platform for multi-GPU setups. Many other devices could benefit from the higher bandwidth of PCI-E.


RE: not bad
By tastyratz on 4/27/09, Rating: 0
RE: not bad
By dflynchimp on 4/27/2009 3:55:02 PM , Rating: 3
I suppose I should clarify myself.

What I was hoping for is motherboards that come equipped only with PCI-E x16 slots. No PCI, no PCI-E x1, just x16. It could be 2.0 or 1.0. The point is that if we had a unified standard, it would give us much more flexibility in placement and card selection.


RE: not bad
By Motley on 4/27/2009 11:04:48 PM , Rating: 4
You know that you can put any x1, x4, or x8 PCI-e card into a PCI-e x16 slot, right?

Any higher slot will accept lower cards, so if your motherboard has 3 x16 slots, a x4 and a x1 slot, you can put in two nvidia 295 cards for 4-way SLI, put one of these in the 3rd x16 slot, a soundcard in the x1 slot, and still have a x4 slot free for a raid card of some sort.
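That rule of thumb is easy to state precisely: a card fits mechanically in any slot at least as wide as the card, and the link then trains at the narrower of the two widths. A minimal sketch, ignoring open-ended slots and slots wired electrically narrower than their physical length:

# PCI-E slot compatibility rule of thumb: a card fits in any slot at least
# as wide as the card, and the link trains at the narrower of the two widths.
# Ignores open-ended slots and slots wired narrower than their length.
def link_width(card_lanes, slot_lanes):
    """Return the negotiated lane count, or None if the card won't fit."""
    if card_lanes > slot_lanes:
        return None                      # card is physically too wide
    return min(card_lanes, slot_lanes)   # link negotiates down to the card

for card, slot in [(1, 16), (4, 16), (8, 16), (16, 4)]:
    print(f"x{card} card in x{slot} slot ->", link_width(card, slot))
# x1 card in x16 slot -> 1 ... x16 card in x4 slot -> None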


RE: not bad
By tastyratz on 4/29/2009 11:11:27 AM , Rating: 2
Yes, this is true,

the issue is that these slots come at a price. People want to have their cake and eat it too. I would love only x16 slots all down the line in a row on every motherboard, but people won't want to pay the 600 bucks it costs to build a motherboard that can handle it.

Yes, all those full-speed slots can negotiate down, but the reality is that anyone requiring something close to that would be an EXTREME minority, so with little market demand comes little penetration.

What I would like to see is standardization on only full-length slots on ATX-sized motherboards, even if they are electrically only x1 or x4 speed. Obviously they would need to be marked so people understood it was only an x1 slot, but it would allow more flexibility in card choice/placement/etc.
I run a server 4x RAID card with a complex array attached, so I feel the pain on motherboard selection. I can only buy a CrossFire/SLI mobo to use it and can't run SLI without a 3-way SLI mobo in place.


RE: not bad
By mixpix on 4/27/2009 2:17:38 PM , Rating: 2
I totally agree. It's about time they just started sticking more PCIe slots in.

One problem exists with the location of PCI and PCIe slots. Once you start putting multiple video cards into a board it pretty much shoots all of your other expansion slots dead because of room issues. I think they also need to develop a new layout or start using risers.


RE: not bad
By grath on 4/27/2009 7:08:17 PM , Rating: 2
One word... ePCIe

(yes, I know it's not a word, come up with a better one)

I think the simplest and most effective solution would be to offer an external PCI-Express connection. For the most part, the devices that we need to plug into our blocked slots (sound and tuners) need only one PCIe lane, and don't really need to be inside the case anyway. The currently available (or affordable) external options for these using USB or FireWire are generally agreed to be inferior to a PCI/PCIe solution.

So why not offer a PCIe interface to these devices that is not physically restricted to the slot location on a board? It doesn't even need to be truly external; even the ability to connect a cable to those blocked x1 slots and run it to another location inside the case would be useful. An empty 5.25" drive bay looks just about big enough to fit two reasonably long cards and the required small backplane. Gonna sprint to the patent office now, bye.


RE: not bad
By drank12quartsstrohsbeer on 4/27/2009 7:20:22 PM , Rating: 2
That already exists, at least the ePCIe standard does. Haven't seen anyone make any products yet.


RE: not bad
By Visual on 4/28/2009 5:16:15 AM , Rating: 2
ExpressCard slots on laptops are essentially external PCI-Express x1 slots. They are mostly used with small devices that fit completely inside the slot, but there's nothing stopping manufacturers from making bigger products that sit in a separate external case and just plug into the ExpressCard slot with a cable.

There actually are existing solutions to use any pci-express card in an external box with a cable plugged in either an expresscard slot or a special internal pci-express adapter card.
http://www.magma.com/products/pciexpress/expressbo...
http://www.magma.com/products/pciexpress/expressbo...
Just don't look at the price if you don't want to give up all hope of living ;)

But the limitation of just a single pci-express lane when using expresscard is disappointing. We need a standard with more lanes.

There is the new ATI XGP "standard", which uses a 2-lane external pci-express port, but it is far from standardized yet. I think only one model of a Fujitsu Amilo notebook has it, and I am not sure if it can be used for any generic device - currently it is only used with a HD3870 card.

Asus was also working on some variant of external graphics which possibly might involve a generic external pci-express solution, but I don't think anything's out yet and I don't know any details.


RE: not bad
By Amiga128 on 4/27/2009 8:26:42 PM , Rating: 2
I was thinking about the same thing, but also for CPUs.

Take an NVIDIA Ion PC, remove the sound, add a DDR2/DDR3 memory stick, and put it in a box the same size as a DVD drive that fits in the 5.25" drive bay at the front, with an ePCIe connection to the motherboard.

Adding an extra CPU would be just like adding a DVD drive. Pop it into a bay, connect power and ePCIe, and you now have an extra CPU. The RAM on the card can be used as cache if needed. The CPU can be any chip you want from Intel, AMD, Cell, etc.

The same could be used for graphics, but the 5.25" drive bays would be at the back once the PCI/PCI-Express expansion slots go.

The main changes would be that PCI-Express should be optical fibre and that all power should come from the power supply, since the amount of space taken up by PCI-Express is getting too big. Adding a graphics card should be just as easy as adding a DVD drive, with the connector only taking up a small space on the motherboard. Motherboards would be cheaper and smaller, as most of the power for the devices would come from the power supply, not the motherboard.

I would also have an optical connection from the power supply to the motherboard so you can monitor how much power is being used by the power supply for all the devices.


RE: not bad
By Jacerie on 4/28/2009 1:41:11 AM , Rating: 3
The last thing we need is more cables hanging out of a PSU. The modular PSUs are nice, but there is only so much cable management you can do.
The ideal solution would be to simply have one cable from the PSU to the mobo and all power to cards and drives redirected through the mobo, but I doubt it's feasible with the power draw of some devices.


RE: not bad
By Amiga128 on 4/28/2009 5:46:46 PM , Rating: 2
The only change would be PCI express.

Some cards already have 2 PCI express power connectors anyway.

If all cards got their power from the PSU, then each card would have only one PCI-Express connector from the PSU instead of one from the motherboard and one or two from the PSU.


RE: not bad
By kevinkreiser on 4/27/2009 2:34:37 PM , Rating: 2
In order to phase out pci we need to start seeing reliable pci-e wireless cards, tv tuners, and sound cards (professional recording). Until companies who make these cards start providing them, we are stuck with pci. Granted some do exist, but most of them suck. I do hope it starts happening though.


RE: not bad
By cfaalm on 4/27/2009 3:16:22 PM , Rating: 2
I agree on the tuners and such. Let's agree to not develop any more PCI cards, but only PCIe.

I know a couple of professional PCIe sound cards that definitely don't suck. It is also a matter of protecting your investment. If you have heavily invested in PCI add-on cards like DSPs and such, you would want them to operate for as long as possible.

I think we need more PCIe lanes in the future and start equipping/augmenting chipsets accordingly. Is it even viable to develop a new SATA standard when you can connect to the PCIe bus this way? As for the real estate on a motherboard, we could use connectors on the side and connect them still like regular drives.

If you want to say goodbye to PCI today, buy a Mac.


RE: not bad
By xti on 4/27/2009 3:34:02 PM , Rating: 2
Another vote for better tuners... maybe it's the software... but everything seems so sluggish nowadays in comparison to the PCI cards of the past.

We are just running out of PCI cards that work with today's boards/OSes, or have had to abandon PCI cards because of room issues as stated elsewhere here.


RE: not bad
By Cheesew1z69 on 4/27/2009 4:29:30 PM , Rating: 2
perhaps we don't have many since pci is an old standard and is slowly being phased out? has nothing to do with room...


RE: not bad
By emboss on 4/29/2009 3:51:51 PM , Rating: 3
The main "problem" with PCIe is that the upfront costs for a PCIe-capable chip are much higher than a standard PCI chip.

This is for two reasons. First, PCI IP cores are relatively cheap - this is because they're relatively easy to design, and have been around for a long time. PCIe, on the other hand, requires knowledge of high-speed analog design, and is overall much more complicated to design. Also, it's much more recent, so PCIe IP cores are generally much more expensive to license.

Secondly, you can't really do PCIe on anything larger than 130 nm, due to the high speed requirements. PCI you can do on pretty much anything. For high-volume parts this isn't a huge issue, since smaller processes are generally cheaper for producing large quantities. However, the upfront cost for a 130 nm mask set is close to a million dollars. In contrast, a similar mask set for 250 nm can be had for under 50 grand.

Basically, you need a chunk of venture capitalist coin before you can release the Next Big Thing as a native PCIe card. If you go for standard PCI, you can do it by partially remortgaging your house. An alternative is to use an external PCIe<->PCI bridge, but this approach has its own problems.
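Using the mask-set figures quoted in the comment above, the upfront gap only disappears at high volumes. A minimal sketch of that break-even arithmetic, leaving out per-wafer production costs since they vary by fab and volume:

# Amortized mask-set cost per chip, using the figures quoted above.
# Per-wafer production costs are left out; they depend on fab and volume.
MASK_COST_130NM = 1_000_000   # ~$1M for a 130 nm mask set (PCIe-capable)
MASK_COST_250NM = 50_000      # ~$50k for a 250 nm mask set (PCI-only)

for volume in (10_000, 100_000, 1_000_000):
    per_chip_130 = MASK_COST_130NM / volume
    per_chip_250 = MASK_COST_250NM / volume
    print(f"{volume:>9,} units: 130 nm adds ${per_chip_130:6.2f}/chip, "
          f"250 nm adds ${per_chip_250:5.2f}/chip")

# At 10k units the 130 nm mask set alone adds $100 per chip; at 1M units
# it is down to $1 -- which is why only high-volume parts go native PCIe.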


"Can anyone tell me what MobileMe is supposed to do?... So why the f*** doesn't it do that?" -- Steve Jobs














botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki