
640GB ioDrive Duo  (Source: DailyTech)
Fusion-io announces new PCI Express SSD product with massive performance promises

Storage in the computer market currently revolves around two types of products -- the HDD and the SSD. The SSD is faster and requires less power to operate, leading to better battery life in portable computers, while the HDD offers lower cost and more storage capacity than current SSDs.

A company called Fusion-io is offering a new product called the ioDrive Duo, which it claims to be the world's fastest and most innovative SSD. The company says that the product doubles the slot capacity of its PCI Express ioDrive storage solution.

The new ioDrive Duo offers what the company claims are previously unheard-of levels of performance, capacity, and protection for a single server. Fusion-io says the product can scale to 6GB/sec of read bandwidth and over 500,000 read IOPS when four ioDrive Duos are used.

David Flynn from Fusion-io said in a statement, "Many database and system administrators are finding that SANs are too expensive and don’t meet performance, protection and capacity utilization expectations. This is why more and more application vendors are moving toward application-centric solid-state storage. The ioDrive Duo offers the enterprise the advantages of application-centric storage without application-specific programming."

The ioDrive Duo fits into PCI Express x8 or x16 slots, which can sustain up to 20Gb/sec of raw throughput. The company says the card itself can easily sustain 1.5GB/sec of read bandwidth and nearly 200,000 read IOPS. Sustained read bandwidth is 1500 MB/sec and sustained write bandwidth is 1400 MB/sec (both at a 32k packet size), while read IOPS are 186,000 and write IOPS 167,000 (both at a 4k packet size).

The ioDrive Duo offers multi-bit error detection and correction plus Flashback protection, providing chip-level N+1 redundancy and on-board self-healing. The product can also be configured for RAID-1 mirroring between the two ioMemory modules on the same ioDrive Duo PCIe card.

The new cards will be available in April 2009 in 160GB, 320GB, and 640GB capacities; a 1.28TB version isn't coming until the second half of 2009. The typical SSD, like the offerings from Intel, is sized like a normal hard drive and connects via SATA and other enterprise connection standards.


For storage
By carbon12 on 3/12/2009 12:47:38 PM , Rating: 3
That's great... but is it bootable?

RE: For storage
By wifiwolf on 3/12/09, Rating: 0
RE: For storage
By Dribble on 3/12/2009 1:35:37 PM , Rating: 5
How can you be sure it won't be bootable?

It's certainly possible to boot off a PCI-E card. I could stick in a PCI-E raid controller with 8 SSD's in raid 0 and boot off that. If that's possible what makes you so sure they can't make it work?

RE: For storage
By Kougar on 3/12/2009 3:13:32 PM , Rating: 5
Because the original Fusion-Io was not bootable, due to some hardware issue or another they declined to get into details about.

They stated it would be fixed in future products, but I wouldn't assume this PCIe card was bootable unless they specifically mention that it is.

RE: For storage
By lexluthermiester on 3/13/2009 4:38:12 AM , Rating: 2
That is an assumption on your part, but it is likely to be right.

If you click on the image of the card itself, you will note that there is no firmware package, which means it's likely non-bootable. Cards that have no boot design, with few exceptions, don't have firmware chips. So there we go.

RE: For storage
By Tamale on 3/13/2009 3:43:28 PM , Rating: 4
The much more important question, of course, is..

Will it BLEND?

RE: For storage
By Samus on 3/14/2009 12:44:47 AM , Rating: 3

RE: For storage
By mindless1 on 3/14/2009 10:28:37 PM , Rating: 2
There are a few chips on the card that could be high density EEPROMs holding the firmware. If you were expecting the old, large EEPROM chips which are roughly 14mm square, take a look at any modern video card for an example of the smaller ones.

RE: For storage
By Amiga500 on 3/12/2009 1:15:23 PM , Rating: 5
Who cares?

How often do you think a server or workstation is rebooted?

Indeed, do you run your programs from the same partition as your windows/linux install?

This would be like manna from heaven for FEA work! That big I/O bottleneck, removed in one go

RE: For storage
By Flunk on 3/12/2009 1:19:56 PM , Rating: 2
Doesn't really matter how often it's rebooted. All you'd really need is another drive (even USB flash) to store the basic kernel, everything else could be on this.

RE: For storage
By Amiga500 on 3/12/2009 1:29:42 PM , Rating: 2
I know - that was kinda my point.


RE: For storage
By Drexial on 3/12/2009 1:49:24 PM , Rating: 2
For a virtual center this is amazing. It can store a user's session while they are active on the PC, writing back to the file server when they finish. That removes the disk I/O lag file servers suffer from in a VDI environment; the desktop would be almost instantaneously available for usage. The server hosting the cards would boot off a standard USB flash drive with a simplified OS on it, then pull the desktop session from the file server; as soon as it's cached onto the IO cards, users have their desktop.

RE: For storage
By Screwballl on 3/12/2009 2:35:26 PM , Rating: 2
That's great... but is it bootable?

I don't see why not. It is a hard drive using a controller chip and is the same as any PCIe x1 or PCI hard drive controller card...

RE: For storage
By talikarni on 3/12/2009 3:09:22 PM , Rating: 2
I don't understand... is this a controller card that you hook one of their drives into, or is the entire card the hard drive, with a chip on board so it is seen as a hard drive?
I am still learning, so these unusual tech devices confuse me

RE: For storage
By mvpx02 on 3/12/2009 4:01:16 PM , Rating: 4
Option #2, this is essentially a controller card with the memory chips built right onto it.

RE: For storage
By lexluthermiester on 3/13/2009 4:46:50 AM , Rating: 2
More that they are plugged in. This series of SSDs uses a base controller board that the storage modules plug into; for the different sizes they simply plug in different-capacity modules. It's a very smart design! But I agree with what the OP seems to want from it: bootability. That much storage at those speeds? Are you kidding? Me wanty buy!

RE: For storage
By MrPoletski on 3/13/2009 7:34:16 AM , Rating: 1
but it would mean F6 on the XP install menu and fiddling around for hours trying to get a floppy drive working...

FusionIO = Fusion Bankruptcy
By tshen83 on 3/12/2009 5:52:34 PM , Rating: 5
I love Woz. But everything he touches after Apple hasn't turned out well.

FusionIO has a few major disadvantages:

1. Hotswappability. High end servers do allow this, but normal 2S servers don't. When the SSD breaks, if you have to take down the server to swap a PCI-Express card, you will realize how stupid PCI-Express based storage really is.

2. Price. 5000 dollars for 160GB? Why don't you get 12 X25-Es for the same price and get much better aggregate IOPS? The advertised random write IOPS are battery-backed at a 1K size. Normal file system IOs are a 4K standard, which means FusionIO won't do the advertised random write IOPS in database-type applications.

3. bootable volume. This is another thing FusionIO lacks simply because the storage card needs a PCI-express bridge driver in the OS to get access.

Anything FusionIO does, you can do with an array of cheaper SLC SSDs. Even Intel's X25-E carries a 100% price premium over the spot price (currently about $5.50 per GB for SLC 16Gbit chips). The FusionIO is hardly worth the ~$500 in raw flash ($5.50 × 80 16Gbit chips) for the $5K price they are asking. Given that Intel will launch 30nm SSD production by the end of the year, and the current global economic conditions, SSD IC pricing will continue to slide, and FusionIO won't be able to catch the volume-economics train.
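The raw-flash estimate above can be sanity-checked in a few lines (a sketch using the commenter's quoted $5.50/GB spot price, which is an assumption, not official pricing; note that pricing per GB rather than per chip gives closer to $880 than $500, though either way it is far below the $5K asking price):

```python
# Raw NAND cost for the 160GB model, using the comment's quoted spot price.
PRICE_PER_GB = 5.50      # assumed SLC spot price, $/GB (commenter's figure)
CHIP_GBIT = 16           # 16 Gbit SLC chips
CAPACITY_GB = 160        # entry-level ioDrive Duo

chip_gb = CHIP_GBIT / 8                  # 2 GB per chip
chips_needed = CAPACITY_GB / chip_gb     # 80 chips to reach 160 GB
raw_cost = CAPACITY_GB * PRICE_PER_GB    # ~$880 of flash

print(f"{chips_needed:.0f} chips, ~${raw_cost:.0f} raw NAND")
```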

This is one of the things that sounds good in theory, but in practice, sucks balls.

RE: FusionIO = Fusion Bankruptcy
By Reclaimer77 on 3/12/09, Rating: 0
RE: FusionIO = Fusion Bankruptcy
By codeThug on 3/13/2009 12:55:41 AM , Rating: 2
no sch1dt

By msomeoneelsez on 3/13/2009 2:06:42 AM , Rating: 2
Just to clarify, are you saying PCI-e based storage as in right on the card, or would that include any RAID platform that uses PCI-e as well?

My understanding is that current RAID systems that would support 12 drives would be forced to go through PCI-e anyways because (once again, to my understanding,) no motherboards are out that have native support for 12 drive RAID, let alone a good enough controller for what you are talking about.

RE: FusionIO = Fusion Bankruptcy
By somedude1234 on 3/14/2009 11:13:26 AM , Rating: 2
Hotswappability - Can you hot-swap the PCI-express RAID card in your normal 2S server? If not, then this product has the exact same disadvantages as any RAID card.

Price - Think $/IOp not $/GB... this thing is an absolute beast. Also, after your pay for your 12 X25-E SSD's, what are you going to hook them up to? This is going to cost money too. Are you going to use a 12 port RAID card, SAS HBA, external JBOD? Show me a device that I can hook 12 SATA SSD's up to and actually obtain the aggregate throughput potential of all those drives. 1500 MBps (with the big B) !? That's insane and I don't know of any storage controller that can match it at this price point.

Bootable volume - I agree somewhat, for $5k they should have thrown some boot firmware on the card just to give us the option... but I think that even if it were there most people wouldn't be booting off of this. It's for high workload servers, not gaming rigs.

As to the final comment about your array of SSD's... I still want to know what you're going to hook them up to and get close to the performance of the FusionIO.

RE: FusionIO = Fusion Bankruptcy
By tshen83 on 3/14/2009 9:59:49 PM , Rating: 1
somedude, you sound like FusionIO's salesman, with logical errors all over the place.

"Hotswappability - Can you hot-swap the PCI-express RAID card in your normal 2S server? If not, then this product has the exact same disadvantages as any RAID card."

True, you can't hotswap RAID cards. But RAID cards have a much smaller chance of failure (higher MTBF) than disks. In fact, RAID cards typically only fail when the temperature in a case is too high, a problem that can be mitigated easily with better airflow or air conditioning. We are talking about failure of SSDs, which are guaranteed to fail at 100,000 erase cycles. If the SSDs reside in 2.5-inch hot-swappable bays, replacing them is a one-minute job. If FusionIO does 167,000 writes a second, you can be sure the card won't last long. When it fails (and it will fail; all drives fail), it will be a nightmare.

Price - I am thinking in $/IOPS. In fact I am thinking in IOPS*GB/Watt/Dollar. It only takes 6 Intel X25-Es to get the 180K read IOPS the FusionIO provides. My 12 Intel X25-E example simply shows that FusionIO is twice as expensive as an X25-E based RAID array that does the same thing.

As to the RAID controller: cards based on the dual-core 1.2GHz Intel IOP348, like those from Adaptec, Supermicro, and Areca, can all do about 1.2-1.3GB/sec sustained, which is close to the FusionIO spec. You can hook 6-8 Intel X25-Es per RAID card to get to where FusionIO is, at half the cost, while providing hotswappability.

JBODs are all over the place: Dell MD1120s, HP MSA70, Supermicro chassis, etc. What I am trying to say is that the 2.5-inch hot-swap infrastructure is already in place in most datacenters. All IT people have to do is replace the SAS drives with X25-Es and the upgrade is in place, cheap and fast. Would people crack open servers and install a pair of 5000-dollar PCI-Express cards? (It would have to be a pair, since you want mirroring at the bare minimum.)

All it comes down to is that FusionIO is a pipe dream. All PCI-based storage so far has failed (Gigabyte iRAM, SanDisk, etc.), and so will FusionIO, the OCZ Z-Drive, and Micron's PCI-E card. Hotswappability is a bitch of a requirement, and volume economics demand at least price parity, both of which FusionIO lacks.

RE: FusionIO = Fusion Bankruptcy
By somedude1234 on 3/16/2009 12:57:39 AM , Rating: 2
Salesman? Hardly. I'm simply impressed by the performance.

It only takes 6 Intel X25-Es to get the 180K read IOPS the fusionIO provides

And it will take 50 of the X25-Es to match the claimed 167,000 4k random write IOps (3,300 for the X-25E)

If you want to hotswap SSDs you'll need even more drives as the performance of some will be lost to redundancy.

What RAID level are you planning on using? Do you expect to get perfect linear scaling for each additional SSD out of RAID5/6 in 4K random workloads? If you go RAID10, then your required number of drives will double. With RAID0 hotswap is pointless.
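The drive-count argument above works out as follows (a sketch using both vendors' spec-sheet numbers and assuming perfect linear scaling, which is generous to the SSD array):

```python
import math

# Drives needed to match the ioDrive Duo's claimed 4k random write IOPS.
TARGET_WRITE_IOPS = 167_000   # fusion-io's claimed figure
X25E_WRITE_IOPS = 3_300       # Intel's spec-sheet 4k random write figure

raid0_drives = math.ceil(TARGET_WRITE_IOPS / X25E_WRITE_IOPS)  # ~51 drives
raid10_drives = raid0_drives * 2  # mirroring doubles the drive count

print(raid0_drives, raid10_drives)
```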

If you use a SAS RAID card, STP overhead will eat up some of the performance of the SATA SSD's.

Even if you use two of the new cards in RAID1, the $/IOp makes these worth considering. Assuming, of course, that your application actually NEEDS 167,000 random write IOps.

RE: FusionIO = Fusion Bankruptcy
By tshen83 on 3/17/2009 12:19:43 PM , Rating: 2
First of all, Intel's published 3,300 random write IOPS is conservative. Depending on IO size, it can actually bench about 5,000-8,000 IOs per second. The 167K random write IOPS from FusionIO is suspicious at best. It has to be cached in write-back RAM, which means the controller actually lies about data persistence. I don't see a supercapacitor or a battery on the FusionIO device, so when power goes out you might lose transactions (correct me if I am wrong here, but I don't see a battery on the FusionIO PCB).

If you really want to be technical, FusionIO is simply pushing the same RAID idea, just masqueraded under a new brand name. Under the two yellow heatsinks are probably the same Intel IOP348 chips that RAID cards use; it even has the same heatsink as the Adaptec 5 series RAID cards, so I am expecting nothing less than an Intel IOP348 underneath. Under the other heatsink on the bottom left is probably the PCI-Express lane splitter, allowing the two RAID ICs to share the same PCI-Express bus. The driver side then has to hide the internal dual RAID card implementation of the FusionIO card and do software RAID0 on it.

I counted the NAND ICs on that PCB. There is no freaking way FusionIO can do 167K write IOPS without caching in RAM. Plus, how long do you think the device will last if it actually does 167K writes a second (SLC breaks down at 100K erase cycles)? If it does 167K 4K IOs per second, that is 668MB of data per second, and your FusionIO will be out of space in as little as five minutes. So much for high write IOPS if it is full :)
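The fill-time claim is easy to check (a sketch; it uses 4 KiB IOs, whereas the 668MB/sec figure above uses 4,000-byte IOs, so the numbers differ slightly):

```python
# Time to fill the 160GB model at the advertised 4k random write rate.
WRITE_IOPS = 167_000
IO_SIZE = 4 * 1024            # 4 KiB per write
CAPACITY_BYTES = 160e9        # 160 GB model

bytes_per_sec = WRITE_IOPS * IO_SIZE              # ~684 MB/sec
seconds_to_fill = CAPACITY_BYTES / bytes_per_sec  # roughly four minutes

print(f"{bytes_per_sec / 1e6:.0f} MB/s, full in {seconds_to_fill / 60:.1f} min")
```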

It would take a completely incompetent IT manager to consider FusionIO. For the 10,000+ dollars a pair of FusionIOs costs, a better bet is to fill a 24-bay SAS case with 24 Intel X25-Es at the same price and get over 2.5 times the space and twice the IO throughput. Other than that, overpriced SAN systems will try to incorporate FusionIO to further fudge IOPS numbers, but those SAN vendors will die along with FusionIO if they decide to establish dependence on it.

RE: FusionIO = Fusion Bankruptcy
By tshen83 on 3/17/2009 12:32:10 PM , Rating: 2
Never mind, I saw the 3 Caps C20, C21, C22. The erase cycle and space constraint still exists.

By somedude1234 on 3/18/2009 2:52:05 AM , Rating: 2
I've been looking for an actual benchmark report @ purely 4k random writes from the previously released fusion-io cards. I too am suspicious that they can sustain 167k random write IOps @ 4k. For now, I hope their spec sheets are not complete BS; time will tell.

Your write cache concerns apply equally to the fusion-io and the Intel X25-E. From what I could find, the X25-E is only able to obtain its 3,300+ write IOps with write caching enabled, and I don't think the Intel SSD has a battery or cap to protect its cache.

The fusion-io spec sheet "claims" 48 years if you're write-erasing 5TB/day. Even if you cut that by a factor of 10 you're still looking at nearly 5 years. How long do you think one of the Intel SSD's would last if you were writing to it at the maximum possible speed 24x7?
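The two endurance claims can be put side by side (a sketch; the 100,000-cycle ceiling assumes perfect wear leveling and no write amplification, both optimistic, and tying the figure to the 640GB model is my assumption, not something the spec sheet states):

```python
# Total bytes implied by the spec sheet's "48 years at 5 TB/day" claim.
TB = 1e12
claimed_total = 5 * TB * 365 * 48         # ~87.6 PB of write-erase

# Naive ceiling from the 100,000-erase-cycle SLC figure, for comparison.
capacity_bytes = 640e9                    # top-end 640 GB model (assumed)
cycle_ceiling = capacity_bytes * 100_000  # 64 PB

print(f"claimed: {claimed_total / 1e15:.1f} PB, "
      f"cycle ceiling: {cycle_ceiling / 1e15:.0f} PB")
```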

Also, both the fusion-io cards and the X25-E's come with 3 year warranties. They are based on the same flash technology and should have similar lifespans.

The 32 GB X25-E's are going for around $400 each right now. You're at nearly $10K before paying for the 24-drive SAS case, SAS RAID card, and cables. Even if you assume 100% perfect linear performance scaling for all 24 drives on a RAID card, you're only at 80k IOps, or about half way there (based on the spec sheets from both vendors). Also, as I stated before, you'll lose a portion of your SATA drive performance to STP overhead if you put them behind a SAS controller. And you'll lose both capacity and performance to redundancy if you plan on using anything but RAID0.
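That cost/performance comparison, worked out (a sketch with the street price and spec-sheet IOPS quoted above; perfect linear scaling is assumed, which favors the SSD array):

```python
# 24-drive X25-E array: cost and best-case aggregate 4k write IOPS.
drives = 24
price_each = 400               # quoted street price for the 32 GB X25-E
x25e_write_iops = 3_300        # Intel spec-sheet figure

array_cost = drives * price_each       # $9,600 before enclosure/RAID card
array_iops = drives * x25e_write_iops  # 79,200 -- about half of 167k

print(array_cost, array_iops)
```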

Where are you getting "twice the IO throughput" from? We're talking write IOps here, right?

Devices such as this (if the manufacturer's claims hold true) can provide a huge boost to certain applications at a price point significantly lower than what's been available in the past. People who maintain applications that are extremely sensitive to IOps performance (both read and write) should consider all available options, including ram-based, pci-e-flash, sas/sata-flash, and anything else that is available.

All for a small price of...
By Soccerman06 on 3/12/2009 12:46:01 PM , Rating: 3
$5000 each

RE: All for a small price of...
By jaericho on 3/12/2009 1:55:50 PM , Rating: 3
This piece of h/w isn't for the consumer. This is for enterprise storage systems. DBs and the like that need high IOps. $5000 is a lot of money, but performance problems for a server can cost a lot more.

RE: All for a small price of...
By codeThug on 3/13/2009 12:54:27 AM , Rating: 4
This piece of h/w isn't for the consumer.

Speak for yourself plebe.

RE: All for a small price of...
By MrPoletski on 3/13/2009 7:38:11 AM , Rating: 4
I'll get two in SLI. =)

By erple2 on 3/12/2009 1:36:35 PM , Rating: 2
The ioDrive Duo fits into PCI Express x8 or x16 slots and can sustain up to 20Gb/sec of raw throughput. The company also says that it can easily sustain 1.5Gb/sec of read bandwidth and nearly 200,000 read IOPS. Sustained read bandwidth is 1500 MB/sec, sustained write bandwidth is 1400 MB/sec, Read IOPS is 186,000, and write IOPS is 167,000.

Is it just me or is this entire paragraph a study in inconsistency? So it can sustain 20Gbps raw throughput (I know, I know, there's overhead and theoretical limits are strictly fun for the numbers crowd). Then they say it can sustain 1.5 Gbps of read bandwidth, but sustained read bandwidth is 1500 MBps, and write bandwidth is 1400 MBps? Those are more or less all incompatible... So which is it?

1.5 Gbps is about 200 MBps (OK, a little less than that), but certainly not 1500 MBps. I am very confused after reading this.
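The confusion comes down to bits versus bytes; a quick conversion shows why the figures couldn't all be right as originally written:

```python
# Convert gigabits/sec to megabytes/sec (decimal units, 8 bits per byte).
def gbit_to_mbyte(gbps: float) -> float:
    return gbps * 1000 / 8

print(gbit_to_mbyte(1.5))   # 187.5 MB/s -- nowhere near 1500 MB/s
print(gbit_to_mbyte(20))    # 2500 MB/s -- the x8 slot's raw ceiling
```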

RE: What?
By Aeros on 3/12/2009 1:48:21 PM , Rating: 2
Agreed. Please clarify.

RE: What?
By Aeros on 3/12/2009 1:51:44 PM , Rating: 5
N/M I looked it up in the original press release.

Based on PCI Express x8 or PCI Express 2.0 x4 standards, which can sustain up to 20 gigabits per second (Gbits/sec) of raw throughput, the ioDrive Duo has more than enough bandwidth to obtain industry-leading performance from a single card. The ioDrive Duo can easily sustain 1.5 Gbytes/sec of read bandwidth and nearly 200,000 read IOPS. Its performance metrics are as follows:

• Sustained read bandwidth: 1500 MB/sec (32k packet size)
• Sustained write bandwidth: 1400 MB/sec (32k packet size)
• Read IOPS: 186,000 (4k packet size)
• Write IOPS: 167,000 (4k packet size)
• Latency < 50 µsec

Much clearer.

RE: What?
By petesonic on 3/12/2009 1:51:55 PM , Rating: 5
According to the FusionIO website, the card can sustain 1.5 Gbytes/sec = 1.5 GB/sec. So the lowercase b, which should mean bits, probably threw people off.

Also, the author kinda screwed up that first sentence:
The ioDrive Duo fits into PCI Express x8 or x16 slots and can sustain up to 20Gb/sec of raw throughput.
It should read: "The ioDrive Duo fits into PCI Express x8 or x16 slots, which can sustain up to 20Gb/sec of raw throughput." This refers to the theoretical maximum of the slot, as you stated.

Just check out the link below to see the real stats.

Met the people behind it.
By choadenstein on 3/12/2009 4:28:02 PM , Rating: 2
Just wanted to say that I actually met the people behind the company about 2 weeks ago at a conference.

They stated that the real benefit lies in what it can offer to virtualization projects. Really increasing the effectiveness of virtualized OSs and applications.

Also, they claimed it was fast enough to be used as virtual RAM when needed. Whether that's true or not, it definitely would make one hell of a Linux swap file!

RE: Met the people behind it.
By Yames on 3/12/2009 5:22:20 PM , Rating: 2
They made the same claims when I saw them last year.

Did they still have the bicycle thing that would spin you upside down in circles?

SanDisk Vaulter
By RU482 on 3/12/2009 4:18:03 PM , Rating: 2
A few years ago, Sandisk had an SSD that communicated with the PCI-e bus via a minicard connector (not to be confused with the minicard SSDs of today, that are actually just PATA drives and the signals on the connector are IDE spec)

The product never took off.

This actually might be credible
By DukeN on 3/12/2009 5:24:00 PM , Rating: 2
IIRC this is the company Wozniak was recently hired as the Chief Scientist for.

However nothing is as credible as benchmarks, and of course, what's the cost for that performance :)

By Thernn on 3/12/2009 10:58:59 PM , Rating: 2
Absurd people
By zpdixon on 3/13/2009 2:24:31 AM , Rating: 1
"it can easily sustain 1.5Gb/sec"

No. It can sustain 1.5GB/sec. Gb != GB.

What's absurd about people who don't use the right case is that they try to get it right by capitalizing the "G" but don't capitalize the "b". At least if they wrote "gb" they would be consistent (but incorrect). But no, they have to fuck it up to the extreme and confuse everybody by carefully capitalizing one letter and not the other. Grrrrrr.

This was my rant of the day.

Sequential speeds translated
By Andypro on 3/12/09, Rating: -1
RE: Sequential speeds translated
By Doormat on 3/12/2009 1:23:39 PM , Rating: 2
MB/s, not Gb/s. It may have been wrong before, but now it's correct, at least compared to other articles I've read about this product.

1500MB/s, or 1.5GB/s, is nothing to sneeze at. That's 75% as fast as the 24 Samsung SSDs in RAID we saw the other day.

RE: Sequential speeds translated
By Etsp on 3/12/2009 1:29:32 PM , Rating: 3
The ioDrive Duo fits into PCI Express x8 or x16 slots and can sustain up to 20Gb/sec of raw throughput. The company also says that it can easily sustain 1.5Gb/sec of read bandwidth and nearly 200,000 read IOPS. Sustained read bandwidth is 1500 MB/sec, sustained write bandwidth is 1400 MB/sec, Read IOPS is 186,000, and write IOPS is 167,000.

The author of the article screwed up. It's not 1.5Gbps/1.4Gbps, it's 1.5GBps/1.4GBps

**Note to Shane***
"easily sustain 1.5Gb/sec of read bandwidth"
"Sustained read bandwidth is 1500 MB/sec"

These are contradictory statements.

1.5Gb/sec is 1/8th the speed of 1500MB/sec. Capitalization matters....

RE: Sequential speeds translated
By Andypro on 3/12/2009 2:08:05 PM , Rating: 2
Ah, I see. I figured the author just screwed up the "MB" instead of the "Gbps."

I guess those drives are pretty flippin' fast.


Copyright 2016 DailyTech LLC.