


Intel and Nanochip team up to develop 100-gigabyte memory chips

This article was syndicated from Tiago Marques' blog at SiliconMadness.com

Nanochip, Inc., a Silicon Valley startup, has managed to raise $14 million in funding from Intel Capital, Intel's global investment organization, for further development of its MEMS storage technology.

You read it right: gigabytes, not gigabits.

According to Nanochip (PDF), the technology isn't lithography-constrained, allowing production of chips of more than 1GB in capacity in plants that have already been deemed outdated by current standards.

The lack of lithography constraints means cheaper products, and an opportunity to replace flash memory as well, since the technology is also non-volatile.

Today's factories should be able to produce the first products, estimated at 100GB per chip, by the time the technology is unleashed for public consumption, expected in 2010. The first samples will be available during 2009.

PRAM, or phase-change memory, was expected to be the technology to replace flash in the coming years, since it is also non-volatile and much faster than flash. As Intel found out over the last few years, however, PRAM doesn't seem to scale so well with regard to density, and still has some hurdles to overcome -- namely its thermal principle of operation.

Nanochip's details of the technology are ambiguous at best, though what is known is that the company is working on a hybrid micro-electro-mechanical (MEMS) design -- and the partnership with Intel suggests a PRAM connection too. The company has described it as a very small platter coated in chalcogenide that is dragged beneath hundreds of thousands of electrical probes, which read and write the chalcogenide. Casual estimates put this sort of density at one terabit per square inch, or 125GB per square inch.
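As a quick back-of-the-envelope check of that figure, here is a minimal Python sketch, assuming the usual 8 bits per byte and decimal (SI) gigabytes:

# Rough conversion of the quoted areal density.
# Assumes 8 bits per byte and decimal (SI) prefixes, i.e. 1GB = 10^9 bytes.
bits_per_sq_inch = 1e12                  # one terabit per square inch
bytes_per_sq_inch = bits_per_sq_inch / 8
print(bytes_per_sq_inch / 1e9)           # 125.0 -- decimal GB per square inch
print(bytes_per_sq_inch / 2**30)         # ~116.4 -- binary GiB per square inch

The ~116GB figure that comes up in the comments below is the same number expressed in binary (2^30-byte) units.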

The company has not disclosed access speeds. That is an area where PRAM is expected to be the undisputed king of the hill, so it could limit the applications of this type of technology.

For now it seems that flash SSDs are going to be replaced before they even reach mass consumption -- which is a good thing. The technology is expensive, doesn't provide a lot of storage space and is prone to failure due to the low number of write cycles available per cell. Flash is perfect for pen drives and resisting shock, but not so good for regular, intensive, HDD-style usage.


Comments



Which is it?
By slashbinslashbash on 1/28/2008 12:39:49 PM , Rating: 2
quote:
Casual estimates put this sort of density at one terabit per square inch; or 116GB per square inch.


I'm guessing the 2nd part is supposed to be square cm?

Also, this tech seems to require mechanical movement -- a step backward?




RE: Which is it?
By Polynikes on 1/28/2008 12:50:35 PM , Rating: 2
It does indeed sound like a strange way of storing information.


RE: Which is it?
By Oregonian2 on 1/28/2008 2:34:47 PM , Rating: 2
I think it's a clever way of mixing technologies, and it'll be interesting to see if it's cost effective.

If you haven't yet, go to their website and read about the basis of their technology. It's phase-change memory that's written by zillions of probes that are in turn moved mechanically. Like a disk, you say? Well, kind of, except that the movement isn't done by a rotating motor; the movement is MEMS-driven. A solid-state "motor" of sorts. A MEMS-driven solid-state hard disk. It's at least intellectually interesting. TBD on cost effectiveness (like what kind of yields they can produce, assuming it's reliable, etc.).


RE: Which is it?
By Methusela on 1/28/2008 12:51:41 PM , Rating: 5
1 Terabit per square inch is roughly equal to 116 GigaBytes per square inch (math tells me it'd be exactly 125 GigaBytes, but who's going to quibble over a few gigs? Certainly not HD manufacturers! ;)


RE: Which is it?
By Assimilator87 on 1/28/2008 1:51:08 PM , Rating: 3
I'm pretty sure Seagate recently had a class action lawsuit filed against them for misrepresenting the actual size of the HDDs. Only a few GB? The consumers care.


RE: Which is it?
By onwisconsin on 1/28/2008 2:15:07 PM , Rating: 2
Or lawyers...


RE: Which is it?
By omnicronx on 1/28/2008 2:54:03 PM , Rating: 2
quote:
I'm pretty sure Seagate recently had a class action lawsuit filed against them for misrepresenting the actual size of the HDDs. Only a few GB? The consumers care.

What would we ever do without our extra 48,576 bytes per MB ;)
HD manufacturers purposely did this when HD sizes were small, to avoid confusing consumers (it made the math easy: 1MB = 1000 kilobytes).

Although it seems like manufacturers are trying to make every extra penny, the truth is probably that the people who first thought up this scheme did not take into account how far HD space would actually scale. The bigger the hard drive gets, the more times you lose those extra 48,576 bytes ;)

I never really understood how somewhere around 50G lost every 1TB warranted a class action suit..


RE: Which is it?
By HaZaRd2K6 on 1/28/2008 3:37:19 PM , Rating: 5
The reason that whole situation existed at all was because of the different ways drive manufacturers and software programmers measure a gigabyte. A gigabyte, technically speaking, is exactly one billion (10^9) bytes. This is what drive manufacturers put on their drives. What software programmers call one gigabyte is actually one gibibyte (giga binary byte or GiB, 2^30).

As an example, take a hard drive that can store exactly 250×10^9, or 250 billion, bytes after formatting. Generally, operating systems calculate disk and file sizes using binary numbers, so this 250 GB drive would be reported as "232.83 GB". The result is that there is a significant discrepancy between what the consumer believes they have purchased and what their operating system says they have. So when Seagate says you have a 500GB drive, they're right. You have five hundred billion bytes of storage on that drive. Just because Microsoft counts those five hundred billion bytes in binary units does not mean that there is any difference in the actual, physical storage capacity of the drive.
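To put numbers on that, a minimal Python sketch using the same illustrative 250 GB drive:

# How a decimal-labeled capacity is reported by an OS that counts in binary units.
advertised_bytes = 250 * 10**9           # "250GB" as printed on the box
print(advertised_bytes / 2**30)          # ~232.83 -- what the OS reports as "GB"

# The per-megabyte gap mentioned earlier in the thread:
print(2**20 - 10**6)                     # 48576 bytes between one binary MB and one decimal MB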


RE: Which is it?
By mindless1 on 1/28/2008 6:43:44 PM , Rating: 1
Wrong. It's not just software programmers, it's everyone else too: the hardware manufacturers' ratings of bus speeds, chip capacities (including memory). It was also hard drive manufacturers! Did you get that last sentence? Hard drive manufacturers did correctly state capacity early on, then changed to the present capacity mislabeling.

A gigabyte technically speaking is not 10^9 bytes. Understand what a prefix is, it is to describe the suffix which is a binary not decimal value.

The fact is, the entire computer industry and scientists do accept the binary number system. Only ignorant people who can't understand that there is more than one number system think that a prefix can only exist in the decimal system or has to have a decimal value, that it couldn't be a binary value.

The entire computer industry defined what gigabit, gigabyte, megabit, etc, meant and it was a standardized fixed term BEFORE certain companies misused the term trying to make it a decimal value. That's the whole point of a standardized term, that it doesn't change even if the hard drive industry or a confused poster like yourself really really want it to change.


RE: Which is it?
By chekk on 1/28/2008 8:07:36 PM , Rating: 2
Wow, did get up on the wrong side of your cage this morning? There's no need to be nasty.
quote:
Understand what a prefix is, it is to describe the suffix which is a binary not decimal value.

Exactly. Which means that whether one is referring to decimal values, binary values, apples or whelks, the giga prefix means 10^9.


RE: Which is it?
By chekk on 1/28/2008 8:10:44 PM , Rating: 2
Crap. Apparently, I can't proofread.
"... did you get up ..."


RE: Which is it?
By the goat on 1/29/2008 8:57:20 AM , Rating: 2
quote:
Exactly. Which means that whether one is referring to decimal values, binary values, apples or whelks, the giga prefix means 10^9.


You, sir, are 100% wrong. These prefixes are based on ancient Greek and Latin. Nobody owns these Greek and Latin prefixes. The "International System" (SI for short) of measurement (meters, liters, etc.) uses these prefixes to represent powers of ten (i.e. giga = 10^9). This is true for every SI unit of measure.

But it is very important to realize that bits, nibbles and bytes are not SI units. Therefore SI has no authority to dictate how ancient Greek and Latin words used as prefixes modify these computer memory units of measure.

Everybody in the computer memory industry knows how to use prefixes on binary units correctly. There is no dispute as to what a kilobyte, etc. is defined as (hint: kilobyte = 2^10 = 1024 bytes, not more, not less).


RE: Which is it?
By grv on 1/29/2008 9:19:09 AM , Rating: 2
Just because a couple of uneducated programmers misused decimal prefixes doesn't mean it's somehow right or "standard".
Learn: binary prefixes have been standardized since 1998: http://physics.nist.gov/cuu/Units/binary.html
And don't forget to ask for your money back for your IT education. Stupid teachers deserve no money.


RE: Which is it?
By the goat on 1/29/2008 2:51:25 PM , Rating: 2
quote:
Just because a couple of uneducated programmers misused decimal prefixes doesn't mean it's somehow right or "standard". Learn: binary prefixes have been standardized since 1998: http://physics.nist.gov/cuu/Units/binary.html
And don't forget to ask for your money back for your IT education. Stupid teachers deserve no money.

The units for measuring computer memory were defined way before 1998. The alternative binary only prefixes you are talking about are a joke. Nobody uses them.

Like I said before the prefixes in question are based on ancient Greek and Latin. They are not misused by the computer memory industry. They are not tied to decimal units only.

SI gets to make up prefixes for all the units of measure they invent. But SI didn't invent bytes and bits. So why does anybody think SI should dictate how prefixes work with bits and bytes?


RE: Which is it?
By mindless1 on 1/31/2008 11:42:17 PM , Rating: 2
I think you mean the entire computer industry, not just a couple of programmers. Obviously you have no argument if you choose to ignore such a basic fact: the computer industry did standardize the terms gigabyte, etc., to mean a value in the binary system. They DEFINED what giga, mega, etc., meant in the binary system. 10^9, etc., is in the decimal system, not the binary system.

I'd have to agree with the goat, in that it's laughable you reference 1998, what about 30 years earlier?

The problem is simple, you don't know what standardization is or why it's important.


RE: Which is it?
By HaZaRd2K6 on 1/29/2008 2:50:59 PM , Rating: 2
Now that's where you're wrong.

The Système International prefixes are based on ancient Greek. Specifically, the prefix "giga" refers to billion (or ten to the power of nine or 1,000,000,000). The Système International works in base ten, yes, hence giga is base ten.

Computer programmers and component manufacturers turned this definition upside down, made it two to the power of thirty and called it a gigabyte. Two to the power of thirty, as I mentioned before in this thread, is 1,073,741,824. Not one billion. In base two (binary), gibibyte is the correct term for that quantity. Gigabyte refers to exactly one billion bytes--no more, no less. Gibibyte refers to exactly 1,073,741,824 bytes.

So stop it. You're wrong. Gigabyte means one billion bytes. Gibibyte means 1,073,741,824 bytes. Now you can either tell programmers to start coding in base ten, tell drive manufacturers to start listing drive sizes in base two or just put up with it and realise no matter what happens, neither side is going to give up.


RE: Which is it?
By the goat on 1/29/2008 3:02:30 PM , Rating: 2
quote:
The Système International prefixes are based on ancient Greek. Specifically, the prefix "giga" refers to billion (or ten to the power of nine or 1,000,000,000).

Incorrect in so many ways. First of all the word Giga is Latin not Greek and it means giant not 1,000,000,000.

Mega = Great (Greek)
Tera = Monster (Greek)

What do the words giant, great and monster have to do with base ten?

quote:
The Système International works in base ten, yes, hence giga is base ten.

SI is base ten, no doubt. But bytes and bits are not SI units. So why should I care about SI prefixes?


RE: Which is it?
By HaZaRd2K6 on 1/29/2008 11:09:38 PM , Rating: 2
From Merriam-Webster dictionary:

Giga
Etymology: International Scientific Vocabulary, from Greek gigas giant: billion (10^9) <gigahertz> <gigawatt>


Notice words number two through four there? International Scientific Vocabulary. In other words, your opinion counts for nothing. And if you still want to argue, take it up with the National Institute of Standards and Technology. Here, I'll even give you the link to the .pdf: http://physics.nist.gov/cuu/pdf/sp811.pdf

You done now? I am Greek. I know the word. The word itself (γίγας, actually pronounced "gigas" (soft 'g')) does mean giant, yes, but gigabytes and gigahertz and gigawatts are not giantbytes and gianthertz and giantwatts. They're billions. Same as a terabyte and a terahertz and a terawatt are not monsterbytes and monsterhertz and monsterwatts. Those are trillions. Stop confusing standard convention in the scientific community with the etymology of ancient Greek words.

quote:
SI is base ten, no doubt. But bytes and bits are not SI units. So why should I care about SI prefixes?

Nobody was ever saying bytes and bits were SI units. We were saying the prefixes used to describe them in quantity (including kilo, mega, giga and tera) are SI prefixes and are attached to specific quantities.


RE: Which is it?
By the goat on 1/30/2008 8:31:35 AM , Rating: 2
quote:
From Merriam-Webster dictionary:

Giga
Etymology: International Scientific Vocabulary, from Greek gigas giant: billion (10^9) <gigahertz> <gigawatt>

Notice words number two through four there? International Scientific Vocabulary. In other words, your opinion counts for nothing. And if you still want to argue, take it up with the National Institute of Standards and Technology. Here, I'll even give you the link to the .pdf: http://physics.nist.gov/cuu/pdf/sp811.pdf

The definitions from Merriam-Webster and from NIST and anywhere else you can find are all taken from the SI definition. So it doesn't add any more weight to your argument.
quote:
You done now? I am Greek. I know the word. The word itself (γίγας, actually pronounced "gigas" (soft 'g')) does mean giant, yes, but gigabytes and gigahertz and gigawatts are not giantbytes and gianthertz and giantwatts. They're billions. Same as a terabyte and a terahertz and a terawatt are not monsterbytes and monsterhertz and monsterwatts. Those are trillions. Stop confusing standard convention in the scientific community with the etymology of ancient Greek words.

Nobody was ever saying bytes and bits were SI units. We were saying the prefixes used to describe them in quantity (including kilo, mega, giga and tera) are SI prefixes and are attached to specific quantities.

If you are Greek why did you say the word giga was Latin?

You seem to have missed my point. SI does not own any of the prefixes they use. SI took/stole their prefixes from other languages. SI is not dictated to us by the God-Emperor on Arrakis. So guess what: other people besides SI are allowed to invent units of measure and define prefixes to use with those non-SI units. The definitions of kilobyte = 1024 bytes, megabyte = 1024^2, gigabyte = 1024^3, etc. have been in the popular lexicon for close to 40 years. That has quite a bit of weight. It is hard to now say, "wait a second, you are using the wrong definition because I never gave you permission to define your units." Why should anybody ask permission from SI or anybody else before using words and units of measure that have been defined for several decades?

Let me point out a direct analogue to this argument. This is an example of one system of measure taking a word defined by another system of measure and redefining it for their own use. Does the fact that the word now has several unequal definitions make one or the other definition more or less valid? The example I am talking about is the word "ton". The word ton means different things in different systems of measurement. In the USA, ton = 2000 lbs. in the imperial system of measure. In the UK, ton = 2240 lbs. in the imperial system of measure. In the SI system of measure, ton = 1000 kg = ~2205 lbs. Which one is correct? Did SI illegally steal the word ton from the imperial system? Of course all are equally correct. The same as 1 kilogram = 1000 grams and 1 kilobyte = 1024 bytes. SI borrowed the word ton just like the computer memory industry borrowed prefixes from Greek and Latin. Nobody owns the word "ton". Nobody owns the prefixes kilo, mega, tera, etc.


RE: Which is it?
By HaZaRd2K6 on 1/30/2008 10:10:23 PM , Rating: 2
quote:
First of all the word Giga is Latin not Greek...
quote:
If you are Greek why did you say the word giga was Latin?

If you go back and read this back-and-forth dialogue, you'll actually discover you said giga was Latin, not me.

And I refuse to keep this going any longer. My point is that using SI prefixes for values that are not SI standards is where the confusion arises. The drive manufacturers use SI prefixes as exactly what they are, but programmers define standard SI prefixes somewhat differently. Whether or not they actually are SI prefixes is beside the point. Most people take the prefix "giga" to mean "billion". It's really that simple.


RE: Which is it?
By mindless1 on 1/31/2008 11:47:24 PM , Rating: 2
It's real simple: It makes no difference at all if it's Greek, Latin, or even if the term was "dogfood" instead of "giga". Literally, if the industry wanted to use the term dogfoodbytes instead, once it was a standard it doesn't matter that elsewhere dogfood comes in a bag and canines eat it.

What matters is that the entire industry standardized a term; it is irrelevant whether that term means something else in another discipline before, during, or afterwards. The computer industry has clearly established the value of these terms, and anyone who tries to look smart by declaring a standard term invalid because some third party says so decades after it was standardized is fooling themselves.


RE: Which is it?
By MrPoletski on 1/29/2008 2:24:37 AM , Rating: 2
It comes down to the question:

Is one gigabyte 2^30 bytes or is it 10^9 bytes?

I, personally, prefer the 2^30 idea.

But, most people don't even know what a base-2 number system is. In fact, there are 2 types of people, those who do and those who don't.

When they see the number written in base-2 (BINARY, for those who don't) it looks smaller, because 2^30 > 10^9.

I think HDD manufacturers should list their capacities in gigabits but use a base-8 (OCTAL, for those 7 of you who don't) number system so people actually bother to learn about these simple things.


RE: Which is it?
By mallums on 1/29/2008 2:47:24 AM , Rating: 2
I think you meant 10 types of people. :)


RE: Which is it?
By HaZaRd2K6 on 1/30/2008 10:12:51 PM , Rating: 2
quote:
I think you meant 10 types of people. :)

I agree with ya there ;-)

And I think gigabyte should be 10^9. Seeing as giga is an SI prefix (as much as the goat tries to say otherwise), it would only make sense. Calling it a gibibyte might sound a little weird, but technically it's correct.


RE: Which is it?
By Martimus on 1/28/2008 3:16:39 PM , Rating: 2
I thought so at first too, but then on review I realized it said teraBIT and gigaBYTE. So the second is just 1/8th of the first.


RE: Which is it?
By Calin on 1/29/2008 3:15:46 AM , Rating: 2
One terabit or (about) 1/8 terabytes per square inch.
A byte is (usually) 8 bits


Not Really News
By Xodus Maximus on 1/28/2008 1:59:16 PM , Rating: 4
I'm sorry for saying it, but this is not news, since nothing actually happened; it's like saying "cold fusion solved -- see paper napkin for details."

A bunch of guys wrote a theory on paper, then promised results as if a stable mass-manufacturing technique were that easy to implement. They have an idea, got money to test it, and it may be decades before they get a prototype out the door. Call me when that happens.

But as of now I'm still waiting for my anti-gravity boots, jet-pack, and flying car.




RE: Not Really News
By fic2 on 1/28/2008 2:23:31 PM , Rating: 2
I picked up my flying car a couple of years ago. You must have been out sick that day. :o)


RE: Not Really News
By BigMTBrain on 1/28/2008 3:19:21 PM , Rating: 4
And I traded in my old jet pack for one of the newer models from Thunderbolt Aerosystems just last month. Much smoother and more accurate controls. I run into buildings much less often now. http://www.theregister.co.uk/2008/01/28/jetpack_ma...


RE: Not Really News
By winterspan on 1/28/2008 6:13:34 PM , Rating: 2
"I run into buildings much less often now."

HAHAHA.. I was laughing for minutes after I pictured that in my head....


RE: Not Really News
By omnicronx on 1/28/2008 2:58:35 PM , Rating: 2
quote:
Today's factories should be able to produce the first products, estimated at 100GB per chip, when the technology is expected to be unleashed for public consumption by 2010. The first samples will be available during 2009.

Sounds to me like they must have a working sample in order to make a claim such as this. Of course they will not start at 1TB, but we could see this on the market sooner than you think, in much smaller sizes. As long as they can give me 100-200 gigs with SSD-like performance without the write-cycle limit, I will be happy.


RE: Not Really News
By Xodus Maximus on 1/28/2008 3:32:20 PM , Rating: 3
Well, you may be right, but call me a pessimist: from the PDF press release they seem to have been very careful with their words.

quote:
first prototypes later this year to support design verification testing and limited customer sampling in 2009


From that I gather it is just a few tests of the technology, but since they still need "design verification", everything they have done up to now is simulated or in their heads. They can make the first prototype, and it would be no surprise if it just did not work as predicted; that would be a multi-year setback. But they are going with a positive outlook: "if we have no problems, then people will have the chips by 2009". Look at the synthetic benchmarks and tests that backed AMD's claims about Phenom, only for manufacturing realities to hand them a HUGE setback.

I really hope they succeed; it's just that I have my doubts.


What exactly is this?
By Blood1 on 1/28/2008 12:27:33 PM , Rating: 2
What is the purpose of this new technology? From the article it's not clear. These are used in what devices? HD's?




RE: What exactly is this?
By Desslok on 1/28/2008 12:31:16 PM , Rating: 2
quote:
For now it seems that the flash SSD drives are going to be replaced before they even reach mass consumption -- which is a good thing.


By Master Kenobi (blog) on 1/28/2008 12:35:36 PM , Rating: 2
You really should re-read the article then, maybe read the links too for additional information.

It's a replacement for the flash chips used in Solid State Drives (SSDs). This should allow much larger capacities and longer life, as well as faster read/write times. Read/write performance on flash is not on par with hard drives yet; there seem to be some technical limitations to work out.


Conjecture
By Andypro on 1/28/2008 1:28:06 PM , Rating: 3
For now it seems that the flash SSD drives are going to be replaced before they even reach mass consumption -- which is a good thing. The technology is expensive, doesn't provide a lot of storage space and is prone to failure, due to the low amount of write cycles available per cell. Flash is perfect for pendrives and resisting shock, not so good for regular, intensive, HDD usage.

"Mass market" for a product scheduled 2 years away is more likely to hit in around 5 years, if at all. I believe SSDs will reach mass consumption long before then.

Flash drives are not at all "prone to failure," either. They use wear-leveling algorithms for even wear across the flash cells, and going loosely by the MTBF figures as well as the life expectancy for SSDs, they still beat the hell out of any traditional hard disk drive. Meanwhile, this new mysterious tech, while interesting, has no hard data associated with its life expectancy or MTBF.
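As a rough illustration of what wear leveling implies for lifetime, here is a minimal Python sketch -- every figure below is an assumption chosen for the example, not vendor data:

# Back-of-the-envelope endurance estimate for a wear-leveled flash SSD.
# All inputs are illustrative assumptions, not measured specifications.
capacity_gb = 64                  # assumed drive capacity
cycles_per_cell = 100_000         # assumed erase/write cycles (SLC-class flash)
host_writes_gb_per_day = 20       # assumed average daily host writes
write_amplification = 2.0         # assumed controller overhead

total_writable_gb = capacity_gb * cycles_per_cell / write_amplification
lifetime_years = total_writable_gb / host_writes_gb_per_day / 365
print(round(lifetime_years))      # hundreds of years with these inputs

Changing the inputs changes the number, but the point is that the wear-leveled total, not the per-cell cycle count alone, determines lifetime.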




RE: Conjecture
By winterspan on 1/29/2008 6:17:21 AM , Rating: 2
I totally agree. What the hell is this writer smoking? He's obviously much too uninformed to be contributing articles....

To say that SSDs are "prone to failure" is just complete hogwash, especially compared to HDDs.
Regular hard drives are *MUCH* more prone to failure than SSDs. SSDs in fact are VERY reliable. I wouldn't be surprised if the ratio of MTBF of an SSD to a consumer HDD is 10:1.

I'm sick of hearing this crap about flash memory having a limited number of write cycles. With wear-leveling technology, the SSD should on average easily outlast the piss-poor reliability of hard drives.

On the topic of the actual article content, this new MEMS thing looks to be totally unproven in the real world, and just something that has been computer-simulated. And yet the author claims that flash SSDs will be replaced before they are mainstream? HA!
2008 will be the year of upper-mainstream SSDs.
I foresee "regular enthusiasts" buying 128GB SSDs by Q4.

anyways....


Promises...
By kontorotsui on 1/29/2008 4:20:55 AM , Rating: 2
Promises... some women promised to love me forever. Guess what?




RE: Promises...
By Cullinaire on 1/29/2008 8:40:41 AM , Rating: 2
Yeah, but what was at stake?


By kilkennycat on 1/28/2008 12:59:49 PM , Rating: 2
... with magnetic bubble memory. Yet another technology supported by Intel that went nowhere because of access-time issues.




I wish..............
By CvP on 1/28/2008 1:48:28 PM , Rating: 2
I wish this turns out to be the winner of the "HD War"!!!




By mindless1 on 1/28/2008 6:55:13 PM , Rating: 2
"is prone to failure, due to the low amount of write cycles available per cell."

First I should write that I'm not really trying to be critical of Tiago Marques' statement, because many, many people have repeated this; it's become a very common urban myth.

Anything is supposedly prone to failure. Now tell us: have you, or anyone you know, actually bought an appropriate device for high-write-cycle use (meaning it has SLC instead of MLC flash chips) and had it fail from this wear-leveled write-cycle limitation?

Go ahead and list all the people who have had their SLC-chipped flash drive fail due to this. I'd be surprised if you know anyone, and even if you do, it's only a handful of people. This means the opposite: it is less prone to failure than other storage media. Less prone than a hard drive, a floppy drive, a Zip disk, a tape, a CD, a DVD, etc.

Please, stop the madness. Practically all of us have flash thumb drives and we don't see them failing; we see the supposedly higher-write-cycle mechanical hard drive failing instead, at a much higher rate. Even if you discount WHY the hard drive fails, the fact remains that on average we see a lower realized number of write cycles until failure with a mechanical hard drive, and that will remain true until people actually have their flash drives fail, instead of just theorizing from a minimum guarantee of write cycles (never mind the average or the max!).

Something cannot be prone to failure while there are rarely, if ever, any actual failures! I suppose people just like the easy answer: look at two numbers, see which is larger, then ignore all other evidence.




"The whole principle [of censorship] is wrong. It's like demanding that grown men live on skim milk because the baby can't have steak." -- Robert Heinlein

Related Articles



Latest Headlines
4/21/2014 Hardware Reviews
April 21, 2014, 12:46 PM
4/16/2014 Hardware Reviews
April 16, 2014, 9:01 AM
4/15/2014 Hardware Reviews
April 15, 2014, 11:30 AM
4/11/2014 Hardware Reviews
April 11, 2014, 11:03 AM










botimage
Copyright 2014 DailyTech LLC. - RSS Feed | Advertise | About Us | Ethics | FAQ | Terms, Conditions & Privacy Information | Kristopher Kubicki