
New "Big Brain Computer" starts at only $30K, but can pack up to 4,096 cores, 64 TB of memory

Renowned physics supergenius and University of Cambridge research director Stephen W. Hawking said something about unlocking the secrets of the Universe as he received the first unit of a computer befitting his smarts -- SGI's "Big Brain Computer".

I. Meet the "Big Brain Computer"

In an era when supercomputers are slowly gravitating toward brute-force machines pillared by specialized hardware, such as graphics processing unit (GPU) based designs or field-programmable gate arrays (FPGAs), many top firms are still devoting considerable effort to a more traditional objective -- purpose-built scalable server rack units, used to build such juggernauts as International Business Machines, Inc.'s (IBM) iconic Watson.

Another key entrant in this field is Silicon Graphics International Corp. (SGI).  SGI was born out of the remains of Silicon Graphics, Inc., a defunct 1990s firm that pioneered the OpenGL standard and designed graphics cards.  Today SGI is back at it and thriving, with its newly announced UV2 "Big Brain Computer" tower supercomputer.

Built from custom-designed server rack units, the system is powered by Intel Corp.'s (INTC) Xeon E5 processor line, but is also compatible with NVIDIA Corp.'s (NVDA) Quadro and Tesla cards for GPU computing.

Now the author of The New York Times bestselling A Brief History of Time is receiving the first unit of the new supercomputer.  An ecstatic Professor Hawking stated, "I am very pleased to be receiving the first SGI UV 2 supercomputer in the world. 

Stephen Hawking
Prof. Hawking will use the UV2 to unlock science mysteries. [Image Source: Martin Pope]

New observations of our Universe, like the Planck satellite, are offering us exquisite new insights. In order to test our mathematical theories, we need to match this detail in our computer simulations. The flexible new UV 2 COSMOS system, soon to be supercharged with Intel’s MIC technology, will ensure that UK researchers remain at the forefront of fundamental and observational cosmology."

II. Entry Cost of UV2 is "Only" $30K

SGI claims the UV2 has the world's biggest shared memory system of any supercomputer.  It's scalable up to 4,096 cores and 64 terabytes of memory.  The system has a ludicrous peak data rate of 4 terabytes per second.  To put that in context, the entire contents of the U.S. Library of Congress only occupy 10 terabytes of space.

But the UV2 isn't designed for simple human literary ponderings.  As Professor Hawking suggests, it's the ideal tool to chew through terabytes of chemical or astrophysical data looking for key correlations, trends, and observed events.

SGI UV2 rack and server blade
The UV2 rack (left) and server blade (right); the system starts at only $30K. [Image Source: SGI]

While its specs are intimidating, the system starts at only $30,000 USD for a bare-bones configuration and can run standard desktop apps, in addition to its specialty -- scalable multi-core/multi-GPU apps.

Writes SGI, "Users can focus on outcomes, not algorithms with the ability to rapidly innovate; taking analysis from a laptop, scaling up on SGI UV with no re-writing of code or additional data management required."
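SGI's pitch amounts to shared-memory programming: because the whole machine runs one OS image over one address space, code written against the cores the system reports sizes itself automatically, on a laptop or on a UV2. A minimal sketch of the idea in Python (the function and workload are illustrative stand-ins, not SGI software):

```python
# Shared-memory scaling sketch: the same code sizes itself to whatever
# core count the single OS image reports -- laptop or UV-class machine.
import os
from multiprocessing import Pool

def simulate_chunk(seed):
    """Stand-in for one slice of a simulation workload."""
    total = 0
    for i in range(10_000):
        total += (seed * i) % 7
    return total

if __name__ == "__main__":
    cores = os.cpu_count()          # e.g. 4 on a laptop; thousands on a UV2
    with Pool(processes=cores) as pool:
        results = pool.map(simulate_chunk, range(cores))
    print(f"ran {cores} chunks across {cores} cores")
```

The key point is that nothing in the code names a node or passes a message; the worker count is just a runtime query, which is the "no re-writing of code" property SGI is selling.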

Dr. Eng Lim Goh, chief technology officer of SGI, brags that the new system is not only a world-class design -- it's also far cheaper than its predecessor, the UV1.  He remarks, "The technological advancement demonstrated in this next-generation SGI UV platform is not simply focused on increasing our lead in coherent memory size and corresponding core count. 

We have been able to deliver all of this additional capability while driving down the cost of the system. In fact, the entry-level configuration of SGI UV 2 is 40 percent less expensive than SGI UV 1. This creates a new level of accessibility for large shared memory systems for researchers and the ‘missing middle’, providing an effective lower overall TCO alternative to clusters."

Paired with genius algorithms, data mining, and scientific expertise like Professor Hawking's, designs like the "Big Brain Computer" and "Watson" may indeed change our reality and our perspective on the universe as we know it.  And that's good news for all of mankind.

Sources: Silicon Graphics, Data Center Knowledge

Comments

RE: And the big deal is???
By dlapine on 6/15/2012 4:16:04 PM , Rating: 4
Um, no.

This is not a collection of individual systems, but 4096 cores and 64TB of ram that run on a single copy of the OS. You log in and the system says "Welcome, here's 4096 cpus and 64 TB's of RAM. Have fun!"

We run one of the older UV1's at NCSA, in addition to other supercomputer clusters.

This is a big deal. $30K is not a bad deal for a starter price either. The cost for this thing fully populated will be more than a set of blades with the same total cores and RAM, and it will take both a service contract and a good admin to run it. It has its downsides.

The upside is that scientists who need all the memory and power of a big cluster but don't want to write the complicated code necessary to run simultaneously on hundreds of systems can treat it like a simple big system. This makes their jobs a lot easier, as they don't have to be both a scientist and a computer geek.

If it gets cheap enough, this could change things.

RE: And the big deal is???
By nafhan on 6/15/2012 5:18:20 PM , Rating: 2
From a hardware perspective, it's a collection of individual systems ... connected with SGI's networking stuff (i.e. "NUMAlink"). Having a single OS control a bunch of networked systems at the same time and see the memory and CPU resources as a single large pool isn't a new concept. In fact, that appears to be how the UV1 system you and I provided links to works.

To be clear, I'm certainly not trying to say this isn't impressive (It is, and I'm constantly amazed at how far we have come with computing), just that I don't see it as revolutionary, and I also think that the Stephen Hawking thing is probably a marketing stunt (from SGI/Rackables perspective, Hawking may do some cool stuff with it). Also, I'm ABSOLUTELY not trying to downplay any accomplishments in computing or science you or others may have made with computing devices like this. Sounds like you have a cool job. Enjoy!

Thinking about this for another minute, I think my problem may be that marketing fluff just makes me grumpy... Ah well, it's Friday. :)

RE: And the big deal is???
By GulWestfale on 6/15/2012 5:51:24 PM , Rating: 2
you mean like what beowulf clusters did a decade ago? separate systems working together as one, but this time simply inside the same enclosure?

RE: And the big deal is???
By TeXWiller on 6/15/2012 10:54:00 PM , Rating: 2
This is with a single standard kernel. No libraries (of Congress) needed. The magic is in the interconnect.

RE: And the big deal is???
By FaaR on 6/16/2012 1:03:40 PM , Rating: 2
Beowulf clusters did not have one unified memory address space like this big brain box has, they were merely standard PCs networked in a (then new) clever way.

RE: And the big deal is???
By nafhan on 6/15/2012 5:28:38 PM , Rating: 4
Also... not saying $30k is a bad deal for the $30k system. I was trying to say that $30k is literally orders of magnitude less costly than what you'd be paying for the systems they actually discuss (i.e. the ones with 64TB of RAM). Probably wasn't worth mentioning, as anyone out shopping for a supercomputer this weekend would be aware of this!

RE: And the big deal is???
By chemist1 on 6/16/2012 12:36:51 AM , Rating: 2
The upside is that scientists who need all the memory and power of a big cluster but don't want to write the complicated code necessary to run simultaneously on hundreds of systems can treat it like a simple big system.

Please correct me if I'm wrong (indeed, I'd love to be wrong on this one), but I don't think it's as turnkey as you suggest. Let's suppose you have some single-threaded C program that takes 60 hours to fold a very long RNA on a single modern CPU. You are absolutely correct that optimizing this for a standard computer cluster is extraordinarily complicated, since that requires parallelizing the code so it can run on many cores simultaneously. But with this new machine, don't you still have to do your own parallelization? Yes, to submit a job to a computer cluster (say if you wanted to run 100 different instances of the folding program simultaneously, to fold 100 different RNAs), you do need a script. But that's fairly trivial (using the Sun Grid Engine, for instance). The real computer geek stuff is the parallelization, and surely this system doesn't do that for you. So while it's certainly a great convenience to have your own personal cluster, I don't see how that significantly reduces the required level of computer sophistication (vs. that needed to use a cluster). [And if it could automatically parallelize your single-threaded program, then couldn't it also run applications written for only a single core in multi-core mode?]
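[Editor's note: the "embarrassingly parallel" case chemist1 concedes is trivial -- 100 independent folding runs -- needs only a dispatcher, not a parallelized algorithm. A sketch in Python, where `fold_rna` is a hypothetical stand-in for the single-threaded folding program:]

```python
# Trivially parallel dispatch: many independent single-threaded runs.
# fold_rna is a hypothetical stand-in; a real folder would do the
# 60-hour dynamic-programming work on one core per sequence.
from multiprocessing import Pool

def fold_rna(sequence):
    """Toy 'score': just counts G/C content instead of folding."""
    return sequence.count("G") + sequence.count("C")

sequences = ["GGAUCC", "AUAUAU", "GCGCGC"]  # stand-ins for 100 RNAs

if __name__ == "__main__":
    with Pool() as pool:
        scores = pool.map(fold_rna, sequences)  # one independent run each
    print(scores)  # -> [4, 0, 6]; no MPI, no shared state between runs
```

[Parallelizing the folding of a *single* RNA, by contrast, means restructuring the algorithm itself -- exactly the "computer geek stuff" the comment says the machine cannot do for you.]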

RE: And the big deal is???
By chemist1 on 6/16/2012 12:46:04 AM , Rating: 2
Ah, wait--maybe this is what you meant: Are you saying that if you *already have* a parallelized program (or an application able to run simultaneously on dozens of cores), then, with this system, you just run the program/app, and the OS just distributes it across all the cores for you; while, by contrast, running this highly-parallel program on a cluster (something with which I have no experience) can be somewhat complicated because you have to deal with MPI, etc.?

I.e., is it the case that, for those whose programs aren't parallelized, this system is not significantly easier to use than a cluster; however, for those whose programs are, it is?

RE: And the big deal is???
By Fritzr on 6/16/2012 2:00:19 AM , Rating: 2
Extreme simplification follows!

An 'ordinary' supercomputer is a cluster of nodes that share high speed interconnects. The program has to be aware of the interconnect and distribute subprocesses to take advantage of the multi-processing.

A multi-core computer runs a single OS that is able to execute simultaneously on all cores. The UV 2 appears to the OS as a single machine with 4096 cores (8192 logical cores with Hyper-Threading). So if you are running an OS that can multi-thread off-the-shelf software, or your compiler can generate code that is multi-threaded and capable of executing threads in parallel on multiple cores, then no additional programming skill is needed to take advantage of all installed cores on the UV 2.
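[Editor's note: the single-system-image point above can be sketched in a few lines of Python. The program simply asks the one OS image how many logical cores exist and threads accordingly; on a fully built UV2 the same call would report thousands. (Python's GIL serializes pure-Python compute, so this only illustrates the OS view -- real HPC code would be compiled and genuinely parallel.)]

```python
# Single-system-image sketch: one OS, one core count, one address space.
import os
import threading

def worker(results, idx):
    # The OS is free to schedule each thread on any core it reports.
    results[idx] = threading.get_ident()

logical_cores = os.cpu_count() or 1   # thousands on a fully built UV2
results = [0] * logical_cores
threads = [threading.Thread(target=worker, args=(results, i))
           for i in range(logical_cores)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"spawned {logical_cores} threads under one OS image")
```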

I wonder if there will be a Windows 7 "S" edition for this machine :P

I can just imagine Windows 8 enabling a touch screen interface and a drag and drop program builder for this system. :P:P:P

RE: And the big deal is???
By chemist1 on 6/17/2012 1:09:00 PM , Rating: 2
Thanks; your explanation seems to correspond to what I described in my second post.

RE: And the big deal is???
By JediJeb on 6/18/2012 5:04:08 PM , Rating: 2
Does MS still charge by the processor/core on multiprocessor installations for their commercial licenses? I remember years ago when the first dual processor desktops came out MS was trying to charge you twice as much for a license to run windows on those machines. That would be an expensive Windows install if they still do.

