



Vishkin's prototype

(Source: Clark School of Engineering - University of Maryland)
A promise of a parallel processing system that is simple to program for

Researchers at the University of Maryland's A. James Clark School of Engineering claim to have developed a computer system that is 100 times faster than today’s desktops.

The research group, led by Uzi Vishkin, developed a system based on parallel processing technology. The team built a prototype with 64 parallel processors and a special algorithm that lets the chips work together and makes programming for them simple.

"Suppose you hire one person to clean your home, and it takes five hours, or 300 minutes, for the person to perform each task, one after the other," Vishkin said. "That's analogous to the current serial processing method. Now imagine that you have 100 cleaning people who can work on your home at the same time! That's the parallel processing method."

"The 'software' challenge is: Can you manage all the different tasks and workers so that the job is completed in 3 minutes instead of 300?" Vishkin continued. "Our algorithms make that feasible for general-purpose computing tasks for the first time."

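Vishkin's analogy maps naturally onto how a thread pool spreads independent work across workers. As a purely illustrative sketch (this is not code from the prototype; the chore count and timings are invented), the serial and parallel versions of the same job can be compared in plain Java:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class CleaningDemo {
        // One "chore": simulate work by sleeping for a fixed time.
        static void chore(int id) {
            try {
                Thread.sleep(50); // each chore takes about 50 ms
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        public static void main(String[] args) throws Exception {
            final int chores = 100;

            // Serial: one worker does every chore, one after another.
            long t0 = System.nanoTime();
            for (int i = 0; i < chores; i++) chore(i);
            System.out.printf("serial:   %d ms%n", (System.nanoTime() - t0) / 1_000_000);

            // Parallel: 100 workers split the same chores among themselves.
            ExecutorService pool = Executors.newFixedThreadPool(chores);
            List<Callable<Void>> jobs = new ArrayList<>();
            for (int i = 0; i < chores; i++) {
                final int id = i;
                jobs.add(() -> { chore(id); return null; });
            }
            long t1 = System.nanoTime();
            pool.invokeAll(jobs); // blocks until every chore has finished
            System.out.printf("parallel: %d ms%n", (System.nanoTime() - t1) / 1_000_000);
            pool.shutdown();
        }
    }

The serial pass takes roughly 100 chores' worth of time, the parallel pass roughly one chore's worth plus scheduling overhead, which is the 300-minutes-versus-3-minutes point in miniature.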
Vishkin began his work in 1979 on developing a theory of parallel algorithms. By 1997, advances in technology enabled him to begin building a prototype desktop device to test his theory; he and his team completed the device in December 2006.

"The manufacturers have done an excellent job over the years of increasing a single processor's clock speed through clever miniaturization strategies and new materials," he noted. "But they have now reached the limits of this approach. It is time for a practical alternative that will allow a new wave of innovation and growth—and that's what we have created with our parallel computing technology."

Despite the prototype's forward-looking architecture, the hardware is nothing fancy by today's standards: the prototype is built from standard PC components running at 75MHz.

At the ACM International Conference on Supercomputing (ICS) in Seattle, Vishkin allowed conference participants to connect to the device remotely and run programs on it in a full-day tutorial session he conducted. Vishkin also participated in a panel discussion at a special invitation-only Microsoft Workshop on Many-Core Computing.

"The single-chip supercomputer prototype built by Prof. Uzi Vishkin's group uses rich algorithmic theory to address the practical problem of building an easy-to-program multicore computer," said Charles E. Leiserson, professor of computer science and engineering at MIT. "Vishkin's chip unites the theory of yesterday with the reality of today."

"This system represents a significant improvement in generality and flexibility for parallel computer systems because of its unique abilities," said Burton Smith, technical fellow for advanced strategies and policy at Microsoft. "It will be able to exploit a wider spectrum of parallel algorithms than today's microprocessors can, and this in turn will help bring general purpose parallel computing closer to reality."

Vishkin believes that future devices utilizing parallel processing technology could be composed of 1,000 processors on a chip the size of a fingernail.








Applications?
By Spivonious on 6/27/2007 10:14:10 AM , Rating: 3
So what if I have 300 people cleaning the house? If I can't mop the floor before dusting the counters and sweeping the floor, then the 299 other people are just standing around doing nothing. Most programming is sequential in nature, and has been since the 1950s. We're just now starting to see some developers thinking in a parallel way. It will be years before any real advantage will be seen for general applications.
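The objection is about data dependencies: when one step needs another step's result, extra workers sit idle no matter how many are available. A minimal Java sketch of that contrast, with hypothetical chores standing in for real work:

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class DependencyDemo {
        public static void main(String[] args) {
            ExecutorService pool = Executors.newFixedThreadPool(4);

            // Dependent steps: each stage needs the previous result,
            // so adding threads cannot shorten this chain.
            CompletableFuture<String> floor =
                CompletableFuture.supplyAsync(() -> "swept", pool)
                    .thenApply(prev -> prev + ", then mopped")  // must wait for sweeping
                    .thenApply(prev -> prev + ", then waxed");  // must wait for mopping

            // Independent steps: these really can run at the same time.
            CompletableFuture<Void> windows =
                CompletableFuture.runAsync(() -> System.out.println("washing windows"), pool);
            CompletableFuture<Void> shelves =
                CompletableFuture.runAsync(() -> System.out.println("dusting shelves"), pool);

            CompletableFuture.allOf(floor, windows, shelves).join();
            System.out.println("floor: " + floor.join());
            pool.shutdown();
        }
    }

The dependent chain is the serial fraction that bounds the overall speedup, however many cleaners show up.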




RE: Applications?
By Mitch101 on 6/27/2007 10:35:19 AM , Rating: 1
Supercomputers have been parallel for a while. In fact, most of today's home computers are about 12-15 years behind where supercomputers are.

I often wonder what people will be doing with their PCs when we reach something on the level of the Earth Simulator, which I believe was built about 3-4 years ago. By the way, most supercomputers use CPUs and don't yet tap the power of GPUs or PPUs, which I feel could produce some huge increases in performance.

What's missing is that I read the CPUs used are running at 75MHz. Probably what was available at a fair cost to prove the concept.

Also, AMD and Intel have demonstrated technologies that turn off power to cores that are not in use to conserve energy and reduce heat.


RE: Applications?
By Amiga500 on 6/27/2007 10:54:48 AM , Rating: 2
There's a thought...

For future power conservation: dynamically adjust the frequency of each core so the parallel threads complete at the same time (thus saving a bit of energy), before taking the next sequential step (which is dependent on the completion of the aforementioned parallel threads), then breaking up into more parallel runs and repeating the process.

Maybe more hassle than it's worth. :-)
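For what it's worth, the arithmetic behind that idea is straightforward. A toy Java sketch (the cycle counts and the 3GHz ceiling are invented numbers, and real frequency scaling is far coarser-grained than this) might look like:

    public class FreqBalance {
        public static void main(String[] args) {
            double fMax = 3.0e9;                          // assumed maximum clock: 3 GHz
            double[] workCycles = {9e9, 6e9, 3e9, 1.5e9}; // cycles each parallel thread needs (made up)

            // The longest thread at full speed sets the duration of this parallel step.
            double deadline = 0;
            for (double w : workCycles) deadline = Math.max(deadline, w / fMax);

            // Slow every other core down just enough that all threads finish together.
            for (int i = 0; i < workCycles.length; i++) {
                double freq = workCycles[i] / deadline;   // cycles / seconds = Hz
                System.out.printf("core %d: %.2f GHz%n", i, freq / 1e9);
            }
            System.out.printf("parallel step completes in %.2f s on every core%n", deadline);
        }
    }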


RE: Applications?
By LogicallyGenius on 6/27/2007 6:24:45 PM , Rating: 2
Sorry, but I think it's all very easy.
Here's how:

Create a Java Virtual Machine that uses 100 CPUs to do all the processing it needs to do as a JVM. Programs written in Java don't care how the JVM works; they can stay simple, sequential, traditional programs.

So the solution is to create a virtual mono-processor.

JVM 2.0?
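Something in this spirit already exists at the library level: a Java parallel stream lets code that reads like one sequential statement be spread by the runtime across whatever cores are present. A small standard-Java sketch (this is ordinary java.util.stream, not the hypothetical "JVM 2.0" above):

    import java.util.stream.LongStream;

    public class HiddenParallelism {
        public static void main(String[] args) {
            // The source code reads like a single sequential statement; the
            // runtime decides how to split the range across available cores.
            long sumOfSquares = LongStream.rangeClosed(1, 1_000_000)
                                          .parallel()       // drop this line and the result is identical,
                                          .map(n -> n * n)  // it just runs on one core instead of many
                                          .sum();
            System.out.println("sum of squares = " + sumOfSquares);
            System.out.println("cores available: " + Runtime.getRuntime().availableProcessors());
        }
    }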


RE: Applications?
By TomZ on 6/28/2007 8:48:34 AM , Rating: 1
What the heck are you talking about?!?


RE: Applications?
By AntDX316 on 6/28/2007 8:49:49 AM , Rating: 1
what the guy invented seems to be another processor entirely like totally not the same as microsofts coding linux all the stuff its like another world that if built around would make when finished be like 100000 faster than todays computers because it seems like todays computers r maxed and adding more wouldnt increase performance much because if intels 80 core was truely awesome they would be benching like crazy but thats not happening just as DX10 is not happening


RE: Applications?
By TomZ on 6/28/2007 1:45:14 PM , Rating: 3
I tried to read that a couple times, and I can't tell what you're trying to say. Could you please translate it to English and we'll try again to understand your point?


RE: Applications?
By Fritzr on 6/27/2007 8:59:02 PM , Rating: 2
The 75MHz speed is a result of the 64 cores being firmware, not hardware.

An FPGA is a chip that allows custom circuits to be written as software and executed as hardware. Google CommodoreONE: it's a home computer that uses two FPGAs instead of a dedicated CPU/GPU, and there are several different CPUs that run on the C1. Another application a few years back used an FPGA that was dynamically modified during program execution, so that it became three different custom chips, one for each stage of target recognition and acquisition.

This is a computer that you can ship via email and download to your hardware :)


RE: Applications?
By peternelson on 6/27/2007 11:29:52 PM , Rating: 2
More specifically, an FPGA is a chip that can be wired and rewired (implemented by a switchable connecting fabric) based on a hardware design language like VHDL or Verilog. In advanced applications, the chip can rewire part or all of itself, e.g. to work around a detected fault or to re-optimise for different tasks.

It may contain CPUs designed using such logic, and memory for program code. Such program memory can be "reprogrammed" in conventional ways. For the chip fabric itself, I prefer the term "reconfigured," or in layman's terms, the circuitry can be "rewired."


RE: Applications?
By peternelson on 6/27/2007 11:52:15 PM , Rating: 2
Fritzr wrote: "The 75MHz speed is a result of the 64 cores being firmware, not hardware."

This IS real hardware, just not fixed hardware. It's REAL logic gates. It is NOT firmware in the conventional sense but a stored configuration. The fact that this configuration is often stored in flash memory, the way your computer's BIOS is stored, does not make it the same thing; nor does the fact that you program your design into a flash EEPROM to store it. Indeed, FPGA configurations can exist entirely in RAM if desired.

The 75MHz limit is, in my opinion, the result of poor logic design. Processors running in FPGAs from the same vendor, Xilinx, can be clocked at 5-10 times that speed depending on how you design. For example, you may have a trade-off between gate utilisation and achievable frequency, or power usage; the Xilinx toolchain lets you repeatedly alter your design or optimise for certain goals. Clearly, faster cores mean you need fewer of them to reach a given level of performance. Probably this project was aiming for scalability rather than speed.

Sure, CPUs in FPGAs clock slower than an ASIC or a dedicated chip like an Intel Core 2, and the chips cost more because of economies of scale and the overhead of the connection matrix, but then conventional processors can't be reconfigured and re-optimised the way FPGAs can.

Some of this research group's work is theoretical, or aimed at implementing the parallel theory in the real world. I think they may just be using FPGAs for convenience for now, and intend their architecture to later be baked into an ASIC or mainstream processor once they have settled on a design.

A different idea would be to actually put some FPGA-type gates into a conventional processor (AMD may one day do this as a development of Fusion/Torrenza, i.e. "acceleration on die," but that's also an idea others have had before).


RE: Applications?
By FNG on 7/2/2007 11:12:59 AM , Rating: 2
I am guessing that, since the project was in the works for six-plus years, most of the 75MHz limit comes down to a lack of resources (i.e., money and experienced chip designers). I mean, it is a university, and they did not mention DARPA big bucks or anything along those lines.


RE: Applications?
By thatguy39 on 7/3/2007 12:36:19 AM , Rating: 2
ORRR, instead of jumping to conclusions, you could say the 75MHz chip was meant to prove the thing works by keeping it simple.


RE: Applications?
By TheRodent on 6/27/2007 10:54:31 AM , Rating: 2
I really don't think this is true. Most programming has been parallel for some time, especially for Internet servers, rendering, scientific applications, and even CAD tasks. What exactly are these general applications? Only lower-end desktop programs fail to take advantage of any parallel processing.


RE: Applications?
By BladeVenom on 6/27/2007 11:58:22 AM , Rating: 2
Supreme Commander runs a lot faster with a quad core.


RE: Applications?
By The Sword 88 on 6/27/2007 2:39:09 PM , Rating: 2
It is considerably faster on dual core than on single. I was bored and disabled one of my CPU cores to see what difference it made in SupCom, and it was huge; I can't remember exact numbers.

So I can only imagine how much quad would help.


RE: Applications?
By TSS on 6/27/2007 10:54:53 AM , Rating: 2
I believe the whole point of this chip is to see increases in performance across the board, now. From what I understand from the news piece, they've built a chip with a lot of cores plus one very special controlling core, and it is as easy to program for as normal (single-core) CPUs.

You can't mop the floor when the counters aren't dusted. But you can start dusting the counters in the kitchen, sweep the floor where there are no counters, and mop the floor where it isn't all that dirty so the dust gets collected with the water. You might need two buckets of water then, with two people to refresh them, but heck, you've got 300 workers around anyway.

Basically, the reason programmers think non-parallel is simply that it's easier to program that way than in parallel. This chip takes away that problem, which allows for more efficient programs, which in turn gives higher performance.
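That overlapping of partially dependent chores is essentially software pipelining. A toy Java sketch (room counts and timings invented) in which mopping starts on each room as soon as that room has been swept, instead of waiting for the whole floor:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class CleaningPipeline {
        public static void main(String[] args) throws Exception {
            final int rooms = 5;
            BlockingQueue<Integer> swept = new ArrayBlockingQueue<>(rooms);

            // Stage 1: sweep each room, handing finished rooms downstream immediately.
            Thread sweeper = new Thread(() -> {
                try {
                    for (int room = 0; room < rooms; room++) {
                        Thread.sleep(100);                  // time to sweep one room
                        System.out.println("swept room " + room);
                        swept.put(room);
                    }
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });

            // Stage 2: mop each room as soon as it has been swept, overlapping with stage 1.
            Thread mopper = new Thread(() -> {
                try {
                    for (int i = 0; i < rooms; i++) {
                        int room = swept.take();            // wait only for this room's dependency
                        Thread.sleep(100);                  // time to mop one room
                        System.out.println("mopped room " + room);
                    }
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });

            sweeper.start();
            mopper.start();
            sweeper.join();
            mopper.join();
        }
    }

With the two stages overlapped, the whole job takes about six room-times instead of ten, despite the mop-after-sweep dependency.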


RE: Applications?
By peternelson on 6/27/2007 11:33:23 PM , Rating: 2
TSS wrote: "From what I understand from the news piece, they've built a chip with a lot of cores plus one very special controlling core, and it is as easy to program for as normal (single-core) CPUs."

No, that's what the ClearSpeed math processor does, or what Sony does with Cell.

These guys are doing something a bit different from that.


RE: Applications?
By masher2 on 6/27/2007 12:03:06 PM , Rating: 5
> It will be years before any real advantage will be seen for general applications.

It depends on what you mean by "general applications." I was using highly parallel scientific simulations, capable of using 100+ processors, in graduate school nearly 20 years ago. Today, there are thousands of PC-based programs in scientific, financial, rendering, CAD/modelling, and media coding applications that are highly parallel. Sure, a word processor isn't... but does it really need to be?
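Monte Carlo simulation is a textbook example of that kind of embarrassingly parallel workload: every sample is independent, so throughput scales almost linearly with processors. An illustrative Java sketch (the sample count is arbitrary):

    import java.util.concurrent.ThreadLocalRandom;
    import java.util.stream.IntStream;

    public class MonteCarloPi {
        public static void main(String[] args) {
            final int samples = 20_000_000;

            // Every sample is independent of every other sample, so the work
            // splits across cores with no coordination beyond the final count.
            long inside = IntStream.range(0, samples)
                    .parallel()
                    .filter(i -> {
                        double x = ThreadLocalRandom.current().nextDouble();
                        double y = ThreadLocalRandom.current().nextDouble();
                        return x * x + y * y <= 1.0;        // did the dart land inside the quarter circle?
                    })
                    .count();

            System.out.printf("pi is approximately %.5f%n", 4.0 * inside / samples);
        }
    }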


RE: Applications?
By Spivonious on 6/27/2007 1:56:27 PM , Rating: 2
General applications to me means programs that everyone will use. Web browsers, word processors, email apps, solitaire, minesweeper, etc.

These applications will see little to no improvement from parallel processing.

quote:
scientific, financial, rendering, CAD/modelling, and media coding applications


Those aren't really general. In fact they're all pretty specific to certain fields.


RE: Applications?
By masher2 on 6/27/2007 4:01:35 PM , Rating: 4
> "General applications to me means programs that everyone will use. Web browsers, word processors, email apps, solitaire, minesweeper, etc"

How much faster does your email and minesweeper game need to be? In most cases, if an application truly is serial in nature, then it's already seeing as much performance as it really needs.


RE: Applications?
By thatguy39 on 7/3/2007 12:33:32 AM , Rating: 2
Please read the article, as that was the point you must have missed.


Hmm
By Duraz0rz on 6/27/2007 9:07:26 AM , Rating: 1
How many FPS does this get in CS:S? :)

Anyway, it's great that parallel computing is taking off... but isn't this something along the lines of Intel's Larrabee project (or whatever the name is... I can't remember how to spell it)? What would make this any different from Intel's development?




RE: Hmm
By FITCamaro on 6/27/07, Rating: 0
RE: Hmm
By Verran on 6/27/2007 9:48:36 AM , Rating: 5
I think the point of this isn't so much the "multi-core" aspect, but the intelligent algorithms that are written for it. It seems like the idea here is that this is relatively easy to code for, whereas programming for current multi-core solutions is more complex.

Currently, multi-core only helps if your code is SMP-enabled. I think the idea here is to make a hardware solution that can break code down into parallel threads transparently.


RE: Hmm
By TomZ on 6/27/2007 10:16:24 AM , Rating: 1
Yeah, and it's too bad this article doesn't talk at all about the software side, since it sounds like the software is the "special" part of this development.


RE: Hmm
By Verran on 6/27/2007 1:04:23 PM , Rating: 3
quote:
The team built a prototype with 64 parallel processors and a special algorithm that lets the chips work together and makes programming for them simple.

quote:
"The 'software' challenge is: Can you manage all the different tasks and workers... Our algorithms make that feasible for general-purpose computing tasks for the first time."

quote:
"The single-chip supercomputer prototype built by Prof. Uzi Vishkin's group uses rich algorithmic theory to address the practical problem of building an easy-to-program multicore computer,"

quote:
It will be able to exploit a wider spectrum of parallel algorithms than today's microprocessors can, and this in turn will help bring general purpose parallel computing closer to reality."

Did you read the article? In fact, it even specifically states that the hardware is not the real focus.
quote:
the hardware is nothing fancy by today's standards.


RE: Hmm
By Verran on 6/27/2007 1:29:25 PM , Rating: 2
I realize now that TomZ was probably implying that he would have liked to see more detail in the explanation of the algorithms. Sometimes it's hard to tell sarcasm from seriousness in text. :(

If this is the case, just treat the above post as an outline of the high points of the article :P


RE: Hmm
By TomZ on 6/27/2007 2:22:18 PM , Rating: 1
Uh, yes, I was hoping for a bit more detail than just a statement that there is a special algorithm. :o) Even some links would be nice.


RE: Hmm
By Fritzr on 6/27/2007 9:19:03 PM , Rating: 3
Some links

Article that is being discussed
http://www.networkworld.com/community/?q=node/1672...

What is Parallel Computing?
http://www.answers.com/topic/massively-parallel?ca...

Developer of the system
http://www.umiacs.umd.edu/~vishkin/

University of Maryland
http://www.eng.umd.edu/

They need help naming the computer
http://www.eng.umd.edu/

Explanation of PRAM
http://www.umiacs.umd.edu/users/vishkin/XMT/spaa07...

Get your own FPGA here
http://www.xilinx.com/products/silicon_solutions/f...

CommodoreONE Homepage
C1 is a computer that uses a software CPU. They have C-64, VIC-20 & Z-80 softloaded CPUs for this machine now. The multicore CPU from UMD could be ported to this machine.
http://c64upgra.de/c-one/


RE: Hmm
By peternelson on 6/27/2007 11:24:07 PM , Rating: 2
"The multicore CPU from UMD could be ported to this [C-ONE] machine."

Whilst the technology is similar, it's from a different FPGA vendor, and much much smaller in gate count than the Xilinx devices Vishkin's team are using. You wouldn't be able to fit many of their cores in, if any.


RE: Hmm
By therealnickdanger on 6/27/2007 9:22:12 AM , Rating: 3
quote:
What would make this any different from Intel's development?

Intel has retail channels? :P

It's interesting to see a comment by a Microsoft techie. Perhaps Microsoft will attempt to buy them out and make this the basis for the next Xbox? It's a stretch, but if this truly is such a breakthrough, I can imagine Prof. Uzi (sweet name) will be getting lots of expensive offers.


Smarter Compilers?
By SpatulaCity on 6/27/2007 10:52:35 AM , Rating: 3
I wonder if this technology can be used to make smarter compilers. Like somebody else stated, programmers tend to write code to run in a sequential manner, but what if the compiler were smart enough to divvy those tasks up among the cores? I dunno, just throwing out ideas.
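The transformation an auto-parallelizing compiler performs can be written out by hand: prove that the loop's iterations are independent, then split the index range across threads. A rough Java sketch of the before and after (the splitting is done manually here, standing in for what such a compiler would generate):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class LoopSplit {
        public static void main(String[] args) throws Exception {
            final int n = 1 << 20;
            double[] a = new double[n], b = new double[n], out = new double[n];
            for (int i = 0; i < n; i++) { a[i] = i; b[i] = 2.0 * i; }

            // What the programmer wrote: a plain sequential loop.
            for (int i = 0; i < n; i++) out[i] = a[i] * b[i];

            // What an auto-parallelizing compiler could emit instead, because
            // no iteration reads anything that another iteration writes.
            int threads = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            int chunk = (n + threads - 1) / threads;
            for (int t = 0; t < threads; t++) {
                final int lo = t * chunk, hi = Math.min(n, lo + chunk);
                pool.execute(() -> {
                    for (int i = lo; i < hi; i++) out[i] = a[i] * b[i];
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
            System.out.println("done, out[123] = " + out[123]);
        }
    }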




RE: Smarter Compilers?
By Amiga500 on 6/27/2007 10:59:32 AM , Rating: 2
http://delivery.acm.org/10.1145/1260000/1250753/p1...

See here (or if that doesn't work try below)

http://portal.acm.org/citation.cfm?id=1250734.1250...

Hopefully Intel has a good approach to this, and hopefully they open it up for everyone to have a look (licensed, I suppose, but hey, they invented it).


RE: Smarter Compilers?
By Duraz0rz on 6/27/2007 10:51:02 PM , Rating: 2
http://www.techreport.com/onearticle.x/12730

Their C-based compiler for the Larrabee project will be open-source.


Heard it before...
By shamgar03 on 6/27/2007 1:30:38 PM , Rating: 2
How many times have we heard "we can make parallel processing easy! 100X improvement over today's processors!"? Yes, processors will increase in speed. Yes, in about 10 years parallel processing will be the standard; no, it won't be much easier. The fact of the matter is that most programs are serial by nature, as are most algorithms. It's just a fact of life...




RE: Heard it before...
By TomZ on 6/27/2007 2:33:58 PM , Rating: 2
It's funny you would mention that. If you read this DT article out of context, you might get the impression that there has been no work done toward automatic/easy parallelization. But nothing could be further from the truth. For example, here is some information on Sun's and Intel's recent compilers that incorporate a lot of parallel processing technology.

http://developers.sun.com/sunstudio/cmt.html

http://www3.intel.com/cd/software/products/asmo-na...

There are also lots of other products, tons of research studies, papers, etc. on auto-parallelization, in addition to the above products that are already available today.

This is why I'm curious about the approach Dr. Uzi's team has taken, and why it is getting the attention that it is.


The point
By geddarkstorm on 6/27/2007 4:33:18 PM , Rating: 2
It seems to me that the point was that they developed an algorithm that could break code down into parallel parts, dole the pieces out to the processors in the most effective manner possible, and then put everything back together again correctly at the end of the line. That's one of the great purposes of an algorithm: to break something into smaller pieces and put it back together in a logical manner. So even if you had a sequentially programmed piece of software, a good algorithm could break it apart, sending each step to a different processor so the whole thing finishes sooner than if one processor had to do each step before going to the next. Even if steps were contingent upon each other, a good algorithm and memory manager could simply bounce the intermediate products between the parallel processors. It would be far superior to sequential processing, even with a totally sequential piece of code. The trick appears to be keeping track of every code fragment, which processor is chewing on it, and when each part needs to be sent to another processor and/or integrated into the growing final product. No easy task!

It seems that the discovery has been about taking that theory and actually doing it. They claim that their 64 processors at 75MHz (original Pentium territory?) could perform 100 times better than current desktops (quite the claim!), although we cannot judge the validity of that without actual raw data, or know how to interpret it correctly. Still, parallel processing is apparently the only way to continue advancing, and that's why multicores are appearing. The hardware is there, but the dynamic management algorithms that make sense of all the streams and processors and keep communication flowing in a meaningful, robust way are what have been lacking.
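The break-it-apart, hand-the-pieces-out, reassemble-the-result pattern described above is what divide-and-conquer frameworks such as Java's fork/join mechanize. A minimal illustrative sketch (a parallel array sum; this is not the algorithm from Vishkin's system):

    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveTask;

    public class ForkJoinSum extends RecursiveTask<Long> {
        private static final int THRESHOLD = 10_000; // below this, just compute serially
        private final long[] data;
        private final int lo, hi;

        ForkJoinSum(long[] data, int lo, int hi) {
            this.data = data; this.lo = lo; this.hi = hi;
        }

        @Override
        protected Long compute() {
            if (hi - lo <= THRESHOLD) {              // small piece: sum it directly
                long s = 0;
                for (int i = lo; i < hi; i++) s += data[i];
                return s;
            }
            int mid = (lo + hi) >>> 1;               // break the range apart
            ForkJoinSum left = new ForkJoinSum(data, lo, mid);
            ForkJoinSum right = new ForkJoinSum(data, mid, hi);
            left.fork();                             // hand one half to another worker
            long rightSum = right.compute();         // keep the other half ourselves
            return left.join() + rightSum;           // reassemble the partial results
        }

        public static void main(String[] args) {
            long[] data = new long[5_000_000];
            for (int i = 0; i < data.length; i++) data[i] = i;
            long total = ForkJoinPool.commonPool().invoke(new ForkJoinSum(data, 0, data.length));
            System.out.println("sum = " + total);
        }
    }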




RE: The point
By TomZ on 6/27/2007 5:19:50 PM , Rating: 2
What makes you believe it has been lacking? Try googling for "parallel processing," and then drill into some of the results. A good amount of research and development has been going on in that field for decades.

Also see my post above with the links to the Sun and Intel compilers that have a lot of automatic parallel processing algorithms already designed into a real product and shipping to real customers.

I'm not saying Dr. Uzi doesn't have a novel or interesting approach, but the general problem of simplifying parallel processing has been on people's minds for a long time.


By david99 on 6/28/2007 6:52:07 PM , Rating: 2
This is just another kind of KiloCore PPC FPGA chip, and very nice they are too.

http://www.rapportincorporated.com/kilocore/kiloco...

"Kilocore™ processors use a powerful new parallel computing architecture that dramatically lowers power consumption for equivalent computational performance.

Kilocore™ technology utilizes arrays of dynamically reconfigurable parallel processing elements optimized for performance.

Each processing element in a Kilocore™ stripe can be dynamically reconfigured in one clock cycle. Kilocore™ tools support both dynamic reconfiguration of processing elements and formatting of all types of data.

The unique Kilocore™ architecture provides the following benefits:

Flexibility: functions can be dynamically changed in a single clock cycle.

Performance: unprecedented performance via simultaneous computing of multiple functions.

Scalability: hundreds to thousands of processing elements on a single chip.

Efficiency: Extremely low power consumption."




By TomZ on 6/29/2007 11:25:40 PM , Rating: 2
I found it funny that the Kilocore chip only had 256 processors. But I guess calling it QuarterKilocore doesn't make for such good marketing.

(Yes, I realize it would scale up to 1K+ cores, I'm just kidding around.)


Multiprocessor/Multitasking
By vladio on 6/27/2007 4:59:02 PM , Rating: 2
Maybe it's not that new as a concept,
but it can be much, much better.
As simple as that.




Not that impressive
By peternelson on 6/27/2007 10:43:02 PM , Rating: 2
Well, I'm doing something similar using Xilinx FPGA chips with multiple parallel embedded processors, right here on my desk.

Unfortunately, nobody is going to give me a PhD or a government research grant for doing it.

Starbridge Systems have been doing this stuff for years, with larger-scale systems. Starbridge also came up with tools to make it easier for people to program for and utilise such hardware.

Comments about this board in particular:
a) It uses PCI-X (64-bit), not the more modern PCI Express.
b) They are using Xilinx Virtex-4 parts, the LX200 and FX100. Since Xilinx has already moved on to the faster Virtex-5 series, this board is out of date already.
c) They are only achieving 75MHz per "mini processor." That is slow; higher clock speeds are achievable even in Xilinx's low-end Spartan series, let alone Virtex. Their core design must be poorly optimised if it can't go faster than 75.

Now, perhaps, as they say, it's a prototype, a "proof of concept," etc. Perhaps the clever stuff is in some algorithm that uses that hardware. Maybe so, but then Starbridge has been doing that for years. Indeed, so am I (and others).

Sure, it's nice to see the technology at a supercomputing stand (Xilinx FPGAs are already available for some Cray, SGI, etc. systems), but I wouldn't say Vishkin's team is leading the field based on anything I read here.

In-chip and inter-chip buses for connecting multiple mini-processors are widely available. Xilinx themselves offer the PicoBlaze and MicroBlaze core designs for free or for a minimal one-off payment without royalties, and plenty of alternatives are available at the OpenCores site.

So this demo is more an example of what can be done than a particular push forward. Benchmarks of some applications would, however, be interesting to see.




Anti-Hyperthreading?
By phaxmohdem on 6/28/2007 3:10:28 PM , Rating: 2
Sounds a lot like all the anti-Hyper-Threading FUD that was flying around prior to the release of the C2D: "AMD's answer to Conroe."




well
By ryedizzel on 6/28/2007 11:06:36 PM , Rating: 2
If all 300 people cleaning the house were French maids, then it would be a different story.




Awesome! But for whom?
By calaverasgrandes on 6/29/2007 3:57:56 PM , Rating: 2
Parallel processing, even with today's multicore chips, is just wasted on most of us.
I do audio and video, so I use all the CPU power I can get. I also actually NEED large amounts of disk space, RAM, etc. What you regular Joes do with it, I have no idea!
Massively parallel stuff like this will probably never make it into home computers, though I can see its utility in server rooms for application servers, ASP websites, and of course virtualization (which is a lot hotter than I used to think: a recent project I know of consolidated 35 servers down to just 10 and boosted performance).




It's called "Daisy."
By airsickmoth on 7/4/2007 8:34:55 PM , Rating: 2
Daisy, Daisy, give me your answer do. I'm half crazy all for the love of you. It won't be a stylish marriage, I can't afford a carriage. But you'll look sweet upon the seat of a bicycle built for two




What Exactly did they Achieve?
By spillai on 6/27/2007 11:23:25 AM , Rating: 1
The scientists did not say what system speed they achieved, how scalable the system is, etc.

Satheesh
www.knowledgevibes.com




Clark School of Engineering???
By Mitch101 on 6/27/07, Rating: -1
RE: Clark School of Engineering???
By Fenixgoon on 6/27/2007 10:32:40 AM , Rating: 2
UMD-CP is an excellent engineering school all around. My brother was part of their first computer engineering class, IIRC.

I don't quite get what your second part means, though... that's just confusing.


RE: Clark School of Engineering???
By Mitch101 on 6/27/2007 10:38:31 AM , Rating: 1
OK, I just wasn't familiar with the school; I lived in Jersey, close to the school, but never heard of it.

The second part is that I wondered if he had applied to MIT, was rejected, and then wound up at what I felt (before knowing the school) might have been on the level of a community college compared to something like MIT.

Thanks for clarifying, Fenixgoon.


By Fenixgoon on 6/27/2007 11:10:10 AM , Rating: 1
Professors who teach don't necessarily have their PhD from the school at which they teach :)


By Gridandcluster on 6/27/2007 11:33:10 AM , Rating: 2
By roastmules on 6/27/2007 12:05:47 PM , Rating: 4
It's not my school, but just because Clark isn't MIT and you haven't heard of it, don't knock it. Do a little research before you post.

Also, he is a faculty member at UMD, not a graduate; his doctorate is from Israel.

"Graduate programs in engineering at the University of Maryland are the fastest rising in the nation. The Clark School's graduate engineering programs collectively rank 16th in the nation and 10th among all public universities, according to the U.S. News & World Report's "America's Best Graduate Schools 2008." from http://www.ece.umd.edu/about/rankings.html

So, do you have a doctorate from MIT and work there?


RE: Clark School of Engineering???
By povomi1 on 6/27/2007 12:35:22 PM , Rating: 3
It is ignorant to assume that just because someone didn't graduate from or attend MIT, an Ivy League school, or some other prestigious university, they are stupid, unintelligent, or anything along those lines. As if someone who happens to teach at MIT is by default smart and a genius. Usually yes, probably yes, but not definitely. The Clark School of Engineering is a sub-school of the University of Maryland. I'm sure not everything at MIT is called MIT; when you donate large sums of money, you tend to be recognized. Hopefully that lifts the veil of ignorance a bit.

UMD has a very nice program at their school of engineering. I'm very curious, as it was not stated in the article (or I missed it), whether Vishkin's team was grad or undergrad, and if it's undergrad, then perhaps part of the Gemstone program.

I know at least 3-4 people who are already in Gemstone, and my best friend is attending that program in the fall, so I guess that's about 4-5 then.

Good job to the professor and his team. Hopefully top-down econo... I mean technology will trickle down sooner rather than later.


RE: Clark School of Engineering???
By Mitch101 on 6/27/2007 1:57:21 PM , Rating: 1
Wow you got all that from my comment.

A few of you need to loosen up maybe educate others instead of attack. Quite being such Whinny bitches.

Bet your next act would be to correct my spelling or grammer.


RE: Clark School of Engineering???
By TomZ on 6/27/2007 2:24:42 PM , Rating: 2
"whinny" -> "whiney." "Whinney" is what horses do.

Just kidding, sorry.


RE: Clark School of Engineering???
By Mitch101 on 6/27/2007 4:12:36 PM , Rating: 2
LOL. Believe it or not I did it on purpose. Thats why I had the following sentence about spelling and grammer because it just annoys some people to no end.
