



  (Source: NVIDIA)
Tablets using Kal-El will launch in August of 2011

System-on-chip (SoC) designs for smartphones and tablets are advancing at a rapid rate. It wasn't long ago that we were marveling at single-core designs approaching 1GHz core clocks, but we are now seeing single- and dual-core processors surpassing the 1GHz mark and the announcement of quad-core processors.

NVIDIA is the latest to throw its hat in the ring with the announcement of its Kal-El tablet processor (whether this chip will be officially called Tegra 3 has not yet been determined). NVIDIA is making some big claims with this quad-core processor (Kal-El also features a 12-core GPU):   

  • It will have 5x the performance of the current Tegra 2 SoC
  • It will have lower power consumption than Tegra 2 despite the increase in performance
  • It has the ability to output video at up to 2560x1600

A Coremark 1.0 benchmark result of Kal-El in action shows it absolutely obliterating its Tegra 2 predecessor (11,354 for Kal-El versus 5,840 for the Tegra 2). In fact, it was even faster than an Intel T7200 Core 2 Duo processor (2GHz, 4MB cache) that managed to pull in a score of 10,136.

Of course this is just a single benchmark so we shouldn't get too excited yet; but these early numbers look very promising. Also keep in mind that clock speeds have not been finalized, so we don't know exactly what kind of performance we'll be seeing in production silicon.
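For a quick sanity check, the scores quoted above can be compared directly (a minimal sketch using only the numbers reported in this article):

```python
# Sanity-check the Coremark 1.0 scores quoted in the article.
kal_el = 11354   # Kal-El score
tegra2 = 5840    # Tegra 2 score
t7200 = 10136    # Intel Core 2 Duo T7200 (2GHz, 4MB cache) score

speedup_vs_tegra2 = kal_el / tegra2
speedup_vs_t7200 = kal_el / t7200

print(f"Kal-El vs Tegra 2: {speedup_vs_tegra2:.2f}x")  # ~1.94x
print(f"Kal-El vs T7200:   {speedup_vs_t7200:.2f}x")   # ~1.12x
```

Note that this particular benchmark run shows roughly a 2x gain over Tegra 2, not the 5x NVIDIA claims for the overall chip — a gap several commenters pick up on below.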

NVIDIA is currently sampling Kal-El, and is prepared to have the chip in production tablets by the third quarter of 2011. In comparison, Qualcomm's recently announced quad-core APQ8064 won't even sample until early 2012. 

It's interesting to note that Kal-El isn't the only "superhero" SoC coming from NVIDIA -- the codenames for its follow-up designs are Wayne (2012), Logan (2013), and Stark (2014), each arriving roughly a year after its predecessor. By 2014, Stark is expected to offer 75 times the performance of Tegra 2.






wow...
By superPC on 2/16/2011 7:04:21 AM , Rating: 2
How can a RISC chip beat an Intel Core 2 Duo in a benchmark? I wonder how efficient Kal-El is compared to a Core 2 Duo ULV. Still, Coremark is made by the Embedded Microprocessor Benchmark Consortium; it might be better suited to RISC than x86.




RE: wow...
By StevoLincolnite on 2/16/2011 7:50:08 AM , Rating: 2
The Core 2 Duo is only running at 2GHz with 4MB of cache and a 667MHz FSB; it's hardly what I would call a speed demon.

Plus, it was a mobile chip. Compared to desktop chips, the Core 2 Duo T7200 would pretty much be classed as a low-end chip from 2006-2007, and x86 has leaped significantly since then.

So SoCs like Tegra are still at least 3-5 years behind x86, and that's just in the CPU department; they can never match a full desktop GPU that throws out 200+ watts.


RE: wow...
By Brandon Hill (blog) on 2/16/2011 7:51:02 AM , Rating: 4
I don't really see how desktop performance is relevant to a discussion about mobile products, but it is still nice to see ARM making some huge gains on the performance front.


RE: wow...
By StevoLincolnite on 2/16/2011 8:22:07 AM , Rating: 2
It's not relevant, but it's always nice to compare theoretical performance regardless of platform sometimes. :P


RE: wow...
By dgingeri on 2/16/2011 7:59:16 AM , Rating: 3
That thing will still clobber a Xeon in a 1U server on performance per watt. I'd love to have a couple of these for my domain controllers.


RE: wow...
By corduroygt on 2/16/2011 11:03:49 AM , Rating: 2
Considering the first desktop C2D chips launched at the end of 2006 at 1.86 GHz, your timeline is off.


RE: wow...
By bpharri2 on 2/16/2011 12:24:33 PM , Rating: 2
Timeline is correct:

- C2D desktop chip: July 2006
- C2D mobile (T7200): August 2006

The original poster stated that the mobile chip could be considered "low end" for 2006-2007 when compared to the desktop chips Intel was releasing. If he had said the desktop chip was low end for 2006-2007 or that the mobile chip by itself was low end, then I'd agree with you.

As it was, the C2D desktop chips Intel released before, during, and after the T7200 launch were faster as they had more cache and faster FSB.

At least, that's the way I interpreted his statement.


RE: wow...
By mindless1 on 2/18/2011 10:58:36 AM , Rating: 2
You can't really claim "low end" compared to a desktop chip, that would be like saying a car is "low end" compared to a truck if the goal is not hauling stuff but rather the power envelope for mobile use.

Regardless, the comparison is meaningless, it's just a brief look at where mobile CPUs have been and where they are going. To try and paint it black and white instead of just some marketing trivia would miss the point that it's just PR like anything else.


RE: wow...
By lifewatcher on 2/16/2011 5:28:14 PM , Rating: 2
If we follow your logic, we could stack up as many mobile chips as it takes to match the power consumption of the desktop chip, and then compare performance again. I'm sure the combined output of a box filled with power-sipping mobile chips would outright kill not only anything from 5 years ago, but anything currently available too. All it needs is an optimized platform to handle the gazillion chips.


RE: wow...
By mindless1 on 2/18/2011 8:57:12 AM , Rating: 2
You seem to be a little confused about processors. In LATE 2007 the semi-low end was something like a Pentium dual core E2180 while the low end was still a P4 class Celeron. In 2008 a T7200 was still faster than what was sold in the average OEM system, by average I mean by sales numbers.


RE: wow...
By bug77 on 2/16/2011 8:56:58 AM , Rating: 3
Check this out: http://en.wikipedia.org/wiki/Reduced_instruction_s...

And if you're after a more technical explanation, this is worth a look too: http://www-cs-faculty.stanford.edu/~eroberts/cours...

Basically, RISC has an inherent speed and efficiency advantage. CISC is more compiler friendly, but that advantage has been eroded over the years by advances in compilers.


RE: wow...
By Shining Arcanine on 2/17/2011 8:16:29 AM , Rating: 2
RISC was designed to be compiler friendly. On the other hand, CISC was designed for human assembly programmers.


RE: wow...
By bug77 on 2/17/2011 10:43:56 AM , Rating: 2
When I said CISC is more compiler friendly, I meant a compiler for CISC has less work to do (e.g. it doesn't have to reorder instructions), and is therefore easier to write.
That's why many developers flocked to x86. Of course, support from Intel didn't hurt either.
That's the story I know, but compilers aren't my strong point.

Also, if you're familiar with Andrew Tanenbaum's "Structured Computer Organization", he has been arguing for almost 20 years now that the death of x86 would be a boon for the rest of the IT world. But we got the Pentium 4 instead...


RE: wow...
By nafhan on 2/16/2011 10:13:43 AM , Rating: 2
There are a number of RISC processors that can beat any C2D (e.g. Power6/7). The amazing thing here is the amount of processing being done with so little power usage.
Still, you've got a brand-new, unreleased CPU beating something from about 5 years ago. So this is basically right on schedule.


RE: wow...
By theapparition on 2/16/2011 10:22:09 AM , Rating: 3
Keep in mind that this single benchmark scales very well with core count, so a quad-core competing against a dual-core can certainly look competitive. But I'd wager that as more real-world benchmarks come in, you'll find it won't have that sort of performance advantage.


RE: wow...
By Shining Arcanine on 2/17/2011 8:14:03 AM , Rating: 2
Why are you surprised? RISC is theoretically superior to x86. As long as the engineering resources going into them are the same, a well-designed RISC processor should always outperform a well-designed CISC processor.


Look carefully at Coremark 1.0 chart!
By stmok on 2/16/2011 8:43:34 AM , Rating: 5
For those who don't use GCC to compile code under Linux...

That Coremark 1.0 chart comparing Kal-El and Tegra 2 against the Intel Core 2 Duo T7200 has been intentionally rigged in favour of Nvidia's solutions.

ie:
Version 3.4.4 with -O2 enabled for the Intel CPU.
VS
Version 4.4.1 with -O3 enabled for Nvidia ARM CPUs.

To put it in plain English:
Nvidia intentionally used an older version of GCC and a less aggressive optimization option for the comparison Intel CPU, to make its own chips look good.

Here's the close-up of that chart from Anandtech.
=> http://images.anandtech.com/reviews/SoC/NVIDIA/Kal...

Honestly, if one is doing comparisons; they can at least be honest about it. Such dishonest behaviour isn't going to encourage me to invest in future Nvidia solutions.




By Brandon Hill (blog) on 2/16/2011 9:57:14 AM , Rating: 2
Wow, I did NOT notice that. Good eye!


By Pitbull0669 on 2/16/2011 10:06:31 AM , Rating: 2
AWESOME info, bud. And way to go keeping a check on those trying to put one over on us. Big thumbs up!


By nafhan on 2/16/2011 10:33:54 AM , Rating: 5
Very interesting. Not a regular GCC user myself, but I did a little quick Googling just for grins:

From GCC release history.
-> GCC 3.4.4 May 18, 2005
-> GCC 4.4.1 July 22, 2009

C2D was released in 2006...

-O2: nearly all supported optimizations that do not involve a space-speed tradeoff

-O3: everything in -O2, plus (essentially) anything that will increase speed

Those could be some serious differences!!!


By BSquared on 2/16/2011 12:40:34 PM , Rating: 2
Couldn't this be blamed on the Coremark software itself, since the results are based on what's already in the database of past results from other submitters? All the Tegra results were submitted before the actual C2D results, so whoever submitted those results back in October is to blame for the skewed numbers on the Intel processor.

So if someone with a C2D T7200 submitted results compiled with the latest build of GCC, it would effectively change the comparison value.


Its only weakness
By FITCamaro on 2/16/2011 10:40:50 AM , Rating: 3
If an application named kryptonite.exe is run on it, it is destroyed.




RE: Its only weakness
By TeXWiller on 2/16/2011 2:35:19 PM , Rating: 2
..Bumpgated to the oblivion..


RE: Its only weakness
By ShaolinSoccer on 2/17/2011 6:54:44 AM , Rating: 2
After seeing the pic of Superman's mom, I decided to look her up. She died recently at 72 years old. Time sure is flying. I had no clue she was that old. Makes me feel old...


x86/x64?
By Motoman on 2/16/2011 9:56:37 AM , Rating: 2
Is this an x86/x64 platform CPU? If so...did I miss how the licensing issue for that platform got fixed? I thought there was a big ruckus about whether or not Nvidia could legally produce for that platform between them and Intel.

I can't describe the ruckus, but I definitely heard a ruckus.




RE: x86/x64?
By bug77 on 2/16/2011 10:14:50 AM , Rating: 2
quote:
Is this an x86/x64 platform CPU?


No, it is not.


RE: x86/x64?
By Motoman on 2/16/2011 3:18:41 PM , Rating: 2
...so that benchmark is available on platforms other than x86/x64? If so...why? And also, if so, what point is there in comparing to the Intel chip, which is x86/x64?


simple math
By tastyratz on 2/16/2011 8:09:35 AM , Rating: 3
5 times the performance... is double the score?
Did anyone else find that curious?

Yes, it's just one benchmark, and it does prove the chip is much faster -- but it doesn't support the simple claim of 5x, and it just makes that claim look boldly inflated. Don't say 5x while showing me a picture of 2x.




RE: simple math
By MrTeal on 2/16/2011 9:19:22 AM , Rating: 2
Yeah, I don't get it either. Plus, the Kal-El bar is halfway between 1 and 10 on a log scale. If it really were 5x the performance, it should be 70% of the way to 10.
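The log-scale observation is easy to verify: a bar's position along a 1-to-10 log axis is just log10 of its relative score (a quick sketch, using the scores from the article normalized so Tegra 2 = 1):

```python
import math

def log_position(relative_score):
    # Fraction of the way along a log-scale axis running from 1 to 10.
    return math.log10(relative_score)

observed = 11354 / 5840  # ~1.94x, from the published Coremark scores
claimed = 5.0            # NVIDIA's 5x performance claim

print(f"Observed ~1.94x sits {log_position(observed):.0%} along the axis")  # 29%
print(f"A true 5x would sit  {log_position(claimed):.0%} along the axis")   # 70%
```

So a roughly 2x result lands about 29% of the way up the decade, while a genuine 5x would indeed sit about 70% of the way toward 10, matching the comment above.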


By MarioJP on 2/16/2011 1:18:57 PM , Rating: 2
Does this mean the PC market as we know it is doomed? No more motherboards and dedicated GPUs, and these mobile devices will be our next-gen computers? Has x86 reached a dead end, or am I missing something?




By KingConker on 2/17/2011 7:38:00 AM , Rating: 2
What were the Marketing guys thinking?

















