
cortexA9 vs tegra 3

The Cortex A9 runs at 2.5 DMIPS per MHz, whereas the Scorpion processors run at 2.1 DMIPS per MHz, but on the Tegra 3 I'm clueless. My question is: which is better between the Tegra 3 and the A9 in terms of processing power, the ability to overclock, and power usage?

What are your thoughts on this?
 
I believe that you'll find those are estimates per core (at GHz rather than MHz).

The only person I know maintaining trustworthy DMIPS benchmarks is Medion, also a member here. Here is one of his articles at another site.

Android - CPU Evolution - Android - abi>>forums

I think you'll find that when the S3 (Scorpion) is clocked at 1.5GHz it's roughly equivalent to an A9 at 1.2GHz.

I'll give Medion a shout to see if he can chime in here with the real facts. :)
 
Got your message :)

Tegra 3 is Cortex A9, so the 2.5 DMIPS per MHz figure is correct. However, Tegra 2 was a scaled-down Cortex A9, so while the DMIPS numbers were the same, NEON support was removed, as well as a few other things. Tegra 3 should be as fast as or slightly faster than Tegra 2 clock for clock, depending on the operation.

A lot of people have harped on me for criticizing the Tegra 3, as if I'm against quad-core on a phone. Truth is, Tegra 3 is simply under-powered for a quad-core. It's the same Cortex A9 we saw in OMAP4, Exynos 4, and Apple A5 last year. This year we'll see OMAP5, Exynos 5, and Apple's A6 (the latter is likely) using dual-core Cortex A15s. That's like comparing AMD's 1.3GHz Phenom X4 (entry level) to Intel's Core2Duos at 2.6GHz and above. Sorry, more cores do not beat higher-performing dual cores. Tegra 3 will be first out the door because it needs to be. OMAP5, Exynos 5, and Apple A6 will kill it. Qualcomm's Krait (Snapdragon S4) will also abuse it quite handily.

As for the GPU, Tegra 3 is using a new GeForce ULP. You'd think that, this being Nvidia and all, they'd have graphics locked down. However, once again they are a generation behind. Preliminary benchmarks on the Transformer Prime put the Tegra 3's GPU performance between the Mali 400MP4 (Exynos) and the PowerVR SGX543MP2 (Apple A5). Given what's coming down the pipe this year (PowerVR SGX544, Mali T-604, Adreno 225), this is a bit underwhelming.

As is typical, there will always be something better if you continue to wait. Tegra 3 is the first quad-core, but I'm not impressed. It's a worthy upgrade if you're still on the old single-core Snapdragon S1/S2, OMAP3, or Hummingbird chipsets. However, if you're on a modern dual-core, either upgrade to the dual-core A15s that come out the second half of this year, or for those who like to upgrade every 2 years, wait for the quad-core A15s in 2013 (which includes Tegra 4).

TLDR: The thread title is misleading, as Tegra 3 is Cortex A9. So, saying cortexA9 vs tegra 3 is like saying "Intel Core i7 vs. Dell XPS."
 

Excellent post. I too am disappointed with Tegra 3, and I am personally looking forward to OMAP5. I will say, however, that when it comes to games Nvidia spends a lot of time and effort helping devs get the most out of their SoC, with very good results. So even though Tegra 2 is behind OMAP4 and Exynos, the games tend to run better and look nicer on Tegra. If Nvidia continues that with Tegra 3, we could be looking at some nice games. Productivity-wise, I think the other SoCs should be better.
 

What is this S4 from Snapdragon you speak of?
Does it not use a Qualcomm chipset with an Adreno GPU?

Honestly... I still can't see Snapdragon beating Tegra 3, let alone the Cortex A9... just look at the HTC Sensation vs Galaxy S2. One thing I find very strange about the HTC Edge is that it is the first HTC device not to use Qualcomm.
 

I think what EarlyMon said above about the DMIPS estimates is very important for people to understand.
I am no Android expert, but I do know a few things about computers and am currently attending college as a CS major. Something that most people forget is that even if you have two CPUs clocked at the same speed, it doesn't mean they are the same, or even close for that matter. You need to see how they handle their instructions first.
 

Then you're letting prejudice about Qualcomm get the better of you.

Overall, an S3 at 1.5GHz is equivalent to an A9 at 1.2GHz. (Using CF-Bench or AnTuTu benchmarks as references, comparing HTC against the Galaxy S2.) The Qualcomm contains the NEON extension; Tegra 2 does not. The S3 can run its CPU cores at two independent clock speeds; the A9 cannot.

Then comes the S4.
 

S4 refers to Qualcomm's latest SoC, yet to be used in a phone. Krait is the new CPU core, in dual- or quad-core configurations running at up to 2.5GHz, and the graphics processor will be either the Adreno 225 or 305.

It should be faster than Tegra 3. I don't think Tegra 3 is that impressive; CPU performance will be fantastic if you're using all four cores, but most apps will only use one, so it won't feel any faster than existing phones.

Also, Tegra 3's graphics performance is right in between ARM's Mali 400MP from the GS2 and the PowerVR SGX543MP2 from the iPhone 4S/iPad 2.

Krait clock speeds look very impressive, but anyone here who knows anything about CPUs knows you can't just judge performance on clock speeds - it will be great for marketing, though...
 

Tegra 3 is Cortex A9, so they're the same. As for your Sensation vs. Galaxy S2 comparison, as Earlymon pointed out, they are largely comparable. Snapdragon S3 uses the Scorpion MPCore, which puts out 2.1 DMIPS per MHz. So, a 1.5GHz Snapdragon S3 puts out 3,150 DMIPS per core, 6,300 total. The Galaxy S2 is a dual-core Cortex A9 @ 1.2GHz. The A9 puts out 2.5 DMIPS per MHz, so that's 3,000 DMIPS per core, or 6,000 total. The Snapdragon S3 is 5% faster.

Snapdragon S4 will utilize the new Krait MPCore, which puts out 3.3 DMIPS per MHz. The first confirmed chipset from Qualcomm will be a dual-core @ 1.5GHz, which means 4,950 DMIPS per core or 9,900 total. Tegra 3, which is still A9-based, puts out 3,250 DMIPS per core or 13,000 total. However, you will never see a real-world application use 100% of each core; there are diminishing returns associated with multiple cores. Also, Krait can be clocked up to 2.5GHz in some configurations, so that first model I quoted was their low-end.

Tegra 3 is a relative beast and a harbinger of what is to come, but it will not be faster than Snapdragon S4, OMAP5, Exynos 5, or A6. The above are all using Krait (Qualcomm) or Cortex A15 (the others).
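If you want to sanity-check that arithmetic yourself, here's a quick back-of-the-envelope sketch (Python, purely illustrative) of the peak-DMIPS math above - remember these are theoretical ceilings, not real-world throughput:

```python
# Peak DMIPS = (DMIPS-per-MHz rating) x (clock in MHz) x (core count).
def total_dmips(dmips_per_mhz, clock_mhz, cores):
    return dmips_per_mhz * clock_mhz * cores

print(total_dmips(2.1, 1500, 2))  # Snapdragon S3 (Scorpion) @ 1.5GHz -> 6300.0
print(total_dmips(2.5, 1200, 2))  # Galaxy S2 (Cortex A9) @ 1.2GHz   -> 6000.0
print(total_dmips(3.3, 1500, 2))  # Snapdragon S4 (Krait) @ 1.5GHz   -> 9900.0
print(total_dmips(2.5, 1300, 4))  # Tegra 3 (Cortex A9) @ 1.3GHz     -> 13000.0
```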

One point I forgot to mention is power usage.
And what would be faster, comparing a Tegra 3 1.5GHz quad core to an Intel 2.53GHz Core 2 Duo?

In terms of power usage, the Tegra 3 uses a low-powered companion core for standby and low-power operations. It's on the same 40nm process as Tegra 2, so the companion core and load-sharing over the cores will give it a slight battery edge over Tegra 2. The other chipsets are either 32 or 28nm, giving them a huge leg up in battery life.

A Core2Duo would smoke a Tegra 3. The one Nvidia used in their benchmark was an original Core2 from early 2006, not the modern ones we have today. Even then, that benchmark was rigged to favor Tegra 3, and it came to a near tie.

The S3 is used in a number of phones; it's a renaming of their dual-core parts into a single name. The S4 will be the quad, so I think you have a small typo there.

Just a minor correction, but Qualcomm has stated that their first Krait-based chipsets will be in single-, dual-, and quad-core configurations clocked UP TO 2.5GHz, all under the name Snapdragon S4. The first confirmed chipset will be dual-core @ 1.5GHz, but that was announced in mid-2011. I think they're going to bump that if Samsung truly gets the Exynos 5250 out early.

EDIT: I skipped over Shocky's post, no need to quote it when everything is pretty much dead on :)
 
One point I forgot to mention is power usage.
And what would be faster, comparing a Tegra 3 1.5GHz quad core to an Intel 2.53GHz Core 2 Duo?

Just wanted to amend this part here. The performance of a Tegra 3 quad-core @ 1.5GHz is irrelevant because it does not exist. As you add cores to a chip you need to underclock due to thermal and other considerations. Nvidia has listed the official spec for Tegra 3 as 1.4GHz in single-core and up to 1.3GHz in multi-core operation. Since tablets tend to have better heat dissipation than smartphones, the 1.3GHz seen in the Transformer Prime should be noted as a best-case scenario. I would expect initial smartphones using Tegra 3 in a quad-core configuration to be clocked at 1.0-1.2GHz.

The lack of a die shrink (still on the 40nm process, same as Tegra 2) means that clock speeds can't be bumped. It's still a Cortex A9, and the best we've seen from the A9 in the 40-45nm process range has been 1.5GHz in a shipping product; most stay in the 1.0-1.2GHz range. Krait and the various A15-based SoCs are on either the 32nm or 28nm process, which is why you're hearing reports of clock speeds of 2.0-2.5GHz. So, a smaller process means better energy conservation, less heat dissipation, and higher clock speeds; a new core means more power per clock cycle. The A9 is last year's tech, and you're going to see that with Tegra 3.

So basically, Tegra 3 is 2011's technology pushed to its limits. It will outperform anything shipped in 2011. However, it's a mere stopgap until all of the A15- and Krait-based products ship. Rumors peg Krait as being available as early as March. That's two months where Nvidia has the top performer, and their next product doesn't come out for another year. That's not good for them.
 

Independent clock speeds? ...are those alarm clocks?
 

You lost me at nm... so 40 newton-meters of force will do what? What is nm in this context?
 
Independent clock speeds? ...are those alarm clocks?

No, not at all.

The GHz spec you are quoting is a clock speed specification. GHz means billions of something per second - that something is the CPU clock cycle. On each cycle, the CPU can fetch and execute instructions (for an app or the operating system), manipulate data, or a bit of both.

Mobile (and modern laptop) processors rated at some level - let's say 1.2 GHz - do not run the CPU cores at that speed constantly.

They will run from a low set point, often a few hundred MHz (a few hundred million CPU clock cycles per second), up to the rated _maximum_ of (for example) 1.2 GHz (1.2 billion CPU clock cycles per second).

When you think of Qualcomm vs Cortex-type chips (Exynos, Tegra, or anything claiming to be "A9" or "A15"), it's ok to think of them as kind of like AMD vs Intel - they run the same instruction sets, but they do things very differently under the hood.

In the case of the dual core Qualcomm processors, the two CPU cores have independent clock speeds - each core will run at whatever speed it needs. In an A9, both cores run at the same speed.
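If you like poking around, Android is Linux underneath, so you can read those set points straight from the kernel's cpufreq interface. A minimal sketch, assuming the standard sysfs paths (they can vary by device and kernel):

```python
# Read cpu0's scaling range and current clock from the standard Linux
# cpufreq sysfs files. Values are reported in kHz.
BASE = "/sys/devices/system/cpu/cpu0/cpufreq/"

def read_khz(name):
    with open(BASE + name) as f:
        return int(f.read().strip())

print("low set point :", read_khz("cpuinfo_min_freq") / 1000, "MHz")
print("rated maximum :", read_khz("cpuinfo_max_freq") / 1000, "MHz")
print("right now     :", read_khz("scaling_cur_freq") / 1000, "MHz")
```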

Hope that helps. :)
 
You lost me at nm... so 40 newton-meters of force will do what? What is nm in this context?

Nanometer, or 10^-9 meters, aka one billionth of a meter. It specifies a level of the computer-chip manufacturing process.

Semiconductors are made using what we call submicron (below a millionth of a meter) or nanometer processes. A "40 nm manufacturing process" dictates the minimum size of a particular, standardized semiconductor building block that can be made during manufacture.

The smaller the number, the less power is required, because you are moving fewer electrons across shorter distances. Usually the smaller number brings higher speed as well (for the same reason).
 

So just to see if I understand... the cores overclock and underclock themselves according to how much power is needed at the time. Wouldn't this save battery?

Is this also known as asynchronous?
 
So just to see if I understand... the cores overclock and underclock themselves according to how much power is needed at the time. Wouldn't this save battery?

That's exactly the right idea. To save ourselves confusion, we can simply say that the cores "clock variably" or have "variable clocks" - we then reserve the terms overclock and underclock for the rooting community, who will adopt various software modifications (to the operating system kernel) and then change the maximum allowed clock speed to be less than as-shipped-when-new (underclocked) or greater than that (overclocked).

You have exactly the right idea; there are just too few words available, so we use them that way by convention - common agreement, in other words.
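For the curious, here's a hypothetical sketch of what those rooting tools do under the hood - capping the kernel's maximum allowed clock through the same cpufreq interface (requires root, and the exact path varies by device):

```python
# Hypothetical illustration only: "underclock" cpu0 by lowering the kernel's
# maximum allowed clock. Requires root; the value is in kHz.
MAX_FREQ = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq"

def cap_max_clock(khz):
    with open(MAX_FREQ, "w") as f:  # raises PermissionError without root
        f.write(str(khz))

cap_max_clock(1000000)  # cap at 1.0 GHz, e.g. underclocking a 1.2 GHz stock chip
```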

Is this also known as asynchronous?
When two CPU cores can run at different speeds at the same time, we call that design "asynchronous clocking", and when the two cores must run at the same speed at the same time, we call that design "synchronous clocking" - or asynchronous and synchronous for short.
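You can see this on a live device by comparing each core's current clock (again assuming the standard cpufreq sysfs layout; on some kernels a powered-down core's entry disappears entirely):

```python
# Print every core's current clock; on an asynchronously clocked chip
# (e.g. a Qualcomm S3) the values routinely differ from core to core.
import glob

for path in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq")):
    core = path.split("/")[5]  # e.g. "cpu0"
    with open(path) as f:
        print(core, int(f.read()) // 1000, "MHz")
```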

Here is a picture of my phone's notifications, where I've added an indicator to tell me what is happening with my two CPU cores -

[screenshot: 2011-08-24_23-22-29.jpg]

Here, I took my phone with an S3 that came stock at 1.2 GHz and I overclocked it to 1.5 GHz, because I wanted the better performance I saw on the SGS2 at 1.2 GHz.

But in reality for my use at that point in that picture, both were clocked below that maximum (variable clock) and clocked differently from each other (asynchronous).

I don't mean to oversell you on the idea of asynchronous - just wanting you to know that there is more to this GHz business than the popular press would have you know.

In the end, it's the overall phone or tablet that counts, and you should get the one that's right for you. As you've learned, being armed with some knowledge about what's under the hood can help you decide what to believe and what to question.
 
There has been a lot of talk about whether Android will be able to use all 4 cores. But then what is this talk about it only being able to use 28nm cores and not 40nm cores... ain't the 28 more powerful?

There are a lot of design factors at work.

The manufacturing process size is just the size. They make things the best they can with an eye towards performance, efficiency and lowest cost (to keep it affordable to you).

We've yet to see the full details on all the quad-core design possibilities; we just know about things that are arriving now or that we've heard are coming.

If the extra cores can't be used properly, then makers won't include them - no one wants to sell something that costs more and gets laughed at because nothing was added for the cost.

The advantages in the real world vs. on paper are just beginning to hit the point where we'll know the real benefits that the quad cores will bring to the table.

PS - if the ASUS Transformer Prime tablet is any indication, the quad cores are going to offer a lot of benefit.
 
So just to see if I understand... the cores overclock and underclock themselves according to how much power is needed at the time. Wouldn't this save battery?

Is this also known as asynchronous?

It depends on the CPU governor, but most Android phones use the "ondemand" governor by default. This means that the CPU idles at a certain speed (245MHz for the Snapdragon S1, for example) and only ramps up as needed. When you raise the clock speed, you also raise the voltage needed, and power scales roughly with the square of the voltage, meaning that 1V uses roughly 4 times the battery power of 0.5V (used as an example, the numbers aren't exact).

This is where dual-core comes in. If a load can be split evenly, 2 cores @ 500MHz will use less power than 1 core at 1GHz. Load splitting is a way of reducing power consumption.
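To put toy numbers on that (the voltages below are invented for illustration), dynamic CPU power scales roughly with voltage squared times frequency:

```python
# Toy model: dynamic power ~ V^2 * f. Made-up voltages, but they show why
# two slow, low-voltage cores can use less power than one fast core.
def rel_power(volts, mhz):
    return volts ** 2 * mhz

one_core_fast  = rel_power(1.2, 1000)     # 1 core @ 1GHz, higher voltage   -> 1440.0
two_cores_slow = 2 * rel_power(0.9, 500)  # 2 cores @ 500MHz, lower voltage -> 810.0
print(one_core_fast, two_cores_slow)
```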

As for synchronous multi-processing (SMP) versus asynchronous (aSMP), there is a LOT to go into there. Earlymon and I have talked about it, and the truth is that the actual results are theoretical and conditional (if one method were truly superior to the other, then only one method would be used). Here's how they differ:

With SMP, both cores stay at the same speed all the time. If core 1 is at 500MHz, then core 2 is at 500MHz. The benefit to this is that the load is split evenly, but the downside is that the second core is rarely used at 100%, so this leads to perceived waste.

With aSMP, each core is clocked individually based on the load. Sure, this sounds better, but there is one HUGE drawback: only one core can access memory at a given time. So, let's take an example of two tasks that can be completed simultaneously, and let's call them Task A and Task B. In an SMP setup, Core 1 handles Task A and Core 2 handles Task B. They do this at the same time. On an aSMP setup, Core 1 handles Task A, and then when done, Core 2 handles Task B. They are done in order instead of simultaneously.

So the benefit here is that a dual-core 1.5GHz SMP setup cannot ever truly deliver 2x the performance of one core, while an aSMP setup can. The downside is that tasks cannot truly be completed in parallel. The theory is that they are done fast enough that the user can't possibly notice. Also, aSMP uses less power in most cases.

So there you have it, clear as mud :) Neither method is truly better, but it is my opinion that SMP is better for code written with parallel processing in mind, while aSMP is better for most modern mobile apps (which are coded with single-threading in mind). It should also be noted that each core in any of these CPUs is capable of multi-threading on a single core (similar to Intel's Hyper-Threading), so multi-threading can be done regardless of SMP vs. aSMP.
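Here's a toy sketch of that Task A / Task B example (Python; sleep() stands in for real work, so this only illustrates the in-order vs. simultaneous scheduling, not real CPU parallelism):

```python
# Run two independent one-second "tasks" back to back (the in-order,
# aSMP-style scenario described above), then simultaneously (SMP-style).
import time
from concurrent.futures import ThreadPoolExecutor

def task(name):
    time.sleep(1)  # stand-in for one second of real work
    return name

start = time.time()
task("A"); task("B")  # one after the other
print(f"in order:     {time.time() - start:.1f}s")  # ~2.0s

start = time.time()
with ThreadPoolExecutor(max_workers=2) as pool:
    list(pool.map(task, ["A", "B"]))  # both at once
print(f"simultaneous: {time.time() - start:.1f}s")  # ~1.0s
```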

There has been a lot of talk about whether Android will be able to use all 4 cores. But then what is this talk about it only being able to use 28nm cores and not 40nm cores... ain't the 28 more powerful?

As for using all 4 cores, that depends on what you're doing. In order to properly use an SMP setup (which Tegra 3 uses), code has to be written to execute simultaneously; the application must be doing 4 things at once, and you cannot possibly design an app to do this all the time, so you will not get 100% utilization of all cores. Now, there is spillover for single-threaded apps, where one core picks up the slack for another. For example, if you're playing Angry Birds, the game might be offloaded to Core 2, audio processing to Core 3, and physics processing to Core 4, while Core 1 handles any system tasks. This would prevent the system from lagging at any time, because anything that might pop up (notifications, syncing, text messages, etc.) will be handled by the first core. But this isn't an issue with Android; it's a common computing issue in general.

As for the process size (40nm vs. 28nm), no special coding is required. As Earlymon stated, this is just a manufacturing spec.
 