I read that too, but I find a 450% performance increase hard to believe. Very skeptical.
That's 450% for a specific, CPU-bound task. LINPACK is a benchmark of pure number-crunching: it does a lot of the same calculations, over and over, in tight loops. JIT (Just-In-Time) compilation is great for this - it identifies the places in an application where a lot of work is going on, and converts those spots to native code.
Java is normally interpreted. A developer writes a program, which is compiled into "bytecode" - an intermediate form that the Java virtual machine executes (Dalvik, on Android). So when executing Java, there's the overhead of examining that bytecode and then telling the underlying hardware (an ARM CPU, on Android) what to do, instruction by instruction. JIT bypasses that: it pre-converts the bytecode directly into native code (in this case, ARM CPU instructions), so there's no need to re-interpret the code each time it runs.
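To make that overhead concrete, here's a toy stack-machine interpreter - the opcodes and instruction set are invented for illustration and have nothing to do with Dalvik's real bytecode format, but the fetch/decode/dispatch loop is the part a JIT eliminates by compiling the whole program to straight-line native code:

```java
// Toy bytecode interpreter: every instruction pays for a fetch and a
// switch dispatch, which is the per-instruction overhead JIT removes.
// (Invented opcodes for illustration - not real Dalvik bytecode.)
public class ToyInterpreter {
    static final int PUSH = 0, ADD = 1, MUL = 2, HALT = 3;

    static int run(int[] code) {
        int[] stack = new int[16];
        int sp = 0, pc = 0;
        while (true) {
            int op = code[pc++];           // fetch: overhead on every step
            switch (op) {                  // decode/dispatch: more overhead
                case PUSH: stack[sp++] = code[pc++]; break;
                case ADD:  sp--; stack[sp - 1] += stack[sp]; break;
                case MUL:  sp--; stack[sp - 1] *= stack[sp]; break;
                case HALT: return stack[sp - 1];
            }
        }
    }

    public static void main(String[] args) {
        // Computes (2 + 3) * 4, one interpreted instruction at a time.
        int[] program = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT };
        System.out.println(run(program)); // prints 20
    }
}
```

A JIT would notice this loop running hot and emit the equivalent ARM instructions once, instead of paying the fetch-and-switch cost on every pass.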
Most typical applications won't see that kind of speed boost. But most applications do spend most of their time in small parts of their code (the old 80-20 rule: roughly 80% of the time in 20% of the code, on average). Speeding up just those heavily-used parts can have a big impact on overall speed. Not 450% - that's probably close to the best possible case - but I'd believe a typical app would see at least a 50% speed boost, and most would probably see 100% or better.
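For a sense of what such a hot spot looks like, here's an illustrative LINPACK-style inner loop (a plain dot product, chosen as an example - not actual LINPACK code). In a numeric program, a method like this can dominate the runtime, so JIT-compiling just this one method to native ARM code speeds up the whole program:

```java
// Illustrative hot spot: the kind of "same calculation, over and over,
// in tight loops" that a JIT identifies and compiles to native code.
public class HotLoop {
    // Dot product: one multiply-add per iteration, run millions of times
    // in real numeric workloads - the 20% of code taking 80% of the time.
    static double dot(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            sum += a[i] * b[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        double[] x = {1.0, 2.0, 3.0};
        double[] y = {4.0, 5.0, 6.0};
        System.out.println(dot(x, y)); // prints 32.0
    }
}
```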
Two caveats. First, this won't make the network any faster: if the browser is waiting on a website to send it data, JIT makes no difference. Second, a lot of games are already compiled to native code for speed, so JIT won't make any difference for them either.