Do any JVM's JIT compilers generate code that uses vectorized floating point instructions?

Let's say the bottleneck of my Java program really is some tight loops to compute a bunch of vector dot products. Yes I've profiled, yes it's the bottleneck, yes it's significant, yes that's just how the algorithm is, yes I've run Proguard to optimize the byte code, etc.

The work is, essentially, dot products. As in, I have two float[50] arrays and I need to compute the sum of pairwise products. I know processor instruction sets exist to perform these kinds of operations quickly and in bulk, like SSE or MMX.
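For concreteness, the hot loop is essentially this (a minimal sketch; the names are illustrative):

static float dot(float[] a, float[] b) {
    float sum = 0f;
    // Plain scalar loop: one multiply and one add per element.
    for (int i = 0; i < a.length; i++) {
        sum += a[i] * b[i];
    }
    return sum;
}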

Yes, I can probably access these by writing some native code and calling it through JNI, but the JNI call itself turns out to be pretty expensive.

I know you can't guarantee what a JIT will or won't compile. Has anyone ever heard of a JIT generating code that uses these instructions? And if so, is there anything about the Java code that helps make it compilable this way?

Probably a "no", but it's worth asking.


So, basically, you want your code to run faster. JNI is the answer. I know you said it didn't work for you, but let me show you that you are wrong.


import java.nio.FloatBuffer;
import org.bytedeco.javacpp.*;
import org.bytedeco.javacpp.annotation.*;

@Platform(include = "Dot.h", compiler = "fastfpu")
public class Dot {
    static { Loader.load(); }

    static float[] a = new float[50], b = new float[50];

    // Pure Java version of the dot product.
    static float dot() {
        float sum = 0;
        for (int i = 0; i < 50; i++) {
            sum += a[i] * b[i];
        }
        return sum;
    }

    static native @MemberGetter FloatPointer ac();
    static native @MemberGetter FloatPointer bc();
    static native float dotc();

    public static void main(String[] args) {
        FloatBuffer ab = ac().capacity(50).asBuffer();
        FloatBuffer bb = bc().capacity(50).asBuffer();

        // Warm up both versions so the JIT compiles them.
        for (int i = 0; i < 10000000; i++) {
            a[i%50] = b[i%50] = dot();
            float sum = dotc();
            ab.put(i%50, sum);
            bb.put(i%50, sum);
        }
        // Time the pure Java version.
        long t1 = System.nanoTime();
        for (int i = 0; i < 10000000; i++) {
            a[i%50] = b[i%50] = dot();
        }
        // Time the native version.
        long t2 = System.nanoTime();
        for (int i = 0; i < 10000000; i++) {
            float sum = dotc();
            ab.put(i%50, sum);
            bb.put(i%50, sum);
        }
        long t3 = System.nanoTime();
        System.out.println("dot(): " + (t2 - t1)/10000000 + " ns");
        System.out.println("dotc(): " + (t3 - t2)/10000000 + " ns");
    }
}

and Dot.h:

float ac[50], bc[50];

inline float dotc() {
    float sum = 0;
    for (int i = 0; i < 50; i++) {
        sum += ac[i]*bc[i];
    }
    return sum;
}

We can compile and run that with JavaCPP using commands like these:

$ javac -cp javacpp.jar Dot.java
$ java -jar javacpp.jar Dot
$ java -cp javacpp.jar:. Dot

With an Intel Core i7-3632QM CPU @ 2.20GHz, Fedora 20, GCC 4.8.3, and OpenJDK 7 or 8, I get this kind of output:

dot(): 37 ns
dotc(): 23 ns

Or roughly 1.6 times faster. We need to use direct NIO buffers instead of arrays, but HotSpot can access direct NIO buffers as fast as arrays. On the other hand, manually unrolling the loop does not provide a measurable boost in performance, in this case.
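For reference, here is a minimal, self-contained sketch of the direct-buffer access pattern referred to above (class name and sizes are illustrative):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class DirectDot {
    public static void main(String[] args) {
        // Direct buffers live outside the Java heap; HotSpot compiles
        // get()/put() on them down to plain memory accesses.
        FloatBuffer a = ByteBuffer.allocateDirect(50 * 4)
                .order(ByteOrder.nativeOrder()).asFloatBuffer();
        FloatBuffer b = ByteBuffer.allocateDirect(50 * 4)
                .order(ByteOrder.nativeOrder()).asFloatBuffer();

        float sum = 0f;
        for (int i = 0; i < 50; i++) {
            sum += a.get(i) * b.get(i);
        }
        System.out.println(sum);
    }
}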

To address some of the scepticism expressed by others here, I suggest that anyone who wants to prove it to themselves or others use the following method:

  • Create a JMH project.
  • Write a small snippet of vectorizable math.
  • Run the benchmark, flipping between -XX:-UseSuperWord and -XX:+UseSuperWord (the default).
  • If no difference in performance is observed, the code probably didn't get vectorized.
  • To make sure, run the benchmark so that it prints out the assembly. On Linux you can use the perfasm profiler ('-prof perfasm') to have a look and see whether the instructions you expect get generated.


@CompilerControl(CompilerControl.Mode.DONT_INLINE) // makes looking at the assembly easier
public void inc() {
    for (int i = 0; i < a.length; i++)
        a[i]++; // a is an int[]; I benchmarked with size 32K
}
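To flip the flag from code rather than from the command line, a runner along these lines should work (a sketch; the VectorMath name matches the assembly output shown below, and the rest is standard JMH API):

import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class SuperWordCheck {
    public static void main(String[] args) throws RunnerException {
        // Run once as-is, then flip the appended flag to
        // -XX:+UseSuperWord (the default) and compare the scores.
        Options opt = new OptionsBuilder()
                .include("VectorMath")
                .jvmArgsAppend("-XX:-UseSuperWord")
                .build();
        new Runner(opt).run();
    }
}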

The result with and without the flag (on a recent Haswell laptop, Oracle JDK 8u60), in nanoseconds per op:

-XX:+UseSuperWord :  475.073 ±  44.579 ns/op
-XX:-UseSuperWord : 3376.364 ± 233.211 ns/op

The assembly for the hot loop is a bit much to format and stick in here, but here's a snippet (the disassembler is failing to format some of the AVX2 vector instructions, so I ran with -XX:UseAVX=1): -XX:+UseSuperWord (with '-prof perfasm:intelSyntax=true')

  9.15%   10.90%  │││ │↗    0x00007fc09d1ece60: vmovdqu xmm1,XMMWORD PTR [r10+r9*4+0x18]
 10.63%    9.78%  │││ ││    0x00007fc09d1ece67: vpaddd xmm1,xmm1,xmm0
 12.47%   12.67%  │││ ││    0x00007fc09d1ece6b: movsxd r11,r9d
  8.54%    7.82%  │││ ││    0x00007fc09d1ece6e: vmovdqu xmm2,XMMWORD PTR [r10+r11*4+0x28]
                  │││ ││                                                  ;*iaload
                  │││ ││                                                  ; - psy.lob.saw.VectorMath::inc@17 (line 45)
 10.68%   10.36%  │││ ││    0x00007fc09d1ece75: vmovdqu XMMWORD PTR [r10+r9*4+0x18],xmm1
 10.65%   10.44%  │││ ││    0x00007fc09d1ece7c: vpaddd xmm1,xmm2,xmm0
 10.11%   11.94%  │││ ││    0x00007fc09d1ece80: vmovdqu XMMWORD PTR [r10+r11*4+0x28],xmm1
                  │││ ││                                                  ;*iastore
                  │││ ││                                                  ; - psy.lob.saw.VectorMath::inc@20 (line 45)
 11.19%   12.65%  │││ ││    0x00007fc09d1ece87: add    r9d,0x8            ;*iinc
                  │││ ││                                                  ; - psy.lob.saw.VectorMath::inc@21 (line 44)
  8.38%    9.50%  │││ ││    0x00007fc09d1ece8b: cmp    r9d,ecx
                  │││ │╰    0x00007fc09d1ece8e: jl     0x00007fc09d1ece60  ;*if_icmpge

Have fun storming the castle!

In HotSpot versions beginning with Java 7u40, the server compiler provides support for auto-vectorisation, according to JDK-6340864.

However, this seems to be true only for "simple loops", at least for the moment. For example, accumulating an array cannot be vectorised yet (JDK-7192383).
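To illustrate the distinction (a sketch, assuming a, b, and c are float arrays): the first loop below is the kind of "simple loop" the superword pass can vectorise, while the second is the kind of accumulation JDK-7192383 covers:

// Element-wise loop: iterations are independent, so the superword
// pass can turn it into SIMD loads, multiplies, and stores.
for (int i = 0; i < a.length; i++) {
    c[i] = a[i] * b[i];
}

// Accumulation: each iteration depends on the running sum, which the
// compiler of that era could not vectorise (see JDK-7192383).
float sum = 0f;
for (int i = 0; i < a.length; i++) {
    sum += a[i] * b[i];
}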

Here is a nice article, written by my friend, about experimenting with Java and SIMD instructions:

Its general conclusion is that you can expect the JIT to use some SSE operations in 1.8 (and some more in 1.9), though you should not expect much, and you need to be careful.

You could write an OpenCL kernel to do the computing and run it from Java.

The code can run on the CPU and/or the GPU, and the OpenCL language also supports vector types, so you should be able to take explicit advantage of, for example, SSE3/4 instructions.
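As a minimal sketch (the kernel name and signature are illustrative), such a kernel could use OpenCL's float4 vector type, here embedded as a Java string for use with a binding such as JOCL:

public class DotKernel {
    // OpenCL C kernel using the float4 vector type: each work-item
    // computes a 4-wide partial dot product with the built-in dot();
    // the host sums the partial results afterwards.
    static final String KERNEL_SOURCE =
        "__kernel void dot4(__global const float4 *a,\n" +
        "                   __global const float4 *b,\n" +
        "                   __global float *partial) {\n" +
        "    int i = get_global_id(0);\n" +
        "    partial[i] = dot(a[i], b[i]);\n" +
        "}\n";
}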

I'm guessing you wrote this question before you found out about netlib-java ;-) It provides exactly the native API you require, with machine-optimised implementations, and has no cost at the native boundary, thanks to memory pinning.
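For example, a sketch assuming netlib-java's com.github.fommil.netlib artifact (sdot follows the standard BLAS convention):

import com.github.fommil.netlib.BLAS;

public class NetlibDot {
    public static void main(String[] args) {
        float[] a = new float[50], b = new float[50];
        // sdot(n, x, incx, y, incy): single-precision dot product,
        // delegated to a machine-optimised native BLAS when available.
        float sum = BLAS.getInstance().sdot(50, a, 1, b, 1);
        System.out.println(sum);
    }
}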

Have a look at Performance comparison between Java and JNI for optimal implementation of computational micro-kernels. It shows that the Java HotSpot VM server compiler supports auto-vectorization using Super-word Level Parallelism, which is limited to simple cases of inside-the-loop parallelism. The article will also give you some guidance on whether your data size is large enough to justify taking the JNI route.

I don't believe most VMs, if any, are ever smart enough for this sort of optimisation. To be fair, most optimisations are much simpler, such as using a shift instead of a multiplication by a power of two. The Mono project introduced its own vector types and other methods with native backings to help performance.
