Does gcc support 128-bit int on amd64?



Answers


GCC supports the built-in __int128 and unsigned __int128 types (on 64-bit platforms only), but formatting support for 128-bit integers is generally missing from libc: glibc's printf has no conversion specifier for them, so you have to format the value yourself (a sketch follows below).
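A minimal sketch of doing that by hand, assuming you only need decimal output; the helper name print_u128 is hypothetical, not part of any library:

#include <stdio.h>

/* Hypothetical helper: format an unsigned __int128 in decimal by
   repeated division, since printf has no 128-bit conversion. */
static void print_u128(unsigned __int128 v)
{
    char buf[40];                 /* 2^128 - 1 has 39 decimal digits */
    char *p = buf + sizeof buf;
    *--p = '\0';
    do {
        *--p = '0' + (char)(v % 10);
        v /= 10;
    } while (v != 0);
    fputs(p, stdout);
}

int main(void)
{
    unsigned __int128 x = (unsigned __int128)1 << 100;
    print_u128(x);                /* prints 1267650600228229401496703205376 */
    putchar('\n');
    return 0;
}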

Note: before GCC 4.6 the types were only available under the names __int128_t and __uint128_t (they are compiler built-ins, not declarations from <stdint.h>). See also Is there a 128 bit integer in gcc? for a table of gcc/clang/ICC versions.

See also How to know if __uint128_t is defined, on detecting __int128 support at compile time.
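The usual compile-time check is a brief sketch like the one below; __SIZEOF_INT128__ is the predefined macro GCC and Clang provide when the type is available:

/* Detect 128-bit integer support at compile time. */
#if defined(__SIZEOF_INT128__)
typedef __int128          int128;
typedef unsigned __int128 uint128;
#else
#error "this compiler/target does not provide __int128"
#endif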


/* 128-bit addition; on x86-64 GCC lowers this to an ADD/ADC pair. */
void f(__int128 *res, __int128 *op1, __int128 *op2)
{
    *res = *op1 + *op2;
}

Save to test.c and compile with:

$ gcc -c -O3 test.c
$ objdump -d -M intel test.o

You get:

mov    rcx, rdx
mov    rax, [rsi]
mov    rdx, [rsi+0x8]

add    rax, [rcx]
adc    rdx, [rcx+0x8]

mov    [rdi], rax
mov    [rdi+0x8], rdx

As you can see, the __int128 type is implemented as two consecutive 64-bit words, and arithmetic on it follows the typical big-integer pattern of paired instructions: ADD for the low halves, then ADC (add with carry) for the high halves.
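For comparison, here is roughly what that lowering looks like when written by hand with two 64-bit halves; the u128 struct and add_u128 helper are purely illustrative names, not something the compiler or the answer defines:

#include <stdint.h>

typedef struct { uint64_t lo, hi; } u128;

static u128 add_u128(u128 a, u128 b)
{
    u128 r;
    r.lo = a.lo + b.lo;                  /* corresponds to the ADD  */
    r.hi = a.hi + b.hi + (r.lo < a.lo);  /* corresponds to the ADC: carry out of the low half */
    return r;
}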

