# What is JavaScript's highest integer value that a number can go to without losing precision?

Is this defined by the language? Is there a defined maximum? Is it different in different browsers?

## Answers

**+/- 9007199254740991**

Note that all the positive and negative integers whose magnitude is no greater than 2^53 are representable in the Number type (indeed, the integer 0 has two representations, +0 and −0).

They are 64-bit floating-point values; the largest exact integral value is 2^53 - 1, or 9007199254740991. In ES6, this is defined as Number.MAX_SAFE_INTEGER.

Note that the bitwise operators and shift operators operate on 32-bit ints, so in that case, the max safe integer is 2^31 - 1, or 2147483647.

Test it out!

```javascript
var x = 9007199254740992;
var y = -x;
x == x + 1; // true !
y == y - 1; // also true !

// Arithmetic operators work, but bitwise/shift operators only operate on int32:
x / 2;   // 4503599627370496
x >> 1;  // 0
x | 1;   // 1
```

Technical note on the subject of the number 9007199254740992: There is an exact IEEE-754 representation of this value, and you can assign and read this value from a variable, so for *very carefully* chosen applications in the domain of integers less than or equal to this value, you could treat this as a maximum value.

In the general case, you must treat this IEEE-754 value as inexact, because it is ambiguous whether it is encoding the logical value 9007199254740992 or 9007199254740993.
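You can see that ambiguity directly in any JavaScript console; the odd literal silently parses to its even neighbor:

```javascript
var a = 9007199254740992;
var b = 9007199254740993; // parses to the nearest representable double: 9007199254740992
console.log(a === b);                 // true
console.log(Number.isSafeInteger(a)); // false: 2^53 itself is past the safe range
```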

**>= ES6:**

```javascript
Number.MIN_SAFE_INTEGER;
Number.MAX_SAFE_INTEGER;
```

**<= ES5**

From the reference:

```javascript
Number.MAX_VALUE;
Number.MIN_VALUE;
```

```javascript
console.log('MIN_VALUE', Number.MIN_VALUE);
console.log('MAX_VALUE', Number.MAX_VALUE);

console.log('MIN_SAFE_INTEGER', Number.MIN_SAFE_INTEGER); // ES6
console.log('MAX_SAFE_INTEGER', Number.MAX_SAFE_INTEGER); // ES6
```

It is 2^53 == 9 007 199 254 740 992. This is because Numbers are stored as double-precision floating point with a 52-bit mantissa (plus one implicit leading bit).

The min value is -2^53.

This leads to some fun behavior:

```javascript
Math.pow(2, 53) == Math.pow(2, 53) + 1; // true
```

And can also be dangerous :)

```javascript
var MAX_INT = Math.pow(2, 53); // 9 007 199 254 740 992
for (var i = MAX_INT; i < MAX_INT + 2; ++i) {
  // infinite loop
}
```

Further reading: http://blog.vjeux.com/2010/javascript/javascript-max_int-number-limits.html

In JavaScript, there is a number called Infinity.

Examples:

```javascript
Infinity > 100;                 // true
// Also worth noting:
Infinity - 1 == Infinity;       // true
Math.pow(2, 1024) === Infinity; // true
```
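Overflow to Infinity happens once the exponent range of the 64-bit double is exhausted, at roughly 2^1024:

```javascript
console.log(Number.MAX_VALUE);      // 1.7976931348623157e+308, the largest finite double
console.log(Number.MAX_VALUE * 2);  // Infinity: overflow past the exponent range
console.log(Math.pow(2, 1023) * 2); // Infinity: 2^1024 is not representable
console.log(1 / Infinity);          // 0
```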

This may be sufficient for some questions regarding this topic.

Jimmy's answer correctly represents the continuous JavaScript integer spectrum as **-9007199254740992** to **9007199254740992** inclusive (sorry 9007199254740993, you might think you are 9007199254740993, but you are wrong!
*Demonstration below or in jsfiddle*).

```javascript
document.write(9007199254740993); // writes 9007199254740992
```

# To be safe

```javascript
var MAX_INT = 4294967295;
```

# Reasoning

I thought I'd be clever and find the value at which x + 1 === x with a more pragmatic approach.

My machine can only count 10 million per second or so... so I'll post back with the definitive answer in 28.56 years.

If you can't wait that long, I'm willing to bet that

- Most of your loops don't run for 28.56 years
- 9007199254740992 === Math.pow(2, 53) + 1 is proof enough
- You should stick to 4294967295, which is Math.pow(2, 32) - 1, to avoid issues with bit-shifting

Finding x + 1 === x:

```javascript
(function () {
  "use strict";

  var x = 0,
      start = new Date().valueOf();

  while (x + 1 != x) {
    if (!(x % 10000000)) {
      console.log(x);
    }
    x += 1;
  }

  console.log(x, new Date().valueOf() - start);
}());
```

ECMAScript 6:

```javascript
Number.MAX_SAFE_INTEGER === Math.pow(2, 53) - 1;      // true
Number.MIN_SAFE_INTEGER === -Number.MAX_SAFE_INTEGER; // true
```

The short answer is “it depends.”

If you’re using bitwise operators anywhere (or if you’re referring to the length of an Array), the ranges are:

Unsigned: `0…(-1>>>0)`

Signed: `(-(-1>>>1)-1)…(-1>>>1)`

(It so happens that the bitwise operators and the maximum length of an array are restricted to 32-bit integers.)
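Those expressions can be checked directly (note the parentheses below: `>>>` binds looser than `===`):

```javascript
console.log(-1 >>> 0); // 4294967295, the max unsigned 32-bit value (and max array length)
console.log(-1 >>> 1); // 2147483647, the max signed 32-bit value

var arr = new Array(4294967295); // largest permitted length (a sparse array, no memory cost)
console.log(arr.length);         // 4294967295
// new Array(4294967296);        // throws RangeError: invalid array length
```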

If you’re not using bitwise operators or working with array lengths:

Signed: `(-Math.pow(2,53))…(+Math.pow(2,53))`

These limitations are imposed by the internal representation of the “Number” type, which generally corresponds to IEEE 754 double-precision floating-point representation. (Note that unlike typical signed integers, the magnitude of the negative limit is the same as the magnitude of the positive limit, due to characteristics of the internal representation, which actually includes a *negative* 0!)
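The two zeros really are distinct values, although `===` hides the difference:

```javascript
console.log(0 === -0);         // true: strict equality treats them as equal
console.log(Object.is(0, -0)); // false: they are distinct IEEE-754 values
console.log(1 / -0);           // -Infinity, which exposes the sign of the zero
```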

Others may have already given the generic answer, but I thought it would be a good idea to give a fast way of determining it:

```javascript
for (var x = 2; x + 1 !== x; x *= 2);
console.log(x);
```

Which gives me 9007199254740992 in less than a millisecond in Chrome 30.

It tests powers of 2 to find the first one that, when incremented by 1, equals itself.

Many earlier answers show that 9007199254740992 === 9007199254740992 + 1 is true to establish that **9 007 199 254 740 991** is the maximum safe integer.

What if we keep adding to it:

```
input: 9007199254740992 + 1
output: 9007199254740992  // expected: 9007199254740993

input: 9007199254740992 + 2
output: 9007199254740994  // expected: 9007199254740994

input: 9007199254740992 + 3
output: 9007199254740996  // expected: 9007199254740995

input: 9007199254740992 + 4
output: 9007199254740996  // expected: 9007199254740996
```

So we find that among numbers greater than **9 007 199 254 740 992**, only even numbers are **representable**.

Here is a brief explanation of how the **double-precision 64-bit binary format** causes this. Let's look at how **9 007 199 254 740 992** is held (represented) in this format.

We start from **4 503 599 627 370 496** (2^52), with an abbreviated version of the format first:

```
  1 . 0000 ---- 0000 * 2^52        =>  1 0000 ---- 0000.
      |-- 52 bits --| |exponent|           |-- 52 bits --|
```

On the left side of the arrow, we have **bit value 1** and an adjacent **radix point**; multiplying by 2^52 moves the radix point 52 places to the right, so it ends up at the end. This gives 4503599627370496 in binary.

Now we keep adding 1 to this value until all the fraction bits are set to 1, which equals **9 007 199 254 740 991** in decimal.

```
  1 . 0000 ---- 0000 * 2^52  =>  1 0000 ---- 0000.
(+1)
  1 . 0000 ---- 0001 * 2^52  =>  1 0000 ---- 0001.
(+1)
  1 . 0000 ---- 0010 * 2^52  =>  1 0000 ---- 0010.
  .
  .
  .
  1 . 1111 ---- 1111 * 2^52  =>  1 1111 ---- 1111.
```

Now, because the **double-precision 64-bit binary format** strictly allots 52 bits to the fraction, no further bit is available to carry the next +1, so all we can do is set every fraction bit back to 0 and manipulate the exponent part:

```
  |--> This bit is implicit and persistent.
  |
  1 . 1111 ---- 1111 * 2^52      =>  1 1111 ---- 1111.
      |-- 52 bits --|                    |-- 52 bits --|

                 (+1)  (radix point has no way to go)

  1 . 0000 ---- 0000 * 2^52 * 2  =>  1 0000 ---- 0000. * 2
      |-- 52 bits --|                    |-- 52 bits --|

  =>  1 . 0000 ---- 0000 * 2^53
          |-- 52 bits --|
```

Now we get **9 007 199 254 740 992**, and for numbers greater than it, the fraction counts in steps of **2**, so only even numbers can be held:

```
  (consume 2^52 to move the radix point to the end)
  1 . 0000 ---- 0001 * 2^53  =>  1 0000 ---- 0001. * 2
      |-- 52 bits --|                |-- 52 bits --|
```

So for numbers greater than 9 007 199 254 740 992 * 2 = 18 014 398 509 481 984, the fraction counts in steps of **4**:

```
input: 18014398509481984 + 1
output: 18014398509481984  // expected: 18014398509481985

input: 18014398509481984 + 2
output: 18014398509481984  // expected: 18014398509481986

input: 18014398509481984 + 3
output: 18014398509481988  // expected: 18014398509481987

input: 18014398509481984 + 4
output: 18014398509481988  // expected: 18014398509481988
```

How about numbers between **2 251 799 813 685 248** (2^51) and **4 503 599 627 370 496** (2^52)?

```
  1 . 0000 ---- 0001 * 2^51  =>  1 0000 ---- 000.1
      |-- 52 bits --|              |-- 52 bits --|
```

The bit value 1 after the radix point is exactly 2^-1 (= 1/2 = 0.5). So for numbers less than **4 503 599 627 370 496** (2^52), one bit is available to represent **halves of the integer**:

```
input: 4503599627370495.5
output: 4503599627370495.5

input: 4503599627370495.75
output: 4503599627370496  // not representable; rounded to the nearest double (ties go to the even significand)
```

For numbers less than **2 251 799 813 685 248** (2^51), quarters become available as well:

```
input: 2251799813685246.75
output: 2251799813685246.8  // expected: 2251799813685246.75

input: 2251799813685246.25
output: 2251799813685246.2  // expected: 2251799813685246.25

input: 2251799813685246.5
output: 2251799813685246.5

// If the digits exceed 17, JavaScript rounds them for printing,
// but the value is held correctly:

input: 2251799813685246.25.toString(2)
output: "111111111111111111111111111111111111111111111111110.01"

input: 2251799813685246.75.toString(2)
output: "111111111111111111111111111111111111111111111111110.11"

input: 2251799813685246.78.toString(2)
output: "111111111111111111111111111111111111111111111111110.11"
```

And what is the available range of the **exponent part**? The format allots 11 bits for it. The complete layout is described in the Wikipedia article on the double-precision floating-point format.

So to get 2^52 in the exponent part, we need to set the stored exponent e = 1075 (the bias is 1023, and 2^(1075 - 1023) = 2^52).
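You can verify that stored exponent by reading the raw bits of a double with a DataView; this is just a sketch, and the helper name `rawExponent` is made up for illustration:

```javascript
// Read the raw 11-bit exponent field of a double.
function rawExponent(x) {
  var view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x); // big-endian: byte 0 holds the sign bit and exponent high bits
  // The first 16 bits are: sign (1) + exponent (11) + top 4 fraction bits.
  return (view.getUint16(0) >> 4) & 0x7ff;
}

console.log(rawExponent(Math.pow(2, 52))); // 1075 (biased: 52 + 1023)
console.log(rawExponent(1));               // 1023 (the bias itself)
```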

Anything you want to use for bitwise operations must be between 0x80000000 (-2147483648 or -2^31) and 0x7fffffff (2147483647 or 2^31 - 1).

The console will tell you that 0x80000000 equals +2147483648, but 0x80000000 & 0x80000000 equals -2147483648.

Try:

```javascript
maxInt = -1 >>> 1;
```

In Firefox 3.6 it's 2^31 - 1.

I did a simple test with a formula, X-(X+1)=-1, and the largest value of X I can get to work on Safari, Opera and Firefox (tested on OS X) is 9e15. Here is the code I used for testing:

```javascript
javascript: alert(9e15 - (9e15 + 1));
```

I write it like this:

```javascript
var max_int = 0x20000000000000;
var min_int = -0x20000000000000;
(max_int + 1) === 0x20000000000000; // true
(max_int - 1) < 0x20000000000000;   // true
```

Same for int32

```javascript
var max_int32 = 0x80000000;
var min_int32 = -0x80000000;
```

# Let's get to the sources

##### Description

The MAX_SAFE_INTEGER constant has a value of 9007199254740991 (9,007,199,254,740,991, or ~9 quadrillion). The reasoning behind that number is that JavaScript uses double-precision floating-point format numbers as specified in IEEE 754 and can only safely represent numbers between -(2^53 - 1) and 2^53 - 1.

Safe in this context refers to the ability to represent integers exactly and to correctly compare them. For example, Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2 will evaluate to true, which is mathematically incorrect. See Number.isSafeInteger() for more information.
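Those statements can be checked directly:

```javascript
console.log(Number.isSafeInteger(Number.MAX_SAFE_INTEGER));     // true
console.log(Number.isSafeInteger(Number.MAX_SAFE_INTEGER + 1)); // false

// Past the safe range, distinct expressions can collapse to the same double:
console.log(Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2); // true
```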

Because MAX_SAFE_INTEGER is a static property of Number, you always use it as Number.MAX_SAFE_INTEGER, rather than as a property of a Number object you created.

##### Browser compatibility

At the moment of writing, JavaScript is receiving a new data type: BigInt. It is a TC39 proposal at stage 3. BigInt has been shipped in Chrome and is underway in Node, Firefox, and Safari... It introduces numerical literals having an "n" suffix and allows for arbitrary precision:

```javascript
var a = 123456789012345678901012345678901n;
```

Precision will still be lost, of course, when such a number is (maybe unintentionally) coerced to a number data type.
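A short sketch of that trade-off (requires a BigInt-capable engine):

```javascript
// BigInt arithmetic is exact; converting back to Number can silently round:
var big = 9007199254740993n; // one past 2^53, not representable as a double
console.log(big + 1n);       // 9007199254740994n, exact
console.log(Number(big));    // 9007199254740992: rounded on coercion to Number
```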

In Google Chrome's built-in JavaScript, you can go to approximately 2^1024 before the number becomes Infinity.

Scato wrote:

> anything you want to use for bitwise operations must be between 0x80000000 (-2147483648 or -2^31) and 0x7fffffff (2147483647 or 2^31 - 1).
>
> the console will tell you that 0x80000000 equals +2147483648, but 0x80000000 & 0x80000000 equals -2147483648.

Hexadecimal literals are unsigned positive values, so 0x80000000 = 2147483648, which is mathematically correct. If you want to make it a signed value, you have to right-shift: 0x80000000 >> 0 = -2147483648. You can also write 1 << 31.
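A quick demonstration of the reinterpretation (any 32-bit bitwise operation runs the operand through ToInt32):

```javascript
console.log(0x80000000);      // 2147483648: the literal itself is an unsigned value
console.log(0x80000000 >> 0); // -2147483648: ToInt32 reinterprets the bit pattern as signed
console.log(0x80000000 | 0);  // -2147483648: any 32-bit bitwise op does the same
console.log(1 << 31);         // -2147483648
```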

Number.MAX_VALUE represents the maximum numeric value representable in JavaScript.

Since no one seems to have said so: in the **V8** engine there is a difference in behavior between 31-bit numbers and numbers above that.

If you have 32 bits, you can use one bit to tell the JavaScript engine the type of the data and have the remaining bits contain the actual data. That's what **V8** does as a small optimisation for 31-bit numbers (or used to do; my sources are quite dated): 31 bits hold the number value, and the remaining bit tells the engine whether it's a number or an object reference.

However, if you use numbers above 31 bits, the data won't fit; the number will be boxed as a 64-bit double and the optimisation won't apply.

The bottom line is: prefer numeric values that can be represented as **31-bit** signed integers.

Basically, JavaScript doesn't support a long type. For values it can represent in fewer than 32 bits, it uses an int-type container; for integer values larger than 32 bits, it uses a double. In the double representation, 52 bits are allotted to the mantissa (53 bits of integer precision, counting the implicit leading bit), so you can use 2^53 - 1, whose value is 9007199254740991. You can access this value in your code via Number.MAX_SAFE_INTEGER.

In JavaScript, integers can be represented exactly only up to 2^53 - 1.

When a number is greater than 2 to the power 53, i.e.

```javascript
Math.pow(2, 53)
```

precision is lost: distinct mathematical values can round to the same double, so comparisons such as x === x + 1 can evaluate to true. Sufficiently different values are still distinguishable, though:

```javascript
const bigInt1 = Math.pow(2, 55);
const bigInt2 = Math.pow(2, 66);
console.log(bigInt1 === bigInt2); // false
```

For exact integer arithmetic beyond this range, use the BigInt type (literals written with an n suffix), which compares values exactly.

Node.js and Google Chrome both use 64-bit floating-point values whose exponent range extends to about 2^1024, so:

```javascript
Number.MAX_VALUE === 1.7976931348623157e+308; // true
```

Firefox 3 doesn't seem to have a problem with huge numbers.

1e+200 * 1e+100 will calculate fine to 1e+300.

Safari seem to have no problem with it as well. (For the record, this is on a Mac if anyone else decides to test this.)

Unless I lost my brain at this time of day, this is way bigger than a 64-bit integer.