# JavaScript & Numbers

JavaScript is different from most programming languages when it comes to numbers. The majority of languages have several ways to define numeric data. Python, for example, has distinct numeric types such as integers, floating-point numbers and complex numbers. In JavaScript all numbers are floating point numbers. More precisely, all numbers in JavaScript are double-precision floating-point numbers, that is, the 64-bit encoding of numbers specified by the IEEE 754 standard. These are commonly known as “doubles”.
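This can be seen directly in JavaScript, where integer-looking literals and fractional literals share the single `Number` type:

```javascript
// Every numeric literal in JavaScript is the same type: an IEEE 754 double.
console.log(typeof 42);            // "number"
console.log(typeof 3.14);          // "number"
console.log(42 === 42.0);          // true, there is no separate integer type
console.log(Number.isInteger(42)); // true, but 42 is still stored as a double
```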

Floating point numbers have some advantages and some disadvantages. One advantage is that they can represent numbers between integers. A second advantage is that they can easily represent numbers that are incredibly large or small without taking up a lot of space. The reason for this is that floating point numbers are essentially scientific notation. An example of scientific notation is Avogadro’s Number, 6.022×10²³. This is a far more practical way of writing the number than explicitly writing out all significant digits.

One of the disadvantages of floating point numbers is that we lose precision. In the example above we rounded Avogadro’s number off to 4 significant digits. It is this kind of rounding error which produces anomalies in JavaScript such as:

`0.1 + 0.2 = 0.30000000000000004`

Hopefully, by the end of this article, the seemingly nonsensical calculation above will start to make sense (at least from JavaScript’s perspective…)

# How Computers Interpret Numbers

Humans and computers use different number systems. We (I’m assuming you, the reader, are a human) use the decimal, or base 10, number system. Computers, on the other hand, use the binary, or base 2, number system.

We normally write numbers as sums of small multiples of powers of 10, but the base 10 is somewhat arbitrary, an ancient cultural artifact of the number of fingers we possess. Computers are made of electrical elements that have only two states, usually low and high voltage (or ‘on’ and ‘off’). By interpreting these as 0 and 1, we can build circuits for storing binary numbers and doing calculations with them.
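JavaScript can translate between the two bases for us, which is handy for the examples that follow:

```javascript
// toString(2) renders a number in binary; parseInt with radix 2 reads it back.
console.log((13).toString(2));    // "1101" (8 + 4 + 0 + 1)
console.log(parseInt("1101", 2)); // 13
```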

# How JavaScript Stores Numbers

As mentioned before, JavaScript encodes numbers into 64 bits. Those bits are split into three fields: 1 sign bit, 11 exponent bits and 52 fraction bits. The fraction holds the significant binary digits of the number. The exponent effectively represents the position of the ‘dot’ or floating point within that binary number. The sign is stored as 0 for a positive number and 1 for a negative number.
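We can even inspect those 64 bits ourselves. The sketch below is my own illustration (the function name `doubleToBits` is made up), using a `DataView` to read the raw byte pattern of a double:

```javascript
// Returns the raw 64-bit IEEE 754 pattern of a number as a string of 0s and 1s:
// 1 sign bit, then 11 exponent bits, then 52 fraction bits.
function doubleToBits(x) {
  const buffer = new ArrayBuffer(8);
  new DataView(buffer).setFloat64(0, x); // big-endian by default
  let bits = "";
  for (const byte of new Uint8Array(buffer)) {
    bits += byte.toString(2).padStart(8, "0");
  }
  return bits;
}

// 0.25 is 1.0 × 2⁻², so the sign bit is 0, the biased exponent is
// 1023 - 2 = 1021 (01111111101 in binary) and the fraction bits are all 0.
console.log(doubleToBits(0.25)); // "001111111101" followed by 52 zeros
```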

The important takeaway from this is that there is a limit to the storage space available to represent any single number.

# Floating Point Error

Before you can understand floating point error it’s important to understand how to convert a decimal number to a binary number.

Let’s start by converting the decimal `0.25` into binary. There are multiple methods one can use to calculate this conversion. I will be using a tabular method which I find the easiest to understand. The decimal `0.25` consists of a coefficient and a remainder: the coefficient is `0` and the remainder is `.25`.

To convert to base 2 we take our starting remainder of `.25` and multiply it by `2`. We then tabulate the new coefficient and remainder, in this case `0` and `.50` respectively. We take our new remainder of `.50` and repeat the process, multiplying it by `2` again. We keep repeating this procedure until the remainder equals `0`. This example is simple: we hit a remainder of `0` after only 2 iterations, as shown below. To represent the number in base 2 format we simply read off the coefficient column of the table.

| Step | Multiply by 2  | Coefficient | Remainder |
| ---- | -------------- | ----------- | --------- |
| 1    | .25 × 2 = 0.50 | 0           | .50       |
| 2    | .50 × 2 = 1.00 | 1           | 0         |

Therefore, 0.25 (base 10) = 0.01 (base 2)

The example above is convenient because we can perfectly represent `0.25` in binary form. If we convert 0.01 (base 2) back to base 10 our answer will be exactly 0.25. There are situations, however, where we do not ever hit a remainder of `0`.

Let’s now try to convert the decimal `0.20` to binary.

| Step | Multiply by 2  | Coefficient | Remainder |
| ---- | -------------- | ----------- | --------- |
| 1    | .20 × 2 = 0.40 | 0           | .40       |
| 2    | .40 × 2 = 0.80 | 0           | .80       |
| 3    | .80 × 2 = 1.60 | 1           | .60       |
| 4    | .60 × 2 = 1.20 | 1           | .20       |
| 5    | .20 × 2 = 0.40 | 0           | .40       |

You should notice that we hit a recurring loop: by step 4 the remainder is back to `.20`, where we started, so the coefficients `0011` keep repeating perpetually and the remainder is unable to ever reach `0`.

Therefore 0.20 (base 10) = 0.001100110011… (base 2)
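The tabular method above is mechanical enough to sketch in code. This is my own illustration (the name `fractionToBinary` and the `maxBits` cap are assumptions, the cap being necessary precisely because some remainders never reach `0`):

```javascript
// Repeatedly double the remainder; the whole-number part of each product is
// the next coefficient (binary digit), and the fractional part carries on.
function fractionToBinary(fraction, maxBits = 12) {
  let bits = "0.";
  let remainder = fraction;
  for (let i = 0; i < maxBits && remainder !== 0; i++) {
    remainder *= 2;
    const coefficient = Math.floor(remainder); // 0 or 1
    bits += coefficient;
    remainder -= coefficient;
  }
  return bits;
}

console.log(fractionToBinary(0.25)); // "0.01" (terminates)
console.log(fractionToBinary(0.20)); // "0.001100110011" (cut off by maxBits)
```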

When we truncate that expansion and convert `0.001100110011 (base 2)` back to base 10 we get an answer of `0.199951171875 (base 10)`, i.e. not exactly the answer of `0.20` we want.
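Going the other way confirms the loss. The sketch below (the name `binaryFractionToDecimal` is my own) sums the powers of 2 indicated by each coefficient:

```javascript
// Converts the digits after the binary point back to a decimal value:
// the nth digit contributes digit × 2⁻ⁿ.
function binaryFractionToDecimal(bits) {
  let value = 0;
  [...bits].forEach((bit, i) => {
    if (bit === "1") value += 2 ** -(i + 1);
  });
  return value;
}

console.log(binaryFractionToDecimal("01"));           // 0.25
console.log(binaryFractionToDecimal("001100110011")); // 0.199951171875
```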

The problem with the result above is that computers cannot represent the mathematical concept of recurring numbers; they have to cut the digits off somewhere. Humans know that `0.333… + 0.333… + 0.333… = 1` (each term being exactly one third). Computers, on the other hand, would see this sum as `0.99999999…`.

It was previously shown that there is a limit to how many bits of storage space are available to store any number. Therefore, when we have a recurring number, a computer will store significant digits until it runs out of space. For that reason these numbers are not exact but rather are approximations of the true value of a number.

Both `0.10` and `0.20` are recurring numbers in binary form. That is why when we add them together we don’t get an answer of `0.30` but rather a value that is an approximation of that. Hence,

`0.1 + 0.2 = 0.30000000000000004`
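Because of this, testing floats for exact equality is fragile. A common workaround (my addition, not something the article above prescribes) is to compare within a small tolerance such as `Number.EPSILON`:

```javascript
// Treats two floats as equal if they differ by less than a tiny tolerance.
function nearlyEqual(a, b, epsilon = Number.EPSILON) {
  return Math.abs(a - b) < epsilon;
}

console.log(0.1 + 0.2 === 0.3);           // false
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true
```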