Triynko imported from SE
sql-server azure-sql-database
`select POWER(2.,64.)` returns `18446744073709552000` instead of `18446744073709551616`.  It seems to have only 16 digits of precision (rounding the 17th).  

Even making the precision explicit `select power(cast(2 as numeric(38,0)),cast(64 as numeric(38,0)))` it still returns the rounded result.

This seems like a pretty basic operation for it to be flaking out arbitrarily at 16 digits of precision like this.  The highest it can calculate correctly is only `POWER(2.,56.)`, failing for `POWER(2.,57.)`.  What is going on here?

What's really terrible is that `select 2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.*2.;` actually returns the right value.  So much for terseness.
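A less verbose way to keep every intermediate result in `numeric` (so no `float` is ever involved) is a recursive CTE. This is only a sketch; the CTE name `pow` is arbitrary, and the default `MAXRECURSION` limit of 100 comfortably covers 64 doublings:

```sql
-- Repeatedly double a numeric(38,0) value; the arithmetic stays in
-- exact numeric the whole way, so no precision is lost reaching 2^64.
WITH pow(n, val) AS (
    SELECT 0, CAST(1 AS numeric(38,0))
    UNION ALL
    SELECT n + 1, val * 2
    FROM pow
    WHERE n < 64
)
SELECT val FROM pow WHERE n = 64;  -- 18446744073709551616
```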
Top Answer
Paul White
The result of 2^64^ is exactly representable in `float` (and `real` for that matter).

The problem arises when this precise result is converted back to `numeric` (the type of the first `POWER` operand).
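The two stages can be observed separately by controlling the type of the first operand (a sketch; the decimal rendering of the `float` result depends on the client):

```sql
-- With a float first operand, POWER returns float. The binary value is
-- exact, because 2^64 is a power of two and fits a 53-bit significand;
-- only the decimal display is rounded.
SELECT POWER(CAST(2 AS float), 64);

-- With a numeric first operand, the exact float result is converted
-- back to numeric on the way out, and that conversion is where the
-- digits are lost (before compatibility level 130).
SELECT POWER(CAST(2 AS numeric(38,0)), 64);
```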

Before database compatibility level 130 was introduced, SQL Server rounded `float` to `numeric` implicit conversions to a maximum of 17 digits.

Under compatibility level 130, as much precision as possible is preserved during the conversion. This is documented in the Knowledge Base article:

[SQL Server 2016 improvements in handling some data types and uncommon operations][1]

To take advantage of this in Azure SQL Database, you need to set the `COMPATIBILITY_LEVEL` to 130:

```sql
ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 130;
```
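To confirm the setting took effect, query the standard catalog view:

```sql
SELECT name, compatibility_level
FROM sys.databases
WHERE name = DB_NAME();
```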

---

Workload testing is needed because the new arrangement is not a panacea. For example:

```sql
SELECT POWER(10., 38);
```

...ought to throw an error because 10^38^ cannot be stored in `numeric` (maximum precision of 38). An overflow error results under 120 compatibility, but the result under 130 is:

~~~
99999999999999997748809823456034029568 -- (38 digits)
~~~
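If a result in this range is genuinely needed, one defensive option (a sketch, not a full workaround) is to keep the computation in `float` explicitly, so the result is honestly approximate rather than silently presented as an exact `numeric`:

```sql
-- Returns a float; approximate (10^38 is not exactly representable in
-- a 53-bit significand), but never masquerades as an exact numeric.
SELECT POWER(CAST(10 AS float), 38);
```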

  [1]: https://support.microsoft.com/en-us/help/4010261/sql-server-2016-improvements-in-handling-some-data-types-and-uncommon-
Answer #2
Jack Douglas
From [the online documentation](https://docs.microsoft.com/en-us/sql/t-sql/functions/power-transact-sql), the implication is that whatever you pass as the first parameter is going to be implicitly cast to a `float(53)` *before* the function is executed:

>     POWER ( float_expression , y )  
> ## Arguments
> *float_expression*  
> Is an expression of type float or of a type ***that can be implicitly converted to float***

However, this [is not (always?) the case](https://dba.stackexchange.com/a/6902/1396).

If it were the case, it would explain the loss of precision:

> Conversion of float values that use scientific notation to decimal or numeric is restricted to values of precision 17 digits only. Any value with precision higher than 17 rounds to zero.

On the other hand, the literal `2.` is type `numeric`…:

<https://dbfiddle.uk/?rdbms=sqlserver_2016&fiddle=2468f9ee4e8494fbe568ea8298a7ad84>

…and the multiply operator [returns the data type of the argument with the higher precedence](https://docs.microsoft.com/en-us/sql/t-sql/language-elements/multiply-transact-sql).
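The inferred types can be inspected with `SQL_VARIANT_PROPERTY` (a sketch; the reported precision and scale follow the rules for decimal literals and the multiply operator):

```sql
-- 'BaseType' and 'Precision' are documented property names for
-- SQL_VARIANT_PROPERTY; both the literal and the product are numeric.
SELECT SQL_VARIANT_PROPERTY(2., 'BaseType')     AS literal_type,
       SQL_VARIANT_PROPERTY(2., 'Precision')    AS literal_precision,
       SQL_VARIANT_PROPERTY(2. * 2., 'BaseType') AS product_type;
```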

It appears that on 2016 (SP1), all the precision is retained:

<https://dbfiddle.uk/?rdbms=sqlserver_2016&fiddle=79ed50a23c712d9d448800220ba79a84>

…but on 2014 (SP2), it is not:

<https://dbfiddle.uk/?rdbms=sqlserver_2014&fiddle=79ed50a23c712d9d448800220ba79a84>
Why does TSQL return the wrong value for POWER(2.,64.)?
