Anonymous 2530
Sup all y'all!

We run on SQL Server 2012 EE (11.0.7507.2), and yes, I know what year it is. Today every query our app ran led to stack dump generation; there were hundreds of them per hour.


> SQL Server Assertion: File: <"simpletypes.cpp">, line=12773 Failed Assertion = '-6 <= exponent && exponent <= 5'.


We traced the problem to a single operation on a single field, basically this:

```sql
SELECT CONVERT(varchar(255), field, 128);
```

The field is of type `float`, and there are lots of float values in the table. By removing some of them, we narrowed the problem down to one value: 1000000.

However, simple selects didn't show us what was wrong with this value. By analyzing the page, we found there were actually two distinct values: one true 1000000, and a look-alike, 9.999999999999999e+5. That long one was the troublemaker.

None of the other float-to-character convert styles (0, 1, 2, 3, 126, 129) produced dumps with this value. The problem was confirmed on the 2012 and 2014 versions of SQL Server, but it can't be reproduced on 2016 or later versions.

According to the docs, 128 is a backward-compatibility style. The app is a century old, so we can't modify the query; instead we solved the problem by updating all the nine-nines floats to 1000000.

Here are my questions:
1. What's wrong with convert style 128?
2. I can't find an explanation of style 128 anywhere. What did it mean back then?

N.B. Don't run this unless you want to crash your old geezer of a server!

```sql
SELECT CONVERT(varchar(255), 9.999999999999999e+5, 128);
```

Thxbye!
Top Answer
Paul White
>1. what’s wrong with convert style 128?

Any assertion failure is a product defect or the result of data corruption. In this case, it's a defect (see below).

>2. can’t find an explanation of 128 style anywhere, what did it mean back then?

There is very little documentation for styles 126, 128, or 129.

One [version](http://web.archive.org/web/20061020070648/http://msdn2.microsoft.com/en-us/library/ms187928.aspx) of the `CAST and CONVERT` entry from *July 2006* contains:

>The following table shows the values for *style* that can be used for converting **float** or **real** to character data.
>
>|Value|Output|
>|-----|------|
>|**0** (default)|A maximum of 6 digits. Use in scientific notation, when appropriate.|
>|**1**|Always 8 digits. Always use in scientific notation.|
>|**2**|Always 16 digits. Always use in scientific notation.|
>
> **If style 126 is specified for conversion from float and real the output is equivalent to the style value of 2 and 1, respectively.**

and

> To remove trailing zeros from a result set when you convert from **numeric** or **decimal** data to character data, use the value **128** for *style*.

I don't know if that latter statement was ever accurate, but it doesn't apply to `float` anyway.

### Analysis

Lack of documentation aside, inspection reveals:

* Style **126** performs the string conversion using the C runtime library (CRT) function [`_snprintf_s_l`](https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/snprintf-s-snprintf-s-l-snwprintf-s-snwprintf-s-l?view=msvc-170), with a [format specifier](https://docs.microsoft.com/en-us/cpp/c-runtime-library/format-specification-syntax-printf-and-wprintf-functions?view=msvc-170) of `%.15le` for double precision values and `%.7le` for single precision. This produces exponential string format with 15 or 7 digits of precision.

* Styles **128** and **129** pass the input value through the `SerializeXQueryFloatingPoint` method of the `CXMLTypeProvider` class before calling the same C runtime library function as above. This checks for special values like NaN and INF. Style **128** uses the `%.14E` format. Style 129 uses `%.6E`. This gives exponential string format with 14 or 6 precision digits.

Notice the inconsistent precision specifications. Styles 128 and 129 seem intended for use with XML data.
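The effect of those format specifiers is easy to reproduce outside SQL Server. Here is a minimal sketch in Python, whose `%` operator follows the same C `printf` formatting rules (note that Python prints a two-digit exponent where the Windows CRT historically printed three, e.g. `e+005`):

```python
# The problem value: the literal parses to the nearest double,
# which is slightly less than one million
value = 9.999999999999999e+5

# Style 126 (double precision): %.15le keeps 15 digits after the point
print("%.15e" % value)   # 9.999999999999999e+05

# Style 128: %.14E keeps only 14 digits after the point,
# so the run of nines rounds all the way up to the leading digit
print("%.14E" % value)   # 1.00000000000000E+06

# Style 129: %.6E keeps 6 digits after the point
print("%.6E" % value)    # 1.000000E+06
```

Only the 14-digit format crosses the power-of-ten boundary, which is the seed of the assertion failure described below.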

### Explanation

The assertion occurs in `SerializeXQueryFloatingPoint` when it checks which code path it is following. Why it happens is an interesting edge case.

For values with six significant digits or fewer, styles 128 and 129 produce a string representation **not** in scientific notation (no exponent).

The value 9.999999999999999e+5 cannot be represented exactly in double precision floating point. The closest exact representation is 999999.999999999883584678173065185546875.
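This is easy to check with Python's `decimal` module, which converts a binary double to its exact decimal value:

```python
from decimal import Decimal

# The literal parses to the nearest double, which is not quite one million
value = 9.999999999999999e+5
print(Decimal(value))
# 999999.999999999883584678173065185546875

# ...and is a genuinely different value from the true 1000000
assert value != 1e6
```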

The problem is that `SerializeXQueryFloatingPoint` decided to follow the non-scientific notation code path based on the exponent being 5 in the nearest exact representation.

Later in the code path, the number is **rounded** to 1e+6. Exactly *why* that rounding occurs isn't clear; perhaps it's due to confusion about the rounding rules or data types. The module in question handles simple types in XQuery, e.g. `xs:float` (single precision) and `xs:double` (double precision). The rounding occurs when `_snprintf_s_l` is called with the `%.14E` format, limiting the output to 14 digits after the decimal point. The closest exact representation has 15 nines, which round up to 1e+6 at that precision.
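The failure sequence can be sketched in Python (with hypothetical variable names; the real logic is internal to SQL Server): the code path is chosen while the decimal exponent is 5, but formatting at 14 digits of precision rounds the value up to an exponent of 6:

```python
from decimal import Decimal

value = 9.999999999999999e+5   # stored as 999999.99999999988...

# The exact decimal exponent of the stored value is 5
# (999999.99... = 9.99...e+5), so the non-scientific-notation
# path is chosen
exponent_before = Decimal(value).adjusted()
assert exponent_before == 5

# ...but %.14E rounds the run of nines up to 1e+6
text = "%.14E" % value          # '1.00000000000000E+06'
exponent_after = int(text.split("E")[1])
assert exponent_after == 6

# The check '-6 <= exponent && exponent <= 5' now fails
assert not (-6 <= exponent_after <= 5)
```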

In any case, after the rounding to 1e+6, the assert fails because we should not be on the non-scientific notation code path with an exponent of 6:

> Failed Assertion = '-6 <= exponent && exponent <= 5'.

The problem doesn't occur with **style 126** because that code doesn't use `SerializeXQueryFloatingPoint` and uses different format specifications of `%.15le` (float) or `%.7le` (real) with *15 digits* of precision in the present case. It returns 9.999999999999999e+005.

The problem doesn't occur with **style 129** because it takes a different code path that converts the `float` to a `real` directly. This is ok because style 129 uses 6 significant digits (`%.6E`). The closest *single precision* representation of the input number is 1e+6.
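Assuming the style-129 path simply narrows the double to a 32-bit float, Python's `struct` module can sketch the effect:

```python
import struct

value = 9.999999999999999e+5   # stored double: 999999.99999999988...

# Round-trip through a 32-bit float, as the style-129 path does;
# the nearest single-precision value is exactly 1000000.0
single = struct.unpack("f", struct.pack("f", value))[0]
print(single)            # 1000000.0

# %.6E then formats it without crossing any boundary during rounding
print("%.6E" % single)   # 1.000000E+06
```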

That explains the issue you're seeing.

Floating-point conversion and rounding have complex rules. Many edge cases were [addressed](https://docs.microsoft.com/en-us/troubleshoot/sql/database-design/sql-server-azure-sql-database-improvements) in the SQL Server 2016 release.

---


As a footnote:

SQL Server 2014 calls into version 100 of the CRT **directly** for the string formatting. SQL Server 2019 implements a *modification* of CRT version 120 code within its own code base. It does not call into the CRT directly.

The fact that SQL Server 2019 uses a *modification* of CRT version 120 code (rather than calling into the CRT directly as they do elsewhere) suggests they needed fine control over the specific implementation, perhaps to avoid an issue in the CRT itself.
Answer #2
Jack Douglas
This must be a bug with earlier versions of SQL Server. You can see it in action [with 2014 on db<>fiddle](https://dbfiddle.uk/um8qeyDl).

> *can’t find an explanation of 128 style anywhere, what did it mean back then?*

even in the 2008 docs, `128` [is marked as 'legacy'](https://learn.microsoft.com/en-us/previous-versions/sql/sql-server-2008/ms187928(v=sql.100)):

> Value | Output
> ------|-------
> 0 (default) | A maximum of 6 digits. Use in scientific notation, when appropriate.
> 1 | Always 8 digits. Always use in scientific notation.
> 2 | Always 16 digits. Always use in scientific notation.
> 126, 128, 129 | Included for legacy reasons and might be deprecated in a future release.

however the 2005 docs [do not mention `128`](https://learn.microsoft.com/en-us/previous-versions/sql/sql-server-2005/ms187928(v=sql.90)) at all!
