Never ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever ever store currency as a float in your database.

I completely agree with this statement. Never store values used in financial calculations as floating point values, because a floating-point value is an approximate binary representation of a decimal value. In most cases it is inaccurate as soon as you store it. You can read more in this excellent - if a little dry - technical paper.
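To see this for yourself, here is a minimal sketch (assuming SQL Server 2012 or later for IIF) that adds 0.1 to a FLOAT ten times:

-- Add 0.1 to a FLOAT ten times; the accumulated error means the result is not exactly 1
DECLARE @total FLOAT = 0, @i INT = 0;
WHILE @i < 10
BEGIN
    SET @total = @total + 0.1;
    SET @i = @i + 1;
END;
SELECT @total AS RunningTotal,
       IIF(@total = 1.0, 'equal to 1', 'not equal to 1') AS Verdict;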

With that out of the way, we get into an interesting discussion about the correct data type to store currency values.

Which brings us to MONEY and SMALLMONEY, the data types SQL Server provides specifically for this purpose. I like to think I know a little bit about picking the right data type for a certain column.

So I was surprised when a friend on a recent Twitter thread about this very topic mentioned that he stores currency values as integers (multiplying them by 100 first). Being as literal as I am, I thought he meant the INTEGER (or INT) data type, which is a four-byte data type that goes up to around 2.1 billion for positive values, but Michael reminded me that "big integers exist," so of course my friend was talking about BIGINT.

I wrote in a reply that "Decimal will have to do. I'm not going to store money as a BIGINT." Nevertheless, I found it an interesting thought experiment to write about here.

Assume that your currency has a hundred possible decimal values, from 0.00 to 0.99. Now assume that you can eliminate pesky rounding errors by storing currency data in cents as a BIGINT. This requires you to multiply the currency amount by 100 when inserting it, and divide it by 100 when displaying it. That seems counterproductive to me, but bear with me as we investigate this further.
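Here is a minimal sketch of what that round trip looks like in T-SQL (the table and column names are made up for illustration):

-- Hypothetical table storing currency amounts in cents as BIGINT
CREATE TABLE dbo.Payment
(
    PaymentID   BIGINT IDENTITY(1,1) PRIMARY KEY,
    AmountCents BIGINT NOT NULL
);

-- Inserting $123.45: multiply by 100 on the way in
INSERT INTO dbo.Payment (AmountCents)
VALUES (CAST(123.45 * 100 AS BIGINT));

-- Displaying: divide by 100.0 on the way out (the decimal literal avoids integer division)
SELECT AmountCents / 100.0 AS Amount
FROM dbo.Payment;

In practice the conversion would more likely happen in the application layer than in every query, but the round trip through cents is the same either way.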

What problem is this meant to solve? Why avoid the DECIMAL data type, which is engineered to store decimal values with a defined precision and scale? Is the risk of rounding errors so great that we should avoid the data type completely, as we do with floating point? Surely DECIMAL can handle this? Most of the time we use simple sums (SUM), averages (AVG), and regular arithmetic operations like addition, subtraction, multiplication and division, and in my experience - provided I put the parentheses in the right place - I've never run into a rounding error that was significant enough (i.e. outside the tolerance of the organization calculating the amounts) to be a problem.
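For example, here is a quick sketch of the kind of everyday arithmetic I mean, done entirely in DECIMAL (the table variable and values are made up):

-- Hypothetical line items stored as DECIMAL(19,4)
DECLARE @LineItem TABLE (Amount DECIMAL(19,4) NOT NULL);

INSERT INTO @LineItem (Amount)
VALUES (19.9900), (4.9500), (0.3300);

-- Everyday aggregates and arithmetic on DECIMAL values
SELECT SUM(Amount)        AS Total,              -- 25.2700
       AVG(Amount)        AS Average,
       SUM(Amount) * 0.05 AS FivePercentOfTotal  -- 1.263500
FROM @LineItem;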

As I've pointed out before, BIGINT ranges from negative 9 quintillion to positive 9 quintillion (a number that is 19 digits wide). That's a lot of squillions. Even storing these values as cents, you could handle the ill-fated Zimbabwean dollar, which - at the time it was discontinued - was distributed in $100-trillion notes (100,000,000,000,000.00, or 17 digits wide). A BIGINT needs 8 bytes of uncompressed storage for each value.
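To put that in perspective, here is the arithmetic from that paragraph as a quick sanity check:

-- The largest BIGINT value is 9,223,372,036,854,775,807 (about 9.2 quintillion)
-- A Z$100-trillion note expressed in cents is 10,000,000,000,000,000 (17 digits)
SELECT CAST(10000000000000000 AS BIGINT)   AS NoteInCents,
       CAST(9223372036854775807 AS BIGINT) AS BigIntMaximum;
-- A single BIGINT could count the cents in roughly 922 of those notes before overflowing.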

Now DECIMAL is a different beastie. If you need the highest precision, a DECIMAL can use up to 17 bytes for each value. Generally though, I like using DECIMAL(19,4) for currency, which needs 9 bytes per value and can store numbers 19 digits wide, where the last four digits are after the decimal point. Coincidentally, this is practically the same range of values as the MONEY data type. Maybe it's not a coincidence that I chose it then, eh?
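If you want to check the per-value storage yourself, DATALENGTH will show it (the variable names below are just for illustration):

-- Compare uncompressed storage per value for the three candidates
DECLARE @AsBigInt  BIGINT        = 12345,   -- $123.45 stored as cents
        @AsMoney   MONEY         = 123.45,
        @AsDecimal DECIMAL(19,4) = 123.45;

SELECT DATALENGTH(@AsBigInt)  AS BigIntBytes,   -- 8
       DATALENGTH(@AsMoney)   AS MoneyBytes,    -- 8
       DATALENGTH(@AsDecimal) AS DecimalBytes;  -- 9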

Purists will say that the MONEY data type is woefully inaccurate when it comes to calculations, particularly multiplication and division. To those purists I say "pish!" Unless we're talking Bitcoin, a currency amount usually has two decimal places, and by storing four decimal places we make rounding errors even less likely. In the vast majority of cases you will be doing things like summing values and calculating percentages.
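For the record, the inaccuracy the purists mean shows up when you multiply or divide MONEY values directly, because the result is itself MONEY and keeps only four decimal places. A contrived sketch:

-- MONEY divided by MONEY returns MONEY, so the result is limited to four decimal places
SELECT CAST(1 AS MONEY) / CAST(345 AS MONEY)                 AS MoneyResult,    -- 0.0028
       CAST(1 AS DECIMAL(19,4)) / CAST(345 AS DECIMAL(19,4)) AS DecimalResult;  -- 0.0028985507...

If you only store, sum and compare currency values, that rounding never gets a chance to accumulate; it only matters when you chain multiplications and divisions on the MONEY values themselves.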
