The single function is not behaving as expected

I'm trying to use single to save disk space because my files are very large. I know it could affect the precision if my numbers have many places after the decimal point. However, it is altering my integers as well.
>> single(340580097)
ans =
340580096.00
Is this a bug?

 Accepted Answer

Matt J
Matt J on 20 Nov 2025
Edited: Matt J on 20 Nov 2025
No, it is not a bug. The precision of a number isn't measured only by the digits to the right of the decimal point; it is measured by the total number of significant digits. Single precision carries only about 7 significant decimal digits, and your integer has more digits than that.

10 Comments

format long g
N = 340580097
N =
340580097
S = single(N)
S = single
340580096
N - double(S)
ans =
1
eps(S)
ans = single
32
Thus near that value, adjacent representable numbers are spaced 32 apart.
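To make the spacing concrete, a quick sketch: single has a 24-bit significand, so consecutive integers stop being exactly representable at 2^24, and the gaps double with each further power of two (these are standard IEEE 754 single-precision facts):

```matlab
flintmax('single')      % 16777216 = 2^24, the last of the consecutive exact integers
single(2^24 + 1)        % rounds back to 16777216: the gap between singles is now 2
eps(single(340580097))  % 32, the spacing near the questioner's value
```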
Thank you for the reply. I find it disturbing that when that happens, the system will casually change the value of my data, instead of displaying an error message.
The system assumes you know what you are getting yourself into when you perform a type conversion. It's the same as when you do,
A = pi
A = 3.1416
B=uint32(A)
B = uint32 3
Or for that matter, under that proposal (taken to its logical extreme) the pi function should always error. The pi function does not return π (it would require an infinite amount of memory to do so) but the double-precision value closest to π, as you can see by computing with it. The sine of π is 0, but the sine of pi is not.
y1 = sin(pi)
y1 = 1.2246e-16
That's why we have the sinpi function.
y2 = sinpi(1) % sin(1*pi)
y2 = 0
Alternately, you can define a symbolic value that represents the transcendental number and compute with it. By default, the sym function recognizes doubles that are of the form (p*pi)/q for "small" p and q and assumes you meant an actual multiple of π. See the description of the flag input, specifically the entry for the default flag value of 'r'.
p = sym(pi)
p = 
π
y3 = sin(p) % symbolic 0
y3 = 
0
Or if you wanted to do the symbolic equivalent of y1:
p = sym(pi, 'f');
y4 = sin(p)
y4 = 
double(y4)-y1
ans = 0
In your imagination, would the error trigger each time precision was lost when doing single(), or would the error only trigger when the integer portion changed?
Loss of precision happens a lot when using single(). For example
format long g
A = double(1.1)
A =
1.1
B = single(A)
B = single
1.1
fprintf('%.999g\n', A)
1.100000000000000088817841970012523233890533447265625
fprintf('%.999g\n', B)
1.10000002384185791015625
There has been a loss of precision.
"I find it disturbing that when that happens, the system will casually change the value of my data, instead of displaying an error message."
So you want SINGLE to throw an error whenever the value being converted from is not exactly representable as a SINGLE value? The vast majority of DOUBLE and INTx and UINTx values are not exactly representable using SINGLE and so would throw an error: my back-of-the-envelope calculation told me that around 0.000000012% of all DOUBLE, UINTx & INTx values are exactly representable by SINGLE values, so your proposed SINGLE function would be completely useless for the vast majority of values that MATLAB can store and work with. Even calling x=0.1; y=single(x) would have to throw an error!
Your proposal would force users to perform an explicit calculation that converts arbitrary values from other classes/inputs to a SINGLE-equivalent value before converting to SINGLE type (otherwise an error should be thrown, as you commented). What class can store the exact value of SINGLE 0.1 before it is converted to SINGLE? In any case, I suspect that some users might get tired of performing long explicit calculations using non-existent classes merely to convert to SINGLE values and might request TMW to be nice enough to introduce a new function that would neatly convert any value to the nearest SINGLE value, which would certainly be a lot easier.
Perhaps they could call it SINGLE?
I suppose it would not be too terrible to have a preference that caused an error when converting a double precision to a single precision resulting in a different integer part. Basically
if com.mathworks.single_precision_conversion.assertion_enabled
assert(isnan(SingleResult) || fix(double(SingleResult)) == fix(DoubleResult), 'Loss of integer precision')
end
This code has the property of also catching cases where single(DoubleResult) is +/- inf where DoubleResult is not infinite, since that case is also arguably a loss of precision.
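For reference, the overflow-to-Inf case is easy to reproduce, since realmax('single') is about 3.4028e+38 (a standard IEEE 754 fact):

```matlab
DoubleResult = 1e39;                  % finite as a double
SingleResult = single(DoubleResult)   % Inf: 1e39 exceeds realmax('single')
% fix(double(SingleResult)) is Inf, which differs from fix(DoubleResult),
% so the proposed assertion would fire for this case
```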
Such an assertion would need to be run internally for all cases of double -> single conversion, not just cases where single() is explicit. For example it would be needed for cases such as
[340580097 single(1.2)]
and
340580097 + single(0)
"I suppose it would not be too terrible to have a preference that caused an error when converting a double precision to a single precision resulting in a different integer part"
It would significantly slow down all SINGLE conversions for all users, just to appease those who do not understand what SINGLE does or what SINGLE is.
It would produce one of those lovely types of errors (similar to unexpected implicit expansion memory errors, operates-along-first-non-scalar-dimension errors, etc.) which work correctly with some data sets and then one day throw an error simply because the user ran their code on a new data set, giving a situation which is correspondingly challenging to debug.
It would require documenting special cases which currently do not exist. No other engineering/mathematical scripting language does such a thing, and for very good reason: it would be terrible.
You know that single-precision numbers use exactly half the storage of double-precision numbers. What you might not know is that precision is about how many significant digits a type can represent, which applies to the whole number (integer and fractional parts), not just the decimal part.
We don't know the context for converting the large dataset to single precision. If the goal is just to save space, consider storing values that only need about 7 significant digits in single precision and keeping higher-accuracy values in double precision. That way you can cut overall memory use compared with storing everything as double.
Because MATLAB doesn't automatically change a variable's storage size, you'd need to implement a class (see classdef) that stores values based on their "valid significant digits." I don't know whether mixed-precision storage would affect the accuracy of later calculations that mix single and double. You'll need test cases to verify!
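On the question of mixing single and double in later calculations, one thing that can be checked quickly: in MATLAB, arithmetic that mixes a single with a double silently produces a single result, so the double operand's extra precision is lost at that point. A minimal sketch:

```matlab
% Mixing single and double: the result is demoted to single
x = single(1.1);
y = 2.2;          % double
z = x + y;
class(z)          % 'single' -- the sum carries only single precision
```

So any double values that flow through arithmetic with single-stored fields are effectively computed at single precision.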
If the goal is deep-learning training, check whether the MATLAB Deep Learning Toolbox supports automatic mixed-precision training. User @xingxingcui asked about this in 2021 but, as of now, there have been no technical replies.

It's been almost five years, and from what I understand now, the Deep Learning Toolbox still doesn't support mixed-precision training. Has TMW not noticed this issue?


More Answers (1)

Stephen23
Stephen23 on 22 Nov 2025
Moved: Matt J on 22 Nov 2025
"However, it is altering my integers as well."
Then use an integer type of sufficient size to store those values; this would use the least possible memory.

8 Comments

I think @Stephen23's comment above deserves to be promoted to an "Answer."
d1 = 340580097
d1 = 340580097
d2 = single(d1)
d2 = single 340580096
d3 = int32(d1)
d3 = int32 340580097
whos
Name    Size    Bytes  Class    Attributes
d1      1x1     8      double
d2      1x1     4      single
d3      1x1     4      int32
Leon
Leon on 22 Nov 2025
Moved: Matt J on 22 Nov 2025
Is there a reason Matlab can not program the single() function so that it can automatically identify large integers and use int32 instead?
Under that approach, what should be the result of
single([pi,340580097]) % ?
Should the first element be converted to int32 just to preserve the value of the second element?
If all of the data are integer values, why not just convert to an integer type, i.e., why use single at all?
Is there a reason Matlab can not program the single() function so that it can automatically identify large integers and use int32 instead?
You could write your own function that converts an input to its most memory efficient type, something along the lines of the example below,
A = [3000, 100.5]
A = 1×2
1.0e+03 *
3.0000    0.1005
B= [340580097,340580100];
Ac=compress(A);
Bc=compress(B);
whos A B Ac Bc
Name    Size    Bytes  Class    Attributes
A       1x2     16     double
Ac      1x2     8      single
B       1x2     16     double
Bc      1x2     8      uint32
isequal(A,Ac)
ans = logical
1
isequal(B,Bc)
ans = logical
1
However, you now have the problem that operations you previously took for granted might break,
C=A.*B
C = 1×2
1.0e+12 *
1.0217    0.0342
Cc=Ac.*Bc
Error using .*
Integers can only be combined with integers of the same class, or scalar doubles.
The bottom line is, all programming languages require you to plan ahead what data type your variables need to be, and what type they will need to transform into, based on the computations you intend to perform. Or, you can just accept liberal use of RAM and keep everything as doubles.
function A=compress(A)
A=feval(best_numeric_class(A), A);
end
function T = best_numeric_class(A)
%BEST_NUMERIC_CLASS Determine the minimal-lossless MATLAB numeric class for A.
%
% T = best_numeric_class(A) returns a character vector like 'uint8',
% 'int16', 'single', 'double', etc., selecting the smallest class that
% can represent A exactly (no precision loss).
%
% Works for: logical, integer classes, floating classes, and real/complex.
% Logical stays logical
if islogical(A)
T = 'logical';
return;
end
% Complex -> must remain floating (integer classes cannot represent imag parts)
if ~isreal(A)
% Check whether single is sufficient
if all(single(A) == A, 'all')
T = 'single';
else
T = 'double';
end
return;
end
% If already integer class, leave unchanged
if isinteger(A)
T = class(A);
return;
end
% For floating-point inputs:
% 1. Check if all values are integer-valued
if all(A == floor(A), 'all') && all(isfinite(A), 'all')
xmin = min(A(:));
xmax = max(A(:));
% Unsigned range?
if xmin >= 0
if xmax <= intmax('uint8'), T = 'uint8';
elseif xmax <= intmax('uint16'), T = 'uint16';
elseif xmax <= intmax('uint32'), T = 'uint32';
elseif xmax <= intmax('uint64'), T = 'uint64';
else, T = 'double'; % Values exceed uint64 range
end
else
% Signed range
if xmin >= intmin('int8') && xmax <= intmax('int8'), T = 'int8';
elseif xmin >= intmin('int16') && xmax <= intmax('int16'),T = 'int16';
elseif xmin >= intmin('int32') && xmax <= intmax('int32'),T = 'int32';
elseif xmin >= intmin('int64') && xmax <= intmax('int64'),T = 'int64';
else, T = 'double'; % Values exceed int64 range
end
end
return;
end
% 2. Floating-point but not integer-valued:
% Test whether single precision is exactly sufficient
if all(single(A) == A, 'all')
T = 'single';
else
T = 'double';
end
end
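One way to keep using arrays returned by compress in mixed arithmetic (a sketch, not part of the original proposal) is to cast back to double at the point of computation:

```matlab
A = [3000, 100.5];
B = [340580097, 340580100];
Ac = compress(A);                % single, per the function above
Bc = compress(B);                % uint32
Cc = double(Ac) .* double(Bc);   % cast back before mixing classes
% When the compression was lossless, Cc equals A.*B
```

which trades some of the storage savings for predictable arithmetic at the moment you need it.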
Hi Matt,
Can you expand on this comment "(integer classes cannot represent imag parts)" in the context of this example
int32(100 + 110i)
ans = int32 100 + 110i
No, I have no explanation for that. I didn't write the code. But feel free to modify as you see fit.
Is there a reason Matlab can not program the single() function so that it can automatically identify large integers and use int32 instead?
If you order a hamburger at a restaurant, would you be surprised / angry / insulted if the waiter looked at you and decided to serve you a salad instead?
To answer your question directly, there's no technical reason preventing MATLAB from doing that.
But if you explicitly ask MATLAB for a single precision value, I think most of our users would be very surprised to get something returned from that call to single that wasn't a single precision value. Users would report that as a silent wrong answer bug (the most severe type of bug, worse in many ways than MATLAB crashing; at least if MATLAB crashes it's obvious something went wrong) the second it went out the door in the Prerelease.
If a developer proposed making that change, most of the people reviewing that proposed change (and I would almost certainly be asked to review it, and this would be my review) would give it an OMDB (aka "You're shipping this Over My Dead Body.")
"Is there a reason Matlab can not program the single() function so that it can automatically identify large integers and use int32 instead?"
Yes, there are very good reasons:
1) you would force single() to be sloooooow. Instead of simply converting to one homogeneous array (which can be preallocated by the MATLAB engine and then filled out with the values) you would force the MATLAB engine to either a) use a heterogeneous array (i.e. container array) which is much slower to access OR b) parse all values into one array until it gets to a value which does not "fit", at which point it needs to create a new array, copy all of the existing homogeneous array into a new memory location, and then continue OR c) perform some other so-far-undefined behavior. All of these will be significantly slower than using one homogeneous array.
2) you would break an enormous amount of existing code. If single() could return int32, uint32, single, or any other type depending on input values, then every single line of code that calls single() would need defensive type checking. Consider:
data = single(userInput);
result = data * 0.5; % silently returns a rounded int32 if data is int32!
Your proposal would break billions of lines of existing MATLAB code that reasonably expects single() to return single. The entire language's type system would become unreliable, calling external libraries and compiled code would be extremely fragile. Functions must have predictable return types, otherwise composition of operations becomes impossible.
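To illustrate the defensive checks point 2 describes, here is a hypothetical guard that every call site would need under the proposed behavior (userInput is a placeholder name):

```matlab
data = single(userInput);    % could now be single, int32, uint32, ...
if isinteger(data)
    data = double(data);     % undo the surprise conversion
end
result = data * 0.5;         % now safe: stays floating point
```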
3) Dispatch/overloading nightmare: MATLAB uses dynamic dispatch based on argument types. If single() returns different types based on values rather than input types, you break fundamental language semantics. Consider:
function process(x)
y = single(x);
% What methods are available on y?
% What operations are valid?
% This becomes undecidable without runtime value inspection
end
Method resolution, operator overloading, and function dispatch all assume type stability within a given code path. Value-dependent type returns would require the JIT compiler to give up all optimizations, making ALL MATLAB code significantly slower, not just single().
4) The documentation would be an opaque nightmare: "Returns single, unless the value is an integer that doesn't fit in single precision, in which case it returns int32, unless it's negative and too large, then int64, unless it is complex in which case return a unicorn..."
As Steven Lord described, explicitly requesting one thing and being given something completely different is a guarantee that the function is broken, not improved. Every function call site would need type guards. Generic algorithms would be impossible. Code review would require analyzing every possible value path to determine runtime types. This isn't "helpful automation," it's the destruction of language predictability - the very foundation that makes programming possible.


Release: R2025b
Asked: 20 Nov 2025
Edited: 23 Nov 2025
