Tools like imshow() and imwrite() expect image data to be scaled according to class-dependent limits. For floating-point image data, black is 0, white is 1. For integer images, black and white correspond to the minimum and maximum values that the integer class can represent.
You're taking five floating-point images and adding them together. The result spans (approximately) [0 5]. What will imshow() do with this if it thinks 1 is white? The answer is that most of the data is treated as white.
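A minimal sketch of the clipping, using random arrays in place of the actual images:

```matlab
% five random "images", each roughly spanning [0 1]
S = rand(64) + rand(64) + rand(64) + rand(64) + rand(64);  % sum spans ~[0 5]
imshow(S)  % double-class data: 1 is white, so nearly every pixel clips to white
fprintf("%.1f%% of pixels display as pure white\n", 100*mean(S(:) >= 1))
```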
Will normalizing the data to the extrema give a viewable image? Yes. Is that the appropriate result? Maybe.
If it were me, I'd just divide the image sum by 5. That way the relative range of the image doesn't get stretched.
Consider the example, using a source image with less dynamic range:
I = im2double(imread("cameraman.tif")); % example grayscale image, values in [0 1]
thresh = multithresh(I,7);
valuesMax = [thresh max(I(:))];
quant8_I_max = imquantize(I,thresh,valuesMax);
SI = imnoise(I,"speckle",0.05); % speckle-noise copy (variance assumed)
Shot_noise = imnoise(I,"salt & pepper",0.20);
BG = poissrnd(mean(I(:)),size(I)); % Poisson background using the image mean
noisyimg_sum = I + quant8_I_max + SI + Shot_noise + BG;
noisyimg_avg = noisyimg_sum/5; % divide by the number of images
imshow(noisyimg_sum); title("noisy image sum");
imshow(noisyimg_avg); title("noisy image average");
Using the display-range argument of imshow (e.g. imshow(img,[])) rescales the display to the data extrema, so the image appears to have more contrast than it actually has.
The question should be whether the average of these images with noise applied is meaningful. I would imagine that what's intended is not the sum or average, but the composition of noise functions. In other words, each call to imnoise() should operate on the result from the prior call to imnoise(). That way the noise contribution doesn't get multiplied by 1/5. In that case, the final result will be within the proper range for the class and this whole rescaling question will be moot.
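A sketch of what that composition might look like; the noise types and parameters here are placeholders, not necessarily what the original code intended:

```matlab
J = imnoise(I, "speckle", 0.05);        % first noise process on the clean image
J = imnoise(J, "salt & pepper", 0.20);  % second process applied to the result
% imnoise clips its output to [0 1] for double input, so J displays
% correctly with imshow(J) and no rescaling is needed.
```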