Entropy of an image
I have some confusion about the entropy of an image. Should an image have more entropy or less? I want to assign weights to images according to their entropy. Should I assign less weight to an image with high entropy, or vice versa?
Answers (1)
Walter Roberson
on 25 Aug 2015
Consider a positive integer. Should it have more prime factors or fewer? Should you assign more weight or less weight to a positive integer that has more prime factors? Answer: neither; it has however many prime factors it has, and the weight you should assign depends upon your purpose.
Entropy gives you an idea of how "predictable" the image is. An image that is all the same value is entirely predictable and has low entropy. An image that changes from pixel to pixel might at first seem unpredictable, but the changes might follow a pattern, such as a checkerboard, so changing every pixel does not by itself make an image difficult to predict. Compression algorithms try to discover this predictability. Whatever cannot be predicted must be stored explicitly, and the more storage an image requires once its predictable structure has been accounted for, the higher its entropy. If you had the ultimate, most efficient general-purpose compression algorithm, the entropy would correspond to the smallest file size needed to represent the image. (There is a formal notion related to this: Kolmogorov complexity.)
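As a rough illustration (not part of the original answer): the Image Processing Toolbox entropy() function computes a first-order, histogram-based estimate, so it only measures how spread out the gray levels are and ignores spatial structure; it cannot see that a checkerboard is easy to predict. A minimal sketch:

% Sketch: first-order (histogram-based) entropy of three test images.
% Requires the Image Processing Toolbox for entropy() and checkerboard().
flat  = zeros(256, 'uint8');                 % constant image: fully predictable
noise = uint8(255*rand(256));                % i.i.d. noise: unpredictable
board = uint8(255*(checkerboard(16) > 0.5)); % regular binary checkerboard
entropy(flat)   % 0 bits  -- a single gray level
entropy(noise)  % ~8 bits -- all 256 levels roughly equally likely
entropy(board)  % 1 bit   -- only two levels; this estimate ignores the spatial
                % pattern, so it cannot tell that every checkerboard pixel is
                % easy to predict from its neighbours

A compression-based estimate (for example, comparing losslessly compressed file sizes) comes closer to the Kolmogorov-complexity idea described above.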
But this does not answer the question of how much weight you "should" give an image, because you have not indicated what purposes you are trying to achieve.
Approximately speaking, an image with higher entropy is "more specific", more detailed. If your purpose were, for example, to decide that the image represented "a feline" then high entropy is not needed, but if you are trying to decide whether you have correctly identified an individual lion, then you want a fair bit of detail in the picture, probably more entropy.
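If, once you have settled on a purpose, you still want entropy-based weights, one purely illustrative scheme (not from the original answer; the file names below are hypothetical) is to normalise the per-image entropies, inverting them if lower-entropy images should count more:

% Illustrative sketch only: weights proportional to first-order entropy.
% Whether high entropy deserves high or low weight depends on your purpose;
% use 1./E instead of E if less detailed images should get more weight.
files = {'im1.png','im2.png','im3.png'};    % hypothetical file names
E = zeros(size(files));
for k = 1:numel(files)
    I = imread(files{k});
    if size(I,3) == 3
        I = rgb2gray(I);                    % entropy() expects a grayscale image
    end
    E(k) = entropy(I);                      % Image Processing Toolbox
end
w = E / sum(E);                             % weights sum to 1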
2 Comments
Sadia Iffat
on 20 Aug 2017
" If your purpose were, for example, to decide that the image represented "a feline" then high entropy is not needed, but if you are trying to decide whether you have correctly identified an individual lion, then you want a fair bit of detail in the picture, probably more entropy." I cannot understand this.Why entropy is not needed to decide "a feline" but "individual lion"?
Walter Roberson
on 20 Aug 2017
Consider [image of a lion omitted]. That is enough to decide that this represents a feline. It is not enough to identify an individual lion.
How do you identify an individual lion? It turns out that you can do that by looking at its whisker spot patterns.
Obviously you need to be relatively close to be able to photograph the spot pattern in enough detail. You would be able to distinguish the "lion-ness" of the lion from much further away, perhaps even from just a shadow near the horizon, needing to match only a relatively small number of key features. To identify an individual lion, you need enough detail to see the parts that are not easily predictable (high entropy).