
Why does taking a gamma value > 1 make the image darker?

Problem Detail: 

If I've understood gamma correction correctly (and I probably haven't), a gamma transform raises the intensity value of each input pixel to the power of gamma, where gamma can be greater than one or less than one.

$P_{out} = c \cdot P_{in}^{\gamma}$ (for some constant $c$)

In every example that I've seen, a gamma value < 1 makes the image lighter (it raises the intensity of low-intensity pixels) and a gamma value > 1 makes the image darker. This seems counterintuitive to me, since a gamma value > 1 should increase the intensity levels of all the pixels, which I would expect to make the image lighter and more saturated.

Can someone explain this to me? Where am I going wrong?

Asked By : cybergla

Answered By : Evil

Because in practice pixel intensity is measured on the interval [0, 1].

Most representations use 8 bits per channel, but there are others, for example 16-bit representations.
Working with intensities in [0, 1] therefore gives a consistent, device-independent representation.

Another important point is chaining operations (multiplying, adding pixels, etc.): floating point works better here because it stores fractional values.
RGB in the range 0-255 is another way of representing intensities, very common and discrete, but device dependent.
In that representation an operation like gamma correction needs normalisation: divide by 255, apply the operation, clamp to [0, 1], and then multiply by 255 again (a sketch of this workflow follows below).
With floating-point intensities this round trip is not needed.
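
As a minimal sketch of that normalisation workflow (assuming NumPy, an 8-bit greyscale array, and an illustrative constant c, none of which appear in the original answer):

```python
import numpy as np

def gamma_correct_8bit(img, gamma, c=1.0):
    """Apply P_out = c * P_in**gamma to an 8-bit image via [0, 1] normalisation."""
    p_in = img.astype(np.float64) / 255.0            # map [0, 255] -> [0, 1]
    p_out = np.clip(c * p_in ** gamma, 0.0, 1.0)     # apply gamma, clamp to [0, 1]
    return np.round(p_out * 255.0).astype(np.uint8)  # map back to [0, 255]

# Example: gamma = 2.2 darkens a mid-grey image
img = np.full((2, 2), 128, dtype=np.uint8)
print(gamma_correct_8bit(img, 2.2))                  # -> [[56 56] [56 56]]
```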
Also look out for shifted gamma corrections, for example in colour-space conversions.
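
The answer doesn't name one, but a standard example of such a shifted curve is the sRGB encoding: it is not a pure power law, using a small linear segment near black and an offset power segment above it (sketch below, same NumPy assumption):

```python
import numpy as np

def linear_to_srgb(l):
    """sRGB encoding: linear near black, shifted power law elsewhere."""
    l = np.asarray(l, dtype=np.float64)
    return np.where(l <= 0.0031308,
                    12.92 * l,
                    1.055 * l ** (1.0 / 2.4) - 0.055)
```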

Summing it up: a value in [0, 1] raised to a power $\gamma > 1$ can only get smaller (just as squaring a fraction makes it smaller), so every pixel moves towards black and the image darkens; that is the part that feels counter-intuitive if you expect exponentiation to increase intensities.
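
As a quick worked example with a mid-grey pixel (values rounded): $0.5^{2.2} \approx 0.22$, noticeably darker, while $0.5^{0.45} \approx 0.73$, noticeably lighter.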


Question Source : http://cs.stackexchange.com/questions/45304
