Hi,
I forgot: if, as I believe, PS works with real numbers
behind the scenes, then it should be obvious that only when the original has more
than 8 bits will those extra bits be maintained all the way through the
editing process (16 bit mode only). But when those real numbers are
converted back to 8 bit integers, it is possible that a few pixel values will
fall 1 level higher or lower than their 8 bit counterparts.
Let's take a 12 bit value of 2047, for example, and call
it pixel A. Convert it into a real number: 2047/4095 = 0.499877899. Now convert
this into an 8 bit integer and we have 127 for pixel B. Then save and
reload pixel B: PS will now convert pixel B (127) to 127/255 = 0.498039215.
It is this difference that will be carried all the way through the editing
process, since it is unlikely that quantization errors would be
significant with real numbers, especially compared to 8 bit integers.
When done editing, we convert both pixel A and pixel B back to 8 bit integers.
Since the editing was exactly the same for both pixels, we could say that with
extensive editing we lose only the last digits of the numbers above. Converting
these values to 8 bit integers, pixel A is now 0.49987789 => 127 and pixel B is
now 0.49803921 => 127. This time there happens to be no difference, but for a
large number of pixels the only possible difference is plus or minus 1, and from
the definition of the standard deviation, with such data the result will
inevitably be a value of less than 1.
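The round trip above is easy to sketch in a few lines of Python (the divide-by-maximum and round-to-nearest conversions are my assumption of how the scaling works, not something confirmed for PS):

```python
# Pixel A: a 12 bit value taken to a normalized real number.
a12 = 2047
a_real = a12 / 4095            # ~ 0.499877899

# Pixel B: the 8 bit version of A, saved and reloaded as a real number.
b8 = round(a_real * 255)       # 127
b_real = b8 / 255              # ~ 0.498039215

# After identical editing, both convert back to the same 8 bit value here:
print(round(a_real * 255), round(b_real * 255))  # 127 127
```

In the worst case the two paths land one level apart, which is the plus-or-minus-1 difference described above.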
All this proves that editing in 16 bit mode has no
significant benefit in the case of an original image having 12 bits per pixel.
If we use Apply Image > Subtract > Offset 128 on a whole image processed as
above, the resulting image will be a nice, very uniform mid-grey level
over the entire image area, especially considering our inability to discriminate
between luminance levels differing by less than 1% (1/100); in this case the
difference is only 1/128, which implies that most of us wouldn't be able to
perceive this level of difference.
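A small simulation of this comparison on random data (a sketch only: the gamma edit, the simple round-to-nearest conversions, and the clipped subtract-with-offset are my assumptions, not measured PS behaviour):

```python
import random
import statistics

random.seed(1)
g = 1 / 1.45  # a sample edit: the small gamma correction from this thread

pixels12 = [random.randrange(4096) for _ in range(100_000)]
# Hi-bit path: edit the full-precision real number, then convert to 8 bit.
a = [round((v / 4095) ** g * 255) for v in pixels12]
# 8 bit path: quantize to 8 bit first, then apply the identical edit.
b = [round((round(v / 4095 * 255) / 255) ** g * 255) for v in pixels12]

# Equivalent of Apply Image > Subtract > Offset 128: difference plus 128,
# clipped to the valid 8 bit range.
diff = [min(255, max(0, x - y + 128)) for x, y in zip(a, b)]
print(statistics.mean(diff))    # very close to 128
print(statistics.pstdev(diff))  # well under 1, as argued above
```

On this simulated data the mean sits near 128 and the standard deviation stays well below 1, matching the argument above; only a handful of deep-shadow pixels can differ by more than one level.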
For higher bit resolutions, one can expect
practically the same result, but the standard deviation should increase a bit,
since a larger number of pixels may have a difference other than 0; visually,
the contrast ratio will still remain under 1%.
Voilà! Happy holidays,
Yves
PS By changing the offset to a lower value, say
below 32, it may be possible to see the difference in pixel levels, but a
properly exposed photo should have on average a mean level of around 128, which
makes it a good offset to use above. Lastly, I wouldn't change my workflow
because of this conclusion; feeling comfortable with what we do is more
important than going with what the numbers are saying.
----- Original Message -----
Sent: Monday, December 17, 2007 6:15
AM
Subject: Re: OT: 16 bit editing myth or
reality?
Mark,
I think I have an idea for a short and simple
test one can do in PS, and possibly in other software as well.
Say we start with a hi-bit color image, i.e.
anything above 8 bit, and let's call it image A (an 8 bit original would also
work). Make a copy of it, convert the copy to 8 bit, and call it image B.
Next do exactly the same editing on both images,
as long as image A remains in 16 (15) bit.
When you're done, save both image A and image B, then convert image A to 8
bit mode. Now use Apply Image > Subtract > Offset 128 to compare image A to
image B, and check the mean and standard deviation of the resulting image. The
mean should be near 128, and the standard deviation will give you
a measure for the test. Values of 128 (mean) and 0 (stdDev)
mean a perfect match, which you probably won't get; but now we need to interpret
the actual difference in terms of quality, which may not be so easy in RGB mode.
In Lab mode, however, one could do the following: reload your saved images A and
B, convert A to 8 bit, and then convert each to Lab. Then Apply Image >
Subtract, with no offset this time; if the L* mean
is zero, try again, but this time instead of applying image A to image B, do
the opposite (B to A). Record the mean and stdDev of each
channel; assuming that is possible, you would
have:
L* (L*mean, L*stdDev)
a* (a*mean, a*stdDev)
b* (b*mean, b*stdDev)
now do the following:
( ((L*mean)/L*stdDev)^2 + ((a*mean)/a*stdDev)^2 + ((b*mean)/b*stdDev)^2 )^(1/2)
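In code, the combination reads as follows (a sketch; the example readouts are hypothetical numbers, not measurements):

```python
import math

def combined_metric(channels):
    """Yves' variant metric: the square root of the sum over channels
    of (mean / stdDev) squared."""
    return math.sqrt(sum((mean / std) ** 2 for mean, std in channels.values()))

# Hypothetical Apply Image readouts as (mean, stdDev) per channel:
readouts = {"L*": (0.4, 0.8), "a*": (0.2, 0.5), "b*": (0.1, 0.5)}
print(combined_metric(readouts))  # ~ 0.67
```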
Now we can interpret the resulting value (approximately the
CIE delta E 1976):
if the value is smaller than or equal to 1, the
difference is imperceptible; in other words, the human eye can't see
it;
if the value is above 1 but below, say, 4, we
could say the difference is negligible;
if the value is above 4, we could say the
difference is significant.
Since I divided by the standard deviation above,
the resulting value is not exactly the CIE dE1976 (square root of L*^2 + a*^2 +
b*^2), and so the interpretation I gave is not technically
correct. We would need a more elaborate statistical method, because we
are dealing with means and standard deviations and not with the simple numbers
the CIE dE implies. But if the standard deviations are small (less than 1), they
will increase the overall result rather than reduce it, which makes the above
interpretation even more valid. If the standard deviations are larger than 1,
we could say right away that the difference between our images is probably
significant.
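For comparison, the plain CIE dE1976 on the same hypothetical per-channel mean differences (a sketch only; a true per-pixel dE would need the full channel data rather than summary statistics):

```python
import math

def delta_e_1976(dL, da, db):
    """CIE dE1976: the Euclidean distance in Lab space,
    sqrt(dL^2 + da^2 + db^2)."""
    return math.sqrt(dL ** 2 + da ** 2 + db ** 2)

# Hypothetical mean differences per channel (not measured values):
print(delta_e_1976(0.4, 0.2, 0.1))  # ~ 0.46
```

With standard deviations below 1, dividing by them only inflates the variant metric relative to this plain value, which is the conservative direction noted above.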
We could use this method to test B&W images as
well: when the editing is done, convert both images to RGB and then to Lab (or
directly to Lab if possible), and proceed as above to evaluate.
Ideally, this would need to be done on more than
one image; 30 or more would give us what is called a confidence interval that
the difference is or is not significant, based on the (variant) CIE dE method I
used. Some could say this method (CIE dE1976) may not be so good to begin
with, and I couldn't contest that, but I can say it is simple to
use because the math is relatively simple; and if the CIE dE
value is small, more elaborate dE calculations wouldn't significantly
improve on this simple test result.
If I'm allowed an "educated" guess, I wouldn't be
surprised if there is no significant difference between editing in 16 bit mode
and editing in 8 bit mode, and I also wouldn't be surprised to
hear that most proponents of 16 bit work would say "I'll stick with my current
workflow." In fact, I'll probably do the same, because however small the
difference may be, it could make the difference between a fine print and an
ordinary one...
Happy holidays to all
Yves
----- Original Message -----
Sent: Sunday, December 16, 2007 7:01
PM
Subject: Re: OT: 16 bit editing myth or
reality?
Hi Yves,
I think your inquiry is
interesting—especially the cumulative effect that adjustment layers would
have on file data.
In the meantime, I have no doubt that 16 bit is
far superior—and even to take it a "bit" further, I would venture to say
that 16 bit RGB is better than 16 bit grayscale when working on the same
"grayscale" file—the numbers seem to work better. Bruce Fraser
confirmed that with me (before he passed away). I had contacted him
when I was writing my book about some anomalies I found working with 16 bit
grayscale files.
Anyway, good luck with your inquiry; I'll look
forward to hearing your conclusions. In the meantime, I have no doubt that
working with 16 bit capture files is far superior to 8 bit.
Try doing some sort of series that makes an adjustment followed by one that
reverses it, and see how much error accumulates. BTW, it has been
suggested that even rotating a file can lead to some degradation of the
data.
Best Wishes, Mark Nelson
Precision Digital Negatives - The System
PDNPrint Forum at Yahoo Groups
www.MarkINelsonPhoto.com

In a message dated 12/16/07 4:31:46 PM, gauvreau-yves@cgocable.ca writes:
Mark,
You could be right, but I (we)
don't know if PS works with real numbers until the data needs to be saved, or
whatever. If that's not the case, which would be surprising, then each
editing step would affect the data, and each additional editing step
would have a cumulative effect. I don't think what you propose below is
correct even if PS works with real numbers, because the starting point would
be different.
Let's say I arbitrarily choose a
normalised [0..1] pixel level of 0.5, and let's see what our
starting value could be in 8 bit resolution. This would mean we
started with a value of 0.5*255 = 127.5 => 127 or 128; in 16 bit this
would be 0.5*65535 = 32767.5 => 32767 or 32768. Let's say our
original pixel value is 32767. PS would transform this into 32767/65535 =
0.49999237, and we could proceed with editing from this value; but if we
convert this value to 8 bit and save it for later comparison, PS would
need to convert it to an integer value in [0..255],
so let's do this: 0.49999237 * 255 = 127.4980545, which rounds to 127.
If we then apply the same editing to the 8 bit data file, PS
will convert 127 to 127/255 = 0.498039215, a difference
of 0.001953155. Though very small, this difference could be increased
by editing the image further. Just out of curiosity, let's apply
a small gamma correction of, say, 1/1.45 and see what effect it
has:
0.49999237^(1/1.45) = 0.6199955
0.4980392^(1/1.45) = 0.6183242
for a difference of 0.001671312
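These two results can be checked in a couple of lines (Python used here purely as a calculator):

```python
a = 0.49999237 ** (1 / 1.45)  # the 16 bit starting value after the gamma edit
b = 0.4980392 ** (1 / 1.45)   # the 8 bit starting value after the same edit
print(a, b, a - b)            # ~ 0.6199955, 0.6183242, 0.0016713
```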
The difference is now lower. This
could be surprising, and it's why a full analysis including both color
errors and quantization errors might produce interesting results. I searched
Google to see if someone had done this kind of analysis, but so far
I have seen only perceptual comparisons and no number crunching;
interestingly, though, the few tests I've seen seem to indicate that the
differences between 8 and 16 bit editing are imperceptible on printed images.
Food for thought, don't you think?
Regards
Yves
PS I don't recall any
discussion of this topic in particular, but I have a bad memory, so don't
take my word for it.