HDR (which really stands for High Dynamic Range) refers to the concept of capturing scene details that span an exceptionally large range of brightness. Dynamic range (DR) is essentially the ratio between the lightest and darkest distinguishable tones in a scene. That sounds simple enough, but the word distinguishable actually has a somewhat imprecise meaning. For example, in darker portions of a scene, noise from both the camera and the light itself (photon shot noise) makes distinguishing different tones a statistical matter; what probability threshold do we apply to call two tones distinguished? A common answer is 50% certainty, but that seems rather generous... There is also the issue of tonal resolution: how small must the tonal step be to consider a tone distinguished from a similar tone? In any case, measuring the DR of a captured image isn't quite as straightforward as you'd expect.
Of course, imprecision in the definition doesn't prevent folks from measuring and talking about DR. DxOMark and sensorgen give lots of measurements. Usually, DR is expressed in terms of stops or (equivalently) EV. One EV change represents a doubling or halving of the brightness. In real life, there is no clear limit on the EV span within a scene, and it certainly isn't difficult to find examples with over 20EV. Imagine looking at the Sun from inside a dark cave... but don't actually do that. Why? Because human eyesight isn't designed to handle such great instantaneous DR. The exact number is debatable, but humans can see something like 10-14EV DR at a glance; our pupils dilate/constrict (mydriasis/miosis for MDs, change f/number for us) to allow us to handle a larger DR over time. The problem is that most digital cameras only have about 9-11EV DR, which means they can't even record what we see at a glance. Some of the better sensors, notably many made by Sony, can capture 13-15EV DR... which is great, but still far from sufficient for that cave-to-Sun view. Perhaps more importantly, it's also insufficient for things like seeing the kid walking behind your car when the backup camera is blinded by another car's headlights. It's also not sufficient if you're trying to capture the lighting of the scene so you can duplicate it in computer-generated renderings that you are placing in the images (e.g., movie special effects).
There are quite a few different ways to extend the DR of cameras to enable HDR capture. We'll be talking about some of the more exotic ones later. However, this project is just about making your little PowerShot able to capture a lot more than the 9-10EV maximum it normally gets. You'll do it by combining multiple captures into a single image -- and you'll do it entirely within the camera using a Lua script.
One last note: ideally, we'd like to talk about DR in calibrated units of scene brightness. You're not going to do that here. Like most digital cameras, Canon does actually tag images with exposure parameters and an estimate of scene brightness -- so we could perform a calibration sequence to get fairly accurate scene brightness readings for each pixel. However, that's a lot of computation. Worse still, we really just want the camera to deliver the final image as a JPEG, and a JPEG can only represent about 9EV DR in any direct way. So, the HDR images you'll generate in-camera will be crudely tone mapped into a JPEG. In other words, the delivered JPEG will be HDR in the sense of being Heuristically Detail Revealing, but will not contain calibrated scene EV values.
None of the Canon PowerShots we use in this course have been tested by DxOMark. In fact, none of the PowerShots using the same sensor have been tested. So, you will not find anything giving precise DR measurements. We don't care. All you're going to do is some cleverly self-adjusting HDR. Self-adjusting? What?
It's really simple. For HDR, you just shoot a series of raw images with different exposures and then average them. I've posted the dumb little "average the last two raws" script I showed in class as an example of using the raw development facilities, rawavg.lua. The self-adjusting part has to do with the choice of how many exposures and at what values.
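In that spirit, here is a minimal, hypothetical sketch of the averaging step using CHDK's raw merge API (raw_merge_start / raw_merge_add_file / raw_merge_end). The helper name, the file paths, and the desktop stubs are my own additions for illustration; they are not taken from rawavg.lua, and the real constant values are defined by CHDK.

```lua
-- Hypothetical sketch of averaging raw files with CHDK's raw merge API.
-- The stubs below exist only so the logic can run off-camera; on the camera,
-- CHDK provides the real raw_merge_* functions and the merge-operation constant.
if raw_merge_start == nil then
  merged = { files = {} }                  -- records what would have been merged
  RAW_OPERATION_AVERAGE = 1                -- placeholder; CHDK defines the real value
  function raw_merge_start(op) merged.op = op end
  function raw_merge_add_file(name) table.insert(merged.files, name) end
  function raw_merge_end() end
end

-- Average a list of raw files into the camera's raw buffer.
function average_raws(files)
  raw_merge_start(RAW_OPERATION_AVERAGE)
  for _, f in ipairs(files) do
    raw_merge_add_file(f)
  end
  raw_merge_end()
end

-- Example call; the DCIM paths are made up for illustration.
average_raws({ "A/DCIM/100CANON/CRW_0001.DNG", "A/DCIM/100CANON/CRW_0002.DNG" })
```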
Normally, you would program the camera to capture a fixed number of images with the exposure varied (by changing shutter speed): for example, a shot at the default exposure, another at -2EV, and a third at +2EV. However, I want you to be a bit more clever than that. You're going to take a scene-dependent number of images based on covering the full scene DR. Basically, the idea is to cover the complete brightness range in the image. This can be done by sampling the image histogram using get_histo_range(lo,hi), which returns the percent of pixels within the given value range... but only for the last image shot (i.e., the raw buffer contents). It is very slow, so you have to enable it by setting shot_histo_enable(1)... and you don't want to leave it on when not needed.
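To make the histogram sampling concrete, here's a hedged sketch of checking both tails of the histogram after a shot. The bin ranges assume a 10-bit (0-1023) scale and the 1% thresholds are arbitrary illustrations -- check the CHDK scripting reference for your build. The stubs at the top exist only so the fragment runs off-camera.

```lua
-- Sketch: after a shot, ask what fraction of pixels sit near black and near
-- saturation. The stubs below are desktop stand-ins for CHDK's real functions.
if shot_histo_enable == nil then
  function shot_histo_enable(on) end
  function get_histo_range(lo, hi) return 2 end  -- pretend 2% falls in any range
end

shot_histo_enable(1)          -- histogram collection is slow; enable only as needed
-- shoot()                    -- on the camera, the shot happens here
dark   = get_histo_range(0, 31)        -- % of pixels near black (assumed 10-bit bins)
bright = get_histo_range(992, 1023)    -- % of pixels near saturation
shot_histo_enable(0)          -- don't leave it on when not needed

-- Crude decisions: extend the bracket while either tail is heavy.
need_more_under = (bright > 1)   -- >1% clipped highlights: shoot darker frames
need_more_over  = (dark > 1)     -- >1% crushed shadows: shoot brighter frames
```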
There are six parameters for your script:
That's a lot of parameters -- but normally they should all be fine with some default settings. It is up to you to pick good default values. However, I'll give you a few hints. The step size should probably be around 2EV, or 192 APEX96 units. The lens probably can't deliver decent contrast for more than about 20EV, so setting the maximum number of shots taken above or below to 5 should cover everything you can get (meaning a total of no more than 11 shots taken). That's pretty generous: most camera HDR modes would be the equivalent of limiting m to just one or two. The values for the other parameters will take some tweaking, but be aware that the number of dark or bright pixels will probably never be zero -- give yourself a reasonable range. The idea is simple enough:
Obviously, the above algorithm doesn't always result in the same overall brightness as the default exposure -- the number of shots underexposed is not necessarily equal to the number overexposed. That's as it should be. Your script can keep the raw images around, so the user could always combine and tone-map them differently in postprocessing if they wish. There is also a potentially big quality problem if the camera or scene moves during these captures. Again, don't worry about it; a user could always align the images in postprocessing, but doing alignment in-camera would make your Lua script unusably slow. For best quality, mount the camera on a tripod or hold it firmly on some solid surface during the exposure sequence.
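As a concrete (and purely illustrative) sketch of how such a bracket might be planned in APEX96 units: the function name and the all-under-then-all-over ordering below are my own choices, not a required design.

```lua
-- Given a step in APEX96 units and how many shots are needed under and over,
-- return the shutter-speed offsets relative to the metered exposure.
-- Higher tv96 means a faster shutter, i.e., a darker frame.
function bracket_offsets(step96, n_under, n_over)
  local offs = { 0 }                                        -- default exposure first
  for i = 1, n_under do offs[#offs + 1] =  i * step96 end   -- darker frames
  for i = 1, n_over  do offs[#offs + 1] = -i * step96 end   -- brighter frames
  return offs
end

-- 2EV steps (192 APEX96), e.g. two under and one over:
plan = bracket_offsets(192, 2, 1)     -- {0, 192, 384, -192}
-- On the camera, each shot would look something like:
--   set_tv96_direct(base_tv96 + off); shoot()
```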
One last note: I don't claim that the sequence of exposures described above is optimal. If you think you know a better sequence, feel free to use it. For example, perhaps interleaving under and over exposures would be better than doing all the underexposed images first? Or would it?
You have already played with CHDK Lua scripting... this is just a fancier script. It's probably 2-3 pages long. The biggest complications are:
You will be submitting source code (for your Lua script, autohdr.lua), a make file (which does nothing much for this project), and a very short implementor's notes document, formatted roughly as described here, that discusses any issues in implementation or problems with functionality.
For full consideration, your project should be submitted no later than the start of class on November 2, 2017. Submit your .tar or .tgz file here: