Sure, photographers will still control aperture and exposure time — this just removes a constraint at one end: if you want shallow depth of field and motion blur, you could use this method (and get extra bits) rather than an ND filter.
Exactly. This is huge, and I was wondering why people haven't done it already. Or, for example, why don't cameras currently take multiple snapshots of the sensor? If I'm exposing for 2 seconds, read the sensor at 0.1 sec (for the very bright lights) and again at 2 seconds for the darker parts. That way, I can make an HDR image without having to take another photo.
Is reading the sensor destructive to the data, somehow?
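Roughly what I have in mind, as a toy sketch in Python, assuming the early readout is non-destructive and ignoring noise entirely (the full-well count, the readout times, and the toy scene are all made-up numbers):

```python
import numpy as np

# All numbers here are made up for illustration.
FULL_WELL = 4095             # counts at which a pixel clips
T_EARLY, T_FINAL = 0.1, 2.0  # seconds: early (non-destructive) and final readouts

def merge_readouts(early, final):
    """Combine an early readout with the final one into a single
    radiance estimate in counts per second."""
    early = early.astype(float)
    final = final.astype(float)
    clipped = final >= FULL_WELL  # pixels that blew out by the end of the exposure
    return np.where(clipped, early / T_EARLY, final / T_FINAL)

# Toy scene: a bright light next to deep shadow (true counts per second).
scene = np.array([[30000.0, 20.0],
                  [  800.0,  3.0]])
early = np.clip(scene * T_EARLY, 0, FULL_WELL)
final = np.clip(scene * T_FINAL, 0, FULL_WELL)
print(merge_readouts(early, final))  # [[30000., 20.], [800., 3.]]
```

The bright pixel clips long before 2 seconds, but the 0.1 s readout still holds a usable value for it, while the dark pixels get the full 2 seconds of signal.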
A sensor with per-pixel reset to zero on readout would seem to be ideal — then you could get cumulative counts without wasting valuable photon landing space on the die.
Of course - that's the current standard way to produce HDR photos. The advantage here is being able to do it all in a single image, which prevents mismatches between images due to subject or photographer movement.
Yes, but if I understand correctly, unless those images occur at the same time (or really, really close) you run into issues involving subject and camera motion.
Because the images would look funny. Imagine you're capturing someone running - at frame 1, they would be at position x, and at frame 2, they'll be at position x + 1. If you try to stack them together, you'll get a weird ghosting effect.
Motion blur, but different parts of the image would be exposed differently due to the nature of HDR, resulting in an odd look. Imagine that, instead of a runner, you're panning from a dark to a light scene.
That's just really a completely misguided statement. No matter how much you fumble with aperture and shutter speed, there's simply no way to expand the camera sensor dynamic range without taking multiple exposures.
Besides, I have never seen "ordinary people" fumble with aperture size and exposure length, so that's a solved problem. Sometimes it gives "wrong" results, like exposing for the highlights rather than the shadows; that's what this technology solves.
> there's simply no way to expand the camera sensor dynamic range without taking multiple exposures
That's the point of this paper†. They expand dynamic range by having the detectors wrap around (discarding high bits) and then recover the high bits computationally.
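To illustrate the wrap-and-recover idea in a toy 1D form (this is not the paper's actual reconstruction, which works on 2D images with a more robust algorithm; the modulus and the neighbour-difference assumption here are purely illustrative):

```python
import numpy as np

MODULUS = 256  # pretend the sensor keeps only the low 8 bits and wraps around

def unwrap_row(wrapped):
    """Recover absolute counts from modulo-wrapped counts, assuming
    neighbouring pixels differ by less than MODULUS / 2 (the same idea
    as phase unwrapping; the paper's real method is more involved)."""
    out = wrapped.astype(float)
    offset = 0.0
    for i in range(1, len(out)):
        diff = wrapped[i] - wrapped[i - 1]
        if diff > MODULUS / 2:      # wrapped downward between neighbours
            offset -= MODULUS
        elif diff < -MODULUS / 2:   # wrapped upward between neighbours
            offset += MODULUS
        out[i] = wrapped[i] + offset
    return out

# A smooth ramp whose range exceeds 8 bits...
true = np.linspace(0, 1000, 50)
wrapped = np.mod(true, MODULUS)       # what the sensor would actually report
recovered = unwrap_row(wrapped)
print(np.allclose(recovered, true))   # True: the high bits come back
```

The recovery only works because the scene is assumed locally smooth relative to the modulus; that smoothness prior is what stands in for the discarded high bits.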
> No matter how much you fumble with aperture and shutter speed, there's simply no way to expand the camera sensor dynamic range without taking multiple exposures.
Sure you can - you expose for the highlights and then you push the shadows in post with a crazy exponential curve. I do it all the time on digital. People used to do it via print manipulation too (filter grading, split printing, dodging and burning), except back then you exposed for the shadows and controlled the highlights (since you were capturing a negative, not a positive).
What would normally have been indistinguishable blacks are pushed into the midtones and you get a flat, grainy image. It just looks like garbage because noise gets out of hand and you lose all your color depth, but there's more dynamic range there in the sense of there being more stops of light crammed onto the final image.
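Concretely, the push amounts to something like this (linear data normalized to [0, 1]; the 3-stop lift and the 2.2 gamma are arbitrary stand-ins for whatever curve you'd actually dial in):

```python
import numpy as np

def push_shadows(linear, stops=3.0, gamma=2.2):
    """Expose-for-the-highlights workflow, sketched: take linear sensor
    data in [0, 1], lift the shadows by `stops` stops, then apply a
    display gamma. Shadow noise gets lifted right along with the signal,
    which is why the result looks flat and grainy."""
    lifted = np.clip(linear * (2.0 ** stops), 0.0, 1.0)  # shadow push
    return lifted ** (1.0 / gamma)                       # output transform

# A pixel sitting at 1/16 of full scale (4 stops down) ends up solidly
# in the midtones after a 3-stop push:
print(push_shadows(np.array([1 / 16])))  # ~0.73 instead of ~0.28 without the push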
There is a physical limit for the camera or the film beyond which you cannot actually squeeze any more accuracy out of the ADC, but standard shooting conditions are nowhere near that limit.
Some DSLRs and MILCs (and I think some semi-pro compacts too) have a wide enough dynamic range that they can work an HDR image from a single RAW file. Catch is, you have to shoot RAW and then process it with specific software. When it's done in-camera, it is always done through multiple exposures. Even without specific software, you can work the effect from a single RAW file to get "multiple exposures" from it and then combine them into an HDR image.
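A rough sketch of that single-RAW workflow (the exposure offsets, the gamma, and the naive mid-grey weighting are all stand-ins; real tools do the blend with multi-scale exposure fusion, e.g. Mertens-style merging):

```python
import numpy as np

def fake_brackets(linear, stops=(-2, 0, 2), gamma=2.2):
    """Develop one linear RAW-style image at several exposure settings,
    i.e. the "multiple exposures from a single RAW file" trick."""
    return [np.clip(linear * 2.0 ** s, 0.0, 1.0) ** (1 / gamma) for s in stops]

def fuse(brackets):
    """Tiny exposure-fusion stand-in: weight each developed image by how
    close its pixels sit to mid-grey, then take a weighted average.
    (Real implementations blend across scales; this is just the idea.)"""
    stack = np.stack(brackets)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2)) + 1e-6
    return (weights * stack).sum(axis=0) / weights.sum(axis=0)

# Toy "RAW": linear values spanning deep shadow to near clipping.
linear = np.array([0.002, 0.02, 0.2, 0.9])
print(fuse(fake_brackets(linear)))
# Each pixel ends up dominated by whichever "development" rendered it best:
# the bright pixel stays unclipped, the shadows get pulled up toward midtone.
```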