We propose learning the per-pixel exposures of a sensor, subject to its hardware constraints, jointly with decoder networks that reconstruct HDR images and high-speed videos from the resulting coded images.
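As a minimal sketch of this idea (not the paper's actual implementation), the example below jointly optimizes a learnable per-pixel exposure pattern and a small convolutional decoder; the image-formation model, the hardware constraints (here a simple sigmoid to keep exposures in [0, 1]), and all module names are illustrative assumptions.

```python
# Hedged sketch: jointly learning a per-pixel exposure pattern and a decoder.
# CodedExposureSensor, Decoder, and the simplified sensor model are assumptions,
# not the implementation from the paper.
import torch
import torch.nn as nn

class CodedExposureSensor(nn.Module):
    """Simulates a sensor whose per-pixel, per-frame exposure is learnable."""
    def __init__(self, height, width, num_frames):
        super().__init__()
        # Unconstrained logits; a sigmoid keeps exposures in [0, 1],
        # a crude stand-in for real hardware constraints.
        self.exposure_logits = nn.Parameter(torch.randn(num_frames, height, width))

    def forward(self, video):
        # video: (B, T, H, W) high-speed frames, integrated with learned exposures.
        exposures = torch.sigmoid(self.exposure_logits)       # (T, H, W)
        coded = (video * exposures.unsqueeze(0)).sum(dim=1)   # (B, H, W)
        return coded.unsqueeze(1)                             # single coded image

class Decoder(nn.Module):
    """Small convolutional decoder reconstructing T frames from one coded image."""
    def __init__(self, num_frames):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, num_frames, 3, padding=1),
        )

    def forward(self, coded):
        return self.net(coded)

# Joint optimization of the sensor exposures and the decoder weights.
T, H, W = 8, 64, 64
sensor, decoder = CodedExposureSensor(H, W, T), Decoder(T)
optimizer = torch.optim.Adam(
    list(sensor.parameters()) + list(decoder.parameters()), lr=1e-4
)

video = torch.rand(4, T, H, W)              # stand-in training batch
coded = sensor(video)                       # simulate the coded capture
recon = decoder(coded)                      # reconstruct the high-speed frames
loss = nn.functional.mse_loss(recon, video)
loss.backward()
optimizer.step()
```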
An implicitly defined neural signal representation for images, audio, shapes, and wavefields.
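For intuition, a coordinate-based (implicit) representation can be sketched as an MLP that maps coordinates to signal values. The sine activations, layer sizes, omega_0 frequency, and fitting loop below are generic assumptions for illustration, not the exact architecture from the paper.

```python
# Hedged sketch of an implicit neural signal representation: an MLP mapping
# coordinates to signal values, here fitted to a single image.
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by a scaled sine activation (assumed design)."""
    def __init__(self, in_features, out_features, omega_0=30.0):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

class ImplicitRepresentation(nn.Module):
    """Maps (x, y) pixel coordinates to RGB values; other signals (audio,
    shapes, wavefields) only change the input/output dimensions."""
    def __init__(self, in_dim=2, hidden=256, out_dim=3, layers=3):
        super().__init__()
        blocks = [SineLayer(in_dim, hidden)]
        blocks += [SineLayer(hidden, hidden) for _ in range(layers - 1)]
        blocks += [nn.Linear(hidden, out_dim)]
        self.net = nn.Sequential(*blocks)

    def forward(self, coords):
        return self.net(coords)

# Fit the representation to one image by regressing its pixel values.
H, W = 64, 64
ys, xs = torch.meshgrid(
    torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij"
)
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)   # (H*W, 2) coordinates
image = torch.rand(H * W, 3)                            # stand-in target image

model = ImplicitRepresentation()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
for _ in range(100):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(coords), image)
    loss.backward()
    optimizer.step()
```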
This is the oral presentation of our Neural Sensor paper, accepted at ICCP'20 through the T-PAMI journal track.