Friendly reminder: Everything that this new #OpenAI camera assistant / model can recognize and diagnose will, in the short-term future, be in all #cameras in public spaces everywhere …
The screenshot below features Patrick Lichtsteiner and his work on mimicking retinal circuits in the design of the dynamic vision sensor (DVS), an event-based camera in which each pixel emits the log difference of light intensity between time t and t-1 (the event), rather than a conventional camera frame. This has extraordinary implications for visual processing, data transfer bandwidth, and data storage.
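To make the principle concrete, here is a toy numpy sketch of DVS-style event generation from two frames. This is only an illustration of the log-difference idea, not the actual analog pixel circuit; the threshold value and function names are my own assumptions.

```python
import numpy as np

def dvs_events(frame_prev, frame_curr, threshold=0.2):
    """Toy DVS sketch: emit ON/OFF events wherever the log-intensity
    change between two frames exceeds a threshold. A real DVS does this
    asynchronously and in analog, per pixel, without frames."""
    eps = 1e-6  # avoid log(0) on dark pixels
    dlog = np.log(frame_curr + eps) - np.log(frame_prev + eps)
    on = np.argwhere(dlog > threshold)    # pixels that got brighter
    off = np.argwhere(dlog < -threshold)  # pixels that got darker
    return on, off

# Usage: two 4x4 "frames" where a single pixel brightens.
prev = np.full((4, 4), 0.5)
curr = prev.copy()
curr[1, 2] = 1.0  # one pixel doubles in intensity
on, off = dvs_events(prev, curr)
```

Only the changed pixel produces an event; the static background produces nothing, which is where the bandwidth and storage savings come from.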
Having determined that the DVS pixel noise is limited to 2x the shot noise, Tobi's group built a "Scientific DVS" targeting, e.g., very fast imaging of neural activity with low noise. They did this by tweaking the DVS pixel circuit and also binning 4 pixels together for spatial integration.
The result: 10x more sensitive.
Looking forward to seeing applications in neuronal activity imaging, which seems ideally suited for event-based imaging: large fields of view where mostly nothing changes, with a few very sparse but fast-changing pixels, right where neurons are active.