Experience and interaction designer Bjørn Karmann has created Paragraphica, which he describes as a context-to-image camera that uses location data and AI to ‘visualize a “photo” of a specific place and moment’. A video of Karmann using his physical prototype has sparked debate on Twitter over everything from its purpose and privacy concerns, to the design’s resemblance to a star-nosed mole, to the future of photography.

Karmann gives more detail on how Paragraphica works on his website. He explains that the camera collects data about its location using open APIs, drawing on factors such as the address, time of day and weather to compose a paragraph, which it then converts into a photo.
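The article does not publish Karmann's actual code, but the described pipeline step of turning location data into a paragraph might be sketched roughly as follows. The function name, the template wording and the sample data are all illustrative assumptions, not Karmann's implementation:

```python
def compose_paragraph(address, time_of_day, weather, nearby_places):
    """Assemble a descriptive paragraph from location data.

    In Paragraphica this data would come from open APIs (address,
    time of day, weather, nearby places); here it is passed in
    directly so the sketch is self-contained.
    """
    places = ", ".join(nearby_places)
    return (
        f"A photo taken at {address} in the {time_of_day}. "
        f"The weather is {weather}. Nearby there is {places}."
    )

# Illustrative example with made-up data:
paragraph = compose_paragraph(
    address="Vesterbro, Copenhagen",
    time_of_day="early evening",
    weather="overcast with light rain",
    nearby_places=["a bakery", "a bicycle shop"],
)
print(paragraph)
```

The resulting paragraph would then be sent to an image-generation backend (the spec list below names the Stable Diffusion API) to be rendered as the final ‘photo’.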

There are three physical dials on the camera that let you control the data and AI parameters. The first is analogous to the focal length of an optical lens, but instead controls the radius (in metres) of the area the camera searches for places and data. The second dial is comparable to film grain, and the third controls how closely the AI follows the paragraph.
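One plausible reading of the three dials is that they map onto a search radius, a noise level and a diffusion guidance value. The sketch below makes that mapping concrete; the 0.0–1.0 dial range, the output ranges and the parameter names are assumptions for illustration, not Karmann's actual values:

```python
def dials_to_params(radius_dial, grain_dial, guidance_dial):
    """Map three dial readings (assumed 0.0-1.0) to pipeline parameters.

    radius_dial   -> search radius in metres for nearby places (dial 1)
    grain_dial    -> film-grain-like noise level (dial 2)
    guidance_dial -> how closely the image follows the paragraph (dial 3),
                     comparable to a guidance-scale value in Stable Diffusion
    """
    return {
        "radius_m": int(50 + radius_dial * 950),      # assumed 50-1000 m
        "noise": round(grain_dial, 2),                # assumed 0.0-1.0
        "guidance_scale": 1 + guidance_dial * 14,     # assumed 1-15 range
    }

# Example: all three dials at plausible mid-range positions.
params = dials_to_params(0.5, 0.2, 0.5)
print(params)
```

With this mapping, turning the third dial up would make the generated image track the composed paragraph more literally, much as a higher guidance scale does in common diffusion setups.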

The camera’s resemblance to a star-nosed mole is no coincidence: like Paragraphica, the animal has a way of seeing that does not depend on light. Karmann has cited it as an inspiration and a perfect metaphor for how AI’s perception of the world ‘can be nearly impossible to imagine from a human perspective.’

Lens-free camera that produces AI-generated photos using location data

Credit: Bjørn Karmann.

Paragraphica specifications:

Created with a Raspberry Pi 4, a 15-inch touchscreen, a 3D-printed housing and custom electronics, using Noodl, Python and the Stable Diffusion API.


Follow AP on Facebook, Twitter, Instagram, and YouTube.