Experience and interaction designer Bjørn Karmann has created Paragraphica, which he describes as a context-to-image camera that uses location data and AI to ‘visualize a “photo” of a specific place and moment’. A video of Karmann using his physical prototype has sparked debate on Twitter over everything from its purpose and privacy implications to its resemblance to a star-nosed mole and what it means for the future of photography.
Karmann gives more detail on how Paragraphica works on his website. The camera collects data about its location through open APIs, drawing on factors such as the address, time of day and weather to compose a paragraph, which it then converts into a photo.
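The data-to-paragraph step might be sketched as follows. This is a minimal illustration, not Karmann's actual code: the function name, parameters and sentence template are all assumptions about how location, time and weather data could be joined into a descriptive prompt.

```python
def compose_paragraph(address, weather, temp_c, time_of_day, nearby_places):
    """Hypothetical sketch: join location, time and weather data
    into a descriptive paragraph for a text-to-image model."""
    nearby = ", ".join(nearby_places)
    return (
        f"A photo taken at {address} in the {time_of_day}, "
        f"in {weather} weather at {temp_c} degrees Celsius, "
        f"with {nearby} nearby."
    )

# Example with made-up data of the kind open APIs could supply
prompt = compose_paragraph(
    "Vesterbro, Copenhagen", "overcast", 16, "early afternoon",
    ["a coffee shop", "a bicycle lane"],
)
print(prompt)
```

A paragraph like this would then be passed to an image-generation model such as Stable Diffusion, which Karmann's build uses.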
Introducing – Paragraphica! 📡📷
A camera that takes photos using location data. It describes the place you are at and then converts it into an AI-generated "photo".
— Bjørn Karmann (@BjoernKarmann) May 30, 2023
There are three physical dials on the camera that let you control the data and AI parameters. The first dial is intended to be similar to the focal length of an optical lens, but instead controls the radius (in meters) of the area the camera searches for places and data. The second dial is meant to be comparable to film grain, and the third dial controls how closely the AI follows the paragraph.
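One way to picture the dials is as knobs over the parameters of a typical text-to-image request. The sketch below is speculative: the field names (`seed`, `guidance_scale`) are modeled on common diffusion-model APIs, not taken from Paragraphica's actual implementation, and the radius dial would govern the data-gathering step rather than the image request itself.

```python
from dataclasses import dataclass

@dataclass
class DialSettings:
    radius_m: int     # dial 1: search radius (meters) for nearby places and data
    grain_seed: int   # dial 2: film-grain analogue, treated here as a noise seed
    guidance: float   # dial 3: how closely the image follows the paragraph

def to_image_request(prompt: str, dials: DialSettings) -> dict:
    """Hypothetical payload for a text-to-image API; field names are assumptions."""
    return {
        "prompt": prompt,
        "seed": dials.grain_seed,
        "guidance_scale": dials.guidance,  # higher = stricter adherence to the text
    }

request = to_image_request(
    "A photo of a quiet street on an overcast afternoon...",
    DialSettings(radius_m=200, grain_seed=42, guidance=7.5),
)
```

In this framing, turning the third dial up would raise `guidance_scale`, pushing the generated image closer to the paragraph's literal description.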
It is no coincidence that the camera resembles a star-nosed mole, which, like Paragraphica, perceives the world in a way that does not depend on light. Karmann has cited the animal as an inspiration and a perfect metaphor for how AI’s perception of the world ‘can be nearly impossible to imagine from a human perspective.’
The camera was created with a Raspberry Pi 4, a 15-inch touchscreen, a 3D-printed housing and custom electronics, using Noodl, Python and the Stable Diffusion API.