
Reframing reality: New AI features in photography

What does photography mean anymore? The idea of photography has changed a lot since I took a photography course, as part of my university curriculum, almost a decade ago. Smartphone image resolution and processing capabilities were getting better then, but people still relied on professional cameras and editing tools like Adobe Photoshop.
Good professional software was difficult to get your hands on. For one, it was expensive, and you had to power through YouTube tutorials to learn the tools. Second, editing apps for smartphones were inadequate; anything beyond applying filters was a bonus.
I was particularly poor at masking, layer manipulation and other advanced Photoshop techniques, which need dexterity and an understanding of complex functions. So I swapped my editing projects for some grunt work with a friend, just to avoid Photoshop. Pro-grade editing required technical expertise as well as an aesthetic sense.
Over the subsequent years, smartphones democratised photography and editing tools improved substantially. But now, with the AI wave, smartphones are disrupting photography again.
Pixel perfect
You can see that in the recently introduced Google Pixel 9 series. Features like Magic Editor and Best Take are not new, but they’ve got better. What’s new is that the phone lets you add or remove background elements and objects in your photos with just a text prompt, in a feature called ‘Reimagine’, and that users can merge two different pictures under ‘Add me’, as an alternative to a selfie or to asking a stranger to take your group photo.
You can also tweak the framing by generating a similar-looking background in a feature called ‘Auto frame’. Users can type in commands and bring trees or dark clouds into their photos. These features have sparked a great deal of curiosity among gadget enthusiasts.
The features are easy to use because of the tight integration between the camera app and Google’s other apps on Pixel 9 phones. Pixel phones pair Google’s AI-specialised Tensor chips with on-device AI (Gemini Nano) and applications built on top of that.
What does this mean?
Ananthakrishnan L, a senior photojournalist in Chennai who has been in the business since 1991, has seen it all. “In the days of black and white photography, if an image got underexposed, we could correct for that by printing it on hard paper instead of glossy paper. And then Adobe Photoshop 4.0 came (in 1996), and I was astonished at what it could do for editing images,” he says.
As a photojournalist, he is worried about image manipulation using AI; he thinks it could mislead audiences and create problems. “Subjects can be altered, tampered with and used to destroy someone’s reputation, and there is very little control now over such misuse. But the features could be useful for modelling, corporate and wedding shoots,” he says.

K Prasanth Kumar, a media specialist with corporate portfolios, says the features are not producing great results right now, but they will let people without technical skills edit images using prompts.
Laya Mathikshara Mathialagan, an artist, agrees that AI has made editing easier for individuals, but raises concerns about its impact on user behaviour. “Just like social media filters, these kinds of inbuilt tools can make everyone conscious of body image and push for the ideal beauty standards set by society – like a face without pimples or textures.”
Senthil Nayagam, founder of Muonium AI, a GenAI startup working in media-tech, says the tools will help all of us get creative without needing technical skills. “I can have my own imprint on the creative output,” he says.