
Virtual try-on with Google AI: See how clothes look on different body types

Google, ever eager to embrace generative AI, is launching a new shopping feature that shows clothes on a lineup of real-life fashion models. As part of a wide range of updates to Google Shopping rolling out in the coming weeks, Google’s virtual try-on tool for apparel takes an image of clothing and attempts to predict how it would drape, fold, cling, stretch, and form wrinkles and shadows on a set of real models in different poses.

Virtual try-on is powered by a new diffusion-based model that Google developed internally. Diffusion models—which include the text-to-art generators Stable Diffusion and DALL-E 2—learn to gradually subtract noise from a starting image made entirely of noise, moving it closer, step by step, to a target.
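Stripped to its essence, the step-by-step denoising described above can be sketched as follows. This is an illustrative toy, not Google's model: `toy_noise_predictor` stands in for the trained neural network that, in a real diffusion model, estimates the noise present in the image at each step.

```python
import numpy as np

def toy_noise_predictor(x, t):
    # Stand-in for a learned network that estimates the noise in x at step t.
    # Here it simply guesses that 10% of the current signal is noise.
    return x * 0.1

def denoise(x, steps=50):
    """Iteratively subtract the predicted noise, moving x step by step
    from pure noise toward a (toy) clean target."""
    for t in reversed(range(steps)):
        predicted_noise = toy_noise_predictor(x, t)
        x = x - predicted_noise  # gradual, stepwise noise removal
    return x

noisy = np.random.randn(8, 8)  # start from an image made entirely of noise
clean = denoise(noisy)
```

In a real diffusion model the predictor is trained so that each subtraction moves the image toward the data distribution (here, photos of people wearing garments) rather than simply shrinking it.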

Google trained the model using many pairs of images, each of which showed a person wearing a garment in two different poses. For example, one image might show a person wearing a shirt standing sideways, while the other might show the same person facing forward. To make the model more robust (i.e., to combat visual defects such as folds that look misshapen or unnatural), the process was repeated using random pairings of garments and people.
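The paired-pose training setup described above could be sketched roughly like this. The names (`TrainingPair`, `make_batches`, `mismatch_fraction`) are hypothetical illustrations of the idea, not Google's actual pipeline:

```python
from dataclasses import dataclass
import random

@dataclass
class TrainingPair:
    """One example: the same person wearing the same garment in two poses."""
    pose_a: str  # e.g., path to an image of the person standing sideways
    pose_b: str  # e.g., path to an image of the same person facing forward

def make_batches(pairs, mismatch_fraction=0.5, seed=0):
    """Mix matched pose pairs with random garment/person pairings,
    the robustness trick described in the article."""
    rng = random.Random(seed)
    batch = []
    for pair in pairs:
        if rng.random() < mismatch_fraction:
            other = rng.choice(pairs)  # random person for this garment
            batch.append((pair.pose_a, other.pose_b))
        else:
            batch.append((pair.pose_a, pair.pose_b))
    return batch
```

The matched pairs teach the model how a specific garment drapes across poses; the random pairings expose it to mismatches so it generalizes instead of memorizing individual garment-person combinations.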

“You should feel just as confident shopping for clothes online,” wrote Lilian Rincon, senior director of consumer shopping product at Google, in a blog post announcing the feature. Virtual try-on technology is not new. Amazon and Adobe have been experimenting with generative apparel modeling for some time, as has Walmart, which since last year has offered an online feature that uses customers’ photos to model clothing.

AI startup AIMIRR takes the idea a step further, using real-time garment rendering technology to overlay images of clothing on a live video of a person. Google itself has piloted virtual try-on technology in the past, working with L’Oréal, Estée Lauder, MAC Cosmetics, Black Opal, and Charlotte Tilbury to allow Search users to try on makeup shades across an array of models with various skin tones.

As generative AI increasingly encroaches on the fashion industry, it has met with pushback from models who say it is exacerbating long-standing inequalities. Models are largely low-paid independent contractors who are responsible for high agency commission fees (around 20%), as well as business expenses such as plane tickets, group housing, and promotional materials required to land jobs with clients. Reflecting biased hiring preferences, the industry is also quite homogenous.

According to a 2016 survey, 78% of models in fashion advertisements were white. Levi’s, among others, has tested AI-generated custom models. In interviews, Levi’s defended the technology, stating that it would “increase the diversity of models shoppers can see wearing its products.”

However, the company did not respond to critics who asked why the brand did not recruit more models with the diverse characteristics it is seeking. In a blog post, Rincon stressed that Google opted to use real models — a diverse range, spanning sizes XXS-4XL and representing different ethnicities, skin tones, body shapes, and hair types. However, she did not address the elephant in the room: whether the new try-on feature might lead to fewer photo shoot opportunities for models down the line.

Abhinav is an experienced software engineer who has transitioned into the world of blogging. With over 13 years of expertise in the software industry, he brings a deep understanding of technical concepts and trends to his writing. Through TechyNewsNow, he provides valuable insights and practical advice for fellow professionals, combining technical knowledge with a passion for effective communication.
