The importance of curating AI-generated content

In an era of pervasive machine learning, Generative AI has captured public fascination with accessible tools like Stable Diffusion and ChatGPT, enabling users to effortlessly craft realistic creative content. However, these tools can also unintentionally perpetuate biases, such as stereotypes, drawn from their training data.

So, while the potential of these tools is very enticing, it raises the question of whether they are suitable for public-facing, social experiences.

I spoke to Helen Bellringer, Associate Creative Director at Imagination, about the impact of Generative AI on the creative process. Helen addresses the allure of these tools, sheds some light on bias concerns, and explains how brands can use AI to proactively shape innovative experiences.

How have tools such as ChatGPT contributed to the increased use of Generative AI?

Generative AI has become the buzzword of the creative industry this year, making creative content production easier and more accessible than ever before. These tools stand out for their user-friendly interfaces, making it easy for users to harness the power of generative models. With just a few simple prompts, people can create impressively realistic and imaginative content, from visuals to text, on free platforms like Midjourney or ChatGPT.

This accessibility has contributed to the widespread adoption of generative AI, allowing a broader audience, even those completely inexperienced, to explore and leverage its creative potential.

What are the potential risks of using AI in public-facing experiences?

Because these AI models process vast amounts of data, they often inherit biases that are present in their training datasets. For example, search results for terms like 'racing driver' on platforms like Getty Images tend to favour individuals who are white and male.

This bias is then embedded within these large language models, posing a risk when used in public-facing scenarios, where output lacking careful curation may perpetuate harmful stereotypes or favour certain genders or racial groups.

Could you give an example of bias that has emerged as a result of AI?

We were experimenting with various models to see if we could turn a photo into a character from a film or game. One model, which claimed to 'turn you into a character from Grand Theft Auto', unfortunately raised some alarming red flags: individuals with darker skin tones were more likely to be transformed into characters embodying negative stereotypes, such as 'gangsters'.

This highlighted a concerning racial bias: the system was associating societal roles with appearance, reinforcing negative stereotypes.

How do you address these biases within AI systems at Imagination Labs?

We believe in making AI diverse and inclusive, and as such we are experimenting with and training our own models. If we can control the input, we have a better chance at an unbiased output. Essentially, good input data equals good results.
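
As a rough illustration of what 'controlling the input' can look like in practice, the short Python sketch below audits whether a fine-tuning dataset over-represents any gender or ethnicity before training begins. The records, attribute names and threshold here are illustrative assumptions for the example, not Imagination's actual tooling.

from collections import Counter

# Hypothetical training records: each image is tagged with the attributes
# we want represented evenly in the fine-tuning data.
training_records = [
    {"image": "driver_001.jpg", "gender": "female", "ethnicity": "Black"},
    {"image": "driver_002.jpg", "gender": "male", "ethnicity": "white"},
    {"image": "driver_003.jpg", "gender": "male", "ethnicity": "East Asian"},
    # ...in practice, thousands more records
]

def audit_balance(records, attribute, tolerance=0.15):
    """Report each value's share of the dataset and flag over-represented ones."""
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    shares = {value: count / total for value, count in counts.items()}
    even_share = 1 / len(counts)  # share each value would hold if perfectly balanced
    skewed = {v: round(s, 2) for v, s in shares.items() if s > even_share + tolerance}
    return shares, skewed

for attribute in ("gender", "ethnicity"):
    shares, skewed = audit_balance(training_records, attribute)
    status = f"over-represented: {skewed}" if skewed else "roughly balanced"
    print(f"{attribute}: {status}")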

Obviously this is a long-term plan, and for the immediate future we're using publicly available models, where we're careful about the prompts we use, engineering them to ensure the system generates content that is inclusive across gender, ethnicity and age. We're hands-on throughout the AI process, making sure our models learn from fair data, use a mix of prompts for varied outcomes, and have a strict quality check.
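
To illustrate the kind of prompt mixing and quality checking described above, here is a minimal Python sketch. The attribute pools, the build_prompt_mix helper and the blocklist are hypothetical examples rather than Imagination's actual pipeline, and the image-generation call itself is deliberately left out.

import itertools
import random

# Illustrative attribute pools used to spread prompts across groups rather than
# leaving the model's defaults to decide who appears in the output.
GENDERS = ["woman", "man", "non-binary person"]
ETHNICITIES = ["Black", "East Asian", "South Asian", "white", "Latina/Latino", "Middle Eastern"]
AGES = ["in their 20s", "in their 40s", "in their 60s"]

# Terms the quality check should never let through (hypothetical blocklist).
BLOCKLIST = {"gangster", "thug"}

def build_prompt_mix(subject, n=12, seed=7):
    """Sample a varied mix of prompts for one subject across the attribute pools."""
    rng = random.Random(seed)
    combos = list(itertools.product(GENDERS, ETHNICITIES, AGES))
    rng.shuffle(combos)
    return [
        f"portrait of a {ethnicity} {gender} {subject}, {age}, studio lighting"
        for gender, ethnicity, age in combos[:n]
    ]

def passes_quality_check(description):
    """Reject any generated caption or tag set containing a blocklisted stereotype."""
    return not any(term in description.lower() for term in BLOCKLIST)

for prompt in build_prompt_mix("racing driver"):
    print(prompt)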

Also, having diverse teams, listening to user feedback, and teaming up with others in the industry are crucial steps to keep biases in check when using AI in public spaces.

How do you envision the collaboration between AI and human intuition to create socially conscious experiences?

We believe in blending artificial intelligence and human intuition to create innovative and socially conscious experiences. By combining AI's computational abilities with human oversight, we can develop data-driven content that values diversity and inclusivity.

However, it's important for brands to understand that this blend isn't a one-size-fits-all solution. It needs constant refinement, adaptability, and ethical considerations. Brands should prioritise transparency and accountability to ensure that their use of AI aligns with societal values and avoids reinforcing biases.
