After the event, we chatted and swapped war stories about cybersecurity (malware-riddled USBs in car parks, bazaars frequented by the military in Afghanistan, long social engineering plays, and so on), before the conversation turned to artificial intelligence bias and photo generation.
Text-to-image applications are quite extraordinary in 2024, far removed from two years ago, when results tended to fall short with flesh, face and limb issues. Apart from rendering text within images, which is still to be resolved, it's now often impossible to tell real from fake when it comes to AI photo generation.
You can create the most amazing photos and illustrations in under 20 seconds using just a few words, opening the door to a riot of creativity. But let's not forget: this is software interpreting what you want, taking inspiration from its dataset, and then deciding what to produce.
This is in contrast to a human creating digital art based on their own ideas and then using software as an assistant to bring their creation to life.
All of these images can be created, tweaked or changed according to various prompts and templates, with human involvement.
Yet there is a problem with T2P. Unless you're descriptive in the prompt, you'll most likely get a stereotypical result.
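To make that point concrete, here is a minimal sketch in plain Python. It calls no real T2P service; the function name and prompt strings are purely illustrative. The idea is simply that every detail you leave out of a prompt is a detail the model fills in from its (possibly biased) training data.

```python
def build_prompt(base_prompt, attributes=None):
    """Assemble a T2P prompt (illustrative only, not a real API).

    If no attributes are given, the prompt is underspecified and the
    model decides age, gender, ethnicity, setting, and so on from its
    dataset defaults. Spelling attributes out keeps that choice with you.
    """
    if not attributes:
        return base_prompt
    return base_prompt + ", " + ", ".join(attributes)

# Underspecified: the model falls back on whatever its dataset suggests.
print(build_prompt("a photo of a CEO"))

# Descriptive: the result is steered by you, not by the dataset.
print(build_prompt("a photo of a CEO",
                   ["a woman in her sixties", "natural lighting"]))
```

The second call prints `a photo of a CEO, a woman in her sixties, natural lighting`, and that extra detail is exactly what separates a deliberate image from a stereotyped one.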
There are many T2P applications out there, and they all operate independently: there are no rules, regulations or laws for them to adhere to, and they were all built using different datasets. Yet, despite this, they all produce biased results.
See our infographic below (with the actual AI prompts included) about why this matters.