Why can't AI tell time?
We're looking at you, marketing…
This last week I made an astonishing discovery.
The time in AI land is always 10 past 10. I know… it’s good that you were sitting down.
Let me explain.
If you ask the AI, ChatGPT for example, to generate you an image of an analogue watch at a specific time…
… let's say three o'clock, or 5 minutes past seven, or anything else… whatever it is…
… it will always generate an image of a watch or a clock with hands at the time of 10 past 10!
You can try this with any of the other models. You will get a similar result.
It doesn't matter how you phrase it or what time you ask for: it always shows you 10 past 10.
The marketing dept. It's always the marketing dept.
Now, it took me a little while to get my head around this; to figure out what was going on. But I think this is actually down to the training data and the bias that is now ingrained in the system.
These models have been trained on a massive amount of internet imagery. And it seems that the vast majority of images of watches and clocks that are online are marketing images.
As all clock faces are essentially the same…
- 12 numbers
- 2 hands
- a little logo just underneath the 12, which tells you who made it
… the marketing norm is to set the hands at approximately 10 past 10. The symmetry frames the logo neatly and makes for a pleasing shot.
This has been true since time immemorial… (sorry)
So consequently, AI image generation models struggle to show a clock face with any other time.
Even to the point where, if you ask a chatbot to describe the image it's just produced for you, it will parrot back the time you originally asked for… even though it's showing you 10 past 10.
What’s the point of this little anecdote?
Well, as you asked, this is a fantastic example of bias in models.
Bias is baked in
We all know that these tools are incredibly powerful. They can do a great many things. But it's a little reminder to all of us that when we apply AI tools in our day-to-day activities (not just image generation, but any kind of reporting task, text generation or data analysis) we need to remember that…
.. the AI isn't entirely neutral.
In its current form at least, the models reflect the historical patterns that they were trained on. Those patterns can be flawed; they can be misleading.
Therefore, as individuals, as well as organisations more broadly, we should always have one critical eye on the outputs that we are generating.
These tools do not simply generate correct answers every time. We need to be aware of the biases that may be feeding into our decisions.
AI is super powerful. And these tools are only going to become more powerful. This is just the beginning. But for the moment at least, AI is only as good as its training data, and the outcome depends on whether the humans using these tools apply them wisely and in good faith.
About the Author
This article was written by me! I'm CEO and co-founder of Dootrix, a pioneering software technology consultancy specialising in mobile applications, cloud-native solutions and digital innovation.
For more search "The Next Thing Now Podcast" on YouTube.