Spotting AI Content: How to Avoid Being Fooled Online

With the incredible rise of AI technology in recent years, more people than ever can now create both written and visual content without needing any technical skills. Easily accessible software like ChatGPT, along with updates to popular image software like Adobe Photoshop, means that anyone can create interesting content online.

With this newfound accessibility, more and more people are using AI to create content for their businesses, blogs and social media. While many use cases are perfectly harmless, in other cases AI content is being used for nefarious purposes such as false advertising and spreading fake news.

It’s therefore important that internet users learn how to spot AI content, both written and visual.

Spotting AI-Generated Written Content

Of written and visual content, AI-written text is often the harder to spot. Copying words is easier than copying detailed images, so AI has become pretty good at it. If you’ve ever had a chat with a service like ChatGPT or Bard, you’ll understand just how human they can sound. Even when you contact companies such as Amazon or your car insurer through their online chat systems, you are more than likely talking, at least initially, to an AI bot.

So how do you spot when you’re reading AI content? Here are some tells to look out for:

  1. Overly Polished and Repetitive Language

AI loves using “big” words. If a piece of content is written with AI, it’ll reach for fancy adjectives rather than varying between plainer and fancier words: instead of “the business meeting was important” it’ll say “the business meeting was crucial”, and instead of “the document still needed improving” it’ll write “the document still needed enhancing”. Obviously these words are in a human writer’s vocabulary, but if a piece of content consistently uses the “bigger” words instead of a more varied vocabulary, that’s a good indication the article is AI-generated.

On top of that, repetition is actually one of the best ways to spot an AI-written piece of content. AI loves using “lists of three” to describe things, for example, “the dog was big, friendly and happy”, and it’ll use this structure over and over again. So if you’re reading a piece of content where things constantly have three descriptors, the chances of it being written by AI are pretty high. Human writers are more likely to vary their sentence structure and length, which is something AI doesn’t seem to factor in when writing.

  2. Lack of Personal Experience or Anecdotes

You may not actively notice it when reading a regular human-written article, but such pieces often contain personal experiences, references or anecdotes. Perhaps the writer uses a pop culture reference or talks about a relatable situation.

AI struggles with this. It hasn’t had personal experiences, and it can’t see the similarities between a situation and a moment from pop culture. If a piece of content seems very bland and lacking in any sort of personality, it could be written by AI.

  3. Perfect Grammar but Unnatural Phrasing

AI text will almost always be grammatically flawless, but the phrasing might not be quite right. Certain sayings or words might not be used in quite the right way, a bit like someone who learned a language from textbooks rather than through natural conversation.

Spotting AI-Generated Images and Videos

While AI-written text is very impressive and hard to spot, AI still hasn’t quite got the hang of visual content. There are so many nuances in images that we never think about, because to us that’s just how the world looks, but for AI, perfectly replicating all those little details is hard to get right.

Having said that, I still scroll through places like Facebook and see plenty of accounts posting AI images, with users in the comments believing them to be real. So if you’re struggling to tell real from fake, here are the easiest ways to tell if an image is AI-generated:

  1. General Glitches and Artifacting

The biggest tell that an image has been created using AI is the background. Even if the AI has got the subject of the image right, it will have paid less attention to the background and smaller details.

As such, AI images will often have issues like walls warping into each other, people’s limbs combining, or even faces that look like they’re out of a horror film!

Even the subject itself can suffer from this, be it too many limbs on a person or strange, uneven details on cars and buildings. AI makes a lot of mistakes, so once you notice one you’ll start seeing more and more.

Take a look at the photo below: how many mistakes can you spot?

The few things that stand out to me straight away are:

  • The cat’s face is distorted and looks more like a painting than a photo.
  • The wood panelling on the left behind the sofa mixes with the curtain when it passes behind the lamp.
  • The frames of the window at the top in the curved section aren’t straight and look like they’ve been drawn on.

  2. Inconsistent Facial Features

When you think about it, human faces are actually very detailed, and as such AI really struggles with them. If you’re not sure whether a photo is real, take a look at the face and the rest of the body. Hands with too many fingers, or teeth that seem to blend into one another, are easy ways to tell that an image is AI-generated.

AI also creates faces with near-perfect skin. Even the best-looking people in the world have slight imperfections, but AI struggles with these little details, so AI faces will often have perfect, airbrushed skin that doesn’t look real, making them look a bit like cartoon characters or CGI.

  3. Unnatural Situations

You know how the classic saying goes: “if it looks too good to be true, then it probably is”. The same applies to AI images. If the image you’re looking at seems unbelievable, it most likely isn’t real. You might look at an image of a cat surfing and think “yeah, that’s obviously not real”, but the number of people who get fooled by images like that is alarmingly high!

If you’re not sure whether something is real, do a quick Google search for the same image or scenario and see if it exists. If it’s real, lots of people will be talking about it, so if you can’t find multiple pictures or articles about it from reputable outlets, it almost certainly isn’t real.

  4. Lighting That’s Too Perfect

A detail that people often overlook in AI images is the lighting. Take a look at the image below:

Notice how the railings cast shadows on the wall on the left, and how there’s more than one light source (one behind the camera and one from the other room). Natural light is something that AI is currently incapable of properly understanding. Now have another look at this AI-generated image from earlier:

While it gives it a good go, AI doesn’t understand the nuances of shadows and lighting, and it tends to render everything in high contrast.

It doesn’t take into account the light from the lamp, and everything is bathed in warm yellow sunlight. In reality the light would be much flatter and less colourful, with more shadows and dark areas. It’s as if the AI wants you to be able to see all the details in the picture, so none of the shadows are dark enough. The bookshelf in particular should have a shadow cast on it from the plant that sits between it and the light source, but it doesn’t!

Interestingly, if you were to put this image into Photoshop and average out all the colours, you’d end up with a flat, neutral mid-grey. Do the same with a real photo and the average colour and brightness will vary from shot to shot, skewing brighter or darker depending on the scene. This is because AI images, unlike real photos, tend to have a roughly 50:50 spread of dark and light areas. There is a really interesting video by the YouTube channel Corridor Crew which explains this phenomenon better than I can, which you can watch here.
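If you’d rather not open Photoshop, here’s a minimal sketch of the same “average the colours” check in Python, assuming the Pillow library is installed. The file names are just placeholders for whatever images you want to test, not files from this article:

    # A rough sketch of the "average the colours" check using the Pillow library.
    # The file names below are placeholders, not real files.
    from PIL import Image, ImageStat

    def average_colour(path):
        """Return the mean RGB colour and overall brightness (0-255) of an image."""
        img = Image.open(path).convert("RGB")
        stat = ImageStat.Stat(img)
        mean_rgb = tuple(round(channel) for channel in stat.mean)  # per-channel average
        brightness = sum(stat.mean) / 3                            # rough overall brightness
        return mean_rgb, brightness

    if __name__ == "__main__":
        # Compare a suspected AI image against a known real photo (placeholder names).
        for name in ("suspected_ai_image.jpg", "real_photo.jpg"):
            rgb, brightness = average_colour(name)
            print(f"{name}: mean RGB {rgb}, brightness {brightness:.0f}/255")

If the brightness of images you suspect are AI keeps landing near the middle of the range (around 128), that lines up with the 50:50 pattern described above, although treat it as a rough heuristic rather than a reliable test.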

Tools to Help Detect AI Content

While manual detection is a useful skill to have, being able to get a second opinion is always useful. Here are some tools to check for AI content if you’re still unsure:

  • AI Text Content Detectors: Tools like ZeroGPT and QuillBot are good ways to get an idea of whether something might be AI-generated; however, they aren’t perfect. I’ve had pieces of content that these checkers flagged as 100% AI despite being written entirely by me. Even this article has sections flagged as AI, despite being written by a human from start to finish!

    A good rule of thumb: if multiple different checkers give a piece of content a score of 50% or more, the chances of that content being AI-written, or at least partly AI-written, are very high.

  • Reverse Image Search: Using Google’s reverse image search is a handy way to check whether an image is real. If Google comes back with lots of results, or better still, different angles of the same subject or event, then the likelihood is high that the image is real.
  • Deepfake Detectors: Tools like Deepware and Reality Defender help analyse images and videos for AI creation or manipulation.

Conclusion

As AI-generated content becomes more advanced, distinguishing between human and machine becomes increasingly difficult. However, by paying attention to the details and not taking things at face value, you can become more adept at spotting fake content. It’s a skill that might take time to learn, but it’s an important one that all internet users should have.

So the next time you’re reading an article or looking at an image online, take a closer look; you might just uncover an AI’s handiwork.
