Older Generations vs. AI.

If you’ve spent enough time on social media, you’ve probably come across AI-generated posts, especially on X. Many of these posts are obviously AI to our eyes, but not to the eyes of older generations. On nearly every AI post I see, several people in the comments have been fooled by the content, and when I visit their profiles, it is almost always someone over the age of 65. It isn’t only Boomers; plenty of others are fooled, including younger people. But the pattern seems much stronger among older people.

It’s the same reasoning behind why they can struggle with technology: confusion, poor judgment, and, what I think is one of the biggest contributors, cognitive decline. But other factors could matter just as much, such as how they take in information. Growing up with the internet, my generation was surrounded by clickbait, false information, and fake news, and by experiencing all of it we learned to question any information given to us on the web. Just by being online, most of us developed the ability to differentiate misinformation and disinformation from credible information. Many older people never developed that habit. During their formative years, I believe misinformation and disinformation were nowhere near the levels they are at now. I’m not saying they didn’t exist in newspapers and on the radio, or that they weren’t a problem then, but I believe there was far less of it than in society today. Older generations did not grow up having to question much of what they read early on, and I believe that habit, or the lack of it, explains why so many of the comments and so much of the engagement on AI posts comes from them.

When I looked for examples to cite in this blog, I realized I had been far too quick to judge older generations: many of the people commenting were not real. AI bots were posting images for a targeted audience, and then other AI bots were commenting on those posts to drive engagement from the people reading them. I’ve always considered myself good at judging whether something was AI-generated or not, so this confused me and made me question whether I was completely oblivious. I don’t think I was, but now that I think about it, I could see myself being misled on some occasions. It’s very odd to realize I may have fallen for the exact thing I was writing this post about without having a clue.

Images from @venturetwins on X.com

Another argument for why older people have trouble, and the most obvious and clear one, is that they simply do not have enough experience with AI in their lives to differentiate it from anything human-made. They aren’t anywhere near familiar enough with what it is and what it can do, so how could they spot something they aren’t even aware of?

I’m curious whether there are any training courses or guides out there designed to help older people tell AI-generated content from human-made content, or whether we aren’t quite there yet for such a course to exist. I believe we will be very shortly, though, as it seems more and more people are struggling to recognize AI-generated content, myself included.

1 comment

  1. A very stimulating post! Suppose I say that generative AI requires different skills to spot in different media; an image can look wrong in so many different ways, but the words “looks great 100%!!!” or whatever–those words are the same, regardless of who or what produced them.

    The problem of “inauthentic” writing isn’t exactly the same as the problem of fake images; and this might be a problem for the training course you propose, Wyatt. AI-authored words already look exactly like human-authored words; what do we do once image capabilities have caught up?
