Futurism released an article titled “Fully AI-Generated Influencers Are Getting Thousands of Reactions Per Thirst Trap” outlining numerous fake profiles with AI generated images and content.
The author raised the questions: Do the humans interacting with these accounts even realize they don’t exist in the real world? Would it even bother them if they knew? Or do they realize, and it’s part of the draw?
It’s a safe bet that most people have no idea they’re interacting with a fake profile full of fake images.
That reflects the level of due diligence people apply to the internet and content in general.
Much digital ink has been spilled about the rise of bot farms (a recently busted Russian propaganda bot farm is a prime example) and the impact they can have on internet discourse and public opinion, which in turn shapes real-world outcomes. With easily accessible and rapidly advancing AI capabilities, bot farms are already evolving to use this technology, and they will only become more effective and more dangerous.
When you see comments from accounts you cannot verify are real people, the first thing to ask yourself is: are these comments genuinely from people representing themselves? The stronger the emotion a comment induces, in either direction, the more you should step back and consider that it could be fake and attempting to manipulate you.
It’s easy to see people get riled up, discouraged, encouraged, or led to perceive things as the opposite of what they actually are when masses of fake profiles and fake comments converge to paint a narrative. It can motivate real people to do and feel real things, good or bad, even though those actions and emotions are based on a false narrative.
Governments, public figures (think political candidates, celebrities, influencers, etc.), and businesses can generate fake support. They can also launch campaigns against their opponents to make it seem like a mass of people are outraged at them. Both, in turn, pull in people who were on the fence. There no longer needs to be a first follower: AI manufactures that perception, which makes other potential followers, real people, feel safe enough to join in. People are swept into something they otherwise would never have participated in. Public opinion can be manipulated, even affecting how people vote, or driving them to inaction.
It can also generate fake sensationalism, which grabs the attention of real people, who then devote their time to covering and promoting it, for or against. The snowball grows even faster.
This harnesses one of our great human weaknesses, which at times is also a strength: the emotional drive to protect and fight when our senses signal danger. That drive runs along a massive sliding scale, from something as simple as wanting to help fight cancer by donating or joining a walk, to seeing someone as a genuine threat to your very way of life and existence.
The only protection from all of this is awareness and due diligence. Humans have largely proven incapable, at least so far, of identifying and defending against this level of manipulation. We’ve all believed and shared, online or offline, something that turned out not to be true. Some do it far more than others.
None of this even touches on the other obvious danger: scam artists manipulating people into handing over money and resources. Unfortunately, even that pales in comparison to how this power will be wielded in the near future by entities with darker agendas and far more resources at their disposal.
This all might sound alarmist to you. All that can be asked is that you take time to consider what you’ve read here and guard yourself against things that may be looking to hijack your attention and feelings.
