4 Warning Bells as AI Continues to Influence the World

Robots taking over the world? It’s been a theme in literature for at least a century. The idea of artificial intelligence (AI) overthrowing humanity has been part intrigue and part fear for generations, and in the last few years, the actual possibility of it happening has become a more serious topic for some. Still, “[T]he existential threats that have been posited by Elon Musk, Geoffrey Hinton and other AI pioneers seem at best like science fiction,” says Michael Bennett, the business lead of responsible AI at Northeastern University. But that doesn’t mean AI poses no dangers.

Rather than worrying about an AI takeover, get to know the real dangers of AI and the ramifications they could have. Here are 4 warning bells as AI continues to influence the world.

1. AI regulations vary by country.

Every world power wants to have the smartest, fastest, and most powerful AI. But there’s been debate about whether to slow down innovation to keep AI in check or to let AI companies press ahead and see how far the technology can go. “While the United States has championed a limited government role over the internet and deferred to private technology companies so as to support freedom of speech and technological innovation, the EU has pursued a greater regulatory role to protect other human rights, including privacy, and China has assumed complete state control, with extensive censorship and surveillance capabilities,” according to the Carnegie Endowment for International Peace. Not everyone agrees on how it should be done, and this has created some friction and wariness among nations.

While AI continues to advance worldwide, keep in mind that countries are governing it differently, and so far, there isn’t a global consensus on how to manage its risks as well as its opportunities.

2. The use of AI can spread mass amounts of disinformation.

If you occasionally get your news from social media, along with half of all U.S. adults according to a recent survey by Pew Research Center, you may have noticed some inaccuracies in the reporting. Now, with AI, that could get even worse. While “teams of people” created “influence campaigns” on Facebook and elsewhere in the 2016 presidential election, says journalist Jeremy White, now a single person with a computer “can generate the same amount of [inaccurate] material, if not more.”

Imagine a chatbot trained on inflammatory remarks scraped from discussion boards and spat out in large quantities on various platforms. It’d be hard to tell whether a person or a bot was behind them. “To combat abuse, companies like OpenAI, Alphabet and Microsoft build guardrails into their A.I. tools. But other companies and academic labs offer similar tools that can be easily tweaked to speak lucidly or angrily, use certain tones of voice or have varying viewpoints,” says White.

In an election year, it’s frightening to think that disinformation (deliberately created false information spread with the intent to deceive) could be generated in vast quantities within minutes and then put online to inundate feeds from accounts posing as real users. Used in this way, AI could create further divisiveness in our country. When you read content online going forward, do so with skepticism. You don’t know who or what produced the remarks you’re reading and with what purpose unless they’re linked to a credible source.

3. AI could monopolize our information flow.

Because AI assistants provide lightning-fast answers to our questions, many of us are choosing to skip Google searches and go straight to a chatbot. When we do this and rely solely on AI’s responses, the original sources become less visible, say Jeremy Heimans and Henry Timms. (Indeed, search engine volume is predicted to decrease 25% by 2026 due to the rise in chatbot use, according to Gartner.) Because a few big-tech companies “are likely to control the ‘base models’ for these interfaces,” continue Heimans and Timms, the risk is that the information we get will be inaccurate, biased, and/or filtered through one narrow viewpoint. Obviously, that wouldn’t be good. It’s going to be important for parents, leaders, and organizations to make sure they’re not relying on just one tech company or AI tool for all their information.

Chatbots are fun and very engaging, but as a parent, it’s prudent to keep track of how much your kids (and you) rely on them for information. Keep the conversation flowing with your children about all the ways we already rely on AI, and continue to ask questions to stay on top of their learning.

4. AI can be used to harm others.

In the United States, there have been several deepfake cases involving not only celebrities but also regular kids in schools. This past winter, two teenage boys in Florida were arrested and criminally charged for making and sharing fake explicit images of classmates, according to WIRED. As the world catches up to the quickly changing capabilities of AI, more people are going to be held accountable for using AI to do harm. Unfortunately, this won’t stop some people from continuing to find ways to manipulate AI to their advantage.

Globally, AI is also being used to create propaganda. A “flourishing genre” on Chinese social media, for example, is the creation of AI-manipulated videos “that use young, purportedly Russian women to rally support for China-Russia ties, [and] stoke patriotic fervor,” say journalists Vivian Wang and Siya Zhao. Using AI, people can create believable images and videos and then push their particular cause across various platforms.

Having a savvy and skeptical attitude toward social media, discussion forums, and videos is something we need to cultivate as AI continues to develop and evolve. The more we stay on top of this tech, the less likely we are to be tricked into believing everything we see.

Sound off: What dangers of AI worry you most?

Huddle up with your kids and ask, “What, if anything, worries you about how much AI is in our world?”