
Is AI too unethical to harness in political media? – The Badger Herald

The rise of artificial intelligence has transformed society and, with it, permanently changed the political landscape. Political campaigns are beginning to rely more on AI tools, according to PBS Wisconsin, but is the use of artificial intelligence ethical?

AI is quite helpful when generating content for political campaigns because it can quickly create images and text that are easily implemented in marketing. AI can generate personalized messages tailored to a specific audience, according to PBS Wisconsin.

Generative tools can also expedite operations, allowing for targeted content messages, according to CMSWire. AI can be used to engage voters and give constituents information surrounding issues that are important to them. Ultimately, artificial tools can help educate and inform the public to promote civic engagement.

Marketing teams also use AI to analyze a large amount of data and predict trends regarding which issues are most important to the majority, according to DOMO. AI tools can also expedite processes associated with answering questions online and managing social media, according to the article. Such functions can be consolidated to generate algorithms that predict what voters want to see and hear, according to the PBS Wisconsin article.

While these developments are useful, giving artificial intelligence the power to analyze voter trends and desires raises ethical concerns. Because AI is so powerful, it can misinform voters and create content that leads them in a dishonest direction.

For instance, campaigns can create AI chatbots for their social media pages that flood comment sections with incorrect information or push a certain narrative, according to The New York Times. This can inflate the amount of support political posts appear to have and skew public perception of the chosen topic.

Campaigns can also use AI to fabricate claims attributed to a political opponent, whether outright lies or misleading distortions, to garner support for themselves, according to PBS Wisconsin. As artificial intelligence has become more advanced, so have the consequences of its misuse.

Typically, voters are not educated about or given the tools to distinguish between fake and real information online. This can lead to subconscious bias for or against a candidate or general misinformation. When it comes time to vote, people can have dangerously misguided notions swaying their decisions.

How can voters determine whether information is authentic when it's generated by a machine? If an employee or representative of a candidate is creating media posts, there is at least the certainty of human intent and an approval process.

But, when a product is AI-generated, only one click of a button is needed before the product appears. The need for thoughtful production disappears. As a result of AI use, voters might come to doubt the media. Such uncertainty can erode the public’s trust in campaigning and candidates. 

AI's ability to target individuals can also make people feel unsafe because of the high level of specificity these tools afford. Political campaigns can send unique messages to specific people, creating uncertainty about what a candidate really stands for, according to a study in the American Journal of Political Science. For instance, a candidate can target environmentally focused voters to proclaim their commitment to climate policies, but then turn around and downplay environmental impacts to voters who are more concerned with the economy.

If voters are receiving messages that are tailored to their interests, they never truly know if the candidate will actually address the issues they are concerned about.

Addressing these issues is another problem in and of itself. Regulating AI, especially when it is a relatively new tool, is difficult.

Some countries are beginning to implement regulations surrounding the more harmful uses of AI, such as deepfake videos, according to PBS Wisconsin. But restrictive policies have loopholes. The EU's proposed Artificial Intelligence Act sorts AI systems into risk tiers, with each tier subject to a different degree of regulation, so that the most dangerous forms of AI face the tightest rules. Loopholes arise, however, because determining which systems count as high risk can be difficult, exceptions exist under certain categories and the evaluation criteria are not clearly specified.

Creating policies that address every aspect of AI is a delicate balance to strike, according to a report by Brookings. Technology is a free space, and creativity can often blur the lines between moral and immoral.

To address this difficult balance, increased levels of transparency are needed. Political campaigns need to disclose when they use AI and what content is real and generated. The public needs to know if the content they are consuming contains the true beliefs of the candidates. Such practices will aid in increasing integrity as well as minimizing false persuasion tactics. Vigilance is key to managing the rapid evolution of artificial intelligence.

Careful consideration of ethics could allow for AI to be safely integrated into the political landscape. We must find a balance between prioritizing democratic values and using innovative tools that boost voter engagement. The misuse of AI is threatening, but if we can harness it for good, we can promote democracy and a value-driven political process.

Sammie Garrity ([email protected]) is a sophomore majoring in journalism and political science.

