Microsoft’s Bing is one of the many platforms using generative AI to offer new features to its users. The company is also working on another large language model, MAI-1, and will likely integrate it with its existing products and services.
However, the use of artificial intelligence also carries the risk of misinformation, which the European Commission finds especially concerning ahead of the upcoming elections.
As a result, the EU has warned Microsoft Bing, along with other big platforms such as Google Search, TikTok, and YouTube, that it will take action against inappropriate use of AI.
EU wants to evaluate the risks of AI features in Microsoft Bing
The EU wants to examine the risks associated with generative AI in Microsoft Bing and has asked the company to hand over internal documents by May 27 for that purpose. Specifically, the Commission wants information about the generative AI features in ‘Copilot in Bing’ and ‘Image Creator by Designer’.
However, Microsoft has not responded “fully” to the EU’s request for internal documents, saying only that it is committed to addressing the Commission’s demands.
“We have been fully cooperating with the European Commission as part of the voluntary request for information and remain committed to responding to their questions and sharing more about our approach to digital safety and compliance with the DSA,” said a Microsoft spokesperson. According to the spokesperson, the company is also taking steps to “measure and mitigate potential risks” across its products and services.
However, if Microsoft does not provide the requested internal documents, the Commission can fine the company up to 1% of its total annual income, along with periodic penalties of up to 5% of its average daily income.
Bing may have breached the DSA for risks linked to generative AI
Tech companies operating in the EU must comply with the bloc’s content moderation law, the Digital Services Act (DSA), which came into force last year. The EU suspects that Bing may have breached the DSA over risks linked to generative AI. In particular, it points to the hallucination problem, where a generative AI model presents the user with false information as if it were fact. For those unaware, hallucinations can occur in any type of large language model and, by extension, in any platform or service built on one.
The European Commission is also concerned about the “viral dissemination of deepfakes” that may impersonate public figures, and the “automated manipulation of services that can mislead voters”.