It seems evident that 2023 will see fierce competition among leading tech companies such as Microsoft and Google to catch up with OpenAI's ChatGPT, a breakthrough in conversational chatbot technology. The race involves popularizing heavily trained language models: Microsoft's AI-powered search, the "New Bing," poses a threat to the lucrative online search business, and Google has answered Redmond with "Bard," a conversational search offering introduced earlier this month.
The rapid development of this kind of AI by large tech organizations has highlighted its risks: these systems rely on enormous quantities of data to train their models (OpenAI's GPT, for example, scraped sites such as Reddit). Google's language AI, Bard, gave an incorrect answer in the company's own official demo, and when Microsoft's new Bing was released to trial users, it started spewing out offensive and misguided statements and making basic mistakes, much like what you would find on an inadequately moderated online forum or comment section.
Competition among tech giants to capture the expected new wave of AI is fierce, leading to rushed releases of products that present false information as fact and employ aggressive forms of manipulation. Today, the Corporate Europe Observatory (COE), a transparency organization, released a report on the topic, revealing how rivals in the AI sector have quietly aligned behind the scenes to persuade European Union legislators not to apply the bloc's imminent AI rulebook to general purpose AI.
The document lists several tech giants, including Google and Microsoft, demanding that general purpose AI be excluded from the restrictions laid out in the impending AI Act. They want obligations to fall only on those who deploy language models or other AI in risky contexts.
The EU has drafted rules to oversee the deployment of AI that are more stringent than those of other regions. The proposed AI Act (AIA) is not meant to encompass all uses of the technology; instead it takes a risk-based approach, labelling certain applications, such as those used in justice, education, employment, and immigration, as high risk and therefore subject to stricter regulation. Lower-risk applications would face lighter requirements, while those posing very low risk could self-regulate by drawing up a code of conduct.
Under this approach, if creators of general purpose AI (GPAI) systems face no stringent obligations under the AIA, disputes will keep arising over the contexts in which AI is employed, with the burden of ensuring safety and reliability shifted onto those deploying generic AI models.
Third-party developers plainly lack the capability and resources of the AI model makers when it comes to fighting AI-driven toxicity, which leaves people exposed to unfair and dangerous technology, while the deployers absorb the costs of any legal infringements and product liabilities associated with the AI.
The AIA has not been finalized yet; it is still working its way through the EU's legislative negotiations. The COE's report contends that a hard push from American tech corporations makes further pressure, and a watering down of its safety stipulations, probable. The final shape of the EU's flagship regulation has yet to be determined.
The US government weighed in on providers of general purpose AI (GPAI) last fall, objecting to the idea of making them adhere to the risk-management obligations outlined in the AI Act. US tech giants shared this view, putting them in unison with their government in seeking to shield GPAI from external regulation.
Obtained papers show how technology businesses, particularly from the United States, worked to water down the requirements for risk-prone AI systems and narrow the scope of the rules. Notably, Big Tech representatives strove to keep "general purpose" AI systems (which typically come from Silicon Valley juggernauts) out of the regulation entirely. The tech giants want the rules to apply only to companies deploying their technologies, not to the creators of the technology themselves.
The AI Act is nearing its final stage. Trilogue negotiations, a closed-door process that tends to favor those with influence and money, are currently under way. The Council, Parliament, and Commission, which must agree on the final text, all have a great deal at stake in this novel regulation. MEPs are demanding stronger protection of fundamental rights in the Act, while the Council has proposed exemptions for policing and security purposes. The conversation about general purpose AI is very likely to be postponed.
It was the Council, under the French presidency, that proposed last year to include general purpose AI frameworks in an updated version of the text, a shift that highlights how rapidly views on AI are changing. The Commission's original draft of the AI Act, presented in April 2021, failed to address the need for proper rules covering general purpose AI at all.
When the concept of general purpose AI was introduced into the EU's AI Act, Big Tech's well-funded lobbyists in Europe responded by trying to influence the political decision-making. According to interviews conducted by the Corporate Europe Observatory for its report, those lobbyists devoted considerable time to shaping the deliberations over general purpose AI.
The COE contends the tech juggernauts have applied a wide range of overt and covert lobbying tactics in an attempt to shape the final form of the European Union's AI rulebook, including, it suggests, "clandestine" methods to water down the rules, such as advocacy by organizations that profess to represent startups but are financed by Big Tech backers. The European Commission has also assembled an expert panel of industry specialists to inform its AI policy, yet a majority of those representatives come from Google.
The COE reports that in one private meeting between Google and the Commission, the search giant objected to the French proposal within the Council that would require producers of GPAI models to conform to specific criteria, saying it would drastically shift the burden onto GPAI providers. Google also voiced worries that legislators could attach too many new conditions to the risk assessment or extend the list of high-risk applications.
Documents obtained by the COE via Freedom of Information (FOI) requests include a paper Google submitted to the Commission arguing that "general purpose AI systems are not themselves a high risk" and that it would be "difficult or impossible" for them to adhere to the requirements set forth in the AI Act, which include rules covering data governance, human oversight, and transparency. In the paper, Google proposed that others in the "value chain" should take on the role of provider, and that creators of general purpose tools do not fall into this category and so should not bear the associated costs and risk.
Microsoft, in an open letter to the Czech Presidency of the Council, explicitly stated that it believes the AI Act should not contain a dedicated section on general purpose AI. The company added that it would be impossible to fulfill the requirements for high-risk use cases without knowing the specific purpose to which a general purpose AI tool would be put.
In private meetings with EU lawmakers, Microsoft reportedly argued that the AI Act would have an adverse effect on startups and small to medium-sized businesses, according to the Corporate Europe Observatory.
A document obtained for the investigation records a July 2021 discussion between Microsoft lobbyists and Roberto Viola of the European Commission's DG CNECT about the EU and US positions on the AI proposals and their potential effect on startups and small businesses.
The report also cites various indirect lobbying moves Big Tech made through associated third parties, which presented themselves as general trade associations but in reality included representatives of the tech giants.
In September 2022, BSA | The Software Alliance urged the European Union to reject the plans on general purpose AI, arguing they would put a damper on the progress of artificial intelligence and inhibit innovation. Founded in 1988 by Microsoft, the BSA has drawn criticism for lobbying on the tech company's behalf. According to the COE, the BSA argued that expanding the scope of the Act to cover general purpose AI, which is mostly used in lower-risk scenarios, would impose unnecessary requirements on developers, dissuade AI development in the EU, and harm access to AI for entities large and small, for whom meeting the requirements for market entry could prove notably challenging, or even technically impossible.
The Secretary General of the European Digital SME Alliance told the COE that certain companies in the alliance had been asked by Big Tech to sign the letter; the alliance warned its SMEs away from it, seeing nothing in it that would benefit small businesses or startups.
The COE states that if general purpose AI systems are exempted, small and medium enterprises (SMEs) will bear the brunt of compliance in place of the tech giants. Remarkably, Allied for Startups, a coalition that ostensibly works to improve policy conditions for startups, still chose to sign the BSA letter. It was observed that Allied for Startups' sponsors, among them Google, Apple, Microsoft, Amazon, and Meta, hold no voting rights, yet the group's relationship with Big Tech was perceived to be very close.
The COE argues that carving out an exemption for general purpose AI (GPAI) would create a huge loophole in the EU's flagship AI regulation, meaning tech companies would not be held accountable for dealing with issues such as prejudice and toxicity, which may be embedded unintentionally in their hurry to become the top players in applied AI.
It is the major technology companies that hold the power to create general purpose technologies such as large language models (LLMs), because their extensive resources allow them to process data at enormous scale. They also possess the legal firepower needed to defend against any copyright infringement claims that may arise from their AI advances.
OpenAI may not be as well known as giants like Google and Microsoft, but it has certainly made its mark in funding and resources, raising a total of $11 billion since its creation in 2015, with backers including Elon Musk, the one-time world's richest man and owner of Twitter. It was founded with the intention of keeping the development of artificial intelligence in the public domain. In the years since, that goal has evolved, as is evident in the intense competition among leading AI corporations racing to capture the bulk of the market.
Artificial intelligence is likely to become commonplace across many businesses, but ignoring the responsibilities of the Big Tech firms that create the initial models leaves a huge loophole in the regulation. Those subjected to discrimination by AI applications could be left with no one to hold accountable.
We contacted Google and Microsoft for comment on the COE's report.

A Microsoft spokesperson told us:
The European Union has been, and continues to be, a key partner for Microsoft. We strive to be collaborative and transparent in our dealings with European government officials.
At the time of writing, Google had not responded to our inquiry.
In a statement accompanying the report, the Corporate Europe Observatory cautioned that European legislators are at a critical juncture on the AIA as negotiations enter a new, confidential phase (the so-called trilogue negotiations).
The lobbying efforts have already produced results: both the Parliament and the Council have been swayed into delaying restrictions on the use of AI and narrowing the set of systems that will be subject to control. The AI Act's legislative journey is nearing completion, and its final stage, the secretive trilogue negotiations, routinely favors the best-funded and most influential lobbyists. The EU's three governing bodies, the Council, Parliament, and Commission, are taking on a significant endeavour as they work to agree on what amounts to the world's first attempt to regulate artificial intelligence.
A hotline was recently established to counter underhanded and undemocratic attempts by tech giants to shape European digital regulations; it encourages European Commission staff to report any attempts to improperly sway the rule-making process.
A group of Members of the European Parliament lodged complaints with the EU's Transparency Register last year against various tech companies, asserting that these enterprises were violating its rules.
Those complaints were triggered by Big Tech's attempts to convince regulators to soften incoming rules for platforms and digital services. An earlier COE report highlighted the large-scale resistance mounted by the adtech industry against MEPs' attempts to introduce tighter restrictions on online tracking and profiling.
The COE has launched a petition for more transparency around EU trilogues, urging supporters to take action and "not let Big Tech stop the AI Act from passing without anyone knowing".