By Martha and Rashi

Generative AI: Benefits, Risks and Policy Response


Generative AI is broadly defined as an AI system capable of generating text, images, video, code, data, and other content in response to prompts. 


“Generative AI is the key to solving some of the world’s biggest problems, such as climate change, poverty, and disease. It has the potential to make the world a better place for everyone.” ~ Mark Zuckerberg. 


The excitement around generative AI is growing, and Zuckerberg is certainly not alone. The global generative AI market is booming, with its value projected to grow from $43.87 billion in 2023 to $667.96 billion by 2030. But whilst many famous figures in the tech sphere wholeheartedly support these technologies, it is imperative that the risks of generative AI are not overlooked.


What are the benefits and risks of generative AI?


First and foremost, generative AI is widely regarded as an invaluable tool for businesses. Companies are beginning to use the technology to automate routine tasks, optimise the use of resources, and reduce labour and manufacturing costs. The automation of assembly line operations and the use of machine learning algorithms to better predict future demand mean that, so far, generative AI has made its biggest impact in sectors such as manufacturing and supply chain management. However, its use is now expanding into a variety of other sectors, such as healthcare, education, and digital marketing.


Aside from business, generative AI has also been making waves in the creative industry. Collaboration between AI and humans to create art, music, and pieces of writing is becoming increasingly popular, with many artists attesting that AI is the perfect tool to overcome “writer’s block” or simply serve as inspiration for a project.


On a much larger scale, some even theorise that generative AI could be pivotal in tackling major global issues such as the ongoing climate crisis. Because it can analyse large quantities of data on climate patterns and atmospheric conditions, it may give us earlier warnings of extreme weather events and help us better identify where to focus our efforts.

 

Nevertheless, generative AI is not without significant drawbacks. Recently, concerns have been raised around intellectual property theft, plagiarism, and security, as well as ethical questions surrounding unchecked bias in AI algorithms. As a result, there is growing pressure on governments to intervene with stricter laws and regulations on the use of generative AI.


What are the different policy responses to generative AI around the world?


Policy responses to AI vary widely, with countries at different stages of developing their regulations. Pro-innovation countries such as the UK have favoured a lighter-touch approach to AI regulation in general: the UK published a white paper setting out overarching principles to be applied by existing regulators, with the government taking a relatively decentralised role in coordinating and monitoring the new regulatory framework. This is in sharp contrast to the EU, which has introduced the AI Act. The Act categorises AI tools by their potential for risk, banning those that pose an unacceptable risk and requiring more detailed documentation for ‘high-risk’ tools.


However, the AI Act places ‘foundation’ models, a category that includes generative AI tools, in a class of their own. The rules that apply to generative AI centre on disclosing the copyrighted material used in training data, preventing the tools from producing unlawful content, and requiring disclosure when content has been AI-generated.


Copyright has been a particularly prevalent issue, as generative AI produces content based on existing data sources; in the USA, both Microsoft and OpenAI have faced lawsuits on these grounds. Meanwhile, the USA still lacks a nationwide policy specifically on the use of AI, although some states and cities have their own AI-related rules.

On the other hand, China has put significant AI legislation and regulation in place, and since August 2023 it has been enforcing rules aimed specifically at the use of generative tools such as ChatGPT and DALL-E.


How tough does regulation on generative AI need to be?


More stringent policies, such as those adopted in China and the EU, may be preferable because regulation can disincentivise a race to the bottom in AI ethics. Free-market mechanisms of the kind the USA has encouraged, under which companies self-regulate, may not be sufficient: as the number of consumers increases, economies of scale in data for AI companies may drive a race to the bottom in privacy standards. There may therefore be a need for government intervention.


An alternative theory, put forward by Ezrachi and Stucke (2016), is that competing AI companies could learn to tacitly collude, given the common goal of maximising long-term profits. This might involve deploying algorithms that learn how to set prices in oligopolistic competition with other firms. For now, however, this remains a speculative antitrust concern, as illustrated by the sketch below.
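To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python (our own toy model, not drawn from Ezrachi and Stucke) of two pricing algorithms learning in a repeated duopoly. Each firm runs independent Q-learning over a small price grid; the demand function, price grid, and learning parameters are all hypothetical.

import random

PRICES = [1, 2, 3, 4, 5]   # hypothetical discrete price grid
COST = 1                    # marginal cost per unit
ROUNDS = 50_000             # repeated-game rounds
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def demand(own, rival):
    # Toy Bertrand-style demand: the cheaper firm takes the market, ties split it.
    if own < rival:
        return 10 - own
    if own == rival:
        return (10 - own) / 2
    return 0

# Each firm keeps a Q-value for every (rival's last price, own price) pair.
q = [{(r, p): 0.0 for r in PRICES for p in PRICES} for _ in range(2)]
last = [random.choice(PRICES), random.choice(PRICES)]

for _ in range(ROUNDS):
    # Each firm picks a price, conditioning only on the rival's last price.
    acts = []
    for i in range(2):
        state = last[1 - i]
        if random.random() < EPSILON:
            acts.append(random.choice(PRICES))                        # explore
        else:
            acts.append(max(PRICES, key=lambda p: q[i][(state, p)]))  # exploit
    # Standard Q-learning update from each firm's realised profit.
    for i in range(2):
        state, next_state = last[1 - i], acts[1 - i]
        profit = (acts[i] - COST) * demand(acts[i], acts[1 - i])
        best_next = max(q[i][(next_state, p)] for p in PRICES)
        q[i][(state, acts[i])] += ALPHA * (profit + GAMMA * best_next - q[i][(state, acts[i])])
    last = acts

print("Prices in the final round:", last, "- competitive benchmark is near cost =", COST)

The point is conceptual rather than empirical: neither firm is told to collude, yet because each is rewarded for long-run profit, simulations of this kind can see prices drift above the competitive benchmark without any explicit agreement, which is precisely the scenario regulators worry about.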

 

Another downside of regulation is that it may disincentivise innovation, slowing technological progress in the AI sector. According to Gordon (2016), the impact of information technology, including AI, is still in its early stages, so more stringent regulation may not be ideal right now. There is also evidence that government-mandated privacy regulation has reduced innovation and slowed the diffusion of technologies that rely on data, as generative AI does.


However, this does not rule out a role for regulation, especially for those who hold a more pessimistic view of AI, such as those influenced by “effective altruism” who believe that AI could bring about the end of humanity. Although it is wise not to fixate on such a notion, policymakers may wish to take all potential concerns and views into account in their decisions.


More stringent regulation of generative AI is also necessary to ensure that the social and ethical concerns around its usage are addressed. Implementing stricter data privacy laws, establishing clear guidelines for developers, and requiring that documentation on how AI systems function is widely available are the first steps towards protecting user rights and confining AI to ethical uses. In an ideal world, an international consensus on generative AI regulation would be reached to prevent a disjointed approach to the problem.


However, there is a fine line between prioritising these ethical considerations and hindering the development of what could be such a vital technology. It is essential that we find the right balance.
