Semiconductors: Applications of AI
Our shared experiences of lockdown, isolation and social distancing over the last two years have driven our hunger for interaction through digital media. We’ve achieved this with an array of devices that have challenged traditional modes of work, education and entertainment. The transformation of these domains has been enabled by decades of semiconductor research and development; Oisin Moriarty and Steven Wheeler provide an overview of the sector, explore the geopolitical debate and take a deep dive into the AI applications of chips.
The fortunes of semiconductor manufacturers have been overwhelmingly positive. Qualcomm, AMD and Nvidia have accelerated away from a venerable Intel in the US, having better capitalised on the behavioural shifts that we have become familiar with during the pandemic. Imagination Technologies has recently appointed Barclays and Citi to advise the company on an IPO in 2022. Taiwan Semiconductor Manufacturing Company (TSMC) has solidified its reputation as the go-to contract semiconductor manufacturer; in 2020, production for Apple accounted for 25% of TSMC’s $48bn revenue. Nvidia remains locked in a battle with regulators on both sides of the Atlantic as it seeks a green light to acquire ARM from SoftBank for $40bn. A hubbub of recent M&A activity and princely ROEs for investors veil the structural challenges of the ongoing supply chain crunch.
Having cut capacity in the first half of 2020 in response to the drop in demand brought about by lockdowns, automakers now find their production lines constrained by global chip shortages. UK car production in October 2021 was 41.4% lower than in October 2020; in Spain, average used-car prices rose 7% in the year to October 2021 as new cars trickled into the market at a slower rate. Ford and GM have signed partnership agreements with GlobalFoundries and TSMC, respectively, to collaborate on semiconductor R&D and protect manufacturing capacity for the automakers.
What factors have prevented semiconductor manufacturers from fulfilling demand? Chip fabrication is a hugely expensive undertaking: a multi-step process that requires a highly skilled workforce. Foundries are concentrated geographically – companies operating in Taiwan, China, South Korea and Japan control most of the global production capacity and rely upon the same strained distribution chains to get final products to international markets.
Commentators now point to evidence of “semiconductor nationalism” emerging in post-pandemic economies. Over a decade of broadly uninterrupted flow of chips from the foundries of Asia has lulled the West into a false sense of security that has now been cruelly exposed.
In both the US and the EU, the discussion about protecting domestic supply chains has been re-energised: protective legislation (the CHIPS for America Act) and measures authorising state aid and investment are in the process of being ratified. In November 2021, Samsung Electronics announced a $17bn investment in a new foundry in Texas, USA. In September 2021, Intel broke ground on two new foundries in Arizona, and TSMC is investing $12bn in the same state. The European Commission has prepared a white paper, the Digital Compass initiative, that aims to double the market share of EU chip manufacturing by 2030 and reduce reliance upon Asia and the US to secure its “digital sovereignty”.
Further, China has also stated an ambition of semiconductor self-reliance; in 2020, only 30% of its demand for chips was met by domestic supply. One of the key objectives of the trade war initiated under President Trump was to address forced technology transfers and IP infringement; the resulting measures have constrained the ability of Chinese manufacturers to scale up production.
One of the major battlegrounds emerging between the US and China is artificial intelligence (AI) and the race to develop ever more capable AI systems. As AI continues to grow in importance as a technology, it is exerting an ever-larger influence on the semiconductor industry.
AI, and more specifically machine learning, has seen tremendous growth in the past decade, from visual recognition techniques to, most recently, predicting weather and protein folding better than ever before. However, progress in AI is limited not only by the ingenuity of the algorithms used and the amount of available data, but increasingly by the hardware it runs on.
There are numerous examples of the semiconductor industry pivoting towards developing AI-specific chips. Perhaps the most striking is Nvidia. Originally a maker of graphics cards solely for computer games, Nvidia now derives close to 40% of its revenue from card sales for AI applications. This figure is particularly staggering in the context of the company’s roughly 500-fold growth in value since its founding in 1993, a growth that can in part be attributed to the adoption of GPUs (graphics processing units) for AI applications. Furthermore, Nvidia developed CUDA, a low-level API that lets users run general-purpose code across the thousands of cores of a GPU simultaneously, a technique known as parallelisation. It was this technology that underpinned one of the biggest recent breakthroughs in AI: when Google DeepMind’s AlphaGo beat the world’s best Go player, a feat previously thought to be decades away.
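To give a flavour of the parallelisation model that CUDA popularised, the sketch below splits a vector operation into chunks and hands each chunk to a worker, with a CPU thread pool standing in for a GPU’s thread grid. The `saxpy` kernel and the chunking scheme are illustrative only – this is not Nvidia’s API, just the single-program, multiple-data idea in miniature.

```python
from concurrent.futures import ThreadPoolExecutor

# CUDA exposes the GPU as thousands of lightweight threads, each applying
# the same operation (a "kernel") to a different slice of the data.
# Here a CPU thread pool is a loose stand-in for that model.

def saxpy(a, x, y):
    """One 'kernel' invocation: compute a*x + y for a chunk of elements."""
    return [a * xi + yi for xi, yi in zip(x, y)]

def parallel_saxpy(a, x, y, workers=4):
    # Split the vectors into chunks and hand each chunk to a worker,
    # mimicking how a GPU grid partitions work across thread blocks.
    n = len(x)
    step = (n + workers - 1) // workers
    chunks = [(x[i:i + step], y[i:i + step]) for i in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda c: saxpy(a, *c), chunks)
    # Reassemble the chunks in order into the final result.
    return [v for part in parts for v in part]

if __name__ == "__main__":
    x = list(range(8))
    y = [1.0] * 8
    print(parallel_saxpy(2.0, x, y))  # [1.0, 3.0, 5.0, 7.0, 9.0, 11.0, 13.0, 15.0]
```

On real hardware, the pay-off comes from the GPU executing thousands of such chunks truly concurrently rather than four at a time.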
Whilst Nvidia is the most obvious example of this recent trend, there are many others. ARM, for instance, has started implementing neural processors in its phone chips, which for the time being are mainly used for photo enhancement. Google has released its Tensor Processing Unit (TPU), on which it currently runs all of its ML-based APIs. Another notable example is Tesla’s FSD chip for its “self-driving” cars: Tesla initially used Nvidia graphics cards but then moved to its own more specialised chips, citing benefits in power consumption.
Whilst the semiconductor industry has had, and will undoubtedly continue to have, an impact on AI, AI methods are also beginning to help in the design and manufacture of semiconductors. A McKinsey report estimated that within the next two to three years AI could generate between $35bn and $40bn annually for the semiconductor industry, a figure that could rise to $95bn in the longer term. Given the industry’s current size of roughly $500bn, that long-term figure would amount to nearly 20% of today’s annual revenues. AI will influence the entire semiconductor development stack, from organisational design, forecasting, project selection and supply-chain efficiency to, most importantly, manufacturing and design; faster R&D is estimated to account for around 40% of the impact. This was demonstrated recently when Google trained convolutional neural networks on 10,000 semiconductor floorplans (the physical layouts of the chips) and was able to generate new floorplans for its TPUs in under a day, a process that would usually take months.
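Google’s system learned placement strategies from thousands of real floorplans; to see why that matters, it helps to look at what floorplanning is optimising. The toy sketch below – purely hypothetical, and not Google’s method – frames placement as minimising total wirelength between connected blocks, brute-forcing a tiny instance. The block names, grid and nets are invented for illustration.

```python
import itertools

# Toy framing of chip floorplanning: assign blocks to grid slots so that
# the total Manhattan wirelength between connected blocks is minimised.
# Real floorplanners (and Google's learned approach) must handle millions
# of cells, where exhaustive search like this is hopeless.

def wirelength(placement, nets):
    """Sum of Manhattan distances for each connected pair of blocks."""
    total = 0
    for a, b in nets:
        (ax, ay), (bx, by) = placement[a], placement[b]
        total += abs(ax - bx) + abs(ay - by)
    return total

def best_placement(blocks, slots, nets):
    # Exhaustive search over assignments of blocks to slots - feasible
    # only at toy sizes; the point is the combinatorial objective.
    best, best_cost = None, float("inf")
    for perm in itertools.permutations(slots, len(blocks)):
        placement = dict(zip(blocks, perm))
        cost = wirelength(placement, nets)
        if cost < best_cost:
            best, best_cost = placement, cost
    return best, best_cost

if __name__ == "__main__":
    blocks = ["cpu", "cache", "io"]
    slots = [(0, 0), (0, 1), (1, 0), (1, 1)]   # a 2x2 grid
    nets = [("cpu", "cache"), ("cpu", "io")]   # cpu talks to both
    placement, cost = best_placement(blocks, slots, nets)
    print(cost)  # 2: the cpu ends up adjacent to both cache and io
```

The search space grows factorially with the number of blocks, which is why months of expert iteration – or a learned policy – are needed at real scale.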
Increasing competition between the US and China as technology superpowers means that the drive to develop both AI and semiconductors will only intensify as their use cases become broader.