Recent media coverage has drawn attention to the important fact that, despite its seeming intangibility, AI is not cost-free from a carbon-accounting perspective. Environmental concerns should not eclipse AI’s clear potential to generate positive societal impacts and, indeed, environmental benefits (see, for example, our blog discussing ways in which AI can help tackle climate change). However, businesses should be alert to the hidden carbon footprint of such technologies—particularly as their use becomes more widespread and more deeply embedded into business models.
Recent research has made the striking prediction that, unless sustainable AI practices are implemented rapidly, by 2025 AI “will consume more energy than the human workforce, significantly offsetting carbon zero gains”. Where, however, does this energy consumption happen? The answer, as recently reported, is that training and running advanced AI models consumes vast quantities of water and electricity. For example:
- Training GPT-3 alone is estimated to have required 3.5 million litres of water (through data centre usage) and consumed 1,287 MWh, generating more than 550 tonnes of carbon dioxide equivalent; and
- User queries are also resource-hungry, with a model like ChatGPT ‘drinking’ an estimated 500ml of water (about a standard water bottle) per 20-50 interactions. Given that ChatGPT is estimated to receive around 10 million queries each day, that’s a lot of water bottles.
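As a back-of-the-envelope sketch, the figures cited above can be combined to estimate daily water use. This is purely illustrative, using the rounded estimates in the bullets (500ml per 20-50 queries, roughly 10 million queries per day), not an authoritative measurement:

```python
# Rough estimate of daily water use implied by the cited figures.
# All inputs are the approximate numbers quoted above; illustrative only.

QUERIES_PER_DAY = 10_000_000   # estimated daily ChatGPT queries
BOTTLE_ML = 500                # one 'water bottle' per 20-50 queries

def daily_water_litres(queries_per_day: int, queries_per_bottle: int) -> float:
    """Litres of water per day, given how many queries one 500ml bottle covers."""
    bottles = queries_per_day / queries_per_bottle
    return bottles * BOTTLE_ML / 1000  # convert ml to litres

# At 50 queries per bottle (the optimistic bound) vs 20 (the pessimistic one):
low = daily_water_litres(QUERIES_PER_DAY, 50)   # 100,000 litres/day
high = daily_water_litres(QUERIES_PER_DAY, 20)  # 250,000 litres/day
print(f"Estimated daily water use: {low:,.0f} to {high:,.0f} litres")
```

On these assumptions, that works out to roughly 200,000 to 500,000 bottles, or 100,000 to 250,000 litres, per day.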
AI’s carbon footprint and the regulatory landscape
Businesses should take note of such impacts, regardless of whether they develop, or merely deploy, AI. Aside from reputational considerations as customers become ever more environmentally conscious, new sustainability reporting frameworks will soon require companies to provide detailed reporting of their carbon footprint across their value chain.
For example, companies in scope of the Corporate Sustainability Reporting Directive (CSRD) are required to report in accordance with the European Sustainability Reporting Standards (ESRS). The ESRS (which we recently discussed here) will require companies, where material, to report on their Scope 1, 2, and 3 greenhouse gas emissions. This could include, for example, emissions generated through cloud computing and data centre services (as suggested by the delegated regulation) and emissions generated by training and operating AI models. Furthermore, the International Sustainability Standards Board’s IFRS S1 and S2 standards, published in June 2023, will similarly require disclosure of Scope 3 emissions when they come into effect in 2024. As mentioned in our previous blog post, the UK intends to endorse these standards within the next year, before adopting them into law.
The current draft of the EU’s AI Act (as proposed by the EU Parliament and subject to the ongoing trilogue negotiations) is another example of AI’s environmental impact being acknowledged. The draft requires AI developers to design foundation models making use of applicable standards to “reduce energy use, resource use and waste, as well as to increase energy efficiency”, and to ensure they are “designed with capabilities enabling the measurement and logging of the consumption of energy and resources” (proposed Article 28b). Recital 46 to the proposed Act further signals that the EU plans to develop guidelines on a harmonised methodology for calculating and reporting such information in future.
AI’s carbon footprint and industry players
The environmental impact of AI is also very much on the mind of industry players. For example, Google has developed the ‘4Ms’, which set out best practices to reduce energy usage and carbon footprints in machine learning and include: (i) selecting efficient model architectures; (ii) using optimised machine processors; (iii) mechanising, by computing in the cloud; and (iv) choosing optimised map locations with the cleanest energy—and has called for greater transparency and consistency in energy usage reporting. These initiatives mirror responses to equivalent concerns about the energy consumption of cryptocurrency mining (which we previously discussed here), and it seems likely that increasing scrutiny of AI’s carbon footprint will apply further pressure on developers to improve the energy efficiency of AI models and their underlying hardware.
The key takeaway: businesses should be alert to AI’s hidden environmental impact. Businesses that use or expect to use AI should start to consider how they will calculate, monitor and mitigate its environmental impacts. Adopting an ‘out of sight, out of mind’ approach to AI-linked emissions is unlikely to be sufficient.
As noted in our previous blog post, reporting obligations under the CSRD will start to apply at different stages, depending on the type of company (noting that the CSRD will catch both certain EU and non-EU companies). At the earliest, this will be from 2024, with reports due in 2025.

Under the current draft, ‘foundation model’ is defined as ‘an AI system model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks’ (proposed Article 3 para 1, point 1 c).