Professor's Insights | Machines talking to machines about Nature
Source: emlyon business school | Date: 2025-05-13

Jean-Baptiste Vaujour
Professor of practice in consulting and green finance, emlyon business school
The Role of Large Language Models in ESG Reporting
Large language models (LLMs) are revolutionizing the ESG reporting landscape by automating data collection, processing vast amounts of unstructured data, and synthesizing insights that once took human analysts considerable time to produce. These AI-driven solutions reduce reporting burdens by integrating sustainability metrics across supply chains, financial records, and stakeholder disclosures. Trained on carefully selected internal company data, they allow for the automated production of qualitative reports and commentary that would otherwise require days of work.
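As a rough illustration of the pattern such tools build on, the sketch below turns a handful of structured sustainability metrics into a draft commentary. It assumes an OpenAI-style chat completion client and invented metric names; it is not any vendor's actual pipeline, and real products add validation, source citation and human review on top.

```python
# Minimal sketch of LLM-assisted ESG commentary drafting.
# The metric names and model choice are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical structured metrics pulled from internal systems.
metrics = {
    "scope_1_emissions_tco2e": 12_450,
    "scope_2_emissions_tco2e": 8_300,
    "renewable_electricity_share_pct": 62,
    "reporting_period": "FY2024",
}

prompt = (
    "Draft a short, factual ESG commentary for the annual report "
    "using only the figures provided. Do not invent numbers.\n\n"
    f"Metrics: {metrics}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,  # keep the wording conservative and repeatable
)

draft = response.choices[0].message.content
print(draft)  # the draft still goes to a human analyst for sign-off
```

The key design choice is that the model is only asked to narrate figures it is given, never to produce numbers of its own, which keeps the output checkable against the underlying data.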
Companies like Greenomy and Dydon AI are leveraging LLMs to enhance ESG compliance. Greenomy, for instance, streamlines sustainability data collection to align with CSRD requirements, while Dydon AI’s Taxo Tool automates EU Taxonomy reporting by extracting and structuring relevant data. The field is not, however, limited to start-ups: established companies such as Capgemini are seizing the opportunity to leverage their technical expertise. Capgemini’s AI-driven solutions help businesses manage ESG data in areas such as carbon accounting, risk assessment, and sustainability disclosures. By automating data validation and ensuring consistency in reporting, these tools not only enhance compliance but also improve corporate sustainability strategies.
Ethical and Practical Challenges
Despite these advancements, reliance on machine-generated ESG reporting is not without challenges. While increasingly addressed in specialised professional tools, issues related to data privacy, transparency, and accountability remain at the forefront of discussions around off-the-shelf AI solutions. Can an AI-generated report be fully trusted, and who is liable for potential mistakes? Can AIs generate an auditable paper trail that can be reviewed in a certification process? How do businesses ensure that biases in training data do not skew ESG assessments? In most cases, human oversight remains necessary to interpret and contextualize AI-driven insights.
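One practical answer to the paper-trail question is to log every prompt, response and model version in an append-only, hash-chained record that auditors can replay. The sketch below is a generic illustration using only the Python standard library; the field names and JSONL layout are assumptions, not part of any certification standard.

```python
# Sketch of a hash-chained audit log for AI-generated report fragments.
import hashlib
import json
from datetime import datetime, timezone

LOG_PATH = "esg_ai_audit_log.jsonl"

def _last_hash(path: str) -> str:
    """Return the hash of the last log entry, or a fixed seed if the log is empty."""
    try:
        with open(path, "r", encoding="utf-8") as fh:
            lines = fh.read().splitlines()
        return json.loads(lines[-1])["entry_hash"] if lines else "genesis"
    except FileNotFoundError:
        return "genesis"

def log_generation(prompt: str, output: str, model: str) -> dict:
    """Append a tamper-evident record linking prompt, output and model version."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "previous_hash": _last_hash(LOG_PATH),  # chains entries together
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    with open(LOG_PATH, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")
    return entry

log_generation("Summarise FY2024 Scope 1 emissions.", "Scope 1 emissions fell 8%...", "gpt-4o-mini")
```

Because each entry embeds the hash of the previous one, any later alteration of the log is detectable, which is the property a certification reviewer would look for.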
One emerging challenge is the compounding effect of AI-generated content being read and analysed by other AI systems. When AI models rely on machine-produced reports to generate further insights, there is a risk of information distortion, loss of nuance, and the propagation of inaccuracies. Without human intervention, feedback loops may reinforce biases or incorrect conclusions, potentially compromising the integrity of ESG assessments.
Another key concern is the over-reliance on technology for sustainability decision-making. While AI can process data efficiently, human judgment is essential in making ethical and strategic decisions that align with a company’s broader sustainability goals.
The Future of Machine-to-Machine (M2M) Communication in ESG
As regulations continue to evolve, so will the technology supporting ESG reporting. Future developments may include greater interoperability between AI-driven platforms, improved transparency in machine-generated assessments, and enhanced regulatory oversight of automated reporting. Regulatory bodies, such as the AI Office in Europe, may increasingly focus on establishing frameworks for AI accountability, ensuring that machine-generated reports adhere to standardized criteria and are subject to verification processes.
This could involve requiring companies to maintain auditable records of AI-generated data, implementing independent third-party reviews, and developing policies to mitigate the risks of biased or misleading AI-driven conclusions. Striking a balance between leveraging AI for efficiency and maintaining robust governance will be key to ensuring the credibility and reliability of ESG reporting in the long term.
Environmental Considerations
An often-overlooked aspect of AI-driven ESG reporting is the environmental footprint of the AI models themselves. The computational power required to train large-scale AI models is immense, with a study estimating that training a single large language model can emit approximately 270 metric tons of CO₂, equivalent to the lifetime emissions of about five average cars. Once deployed, these models continue to consume significant energy; for example, running GPT-3 on cloud infrastructure is estimated to generate around 8.4 tons of CO₂ per year. Given that ESG frameworks emphasize environmental responsibility, companies integrating AI into their sustainability reporting must also consider the trade-offs in energy consumption and carbon emissions.
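A quick back-of-the-envelope check, using only the figures quoted above, shows what these numbers imply; the per-car lifetime figure is derived from them rather than sourced separately.

```python
# Back-of-the-envelope check of the figures quoted above.
training_emissions_t = 270            # tCO2 for training one large model (study estimate)
cars_equivalent = 5                   # "about five average cars"
inference_emissions_t_per_year = 8.4  # tCO2/year for running GPT-3 on cloud infrastructure

implied_car_lifetime_t = training_emissions_t / cars_equivalent
years_to_match_training = training_emissions_t / inference_emissions_t_per_year

print(f"Implied lifetime emissions per car: {implied_car_lifetime_t:.0f} tCO2")        # ~54 t
print(f"Years of inference to equal one training run: {years_to_match_training:.0f}")  # ~32 years
```

In other words, under these estimates the one-off training cost dominates, while inference adds a steady yearly contribution on top.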
Data centers powering AI models contribute to growing electricity demand; in the US, data centers are projected to account for up to 12% of national electricity consumption by 2028. Moreover, AI infrastructure requires vast amounts of water for cooling, with some large-scale data centers using millions of liters annually. While AI plays a crucial role in automating ESG reporting, reducing compliance burdens, and increasing transparency, companies must ensure that their reliance on AI aligns with sustainability principles. This necessitates investments in energy-efficient AI models, the use of low-carbon cloud computing, and commitments to power AI operations with renewable energy sources to mitigate the unintended environmental consequences of digital automation.
However, recent advancements are rapidly enhancing the energy efficiency of LLMs. Researchers at the University of California, Santa Cruz, have developed innovative techniques that eliminate the most computationally expensive elements of LLMs, such as matrix multiplication. This breakthrough has enabled the operation of billion-parameter-scale language models on just 13 watts of power, roughly equivalent to the energy consumption of a standard lightbulb. Moreover, companies like DeepSeek have introduced highly efficient LLMs that require less than a tenth of the computing power needed for previous models, such as Meta’s Llama. This significant reduction in energy requirements not only lowers operational costs but also diminishes the environmental impact associated with AI-driven processes.
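The scale of that gain is easier to grasp with a rough annualised comparison. In the sketch below, the 700 W figure for a single data-centre GPU and the 0.4 kgCO₂/kWh grid intensity are illustrative assumptions, not values taken from the studies cited above.

```python
# Rough annual energy comparison: a 13 W matmul-free model vs. a single
# data-centre GPU running continuously. The GPU power draw and grid
# carbon intensity are illustrative assumptions.
HOURS_PER_YEAR = 24 * 365
GRID_INTENSITY_KG_PER_KWH = 0.4

def annual_footprint(watts: float) -> tuple[float, float]:
    """Return (kWh/year, tCO2/year) for a device drawing `watts` continuously."""
    kwh = watts * HOURS_PER_YEAR / 1000
    return kwh, kwh * GRID_INTENSITY_KG_PER_KWH / 1000

for label, watts in [("13 W matmul-free model", 13), ("700 W data-centre GPU", 700)]:
    kwh, tco2 = annual_footprint(watts)
    print(f"{label}: {kwh:,.0f} kWh/year, ~{tco2:.2f} tCO2/year")
```

Even under these simplified assumptions the gap is two orders of magnitude, which is why efficiency work of this kind matters for the footprint of AI-driven reporting itself.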
These advancements are particularly pertinent for ESG reporting, where the environmental implications of utilizing AI must be carefully weighed. By adopting more energy-efficient AI models and integrating renewable energy sources, organizations can align their sustainability reporting practices with broader environmental goals, ensuring that the tools used to measure sustainability do not inadvertently contribute to environmental degradation.
Machine-to-machine communication and AI are set to redefine corporate sustainability, integrating ESG obligations with technological innovation. Rather than being at odds, regulatory compliance and sustainability now converge through intelligent automation. AI-driven systems streamline reporting, reducing compliance burdens while ensuring greater transparency and accountability. The ability to process vast environmental data sets enhances investor decision-making by surfacing critical insights that were once difficult to access.
However, long-term success will require a strategic equilibrium: leveraging AI’s efficiency while maintaining human oversight to ensure ethical and informed sustainability practices. Looking further ahead, the unspoken objective of the growing convergence between machine-to-machine communication and AI agents is a fully automated reporting and evaluation system that informs, analyses and acts on AI-generated reports derived from smart meters and other connected devices.