Researchers from King’s College London explore how machine learning professionals perceive and engage with environmental sustainability in their daily work. Martin Cooper MBCS reports.

Despite growing awareness of the environmental costs of machine learning (ML), many practitioners still view sustainability as a secondary concern, according to a new study presented at the ACM Conference on Fairness, Accountability, and Transparency (FAccT ’25): https://tinyurl.com/yc2fcza4. The research, based on interviews with 23 ML professionals across academia and industry, reveals that while interest in ‘green AI’ is rising, practical action remains limited due to a perceived lack of agency, disciplinary pressures and uncertainty around impact measurement.

One participant, a PhD student in life sciences, captured the tension between ethical concern and professional obligation: ‘I need to do my research and if I was to tell my supervisor no, I’m not going to use the HPC [high performance computing system] because I feel bad for the penguins in the Antarctic, then that wouldn’t go down so well.’ This quote, emblematic of the study’s findings, highlights the moral discomfort some practitioners feel about the environmental footprint of their work, even as they continue to prioritise performance and deadlines.

Machine learning and sustainability in practice

The study, conducted by researchers from King’s College London, aimed to explore how ML professionals perceive and engage with environmental sustainability in their daily work. Interviews covered a range of roles, from engineers and data scientists to lecturers and PhD students, with participants based primarily in the UK, Turkey and China. The researchers found that even among those with strong personal concern for climate change, sustainability was often treated as a ‘nice-to-have’ rather than a core professional responsibility.

Several participants expressed eco-anxiety and guilt, yet felt powerless to effect change; one academic researcher voiced concern but saw little that any individual could do. Others pointed to large language models (LLMs) and big tech firms as the real culprits, arguing that their own contributions were comparatively negligible.

The nature of ML work itself reinforced this perception. Many interviewees described their models as ‘lightweight’ or ‘basic’, often running on local machines rather than energy-intensive cloud infrastructure, with one senior data scientist noting that their energy use was minuscule compared with LLM pre-training. Such comments reflect a broader belief that responsibility lies elsewhere — whether with managers, infrastructure providers, or policymakers.

The study also identified disciplinary challenges that hinder sustainability efforts. ML practitioners are typically incentivised to optimise for performance, accuracy and speed, with little room to consider environmental impact. One participant remarked that sustainability is ‘not a metric of performance.’ Others cited time constraints, lack of client demand, and absence of internal accountability mechanisms as barriers to change.

Even when eco-feedback tools were available, their influence was mixed. Some participants welcomed the idea of carbon tracking technologies, suggesting they could raise awareness and inform better practices. However, others were sceptical, questioning the accuracy of such tools and doubting their ability to drive meaningful behaviour change.
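
The study does not prescribe particular tools, but as an illustration of the kind of carbon tracking participants described, a minimal sketch using the open source codecarbon library (chosen here as an assumption for illustration; other eco-feedback tools exist) might look like this:

```python
# Minimal sketch: estimating the carbon footprint of a training run with
# the open source codecarbon library (pip install codecarbon).
# The model and dataset below are illustrative placeholders, not from the study.
from codecarbon import EmissionsTracker
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)

tracker = EmissionsTracker(project_name="demo-training-run")
tracker.start()
try:
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X, y)  # the workload whose footprint we want to estimate
finally:
    emissions_kg = tracker.stop()  # returns estimated kg of CO2-equivalent

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```

Sceptics in the study would point out that such figures rest on assumptions about hardware power draw and grid carbon intensity — precisely the accuracy concern participants raised.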

Integrating sustainability into AI

Despite these challenges, the study outlines several paths forward. Technical solutions such as reusing models, adopting smaller architectures, and using more efficient programming languages were mentioned. Workplace policies, managerial leadership and more precise metrics were also seen as potential enablers. One Responsible AI Lead admitted: ‘We’re now debating whether sustainability should be one of the responsible AI measures. It should be. But we don’t know how to measure that.’
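
To make ‘reusing models and adopting smaller architectures’ concrete, a hedged sketch follows; the model name is an illustrative example from the Hugging Face hub, not something the study evaluated:

```python
# Illustrative sketch: reusing an existing fine-tuned, distilled checkpoint
# instead of training from scratch or serving a larger model.
# DistilBERT retains most of BERT's accuracy with roughly 40% fewer
# parameters, reducing energy use per inference. Model choice is an assumption.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("The upgrade made our pipeline noticeably faster."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```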

Regulatory approaches were widely supported but viewed with scepticism. Participants discussed carbon taxes, data retention limits, and incentives for sustainable practices, yet doubted their feasibility due to lobbying, intergovernmental competition and the complexity of ML’s distributed infrastructure. One participant commented: ‘I’ll be amazed if I ever see any regulation directly related to AI and environmental impact.’

Ultimately, the study calls for a cultural shift in how ML is approached. Several interviewees criticised the overuse of AI, arguing that simpler solutions are often more appropriate. As one industry professional put it: ‘If your problem can be solved with a simple Excel bot or macro, then why are you going for an LLM agent?’ Others suggested that slowing down development could foster more thoughtful innovation.
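
That ‘simplest tool first’ argument can be shown in a few lines; the file name and column below are hypothetical, and the point is only that a deterministic script often does the job an LLM agent might be reached for:

```python
# Hypothetical example: summing an 'amount' column from a CSV export.
# A task like this needs only the standard library, not an LLM agent.
import csv

def total_spend(path: str) -> float:
    """Sum the 'amount' column of a CSV file, deterministically."""
    with open(path, newline="") as f:
        return sum(float(row["amount"]) for row in csv.DictReader(f))

print(f"Total: {total_spend('invoices.csv'):.2f}")  # 'invoices.csv' is illustrative
```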

The researchers conclude that while individual action is limited, collective responsibility and systemic change are essential. They advocate for better data, interdisciplinary collaboration, and integration of sustainability into ML education and training. As the environmental impact of AI continues to grow, the voices of practitioners — often caught between ethical concern and professional constraint — offer a crucial perspective on how the field might evolve.