AI for Public and Planetary Benefit
As OpenAI falls apart, it seems fitting that the House of Lords is holding a hearing on Large Language Models (LLMs) today. The chaos at the world’s most prominent AI company is causing unrest right across the sector, and rightly encouraging many businesses to rethink their rapid embrace of its products.
However, despite a range of experts from industry and the regulators being invited to give evidence in Westminster today, it is deeply disappointing to see so little representation from civil society. This is a familiar and seemingly ingrained problem in how our political system tries to understand AI. Only three weeks ago, the Prime Minister was courting billionaires for advice on how to fix problems they created – whilst ignoring the expertise that exists within civil society.
As our AI and Society Forum last month showed, there is no shortage of expertise within civil society. Speakers at the forum discussed the impact of LLMs on immigration decisions, welfare and benefits claims, workers’ rights and employment, and the future of our planet.
In the evidence which we submitted to the House of Lords Digital and Communications Committee, we identified four key issues and risks associated with LLMs, which we felt required greater scrutiny from policymakers:
Firstly, we need to understand the limitations and applicability of narrow AI. Whilst OpenAI tears itself apart over its stated aim of delivering Artificial General Intelligence, the reality is that conscious machines remain the preserve of Hollywood rather than computing labs.
Secondly, we must incentivise data quality to improve how LLMs operate. It is now well established that many LLMs do not draw upon representative data, which contributes to biased outputs and harmful decision-making.
Thirdly, policymakers must not ignore the environmental impact of LLMs and of AI models more broadly. Training and running large models consumes significant energy and water, yet the Government considers environmental harms to be ‘out of scope’ for its AI white paper, prioritising short-term growth over longer-term environmental protection.
Finally, we must counter the policy influence of corporate actors. Of course, AI businesses should – and must – be consulted in policy development. However, failing to also engage civil society, the wider research community, and the public will lead to undemocratic and unrepresentative outcomes.
The risks and opportunities that AI presents should not be viewed in opposition to one another. Rather, we believe we need a responsible, rights-respecting approach to AI that prioritises Public and Planetary Benefit. A Public and Planetary Benefit model would allow us to consider the complexities of AI, rather than adopting a “whack-a-mole” approach to risk that stifles innovation.
Our approach to AI regulation cannot rely solely upon ex-post regulatory interventions; it also requires an effective Industrial Strategy and a clear political vision that sets out the role of LLMs in a modern democracy. Together, these components create a functioning and sustainable regulatory system.
Unless parliamentarians are content with corporate capture of these technologies, they must embrace the vast expertise that exists within civil society. We must work together to ensure that AI works for eight billion people, not eight billionaires.
—
If you also think AI should work for 8 billion people, not 8 billionaires, buy a t-shirt!