Earlier this year, I was very fortunate to work on a paper for the Tony Blair Institute in collaboration with Lord Hague. Titled “A New National Purpose”, it laid out a science and tech policy roadmap that we believed the UK needs to embrace in order to stave off further decline.

Much has happened since then that underscores the importance of our argument there, nowhere more so than in Artificial Intelligence. As soon as we hit publish on the initial report, we were thinking about how we would catch lightning in a bottle again, this time with an NNP-style paper focused specifically on AI.

AI’s unpredictable development, the rate at which it changes and its ever-increasing power mean its arrival may present the most substantial policy challenge we have ever faced, for which our existing approaches are ill-configured.

The latest report, out today, considers:

  • The complete reset needed in the state apparatus of AI policy. Whitehall generalists are being given responsibility for highly technical AI questions, and the AI advisory mechanisms into government completely missed the breakthroughs in transformers and LLMs.
  • What new institutions to develop safe, interpretable advanced AI systems could look like. This is the UK’s real competitive advantage, and where we can define our role in the world.
  • What sensible regulation of near-term risks (we focus on deepfakes and disinformation) and catastrophic risk looks like.
  • The policy needed to supercharge the UK’s commercial AI ecosystem, reduce our overreliance on DeepMind, and safely deploy AI in public services.

The full report can be found here, but to whet your appetite I will throw down a few of the recommendations:

  • Dissolve the AI Council, replacing the government’s key AI advisory mechanism with a new set of technical experts feeding into the Foundation Model Taskforce.
  • The Office for AI should reform the way it receives advice, creating an ‘external experts panel’ that answers regular questions, including polling on technical and social-scientific questions about AI progress. This would be modelled on the ‘US Economic Experts Panel’ run by the Chicago Booth School, where results are made public, and on the White House Council of Economic Advisers.
  • Increase the size of the UK AI Research Resource to 30,000 accelerators, with a biannual review by the AI Taskforce to update the scale when necessary. The UK should also build to exascale capacity by the end of 2023, rather than the planned target of 2026.
  • The UK AI Research Resource should provide not only compute, but also cloud-based and API access to frontier models for qualified researchers. This can exist in the form of hosting cloud-based, open-source models to support development for resource-constrained researchers, as well as through partnerships with existing model developers. 
  • Labs should be required/encouraged to report training runs above a compute threshold (e.g. amount of compute used for GPT-4 or higher). There should also be reporting requirements for compute providers who help with deploying large scale inference.
  • Creating AI SENTINEL, an international research laboratory and regulator focussed on researching and testing interpretable, safe AI technologies, open to partners. 
  • Establish interdisciplinary “Lovelace Disruptive Innovation Laboratories”, run in collaboration with universities and industry, employing small teams to work at the intersection of AI and 15 different disciplines.
  • The government should fund a mechanism to create beneficial but costly datasets. This should be a separate entity, sandboxed and separately funded, housed within the National Physical Laboratory, the ONS or the EPSRC itself. It would first run a public call for desired valuable data, then consider proposals and either submit bids or fund their creation outright where feasible.
  • Require generative AI companies to label the synthetic media “deepfakes” they produce, adhering to content provenance standards such as the Coalition for Content Provenance and Authenticity (C2PA) industry standard.

To my knowledge, no other think tank has put out something this broad on the subject. We could not cover everything, and we risk upsetting people whose patch we have waded onto. But we need a document global leaders can hold and say ‘this is what we believe’. I think we have had a pretty good stab at that.

Never before has a technology so transformative been so uncertain in its destination. 

But without systemic reform of Whitehall we will not capture the amazing value that AI presents, and we will be swarmed by problem after problem, not knowing our left boot from our right.

More money and better policies will only go so far. Not to go too ‘people, ideas, machines, in that order’, but if the usual suspects end up determining foundation model and compute funding, and the key AI bodies don’t end up directly reporting to the PM, we risk taking a step backwards, not forwards. The same can be said for our politicians, who need to get up to speed ASAP. I am fairly confident that AI will be a doorstep issue brought up in 2024. Six months ago that was seen as a pretty nutty idea. Now, Westminster columnists are opining on this likelihood.

12 months ago when I told people what my job was, I got slightly glazed eyes. Now, every single person I speak to has a view and great questions.

If you have thoughts on how we can improve our thinking, or want to help make these things actually happen, we would love to hear from you.

If you are considering how to get into work in AI policy, I would also encourage you to get in touch. I will be especially helpful if you are early in your career because I am only three years out of uni. But come and join the club!
