Research programme to ensure UK economy uses AI to grow safely

Researchers to be supported in boosting defences against societal risks such as deepfakes and cyber-attacks.

AI Safety Institute: new grants scheme open for applications

Researchers focused on boosting society’s resilience against AI risks such as deepfakes, misinformation, and cyber-attacks can now access government grants to drive forward their work, helping to ensure the safety of AI as the UK taps into the technology’s potential to spark economic growth and improve public services.


The scheme, launched today (Tuesday 15 October) in partnership with the Engineering and Physical Sciences Research Council (EPSRC) and Innovate UK, part of UK Research and Innovation (UKRI), is focused on how society can be protected from the potential risks of AI. It will also support research to tackle the threat of AI systems failing unexpectedly, for example in the financial sector.


Tackling these risks head on will boost public confidence in a technology which holds enormous potential to spark long-term growth, while keeping the UK at the heart of research into responsible and trustworthy AI development. Ensuring public confidence in AI is central to the government’s plans for seizing its potential, as the UK harnesses the technology to drive up productivity and deliver public services fit for the future.


To ensure the UK can continue to harness the enormous opportunities of AI, the government has also committed to introducing highly targeted legislation for the handful of companies developing the most powerful AI models, ensuring a proportionate approach to regulation rather than new blanket rules on its use.


Systemic AI safety focuses on the systems and infrastructure in which AI is being deployed across different sectors. The programme launched today aims to spark a broad range of research identifying the risks of frontier AI adoption in critical sectors such as healthcare and energy, along with potential solutions that can then be developed into long-term tools for tackling AI risks in these areas.
