
Labour’s gamble to turn AI threat into opportunity

This article is an on-site version of our The State of Britain newsletter. Premium subscribers can sign up here to get the newsletter delivered every week. Standard subscribers can upgrade to Premium here, or explore all FT newsletters

Hello and welcome to the State of Britain. Last week I wrote about a UK government entranced by the economic potential of a rising global force, feeling no choice but to rush in while trying as best it can to construct safeguards against the very real dangers. 

That, of course, was China. But as Sir Keir Starmer actually visits Beijing this week, I’m struck by how that description could equally apply to his government’s approach to artificial intelligence. 

You only have to look at the US stock market to see the huge potential upside of a technology that could radically reshape whole industries, offering capabilities and cost savings that were previously unthinkable. 

Starmer’s Labour government has largely downplayed the safety concerns of his predecessor and has been betting big that Britain can profit from the AI boom. Only this morning, it announced a fifth “AI Growth Zone”, in Lanarkshire, where data centres will get energy discounts of up to £24/MWh. 

Government cash is also being invested directly in AI: ministers recently put £25mn into Kraken, an energy analytics company the government sees as a national champion, with promises of more to come. Public investment in AI research is being stepped up, and a government determined to bring down immigration is making an exception for software engineers, targeting AI specialists through its Global Talent visa. 

But, quietly, ministers also acknowledge the risks of AI cannot be ignored. Even leaving aside the dangers of AI-aided terrorism or robot domination, this week Dario Amodei, co-founder and chief executive of Anthropic, warned that the jobs disruption would be “unusually painful compared to past technologies”. AI was advancing so fast and affecting so many sectors at the same time that the shock to labour markets “will be unprecedented in size”. 

Is AI coming for your job?

Liz Kendall, the technology secretary, told me this week that “disruption is inevitable”, and that jobs will be lost. She is setting up a cross-government unit, including employers and trade unions, to monitor the impact of AI on jobs. 

She said it was “absolutely essential” to gather the best evidence both in this country and internationally about the impact of AI on the labour market “and that we bring together the full force of government to prepare Britain for the future”.

Her vision focuses almost entirely on what government can do to prepare workers displaced by AI to get jobs in new fields. She compared the impact with the painful process of deindustrialisation a generation ago — a shift that large swaths of the country have still not adapted to. 

“Whole industries were decimated and people were just left to cope on their own,” she said. “We’re completely different.” Kendall, an optimistic type, believes there will be a net gain of jobs. 

Arguing that Labour would help the losers from AI through the transition, this week she promised that training in using AI would be available to all British adults. A worthwhile initiative, clearly, but you’d have to be as optimistic as Kendall to be confident that further education is up to the task if AI sweeps away white-collar industries at which Britain currently excels. 

Jason Stockwood, the new investment minister, told my colleagues that “some sort” of universal basic income could be introduced, an argument unlikely to find favour with the chancellor. 

Whether the government can — or should — step in to protect these jobs in the first place is a far harder question, to which Labour does not yet have a clear answer. Kendall’s ambition to beat the US and China not on the scale of AI but on the speed of its adoption across the economy rather suggests not. 

Socratic approach

Ministers are on firmer ground in arguing how government can shape this adoption. No 10 has told departments to prioritise “citizen impact” and this week Kendall asked tech companies to help build dedicated AI tutors for England’s schools.

Officials tell me that rather than banning AI — which doesn’t work and encourages cheating — or just giving up and allowing ChatGPT to tell pupils the answer, there is scope to promote the development of technologies that actually inspire learning and independent thought. Ministers say disadvantaged pupils have the most to gain from such software, giving them advantages that wealthier families pay private tutors handsomely for. 

One person in government described the goal as a “Socratic” AI model that tests and extends student knowledge through question-and-answer sessions. Training AI on questions and answers from real pupils in schools in England will produce something far more useful than off-the-shelf models, they say. 

The dystopian view

The AI Security Institute is also mulling what can be done about the rapid rise of “AI companionship”. In a report last month it found that 33 per cent of UK adults had used AI for emotional needs, with 8 per cent doing so weekly, warning that better safeguards were needed against “emotional dependence” on AI. 

The case of Adam Raine, the American 16-year-old who took his own life last year after discussing suicide with ChatGPT, is a clear illustration of the risks. But officials say these need to be balanced against real benefits in reducing loneliness among teenagers and older people alike.

A nation of unemployed workers displaced by AI turning to machines for human connection may sound as dystopian as anything the Chinese Communist Party can come up with. But the clear difference is that, unlike with China, western governments can shape the path AI takes. 

This month concerted global pressure forced Grok to turn off a feature that allowed users to virtually undress photos of real people. Clearly, a battle against using AI for sexualisation of children is among the easiest to win. But the precedent is there. What it means in future is less clear. 

Labour, a party temperamentally comfortable with regulation and government intervention, might be well placed to lead such a discussion. “People will never seize the opportunities [of AI] if they’re worried about what it means for them and their jobs and . . . if their kids aren’t safe online,” Kendall said. 

Yet unlike with the rowdy political argument over China, debate over how far government can and should try to limit or shape AI has been largely ignored at Westminster. That will have to change. 

Britain in numbers

[Line chart: police officer numbers decline after a sharp rise under the Tories]

Far-reaching plans for police reform this week led to a row over a surprisingly basic question — how many officers do we need? Shabana Mahmood, the home secretary, focused on goals of merging forces, centralising key functions and — yes — using more AI to catch criminals. 

Chris Philp, Mahmood’s Conservative shadow, accused her of a “con trick” because the number of police officers is falling. Indeed it is — numbers were down 1,303 on a full-time equivalent basis in the year to last March, and are thought still to be falling. 

Does it matter? Police numbers have been on a whiplash trajectory in recent years, as steep cuts in the austerity years were rapidly reversed to meet Boris Johnson’s pledge of 20,000 more officers. Unlike many Johnson pledges, this one was kept, but in a way that critics say highlights the problem with numerical targets. There are claims that vetting standards were dropped to boost numbers, leading to recruitment of officers with criminal links. 

Mahmood argued that many of the extra officers were “doing desk jobs” like HR and admin that could have been more effectively done by civilian support staff. There is also a broader argument about whether traditional police officers are the right people to hire to fight rising cyber crime and fraud. 

But voters want police to be visible, not least to make them feel safer, which is why Mahmood also promised 13,000 more “neighbourhood” officers. Reform of the police may need to start with working out what we actually want officers to do.


The State of Britain is edited by Gordon Smith. Premium subscribers can sign up here to have it delivered straight to their inbox every Thursday afternoon. Or you can take out a Premium subscription here. Read earlier editions of the newsletter here.



