How AI is actually changing day-to-day work | AI (artificial intelligence)


Hello, and welcome to TechScape. I’m your host, Blake Montgomery, chuffed about One Battle After Another’s big win at the Oscars. This week, we’re examining how artificial intelligence is changing the everyday reality of white-collar work in the US, the roots of the current appetite for AI in war, and the United Kingdom’s phantom datacentres.

Professors and coders alike wrestle with AI

As part of the Guardian’s Reworked series on AI’s effect on modern work, we published two stories this week on how specific jobs are changing: those of university professors and Amazon’s technical employees. Both groups are wrestling with profound shifts.

Humanities professors find their students able to outsource tasks like writing that are meant to develop critical thinking, leaving those students with little learning to show for completed assignments. Some Amazon corporate employees say they face the opposite problem: their managers tell them AI can speed up all their tasks, but the tools meant to automate their work slow it down, impeding rather than improving it. They find themselves evaluated on how often they use AI tools that perform their tasks worse than they would unassisted. In a statement, Amazon disputed the characterization that AI encumbered its workers rather than enabling them.

When Silicon Valley entrepreneurs talk about AI altering everything about work, they may be right, but there is often a utopian tone to their predictions that glosses over the discomfort of rapid change. Disruption is a mess in its particulars, which both of these stories demonstrate.

Alice Speri reports on humanities professors’ angst:

In fields most explicitly associated with the production of critical thought – what is collectively referred to as the “humanities” – most scholars see AI as a unique threat. With the potential for AI to increasingly substitute independent thought, a pressing question becomes even more urgent: what exactly is a university education for?

The Guardian spoke with more than a dozen professors – almost all of them in the humanities or adjacent fields – about how they are adapting at a time of dizzying technological advancement with few standards and little guidance.

By and large, they expressed the view that reliance on artificial intelligence is fundamentally antithetical to the development of human intelligence they are tasked with guiding. They described desperately trying to prevent students from turning to AI as a replacement for thought, at a time when the technology is threatening to upend not only their education, but everything from the stock market to social relations to war.

Most professors described the experience of contending with the technology in despairing terms. “It’s driving so many of us up the wall,” one said. “Generative AI is the bane of my existence,” another wrote in an email. “I wish I could push ChatGPT (and Claude, Microsoft Copilot, etc) off a cliff.”

Read more: ‘I wish I could push ChatGPT off a cliff’: professors scramble to save critical thinking in an age of AI

Illustration: Félix Decombat/The Guardian

Varsha Bansal reports on the struggles of Amazon’s technical employees:

More than half a dozen current and former Amazon corporate employees, in roles ranging from software engineer to user experience researcher to data analyst, told the Guardian that Amazon is pressing employees to integrate AI across all aspects of their work, even though these workers say this push is hurting productivity. They say Amazon is rolling out AI use in a haphazard way while also tracking their AI use, and they’re worried the company is essentially using them to train their eventual bot replacements. All of this, they said, is demoralizing.

When Dina, a software developer based in New York, joined Amazon two years ago, her job was to write code. Now, it’s mostly fixing what artificial intelligence breaks.

The internal AI tool she’s expected to use, called Kiro, frequently hallucinates and generates flawed code, she says. Then she has to dig through and correct the sloppy code it creates, or just revert all changes and start again. She says it feels like “trying to AI my way out of a problem that AI caused”.

“I and many of my colleagues don’t feel that it actually makes us that much faster,” Dina said. “But from management, we are certainly getting messaging that we have to go faster, this will make us go faster, and that speed is the number one priority.”

Just days after speaking to the Guardian, Dina was laid off.

Read more: Will AI take Australian jobs, or is it just an excuse for corporate restructure?

How this moment in AI and war came to be

Dario Amodei and Donald Trump. Composite: Getty Images

Two recent events prefigured how essential artificial intelligence would be to the US and Israel’s war on Iran.

The first demonstrates just how much the debate over AI in war has changed: in 2018, Google employees scuttled the company’s military AI work with major protests. How remote that debate seems now, when Anthropic is suing the Department of Defense in a bid to continue, not stop, its work with the US military. Anthropic is fighting Trump officials not over whether its AI should be used in war, but how.

The second, Israel’s technologically turbocharged invasion of Gaza, preceded the disastrous targeting of an Iranian girls’ school less than a month ago that left hundreds dead, many of them children. The kind of mass targeting enabled by AI and put to the test in Gaza has been replicated in Tehran and beyond.

My colleague Nick Robins-Early writes on Google, Anthropic, and the Pentagon:

Anthropic’s refusal to remove safety guardrails and the Pentagon’s subsequent retaliation have highlighted longstanding concerns over the use of AI for conflict. However, the fight has shown how much the goal posts have moved in less than a decade when it comes to big tech’s ties to the military.

In 2018, thousands of Google employees protested against Project Maven, a program to analyze drone footage for the DoD. Eight years later, Google announced just this week that it would provide its Gemini artificial intelligence to give the military a platform for creating AI agents to work on unclassified projects. After Google dropped the Project Maven contract in 2019, Palantir took it over. Maven is now the name of the classified system that military personnel use to access Anthropic’s Claude, according to the Washington Post.

Avner Gvaryahu writes in an essay arguing that AI firms have become the new defense contractors:

Israel’s recent war in Gaza has been described as the first major “AI war” – the first war in which AI systems played a central role in generating Israel’s list of purported Hamas and Islamic Jihad militants to target: systems that processed billions of data points to rank the probability that any given person in the territory was a combatant.

Whether or not an algorithm selected this school, it was selected by a system that algorithmic targeting built. To strike 1,000 targets in the first 24 hours of the campaign in Iran, the US military relied on AI systems to generate, prioritize, and rank the target list at a speed no human team could replicate.

Gaza was the laboratory. The strike at the Shajareh Tayyebeh elementary school in Minab in southern Iran is the market. The result is a world in which the most consequential targeting decisions in modern warfare are made by systems that cannot explain themselves, supplied by companies that answer to no one, in conflicts that generate no accountability and no reckoning. That is not a failure of the system. That is the system.

Tech’s big flex: billionaire dollars in US politics

California billionaires up political action with multimillion-dollar donations

Trump administration reportedly set to be paid $10bn for brokering TikTok deal

With $200m to spend on the midterms, crypto hopes to repeat its 2024 success: ‘It’s the most critical time’

Tech in the global south

‘Invasive’ AI-led mass surveillance in Africa violating freedoms, warn experts

Nigeria’s online content creator market has boomed. Can the skit-makers and streamers make it pay?

India’s scattered workforce: the chatbot keeping families in touch during emergencies

Where are the UK’s promised datacentres?

Illustration: Anais Mims / Guardian Design/Getty

The datacentre investment boom is one of the biggest infrastructure gambles of this era, and Britain may be uniquely exposed. My colleagues Aisha Down, Robert Booth and Dan Milmo report on the UK’s phantom datacentres:

On Friday, more than three years after the launch of ChatGPT unleashed the AI hype, the UK reported zero GDP growth for January. The Monday prior, the Guardian exposed a fissure in the AI edifice. An investigation found that the UK’s flagship AI deals, many announced with great fanfare during Donald Trump’s state visit last September, are not as they were described in government and corporate press releases. Key projects are delayed or improbable; crucial “investments” are in fact vague agreements with mostly US tech companies, desperately spun by ministers as an engine for economic growth.

Most emblematically, the Guardian’s investigation featured a site in Loughton, Essex, that the government said would host “the largest UK sovereign AI datacentre” by the end of 2026. The then technology secretary, Peter Kyle, called it “a fresh start for our economy and for working people”. A year later, the site was still being used as a scaffolding yard, with almost no chance of opening by the promised date. After the Guardian’s investigation, Nscale confirmed it had bought the land on which the datacentre is to be built – eight months after it said it had done so, in January 2025. It still does not have planning permission, but said on Friday it was planning to start construction before July and would switch on the datacentre between April and July 2027.

On the big screen

Apple iPad Air M4 review: still the premium tablet to beat

Samsung Galaxy S26 Ultra review: its huge screen blocks shoulder surfers from spying on you

The wider TechScape

New study raises concerns about AI chatbots fueling delusional thinking

This CEO warns that Democratic voters are most at risk from automation | Arwa Mahdawi

Meta reportedly plans sweeping layoffs as AI costs increase

Google scraps AI search feature that crowdsourced amateur medical advice
