As uptake of generative artificial intelligence and machine-learning technologies (AI) continues to accelerate, long‑standing project workflows are being augmented or replaced by more dynamic, technology‑driven approaches. AI isn’t just speeding up design and site monitoring; it’s beginning to run aspects of projects itself — predicting output, balancing demand, scheduling maintenance and improving quality control.
In this article, we look at how you can capitalise on the efficiencies and competitive advantages AI can offer, whilst managing legal and operational risk. We cover:
- Where AI is already creating value for project leaders
- How AI uses impact your legal risk profile
- Practical steps for minimising AI risk
How AI in construction and infrastructure is overhauling traditional processes
AI is becoming embedded in everything from early‑stage design to operations management. For example (non-exhaustively):
- Design and modelling: Generative tools can produce 3D renderings from initial sketches or natural language prompts. AI tools can analyse thousands of permutations and integrate with building information modelling (BIM) platforms to test compliance with regulatory requirements in seconds rather than weeks.
- Material planning and project delivery: Automated systems can produce material take‑off (MTO) lists within minutes, and project managers are increasingly using machine‑learning insights to optimise schedules, anticipate pinch points and support early analysis of delay or disruption claims.
- Safety: On-site, safety is being redefined through real‑time monitoring technologies. AI‑enabled vision systems, including drone-monitoring (more on this below), can identify unsafe conditions within seconds. Wearable AI devices can track worker behaviour and environmental risks. These tools not only help prevent accidents before they happen, but also create safety records that can be relied on if issues arise later.
- Renewable forecasting and storage optimisation: AI models can fuse weather, historical generation data and live telemetry to predict need/output and decide when and where to store and release energy, supporting grid code compliance and better dispatch across energy infrastructure projects.
- Smart grids and Demand Resource Management (DRM): AI can learn local consumption patterns and dynamically balance load, improving stability and network efficiency, while DRM creates a live link between system operators and users to shape and optimise demand in real time.
- Drones and computer vision: For lines, towers, wind turbines, cables, pipe corridors, offshore assets and other challenging sites, AI‑assisted drones can reach hazardous locations, capture thermal/aerial imagery and feed digital twins (live digital replicas of a physical asset/system synced with real‑world data) for faster surveys, defect recognition and response.
- Predictive maintenance for critical assets: From turbines and substations to pipelines, properties and railways, AI can flag anomalies across projects and portfolios earlier than previously possible, and can schedule repairs in advance to reduce damage and disruption.
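To make the predictive-maintenance idea above concrete, here is a minimal sketch of one simple anomaly test: a rolling z-score comparing each new sensor reading with its recent baseline. The sensor values and thresholds are illustrative assumptions; production systems use far richer models, but the principle (learn "normal", flag departures early) is the same.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=10, z_threshold=3.0):
    """Flag readings that deviate sharply from the recent rolling baseline.

    Each reading is compared with the mean and standard deviation of the
    preceding `window` readings; large deviations are flagged for review.
    """
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
            anomalies.append(i)  # index of the suspect reading
    return anomalies

# Hypothetical turbine bearing temperatures (degrees C): steady, then a spike.
temps = [60.1, 60.3, 59.9, 60.2, 60.0, 60.4, 59.8, 60.1, 60.2, 60.0, 75.5]
print(flag_anomalies(temps))  # the spike at index 10 is flagged: [10]
```

Flagging the anomaly early is what allows a repair to be scheduled before damage or disruption occurs.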
Together, applications such as these are reshaping the conception, delivery and management of major projects. This brings significant efficiencies and opportunities. But what are the attendant legal risks?
How is AI in construction and infrastructure impacting your risk profile?
As algorithmic recommendations increasingly influence (and make) real‑world decisions, the construction and infrastructure industries are seeing risk profile changes in the following areas:
- Contractual gaps and disputes:
- Large-scale construction and infrastructure projects are often conceived, planned and built over many years. The unprecedented speed at which AI innovations are being developed and implemented, combined with the proliferation of complex and long-term contractual arrangements across the industry, means that many projects are proceeding under contracts that weren’t necessarily drafted with AI in mind. Even where contracts do envisage the use of AI, the pace of technical development and the potential for unforeseen application can make it difficult to cater for future eventualities in the drafting.
- Determining legal responsibility and liability where AI has played a role in any outcome can also be challenging. Parties may dispute whether fault lies with the contractor deploying the AI, the counterparty providing the underlying data, the technology provider whose system produced the error, or even the AI itself [1]. Resolving such disputes is likely to require detailed technical evidence, contractual analysis and interpretation, and possibly even further developments in the law.
- Traditionally-drafted contracts may also be deficient in allocating responsibility for keeping shared AI models or inter-dependent tools accurate and up to date. This can result in disputes about undetected defects, or delays caused by outdated or incorrect data, for example, with each party believing another to be responsible.
- Intellectual Property: Where AI tools generate novel structural or engineering solutions, questions may arise as to who owns the resulting design. Contractors, employers and AI vendors may all assert rights over valuable outputs, particularly where training data or proprietary algorithms contributed meaningfully to the end result. Issues may also arise where AI systems trained on historic data reproduce confidential or patented design features on future projects, exposing parties to IP infringement claims or breaches of past contractual duties.
- Data protection and confidentiality: Increased data collection, aggregation and model training heighten the risk of privacy breaches. Wearables, drones and sensors often capture personal and sensitive data, which must be dealt with appropriately. The integration of AI into projects and workflows also heightens exposure and vulnerability to cyber-attacks. As recent high-profile attacks have demonstrated, a cyber event can be both a regulatory breach and an operational incident.
Minimising AI risks in construction and infrastructure
Governance
Minimising legal exposure begins with proactive governance. Legal teams, project managers and technical leads should work together to develop an ‘AI use register’ to map how AI is being deployed throughout the project lifecycle. Safeguards can then be built into each stage.
At the same time, accurate and consistent records of data sources, model inputs and decision logs, together with specific opportunities for human oversight built into project workflows, will likely prove vitally important in disputes where AI is a factor.
An AI use register should record every AI system, digital twin, drone workflow or automated tool used on the project. For each system, the register should set out its purpose, the data on which it relies, the points where human oversight is required, the model and version in use, and the logging and retention arrangements. This can become a reference point for procurement, assurance, incident response, disclosure and ongoing governance.
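As a rough sketch of how such a register might be structured in practice, the entry below mirrors the fields described above. The field names, system names and values are illustrative assumptions only, not a standard schema; many organisations will keep this in a spreadsheet or governance tool rather than code.

```python
from dataclasses import dataclass

@dataclass
class AIRegisterEntry:
    """One entry in a project's AI use register (illustrative fields)."""
    system_name: str             # the AI system, digital twin or tool
    purpose: str                 # what the tool is used for
    data_sources: list[str]      # the data on which the tool relies
    oversight_points: list[str]  # stages requiring human sign-off
    model_version: str           # model and version currently in use
    retention_policy: str        # logging and retention arrangements

register = [
    AIRegisterEntry(
        system_name="Turbine thermal-imaging drone survey",
        purpose="Defect recognition on blade surfaces",
        data_sources=["thermal imagery", "digital twin telemetry"],
        oversight_points=["engineer review before repair is ordered"],
        model_version="vision-model v2.3",
        retention_policy="logs retained for 7 years",
    ),
]

# The register can then drive assurance checks, e.g. confirming that
# every deployed system has at least one human oversight point.
assert all(entry.oversight_points for entry in register)
```

Structuring the register this way makes it straightforward to use as a reference point for procurement, incident response and disclosure, as described above.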
Contractual terms
Clear and forward‑looking contractual terms are critical. Agreements with contractors, consultants and AI vendors should expressly allocate ownership of AI‑generated outputs, define responsibility for errors or faulty recommendations, and address data rights, confidentiality obligations and liability caps specific to AI‑driven processes. Given the potential for disputes over IP ownership, data provenance or defective outputs, contracts must reflect the reality that AI is now a meaningful contributor to project outcomes, not merely a passive tool, and that its use must therefore be actively managed. Key points to consider for your contracts include:
Liability allocation: Liability should be allocated in a way that reflects how AI is being used, and should be supported by appropriate warranties and indemnities from integrators or software vendors. Provisions should also clarify whether obligations are governed by fitness‑for‑purpose standards or reasonable skill and care. Human sign‑off points must be expressly defined for safety‑critical, operational or market‑sensitive decisions.
AI‑ready contract schedules: Whether you’re working under NEC, JCT, FIDIC or bespoke contractual arrangements, schedules should clearly set out the scope of AI use and specify the level of disclosure required. Appropriate provisions will also differ according to the particular AI tools covered. For example, for advisory tools, you may need “no‑reliance” wording; while automated tools may require defined acceptance tests and agreed fall‑back modes to cater for eventualities such as system failure.
IP and data rights: Ensuring proper safeguards around security, data use, retention and model training will be critical to avoiding or managing data and IP disputes. Specifying ownership and licensing arrangements for AI‑generated outputs (and the datasets that underpin them), restricting the use of confidential datasets for training, defining re‑use rights, and ring-fencing sensitive information can all be effective strategies.
Operational safeguards
As well as legal safeguards, it’s important to strengthen day‑to‑day operations. In practice, this means having a straightforward plan for what to do if an AI system goes wrong, including a simple way to pause or roll back the system quickly. It also means testing your AI tools regularly to check how they behave under pressure, limiting who can access your AI systems, and keeping training environments separate from the live systems you rely on.
AI in construction and infrastructure: How we can support you
AI is transforming how major projects are designed, delivered and managed. But adopting it without the right contractual, governance and operational safeguards can expose your business to risk, and the legal issues can be complex and multifaceted.
With clear planning, smart allocation of responsibility and robust oversight, AI can become a genuine competitive advantage, and liabilities can be kept to a minimum.
With AI now influencing design, procurement, programme management and on‑site delivery across the UK’s most complex and highly regulated sectors, such as transport, energy, water, defence, and major public‑sector infrastructure, project leaders need targeted, cross‑disciplinary support.
Our Construction, Commercial and Technology specialists can provide legal and practice support as you navigate this evolving landscape. We can:
- Help you stay informed about, and train staff on, the ever-changing technological and regulatory environment
- Provide commercial and risk management advice on policies, procedures and operational practices, including assisting with the creation of an AI use register
- Undertake contract reviews and implement a robust but flexible contractual framework
- Provide advisory, transactional and strategic dispute resolution advice across contractual, IP and data protection issues within the construction and infrastructure context
- Prioritise safety, privacy and socio-environmental concerns, alongside AI and commercial considerations.
Proactively addressing legal challenges will enable responsible businesses to reap the transformative benefits of AI in construction and infrastructure. Please contact Carly Thorpe or Ryan Doodson for further information or advice.
[1] Currently, the law of England and Wales doesn’t recognise AI as a legal person. However, the Law Commission raised that possibility as a potential area for reform, in a discussion paper published in July 2025.