HSE’s Approach to Regulating Artificial Intelligence (AI) in Workplace Health and Safety
- KSH Safety Services
- Aug 11
The Health and Safety Executive (HSE) has announced that it is actively enhancing its understanding of how Artificial Intelligence (AI) is being deployed across the industries it regulates. By examining real-world AI applications, the HSE aims to identify their potential impacts on occupational health and safety.

As AI technology progresses, the HSE has pinpointed four key areas where its use may influence workplace safety:
1. Maintenance systems – AI aids in predictive inspections, failure detection, and decision support.
2. Health and safety management – AI assists in risk assessments, incident investigations, and training material generation.
3. Control of equipment and processes – AI enables autonomous vehicles, robotics, and industrial data analytics.
4. Occupational monitoring – AI-powered computer vision tracks worker behaviour and workplace conditions.
AI Within HSE’s Regulatory Scope
The HSE’s oversight of AI aligns with its core mission: preventing work-related deaths, injuries, and illnesses. This includes:
Regulating AI applications that impact workplace safety in sectors where the HSE is the enforcing authority.
Supervising AI in the design, manufacturing, and supply of workplace machinery and products under Product Safety regulations.
Monitoring AI’s influence on building safety, chemical handling, and pesticide use.
Health and Safety Law & AI
The Health and Safety at Work etc. Act 1974 forms the foundation of the HSE’s enforcement framework. As a goal-setting law, it mandates safety outcomes without prescribing specific methods, making it adaptable to emerging technologies like AI. Employers must assess and mitigate risks, regardless of the tools they use.
Risk Management in AI Applications
Under health and safety law, those who create risks are responsible for managing them. Employers and those in control of workplaces must:
Evaluate AI-related hazards, including cybersecurity threats.
Implement reasonably practicable control measures.
Integrate AI risks into standard safety protocols rather than treating them as exceptional.
Regulatory Principles for AI
The UK government’s white paper, A pro-innovation approach to AI regulation, sets out cross-sector principles for managing AI risks. For workplace safety, the key principles include:
Safety, security, and robustness – Ensuring AI systems operate reliably.
Transparency and explainability – Making AI decisions understandable.
Accountability and governance – Defining clear responsibility for AI-related risks.
Understanding AI’s Risks in the Workplace
While AI can enhance safety, it also introduces new challenges:
Human & Organisational Risks
Over-reliance on AI – Reduced worker vigilance and eroded safety culture.
Deskilling – Loss of expertise as tasks become automated.
Algorithmic stress – Excessive AI-driven monitoring leading to worker fatigue.
Warning fatigue – Frequent alerts causing critical notifications to be ignored.
Safety & Technical Risks
Inaccurate AI assessments – Faulty decisions due to flawed data or unexpected conditions.
Unpredictable behaviour – AI acting outside intended parameters.
Cybersecurity threats – Hacks leading to loss of control over AI systems.
Data privacy concerns – Risks from AI monitoring workers or incidents.
"Black box" decision-making – Difficulty explaining AI failures, especially in untrained scenarios.
Strengthening HSE’s AI Oversight
To regulate AI effectively, the HSE is:
Building internal expertise – Establishing an AI common interest group to track developments.
Collaborating across government – Contributing to national AI policy and to standards development through bodies such as BSI, ISO, and IEC.
Engaging industry & academia – Gathering insights on AI applications and safety implications.
Partnering with regulators – Working with groups like the AI Standards Forum and ICO AI Regulators Forum for consistent oversight.
Investing in research – Testing an Industrial Safetytech Regulatory Sandbox to remove barriers to AI adoption in construction.
Future Directions
As AI evolves, the HSE will continue refining its regulatory approach by:
Monitoring emerging AI trends in the UK and globally.
Engaging stakeholders to address new risks and opportunities.
Applying its risk-based, proportionate enforcement to ensure AI is used safely across all regulated sectors.
By fostering innovation while prioritising worker safety, the HSE aims to balance AI’s benefits with robust regulatory oversight.
Whichever direction the HSE takes, AI must never replace human intervention. Remember that the Management of Health and Safety at Work Regulations 1999 require employers to have access to a Competent Person. While AI can assist, it can never be that final decision maker, the Competent Person.