Last month we were at IHR Live London 2025, where an interesting panel discussion took place:
“The AI in TA Gamble — Bold Strategy or Legal Disaster?”
If you missed it, watch the full session below and catch the sharpest takes from James Clark (YOONO), Jules Morgan (Mediabrands), Hung Lee (Recruiting Brainfood) and Nick Thompson (Haleon).
The panel discussed how artificial intelligence is reshaping recruitment, especially in relation to ethics, compliance, brand reputation and hiring practices.
While AI offers major efficiency gains, speakers highlighted the importance of human oversight, close collaboration with legal teams and responsible adoption to mitigate risk and bias.
The overall message was clear: AI is inevitable, but success depends on using it thoughtfully and responsibly.
Discussion highlights
Regulation and legal readiness
Recruitment AI is a high-risk area under both current and upcoming laws.
James Clark, Chief Legal Officer at YOONO, emphasised the need for transparency, data quality and a privacy-by-design approach.
AI can be used lawfully in recruitment if it is implemented properly and responsibly.
Building trust and transparency
Candidates should always be informed when AI is used and how it affects them.
Accuracy, fairness and good record-keeping are key to maintaining trust and compliance.
Vendor accountability
Vendors should be carefully vetted. Organisations must ask about data sources, licensing and bias testing.
Suppliers who are open about complexity and show clear understanding are more trustworthy than those offering easy answers.
Culture and leadership
HR’s compliance-driven culture can restrict innovation, but avoiding all risk is unsustainable.
Employees are already using tools such as ChatGPT informally, so governance and education are better than outright bans.
Leaders must set clear AI policies and improve understanding across all departments.
Adoption strategy
Start with low-risk applications such as interview scheduling, then expand gradually to higher-risk areas.
Avoid adopting AI tools just because they are new or fashionable; focus on solving real business problems.
Shared responsibility
Responsibility is shared between employers and vendors.
Vendors must design safe and compliant systems, while employers must use them correctly and transparently.
If due diligence is done properly, accountability can be shared fairly.
The key takeaways
1. AI is transforming every stage of recruitment
AI is already screening CVs, writing job adverts and predicting cultural fit.
Candidates are adopting AI faster than employers, automating applications and even interview preparation, leaving talent teams under pressure to catch up.
2. Legal and ethical risks are both immediate and serious
The EU AI Act, whose obligations for high-risk systems take effect in August 2026, classifies AI used in recruitment as high risk.
Legal exposure exists under data protection law (GDPR), employment law and equality law.
However, when used correctly, AI can reduce discrimination and improve fairness in hiring decisions.
3. Compliance frameworks are evolving quickly
The UK AI Legal Working Group is developing standards for safe and lawful AI use.
Priorities include transparency, lawful data use, accountability and bias mitigation.
4. Talent teams are behind candidates
Compliance and governance processes slow employers down, while candidates are more agile and already taking advantage of AI.
The challenge is to balance innovation and compliance.
5. Protecting brand reputation requires collaboration
Legal, IT and talent acquisition teams need to work closely together.
Employers should use reputable vendors with strong audit trails and data protection practices.
Working with untested or non-compliant suppliers increases both legal and reputational risk.
6. AI literacy across the organisation is essential
Misuse of AI often happens because employees lack knowledge or guidance.
Blanket bans do not work; people will use AI tools anyway.
Education and guidance are essential, and AI literacy is already a legal requirement in the EU under Article 4 of the AI Act.
7. Human oversight is essential
Recruitment decisions should never be left entirely to automation.
Combining AI insights with human judgement helps to reduce bias and legal exposure.
Practical recommendations
For talent and HR leaders
Identify core problems before buying any AI tools.
Audit existing AI use to understand current risks.
Involve legal teams early in the process.
Build an AI governance framework that covers:
Vendor accountability
Data protection requirements
Transparency for candidates
Human oversight in decision-making
For recruiters and practitioners
Always check with legal or AI specialists before using external tools such as ChatGPT.
Use AI to improve efficiency, not to make final hiring decisions.
Know your organisation’s AI policies and follow them carefully.
For organisations
Invest in AI literacy training for all employees.
Keep a central record of AI tools and their uses.
Work with established vendors who can defend their own systems and, by extension, your organisation if their use is challenged.
Encourage transparency, collaboration and continuous learning.
For legal and compliance teams
Embed privacy and compliance into AI development from the start.
Maintain clear audit trails and documentation.
Align internal processes with the EU AI Act, including risk assessment, data governance and human oversight.
Conclusion
The future of AI in recruitment is neither entirely positive nor entirely negative; it is a shared learning process.
Organisations that succeed will:
Start small and adopt AI responsibly
Collaborate across legal, IT and talent teams
Prioritise fairness, compliance and human oversight
Invest in education and AI literacy
As one speaker concluded: “AI is not taking your job. It is taking the parts of your job you do not manage responsibly.”