Can ChatGPT legally be used to make hiring decisions?

Artificial intelligence has transformed recruitment from a manual process into an intelligent, data-driven system. Tools like ChatGPT can now draft job adverts, summarise CVs, and even help shortlist candidates.

The temptation to let AI “decide” who moves forward is understandable — it saves time and promises objectivity.

But one critical question remains: Can ChatGPT legally make hiring decisions?

The short answer is no, not on its own.

While ChatGPT can assist in recruitment, using it to make hiring or rejection decisions without human review can breach employment laws and data protection regulations in most countries.

This article explains the legal position and best practice, giving HR leaders and founders a global view of what’s allowed and what’s not.

What ChatGPT Can and Cannot Do in Recruitment

ChatGPT is not a recruitment platform or an applicant tracking system (ATS). It is a general-purpose language model that can assist with content creation and analysis.

✅ What ChatGPT Can Do Legally

  • Draft inclusive job adverts and descriptions
  • Generate interview questions
  • Write candidate communication templates
  • Summarise CVs or feedback for human review
  • Assist with scheduling or communication automation

❌ What ChatGPT Cannot Do

  • Independently decide who is hired or rejected
  • Evaluate human attributes like personality or culture fit
  • Make final decisions without human oversight
  • Guarantee bias-free or legally compliant outcomes

It can support human decision-making but not replace it.

The Global Legal Principle: Humans Must Decide

Across every jurisdiction, a consistent rule emerges: final hiring decisions must be made by humans, not machines.

The principle is rooted in two concerns:

  1. Fairness: Automated systems can reflect or amplify bias.
  2. Transparency: Candidates have the right to understand how decisions are made.

Let’s look at how this principle applies in five key markets.

1. United Kingdom

  • Equality Act 2010: Prohibits discrimination based on protected characteristics such as gender, age, disability, race, or religion.
  • UK GDPR & Data Protection Act 2018: Under Article 22, individuals have the right not to be subject to a decision based solely on automated processing if it produces significant effects (such as hiring or rejection).

Implication

If ChatGPT or any AI tool screens or rejects candidates automatically without human input, it breaches data protection law.

Recruiters must ensure that:

  • Human review is applied to every hiring decision.
  • Candidates are informed if automation is used in their evaluation.

The UK’s Information Commissioner’s Office (ICO) has made it clear: AI can assist but cannot autonomously determine hiring outcomes.

2. Ireland

Ireland applies the EU General Data Protection Regulation (GDPR) directly. Under Article 22, individuals cannot be subject to automated decision-making with legal or significant effects unless specific safeguards exist.

Key Considerations

  • Employers must obtain explicit consent for automated processing in recruitment, or demonstrate that it is necessary for contractual purposes.
  • Candidates must be informed about the use of AI in hiring and have the right to request human intervention or to challenge decisions.
  • The Data Protection Commission (DPC) emphasises transparency and accountability in AI use.

Implication

Irish employers may use ChatGPT to assist with candidate shortlisting or communication, but human review is legally required before decisions are finalised.

Failure to do so can expose businesses to GDPR fines and reputational risk.

3. Switzerland

Switzerland’s Federal Act on Data Protection (FADP) was updated in 2023 to align more closely with the EU GDPR. It enforces the principles of transparency, proportionality, and data minimisation.

Key Points

  • AI tools processing candidate data must ensure fairness and non-discrimination.
  • Fully automated decisions with legal consequences, such as hiring or rejection, are prohibited without human oversight.
  • Employers must disclose when personal data is processed by automated means.

Implication

Swiss employers can use ChatGPT to automate repetitive tasks, but the final decision must remain human-led. Additionally, all data processing must occur in compliance with Switzerland’s strict privacy requirements, especially regarding cross-border data transfers.

4. United States

There is no single federal law regulating AI in hiring, but multiple frameworks apply:

  • Equal Employment Opportunity Commission (EEOC): Enforces anti-discrimination laws such as Title VII of the Civil Rights Act.
  • Americans with Disabilities Act (ADA): Prohibits AI tools from screening out disabled candidates unfairly.
  • Fair Credit Reporting Act (FCRA): Applies if AI tools use background or behavioural data.

At the state level:

  • New York City Local Law 144 (2023): Requires bias audits and disclosure when automated employment decision tools are used.
  • Illinois and Maryland have introduced similar legislation around AI video assessments.

Implication

Employers can use ChatGPT to support hiring but must ensure:

  • Bias audits are conducted if AI tools influence decisions.
  • Candidates are notified when automation is part of the process.
  • Human oversight is consistently applied.

In the US, liability remains with the employer, not the software provider.

5. Australia

  • Privacy Act 1988 (Cth): Governs the collection and use of personal data, with updates expected in 2025 to strengthen AI transparency.
  • Fair Work Act 2009: Protects workers from unfair treatment, including discriminatory hiring practices.
  • Australian Human Rights Commission (AHRC): Has issued guidance stating that employers are responsible for bias or discrimination caused by AI systems.

Implication

ChatGPT can assist with content creation and initial screening, but Australian employers must maintain human accountability and transparency.

Upcoming reforms to the Privacy Act will likely introduce mandatory notification when AI is used in decision-making processes, aligning more closely with EU standards.

Global Comparison at a Glance

| Country / Region | Main Regulation | Can ChatGPT Make Hiring Decisions? | Key Requirement |
|---|---|---|---|
| UK | Equality Act 2010, Data Protection Act 2018 | ❌ No | Must include human review and candidate disclosure |
| Ireland | EU GDPR | ❌ No | Human intervention required; transparency essential |
| Switzerland | FADP 2023 | ❌ No | Human oversight and data protection compliance |
| US | EEOC rules, state and local laws | ⚠️ Limited | Requires bias audits, notice, and accountability |
| Australia | Privacy Act 1988, Fair Work Act 2009 | ❌ No | Human accountability and fairness standards |

Managing Risk: Best Practices for Ethical AI Use in Hiring

Even when legally permitted, responsible AI use requires proactive safeguards.

Here are the global best practices every HR leader should follow.

1. Maintain Human Oversight

AI should support, not replace, human decision-making. Every shortlist or recommendation generated by ChatGPT must be reviewed by a person.

For a balanced, human-led process, see our guide to automating candidate screening.

2. Ensure Transparency and Disclosure

Always tell candidates when automation or AI is used in recruitment. This is required by law in most jurisdictions and builds trust.

Include a line such as:

“Our recruitment process uses automated tools to assist with application review. All hiring decisions are reviewed and finalised by human recruiters.”

3. Protect Candidate Data

Never input personally identifiable data — such as names, addresses, or full CVs — directly into ChatGPT or other public AI tools.

Store and process data only within secure systems and disclose your data-handling practices in your privacy policy.
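As a practical illustration of the principle above, a minimal redaction step can strip the most obvious identifiers from free text before it reaches any external AI tool. This is a hedged sketch only: the `redact` helper and its patterns are our own illustration, and regexes alone will miss names, addresses, and many other identifiers, so production pipelines should rely on a dedicated PII-detection service.

```python
import re

# Illustrative patterns only; real PII detection needs a dedicated service.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with labelled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Contact Jane at jane.doe@example.com or +44 20 7946 0958."))
```

Running a step like this before any prompt is assembled means that even if a tool logs or retains inputs, no direct identifiers leave your systems.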

4. Audit for Bias Regularly

AI models can unintentionally favour certain demographics or language patterns. Run regular bias audits and adjust processes to maintain fairness.

If you use automation for screening, see our guide on how to avoid bias when automating resume screening.
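One common screening metric behind bias audits (including those required by NYC Local Law 144) is the impact ratio: each group's selection rate compared with the highest group's rate, with the EEOC's "four-fifths rule" flagging ratios below 0.8. The sketch below uses invented example figures; a low ratio is a red flag for review, not a legal verdict.

```python
# Adverse-impact check using the four-fifths rule. Example data is
# illustrative, not real hiring figures.

def selection_rates(outcomes):
    """outcomes maps group -> (selected, applied)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def impact_ratios(outcomes, threshold=0.8):
    """Return each group's ratio to the top selection rate and a pass flag."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: (r / top, r / top >= threshold) for g, r in rates.items()}

example = {"group_a": (40, 100), "group_b": (25, 100)}
for group, (ratio, passes) in impact_ratios(example).items():
    print(group, round(ratio, 2), "OK" if passes else "REVIEW")
```

Here group_b's selection rate is 62.5 per cent of group_a's, below the 80 per cent threshold, so the process would be flagged for human review.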

5. Keep Detailed Records

Document every automated decision or recommendation. Maintain records of:

  • Which tools were used
  • What data was processed
  • How human oversight was applied

Documentation is critical evidence of compliance if challenged legally.
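The record-keeping points above can be captured as a simple append-only audit log. The field names below are our own illustration, not a legal standard; the point is that each entry answers the three questions in the list: which tool, what data, and how oversight was applied.

```python
import datetime
import json

# Hypothetical audit-record structure; field names are illustrative.
def log_ai_step(path, *, tool, task, data_categories, reviewer, decision):
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tool": tool,                        # which tool was used
        "task": task,                        # what it was asked to do
        "data_categories": data_categories,  # what data was processed
        "human_reviewer": reviewer,          # how oversight was applied
        "final_decision_by": decision,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_ai_step(
    "ai_audit_log.jsonl",
    tool="ChatGPT",
    task="summarise anonymised CV for shortlist review",
    data_categories=["anonymised CV text"],
    reviewer="hr.manager",
    decision="human",
)
```

An append-only line-per-record file like this is easy to search and export if a candidate or regulator ever asks how a decision was reached.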

6. Train Recruiters on Responsible AI Use

Even the best systems can fail if users misunderstand them. Train HR teams to interpret AI recommendations correctly and to spot potential bias or errors.

7. Combine AI with Broader Automation

ChatGPT works best as part of a broader recruitment automation strategy — one that handles repetitive admin tasks while leaving key decisions to humans.

For inspiration, read 7 Simple Hiring Workflows You Can Automate in One Afternoon.

Ethical Considerations Beyond the Law

Legal compliance is only the baseline. Ethical hiring demands more:

  • Fairness: Ensure AI decisions do not disadvantage underrepresented groups.
  • Transparency: Tell candidates when automation is used and how.
  • Accountability: Maintain clear responsibility for all decisions.

A responsible AI strategy not only protects your business but also enhances your employer brand.

Real-World Example: The Responsible Automation Model

A multinational tech firm operating across Europe, the UK, and the US introduced ChatGPT into its recruitment workflow.

Before:
Recruiters spent up to 10 hours per week writing job adverts and sending repetitive updates.

After:
ChatGPT now drafts all communications, which are reviewed and approved by HR. Final decisions remain human-led, and candidates are informed of automation use.

Results:

  • Admin time reduced by 70 per cent
  • Candidate satisfaction improved
  • Full compliance maintained under EU and UK law

Automation did not replace recruiters — it empowered them.

How a Recruitment Automation Agency Can Help

Working with a recruitment automation agency like Neverdue ensures your use of ChatGPT and AI remains compliant, ethical, and effective.

An expert partner can help you:

  • Design legally sound workflows
  • Implement fair and transparent automation
  • Train staff on responsible AI use
  • Maintain compliance across multiple regions

Neverdue’s team combines legal awareness with technical expertise, ensuring your automation strategy drives results safely.

Final Thoughts

ChatGPT is one of the most powerful tools ever created for recruiters — but it is not a decision-maker.

Across Ireland, Switzerland, the UK, the US, and Australia, the message is clear: AI can assist in hiring, but only humans can decide.

Use ChatGPT to write, analyse, and communicate. Keep humans responsible for fairness, empathy, and final judgement.

This partnership of people and technology is the future of ethical, efficient recruitment.

If you want help designing a compliant and scalable automation strategy, book a call with our team. We will show you how to integrate ChatGPT responsibly into your hiring process.