
Why Workday is Being Sued Over AI Screening (And What It Means for You)

April 3, 2026

Workday is being sued over its AI hiring tools for alleged discrimination. This breakdown explains what’s happening, why it matters, and how to avoid the risks of black-box screening.

Let’s cut through the noise.

Workday - one of the biggest players in hiring software - is being sued over its AI screening tools. But should you care?

What’s actually going on

The case (Mobley v. Workday) centers on a pretty simple claim:

Workday’s AI screening tools are automatically rejecting candidates in a way that may be discriminatory.

The plaintiff applied to 80-100+ jobs through systems powered by Workday. Rejected every time. Often within minutes. Sometimes at weird hours.

That’s not a recruiter reviewing resumes. That’s automation.

The lawsuit alleges that the system disproportionately filtered out candidates based on:

  • Age (especially 40+)
  • Race
  • Disability

Not because someone coded it to do that explicitly - but because of how the model behaves.

Why this case is different

Here’s where it gets interesting.

Normally, if hiring discrimination happens, the employer is on the hook.

This case is asking a new question:

Can the software vendor be responsible too?

And the court didn’t throw it out.

That alone should get your attention.

Because if that answer becomes “yes,” every company using AI screening tools just inherited a new category of risk.

The core issue: black box AI

This whole thing comes down to one problem:

Nobody can clearly explain how the decisions are being made.

That’s what people mean when they say “black box.”

You feed in resumes.
You get rankings or rejections.
But in between? Good luck.

And here’s the uncomfortable truth:

AI doesn’t need to “know” someone’s age or race to discriminate.

It just needs patterns.

  • Graduation dates → age proxy
  • Gaps in employment → bias signals
  • Certain schools, locations, or companies → demographic correlations

If your model is trained on historical hiring data (as most are), it can quietly learn all of that.

And now you’ve got a system scaling bias instead of eliminating it.
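Here’s a minimal sketch of how that happens, using synthetic data and scikit-learn. The features and the bias pattern are made up for illustration; nothing here is from the actual case. The point: a model that never sees age can still learn to act on it, because graduation year is sitting right there in the data.

```python
# Minimal sketch (synthetic data): a screening model that never sees "age"
# can still learn it, because graduation year is an age proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

grad_year = rng.integers(1985, 2023, n)   # proxy for age
skill = rng.normal(0, 1, n)               # the thing we actually want to measure

# Simulate biased historical outcomes: past decisions favored recent grads.
hired = (0.5 * skill + 0.03 * (grad_year - 2004) + rng.normal(0, 1, n)) > 0

X = np.column_stack([grad_year, skill])
model = LogisticRegression().fit(X, hired)

# The model assigns real weight to graduation year: bias, now automated.
print(dict(zip(["grad_year", "skill"], model.coef_[0].round(3))))
```

In this synthetic setup, the coefficient on grad_year comes out clearly nonzero. The model was never told anything about age. It learned it anyway.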

Why this matters for your business

This isn’t just a Workday problem.

This is an “everyone using AI in hiring” problem.

If you’re using any kind of automated screening, you need to be able to answer:

  • Why was this candidate rejected?
  • What factors drove their score?
  • Can we prove it’s consistent and fair?

If the answer is “the algorithm decided”… that’s not going to hold up much longer.
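One place to start: the adverse-impact math regulators already use. The EEOC’s four-fifths rule says no group’s selection rate should fall below 80% of the highest group’s rate. Here’s a minimal sketch of that check; the groups and outcomes are hypothetical.

```python
# Minimal sketch: the EEOC "four-fifths rule" adverse-impact check.
# No group's selection rate should fall below 80% of the highest group's rate.
from collections import defaultdict

# Hypothetical screening outcomes: (group, advanced_past_screen)
outcomes = [
    ("under_40", True), ("under_40", True), ("under_40", False),
    ("40_plus", False), ("40_plus", True), ("40_plus", False),
]

totals, advanced = defaultdict(int), defaultdict(int)
for group, passed in outcomes:
    totals[group] += 1
    advanced[group] += passed

rates = {g: advanced[g] / totals[g] for g in totals}
best = max(rates.values())
for group, rate in rates.items():
    flag = "OK" if rate >= 0.8 * best else "ADVERSE IMPACT"
    print(f"{group}: {rate:.0%} selection rate: {flag}")
```

If you can’t run something like this against your own pipeline because the scores come from a black box, that’s the problem in one sentence.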

Regulators are already paying attention. The EEOC is involved in this case.

Translation:

This is moving from “nice to think about” → “you will be accountable for this.”

Where most AI screening tools go wrong

Most tools pass candidate information into a frontier LLM and produce some sort of score.

That’s fine, and these models are amazing, but they lack some key characteristics:

  • Transparency
  • Explainability
  • Control

So you end up with a fast implementation, but you hit some issues:

  • You can’t audit it properly
  • You can’t explain decisions to candidates
  • You can’t defend it legally
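To make that concrete, here’s a hypothetical sketch of the typical pattern. call_llm is a stand-in for whatever frontier-model API a tool happens to use, not a real library call. Notice what you get back: a number, and nothing else.

```python
# Hypothetical sketch of the typical black-box pattern. call_llm is a
# stand-in for a frontier-model API, not a real library.
def call_llm(prompt: str) -> str:
    # Imagine a network call to a hosted model here.
    return "73"

def score_candidate(resume_text: str) -> float:
    prompt = f"Rate this candidate from 0 to 100 for the role:\n\n{resume_text}"
    return float(call_llm(prompt))

score = score_candidate("10 years of experience in ...")
print(score)  # 73.0 -- and if anyone asks why, there is nothing to point to:
              # no criteria, no weights, no audit trail. Just a number.
```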

How we approach this differently at Lighthouse

We built Lighthouse specifically to avoid this mess.

No black boxes. No mystery scoring.

Every candidate is evaluated using fully transparent, structured criteria that you define.

That means:

  • Every score is tied to a clear, explainable factor
  • You can see exactly why someone ranked where they did
  • You can adjust criteria and immediately re-screen your pipeline
  • There’s no hidden model making decisions behind the scenes

If someone asks, “Why was this candidate rejected?” - you have an answer.
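To show the contrast with the black-box pattern above, here’s a minimal sketch of criteria-based scoring. This isn’t our production code; it’s the shape of the idea. Every point in the total traces back to a criterion you named and weighted yourself.

```python
# Minimal sketch (not Lighthouse's actual code) of transparent, criteria-based
# scoring: every point in the final score traces back to a named criterion.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    weight: float

def score(candidate: dict, criteria: list[Criterion]) -> tuple[float, list[str]]:
    total, explanation = 0.0, []
    for c in criteria:
        met = candidate.get(c.name, False)
        points = c.weight if met else 0.0
        total += points
        explanation.append(f"{c.name}: {'met' if met else 'not met'} (+{points})")
    return total, explanation

criteria = [
    Criterion("has_required_certification", 40.0),
    Criterion("5_years_relevant_experience", 35.0),
    Criterion("portfolio_submitted", 25.0),
]

total, why = score({"has_required_certification": True,
                    "portfolio_submitted": True}, criteria)
print(total)        # 65.0
for line in why:    # the answer to "why did this candidate rank here?"
    print(line)
```

The explanation isn’t bolted on after the fact. It falls out of how the score is computed. That’s the whole point.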

The bigger shift that’s coming

What’s coming next is pretty predictable:

  • More legal scrutiny
  • More pressure for explainability
  • Less tolerance for black-box decision making

Companies that rely on opaque AI tools are going to get squeezed.

Companies that can show their work are going to be fine.

The bottom line

AI in hiring isn’t the problem.

Unexplainable AI is.

If your system can’t clearly justify its decisions, it’s not just a product risk - it’s a legal one.

And that line is getting enforced faster than most people think.

If you’re using AI screening today, now’s a good time to ask a simple question:

Can we actually explain how our hiring decisions are being made?

If the answer isn’t a confident yes, you’ve got exposure.

And ignoring it won’t make it go away.