NYC Local Law 144 Compliance Checklist for AI Phone Screening (2026)
A practical, step-by-step checklist for running AI/voice screening while meeting NYC Local Law 144 requirements: bias audit readiness, candidate notice, public disclosures, and vendor questions.

If you’re using AI phone screening (voice bots, automated interviewers, or any tool that algorithmically scores or recommends candidates), you’re getting a huge throughput advantage in high-volume hiring.
But in New York City, Local Law 144 (and the associated DCWP rule) means you can’t treat “automated screening” as a black box.
NYC’s Department of Consumer and Worker Protection (DCWP) summarizes it plainly: employers and employment agencies are prohibited from using an automated employment decision tool unless it has a bias audit within one year of use, bias-audit information is publicly available, and required notices are provided to candidates/employees. (Source: https://www.nyc.gov/site/dca/about/automated-employment-decision-tools.page)
This post is a practical checklist for operational compliance—specifically for voice/phone screening in high-volume pipelines—and what to ask vendors so you’re not surprised at audit time.
If you’re new to voice screening, start with how it works. If you’re comparing options, see pricing. For more hiring ops playbooks, head to the blog.
First: does your AI phone screen qualify as an “AEDT” under NYC?
Local Law 144 applies to “automated employment decision tools” (AEDTs). Whether a voice screen counts depends on what the system does with the results.
In practice, your AI phone screen is much more likely to be treated like an AEDT if it:
- Scores candidates (overall or on dimensions like “communication,” “reliability,” “job fit”)
- Ranks candidates for recruiter review
- Recommends who advances to the next step
- Automatically routes candidates (e.g., “invite to onsite” vs “reject”) based on the model output
If the tool is purely collecting info (like an intake form) and a human does all evaluation, you’re in a lower-risk posture—but many “screening” flows still create implicit rankings.
When in doubt, treat it as AEDT-like and build the compliance surface area now. It’s cheaper than retrofitting.
The NYC Local Law 144 compliance checklist (voice screening edition)
1) Map the decision: where does automation actually influence outcomes?
Write this down as a one-page “decision map.” You’ll use it for internal alignment, vendor questions, and audit readiness.
Decision map template (copy/paste into your internal doc):
- Tool name:
- What inputs it uses (audio, transcript, ATS data, knockout questions):
- What outputs it produces (summary, transcript, score, rank, recommendation):
- Where the output is shown (ATS stage, recruiter dashboard, email):
- Who makes the final decision (recruiter, hiring manager, system):
- What action is taken based on the output (advance, reject, schedule, hold):
- Human override available? (Y/N)
- Logging available? (Y/N; what fields)
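If you keep the decision map in code or config as well as prose, it's easier to diff, review, and hand to an auditor. A minimal sketch in Python (field names are illustrative, not mandated by Local Law 144):

```python
from dataclasses import dataclass

# Hypothetical machine-readable version of the decision map template above.
# Adapt the field names to your own ATS and internal documentation.
@dataclass
class DecisionMap:
    tool_name: str
    inputs: list        # e.g. ["audio", "transcript", "ats_data"]
    outputs: list       # e.g. ["summary", "score", "recommendation"]
    shown_in: str       # where the output surfaces (ATS stage, dashboard)
    final_decider: str  # "recruiter", "hiring_manager", or "system"
    actions: list       # e.g. ["advance", "reject", "hold"]
    human_override: bool
    logged_fields: list

screen_map = DecisionMap(
    tool_name="voice-screen-v1",
    inputs=["audio", "transcript", "knockout_answers"],
    outputs=["transcript", "score", "recommendation"],
    shown_in="ATS stage card",
    final_decider="recruiter",
    actions=["advance", "reject", "hold"],
    human_override=True,
    logged_fields=["score", "version", "timestamps"],
)
```

A record like this doubles as the starting point for vendor questionnaires: every field is a question you should be able to answer.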
Why this matters: The EEOC has emphasized that algorithmic tools used to “inform decisions about whether to hire” can constitute a selection procedure, and employers may still be responsible even if a third-party vendor built the tool. (Source: https://www.sullcrom.com/insights/blogs/2023/May/EEOC-Releases-Guidance-Addressing-Artificial-Intelligence-and-the-Potential-for-Disparate-Impact-Discrimination-Concerns-Under-T)
2) Put “notice + disclosure” in the workflow (not in Legal’s inbox)
NYC DCWP points to required notices and public availability of bias-audit info as conditions for use. (Source: https://www.nyc.gov/site/dca/about/automated-employment-decision-tools.page)
Operationally, that means your recruiting ops workflow needs:
- A candidate-facing notice before you use the tool
- A public page (or section) that hosts the bias audit summary (or whatever disclosure is required)
- A way for recruiters to confirm notice was delivered (and logged)
Implementation tip (high-volume): If you invite candidates to an on-demand phone screen via SMS/email, add the notice in both:
- The initial invite message
- The landing page candidates see right before they begin
This reduces “I never saw it” disputes.
3) Make audit-readiness a data problem, not a “model problem”
Bias audits and adverse impact reviews require consistent, reconstructable data.
At minimum, log:
- Candidate identifier (ATS id)
- Role requisition id + location
- Screen version (script + model/prompt version)
- Time/date the notice was sent
- Time/date the screen started/ended
- The tool output (score/rank/recommendation) and what thresholds were applied
- Final human outcome (advance/reject) and stage timestamps
If you can’t reproduce “what the tool did” for a given candidate with the model version and thresholds used at the time, you will struggle to defend the process.
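One simple way to make this reconstructable is an append-only event log, one JSON line per screen. A minimal sketch, with illustrative field names you would map to your own ATS schema:

```python
import json
import time

# Sketch of an append-only screen-event log (JSON Lines).
# One row per screen lets you reconstruct "what the tool did"
# for any candidate, with the versions and thresholds in force at the time.
def log_screen_event(path, candidate_id, req_id, script_version,
                     model_version, output, threshold, human_outcome):
    event = {
        "candidate_id": candidate_id,
        "requisition_id": req_id,
        "script_version": script_version,
        "model_version": model_version,
        "tool_output": output,          # score / rank / recommendation
        "threshold": threshold,         # e.g. advance if score >= threshold
        "human_outcome": human_outcome, # "advance" | "reject" | None yet
        "logged_at": time.time(),
    }
    with open(path, "a") as f:
        f.write(json.dumps(event) + "\n")
```

Because each row is self-describing, exporting to CSV for an auditor is a one-liner, and the log doubles as the input for the monthly review described below.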
NIST’s AI Risk Management Framework is explicit that risk management is not only about the model—it’s about governance, measurement, and continuous monitoring across the system lifecycle. (Source: https://www.nist.gov/itl/ai-risk-management-framework)
4) Freeze the “screen” into a stable, auditable unit
High-volume teams constantly tweak scripts (“we added one question,” “we changed scoring”). That’s fine—but only if you treat each change like a new version.
Version-control checklist:
- Script version (questions + allowed follow-ups)
- Scoring rubric version (what counts as pass/fail)
- Threshold version (e.g., advance if score ≥ X)
- Language/consent copy version
Rule: if you change anything that could influence who advances, increment a version and record it per screen.
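One low-effort way to enforce that rule is to derive the version id from the configuration itself, so any change to questions, rubric, or thresholds automatically produces a new id. A sketch:

```python
import hashlib
import json

# Sketch: a content-derived version id for a screen configuration.
# Any edit to the config (a new question, a changed threshold) yields
# a different id, so "forgot to bump the version" can't happen.
def screen_version_id(config: dict) -> str:
    canonical = json.dumps(config, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

v1 = screen_version_id({"questions": ["Q1", "Q2"], "advance_threshold": 70})
v2 = screen_version_id({"questions": ["Q1", "Q2", "Q3"], "advance_threshold": 70})
# v1 != v2: adding a question forces a new version id
```

Record this id on every screen event and you can always tie an outcome back to the exact script, rubric, and threshold that produced it.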
5) Ask vendors the right questions (copy/paste)
If you’re buying (or building) AI phone screening, your vendor answers become part of your compliance posture.
Copy/paste this into your vendor security/compliance questionnaire:
- Bias audit support: Do you provide documentation/data exports to support an independent bias audit within the required timeframe?
- What is the “selection procedure”? What outputs are produced (score/rank/recommendation), and how do you recommend customers use them without turning them into an automatic reject?
- Four-fifths rule vs statistical significance: What methodology do you use (or recommend) to evaluate adverse impact? Do you rely only on the “four-fifths rule of thumb,” or do you also evaluate statistical significance? (The EEOC guidance, as discussed by Sullivan & Cromwell, cautions that the four-fifths rule alone may not always be sufficient.) (Source: https://www.sullcrom.com/insights/blogs/2023/May/EEOC-Releases-Guidance-Addressing-Artificial-Intelligence-and-the-Potential-for-Disparate-Impact-Discrimination-Concerns-Under-T)
- Model changes: How do you communicate model updates? Can we pin a version per requisition?
- Explainability: What can you show a recruiter about why the model produced its output (without exposing sensitive model internals)?
- Human override: Can recruiters override outcomes and are overrides logged?
- Data retention: How long do you retain audio/transcripts? Can customers delete on request? Are deletions logged?
- Accessibility: What accommodations exist for candidates who can’t complete a phone screen (hearing impairment, language needs, time constraints)?
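The four-fifths check mentioned in the questionnaire is simple enough to run yourself on exported screen data. A minimal sketch (and, as noted above, this rule of thumb alone may not be sufficient; pair it with statistical significance testing):

```python
# Sketch of the four-fifths "rule of thumb": compare each group's
# selection rate to the highest group's rate, and flag any group
# whose impact ratio falls below 0.8.
def impact_ratios(selected: dict, screened: dict) -> dict:
    rates = {g: selected[g] / screened[g] for g in screened}
    top = max(rates.values())
    return {g: rates[g] / top for g in rates}

ratios = impact_ratios(
    selected={"group_a": 48, "group_b": 30},
    screened={"group_a": 100, "group_b": 100},
)
flagged = [g for g, r in ratios.items() if r < 0.8]
# group_b's ratio is 0.30 / 0.48 = 0.625, below 0.8, so it is flagged
```

If a vendor can't hand you the `selected`/`screened` counts per group and per requisition, that is itself an answer to the questionnaire.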
6) Add an “accommodations lane” (this is where teams get sloppy)
In high-volume pipelines, accommodations are often handled ad hoc. Don’t.
Create a consistent alternative path:
- Candidate can request an alternative format (text-based, human screen, or scheduled call)
- Alternative is offered without penalty
- Recruiters are trained on how to route and document it
Even if Local Law 144 is your immediate trigger, this is broader risk management—and aligns with the “trustworthy AI” posture NIST promotes: governance plus operational controls around the technology. (Source: https://www.nist.gov/itl/ai-risk-management-framework)
7) Build a monthly “adverse impact + drift” review cadence
Don’t wait for an annual audit to discover issues. Establish a lightweight monthly review:
- Screening pass rates by requisition and location
- Pass rates by stage (apply → screen completed → passed)
- Any sharp drops/spikes after script/model changes
- Recruiter overrides (how often, why)
- Candidate complaints/themes (especially “unfair” or “never got notice”)
A recurring review converts compliance from a one-time scramble into an operating system.
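The "sharp drops/spikes" check in that cadence can be a few lines of code over your logged pass rates. A sketch, assuming you aggregate pass rates per screen version per month:

```python
# Sketch of a monthly drift check: compare pass rates between the
# current and prior period per screen version, and flag swings
# beyond a tolerance so a human investigates the underlying change.
def drift_flags(prev_rates: dict, curr_rates: dict, tolerance=0.10):
    flags = {}
    for version, curr in curr_rates.items():
        prev = prev_rates.get(version)
        if prev is not None and abs(curr - prev) > tolerance:
            flags[version] = round(curr - prev, 3)
    return flags

flags = drift_flags(
    prev_rates={"screen-v3": 0.42, "screen-v4": 0.40},
    curr_rates={"screen-v3": 0.41, "screen-v4": 0.22},
)
# screen-v4 dropped 18 points month-over-month: investigate what changed
```

The 10-point tolerance here is an arbitrary starting point; tune it to your funnel's normal variance so the review surfaces real changes, not noise.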
A concrete example: compliant-ish workflow for an on-demand AI phone screen
Here’s a minimal workflow that still moves fast:
- Candidate applies
- Knockout questions determine basic eligibility
- Candidate gets an invite (SMS/email) with:
  - a short notice that AI/automation will be used
  - a link to a public page that summarizes the bias audit and explains the tool
- Candidate completes a 7–10 minute phone screen
- Recruiter sees:
  - transcript + structured summary
  - rubric-based pass/fail recommendation
  - clear “why” bullets tied to role requirements
- Recruiter makes the final decision (advance/hold/reject)
- All outputs + versions + timestamps are logged
This is essentially what ReTalent is designed for: moving top-of-funnel screening to an on-demand workflow while keeping humans in control. If you want to see it in action, book a demo: https://calendly.com/nkchandupatla/relaylabs-discovery
FAQ
Does Local Law 144 apply outside NYC?
Local Law 144 is NYC-specific, but the broader concept—audits, transparency, and accountability for automated selection—shows up in other jurisdictions and in federal guidance discussions. It’s worth building the compliance muscle even if you don’t hire in NYC today.
If a human recruiter makes the final decision, are we “safe”?
Not automatically. If the system scores/ranks/recommends and recruiters follow it by default, it can still function like a selection procedure. Design the workflow so humans can override, and log overrides.
What should we publish publicly?
NYC DCWP indicates that information about the bias audit must be publicly available, along with required notices. Start with a simple public page that explains your process, links to the bias audit summary, and provides a candidate contact path. (Source: https://www.nyc.gov/site/dca/about/automated-employment-decision-tools.page)
What’s the fastest way to get audit-ready data?
Treat every screen as an event and record: candidate id, requisition id, tool output, threshold used, model/script version, and the final human outcome. If you can export those rows to a CSV, you’re already far ahead.
Where can I learn more about trustworthy AI risk management?
NIST’s AI Risk Management Framework (AI RMF 1.0) provides a widely referenced structure for governing, measuring, and monitoring AI risks. (Source: https://www.nist.gov/itl/ai-risk-management-framework)
Next steps
- If you’re building a voice-screening workflow, read how it works.
- If you’re pricing out a 24/7 screening SLA, see pricing.
- Browse more practical playbooks on the blog.
- Or book a demo to discuss your funnel and compliance constraints: https://calendly.com/nkchandupatla/relaylabs-discovery
Ready to scale your hiring?
See how ReTalent's AI voice screening can cut time-to-fill and improve candidate experience.