
Using AI Tools in Hiring, Firing, and Compensation Decisions

Automated decision-making technologies (ADMT) in employment decisions

What Employers Need to Know About Using ADMT in Employment Decisions

Decisions about hiring, termination, and compensation represent substantial administrative costs for employers. Automated decision-making technologies (“ADMT”) can significantly streamline the process. However, employers using ADMT should be aware of recent and existing regulations governing the use of AI tools in evaluating prospective and current employees.

In addition to recent AI-specific regulation, use of AI tools in making employment decisions may be regulated by existing anti-discrimination statutes. Use of an algorithm that discriminates against a protected class identified in federal statutes – most notably Title VII of the Civil Rights Act and the Americans with Disabilities Act (ADA) – may expose employers to liability.

What is ADMT?
ADMT, or automated decision-making technology, is any technology that processes personal information and uses computation to replace or substantially replace human decision-making. AI tools used in employment may be one type of ADMT available to employers.

In the context of ADMT, significant employment decisions may include:

  • Hiring
  • Allocating work or compensation
  • Promotion and demotion
  • Suspension and termination

State and local compliance requirements may create exceptions for businesses that do not use the AI tool’s recommendations as a substitute for human discretion. However, this may be a high bar to overcome, and not all types of human involvement qualify for an exception.

For further explanation, please refer to the “Best Practices for Employers” section below.

What are the risks of employment discrimination?
AI and other ADMT tools involved in significant employment decisions pose two key risks of employment discrimination. They may:

1) Exclude or disadvantage applicants from a protected group identified in Title VII, or applicants with disabilities.

Title VII protects groups on the basis of race, color, religion, sex, or national origin. Liability may attach even absent any intent to discriminate: under a disparate-impact theory, if the technology is shown to have a disproportionate effect on a protected group, the employer may be vulnerable to a lawsuit. For example, an ADMT that tends to exclude candidates whose names suggest a particular racial or national identity could expose the employer using it to this risk.

2) Screen out candidates based on aspects of their application that reflect a disability recognized by the ADA.

This screening risk can attach to a seemingly neutral selection criterion. For example, an AI tool that screens out applicants with a resume gap longer than four months could create liability if an individual’s gap stems from a disability requiring substantial recovery periods after medical intervention. The sketch below illustrates how such a facially neutral rule can also disproportionately affect a protected group.
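
To make this risk concrete, the short Python sketch below uses invented applicant data and hypothetical group labels to show how a facially neutral resume-gap screen can produce selection rates that fail the EEOC’s “four-fifths” rule of thumb for adverse impact (29 C.F.R. § 1607.4(D)). It is an illustration of the concept only, not a compliance tool or legal advice.

    # Illustrative only: applicants and group labels are invented for this
    # sketch. This is not a compliance tool or legal advice.
    from collections import defaultdict

    # Each applicant: (self-reported group, resume gap in months)
    applicants = [
        ("group_a", 2), ("group_a", 1), ("group_a", 6), ("group_a", 0),
        ("group_b", 8), ("group_b", 5), ("group_b", 1), ("group_b", 12),
    ]

    GAP_LIMIT_MONTHS = 4  # the "seemingly neutral" screening criterion

    def passes_screen(gap_months: int) -> bool:
        """Facially neutral rule: reject any resume gap over four months."""
        return gap_months <= GAP_LIMIT_MONTHS

    # Tally selection rates per group.
    totals, selected = defaultdict(int), defaultdict(int)
    for group, gap in applicants:
        totals[group] += 1
        if passes_screen(gap):
            selected[group] += 1

    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())

    # EEOC four-fifths rule of thumb: a group's selection rate below 80% of
    # the highest group's rate is evidence of adverse impact.
    for group, rate in rates.items():
        ratio = rate / best
        flag = "possible adverse impact" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")

In this toy data, the same neutral four-month rule selects 75% of one group but only 25% of the other, which is why monitoring selection rates can matter more than the facial neutrality of a criterion.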

What types of ADMT pose particular risks of discrimination?
Certain types of ADMT may pose particular risks of violating state and federal regulations. This may include AI-hiring tools with algorithms that:

  • Fail to take reasonable accommodations or available workplace alternatives into account when assessing a candidate’s ability to meet the employer’s performance standards
  • Fail to include measures that mitigate sensitivity to candidates’ names, which can signal an applicant’s gender, ethnicity, or racial origin (a minimal mitigation sketch follows this list)
  • Rely heavily on inferred similarity between the applicant and existing successful employees, which may reinforce existing hiring biases
  • Fail to account for reasonable accommodations available to an applicant with a disability
  • Rely on an empirical evaluation of an individual’s conformity with a subjective standard such as “culture fit”

Additionally, video-interviewing software that includes emotion-recognition technology without human involvement in the hiring decision, and hiring tools that require the applicant to provide medical information prior to employment, may create additional risk.
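
One common mitigation for the name-sensitivity risk noted above is “blind screening”: removing identity-revealing fields before the tool scores an application. The minimal sketch below uses hypothetical field names; a production approach would also need to handle proxies such as addresses, photos, or school names.

    # Minimal "blind screening" sketch. Field names are hypothetical.
    IDENTITY_FIELDS = {"name", "photo_url", "date_of_birth", "address"}

    def redact_for_scoring(application: dict) -> dict:
        """Return a copy of the application without identity-revealing fields."""
        return {k: v for k, v in application.items() if k not in IDENTITY_FIELDS}

    application = {
        "name": "A. Example",
        "address": "123 Main St.",
        "years_experience": 7,
        "skills": ["python", "sql"],
    }
    print(redact_for_scoring(application))
    # -> {'years_experience': 7, 'skills': ['python', 'sql']}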

Best Practices for Employers
When selecting an AI tool for use in employment decisions, employers can take measures that may reduce the risk of discrimination.

1. Transparency. Measures may include requesting transparency from the developer about mitigating measures that insulate decisions against particular risk factors. For example, seek tools that do not weigh factors posing particular risks of discrimination so heavily in the scoring process that they disqualify candidates; the sketch below shows the kind of check such disclosure enables. Transparency is also useful in preparing the risk assessments that state and local regulations may require when using ADMT.
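
As a hypothetical illustration of what such transparency enables, the sketch below assumes a vendor has disclosed the feature weights of a simple linear scoring model. The feature names, the set of risky features, and the 10% review threshold are all invented for illustration; none is a legal standard.

    # Hypothetical example: feature names, weights, and the review threshold
    # are invented to illustrate a check that vendor transparency enables.
    RISKY_FEATURES = {"first_name", "zip_code", "employment_gap_months"}

    # Feature weights a vendor might disclose for a linear scoring model.
    weights = {
        "years_experience": 0.40,
        "skills_match": 0.35,
        "employment_gap_months": -0.20,
        "zip_code": 0.05,
    }

    MAX_RISKY_SHARE = 0.10  # internal policy threshold, not a legal standard

    total = sum(abs(w) for w in weights.values())
    for feature, weight in weights.items():
        share = abs(weight) / total
        if feature in RISKY_FEATURES and share > MAX_RISKY_SHARE:
            print(f"Review: '{feature}' carries {share:.0%} of total scoring weight")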

2. Human Involvement. Employers may also consider assessing the degree of human involvement in the decision-making process to determine whether a given use qualifies for an exception under applicable regulations.

Employers seeking an exception may need to demonstrate a certain degree of human involvement. Examples of insufficient human involvement may include situations where the decision-maker:

  • Is tasked with merely reviewing AI output
  • Lacks authority to change the decision
  • Lacks access necessary to make an independent decision
  • Operates under time constraints insufficient for substantive review
  • Only intervenes for obvious mistakes

In general, businesses should not recommend, in policy or in practice, that the human decision-maker follow the AI’s decision by default, and should encourage independent human review.

3. Preparation. When using AI to assist in employment decisions, businesses may want to
consider:

  • Conducting and submitting a risk assessment that weighs potential discrimination and data privacy risks against the benefit to the business
  • Disclosing use of an AI tool in the applicant selection process before an applicant submits their application
  • Consulting state and local regulations to confirm compliance with required procedures and components. For example, CA, NY, IL, and CO are among the states that mandate some type of pre-disclosure when using ADMT or similar tools. Depending on the jurisdiction, it may be helpful for employers to consult relevant statutes to determine specific compliance requirements and timelines for disclosure.
  • Maintaining alternative processes to ADMT for selecting qualified candidates and allowing potential applicants to opt out of its use in evaluating their applications. For candidates with disabilities, this may also include providing reasonable accommodations, such as specialized equipment, extended time, or other modifications for timed skill assessments.
  • Establishing an appeal process for employment decisions made using AI tools.
  • Anticipating possible requests for deletion of personal data in response to evolving privacy rights across various jurisdictions. For example, in California, applicants may have existing privacy protections that include the rights to:
    • Be notified regarding a business’s use of AI in making employment decisions
    • Know what data is being collected, its purpose, and with whom it will be shared
    • Request deletion of personal information
    • Correct inaccurate personal information
    • Stop or limit the sale of sensitive personal information
    • Not be discriminated against for exercising the rights provided

What’s Next?

The recent Executive Order suggests that national policy may soon tend away from allowing
applicants and/or employees to bring claims based on an AI tool’s disproportionate effect on a protected group. (Executive Order, Ensuring a National Policy Framework for Artificial Intelligence, Sections 6 & 9, issued December 11, 2025). However, as state and local-level protections take effect and as federal minimum standards continue to be fleshed out, some caution is required as these standards are interpreted by relevant state and federal agencies.