The Role of AI in Leave Management: Employer Considerations

Machines are said to have artificial intelligence (AI) if they can interpret data, learn from it and use that knowledge to react and achieve specific goals.1 Among other things, this intelligence allows machines to complete cognitive tasks typically performed only by humans.

AI is all around us – we likely interact with it throughout the day without fully realizing it. Have you asked Siri to play your favorite music playlist as you settled in to read? Do you open your phone with Face ID? Have you researched a topic online lately and received an AI Overview of search results? Though AI has been around in one form or another for nearly 75 years, its influence in everyday life, particularly business operations and employment practices, is more prevalent than ever.

AI in Absence Management

Over the last several years, absence management vendors and insurance carriers have started incorporating AI into their businesses to create efficiencies and support targeted absence management practices. While many of these functions were previously considered back-office operations, AI is now far more visible through chatbots in vendor portals, one-way text messages and workflow-based automated approvals.

While absence administration and claims processes are still primarily executed by humans, vendors are exploring how AI can handle repetitive processing tasks as part of overall claims management. Their goal is to improve efficiency and automate certain tasks so claim examiners can spend their time on higher-value work, such as making claims decisions, thinking critically and holistically about claims and interacting with claimants.

Common examples of how AI may be used by vendors in the claims process include the following:

  • Reviewing medical documentation for completeness
  • Automating claimant text messages to obtain missing data
  • Identifying keywords in medical documentation, summarizing the documentation, adding a summary to a claim note in the case, and triggering a task for the examiner to review it
  • Leveraging chatbots (computer programs that use AI and natural language processing) to understand and answer common, repetitive questions about a claim, such as its status, whether paperwork was approved or whether benefits were paid2
  • Automating the creation of standard claim action plans, including future-dated tasks associated with a plan (for example, phone calls or medical outreach)
  • Using AI-driven predictive modeling to determine necessary case management interventions
  • Reviewing telephonic claim discussions to evaluate sentiment and empathy scores at the examiner and claimant level, which can be shared with examiners, quality/performance staff, and operational leadership
  • Approving a claim based on specific and defined factors (at this point in the technology’s evolution, AI should not be used to make an adverse decision on a claim without human final review); a minimal sketch of this kind of routing rule follows this list
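
To make the last point concrete, below is a minimal sketch in Python of how a rule-based approval gate might be structured: auto-approval fires only when every defined factor is satisfied, and every other outcome is routed to a human examiner rather than auto-denied. The claim fields and routing labels are hypothetical and invented for illustration; they do not represent any particular vendor’s system.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    """Hypothetical claim record; field names are illustrative only."""
    medical_docs_complete: bool
    diagnosis_supports_leave: bool
    within_benefit_limits: bool

def route_claim(claim: Claim) -> str:
    """Auto-approve only when every defined factor passes.

    Anything short of a clear approval is referred to a human
    examiner; the system never issues an adverse decision on its own.
    """
    if (claim.medical_docs_complete
            and claim.diagnosis_supports_leave
            and claim.within_benefit_limits):
        return "auto-approve"
    return "refer-to-examiner"  # human final review for everything else

print(route_claim(Claim(True, True, True)))    # auto-approve
print(route_claim(Claim(True, False, True)))   # refer-to-examiner
```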

Employer Considerations

Employers using AI to drive organizational efficiency and cost savings should consider the legal risks associated with these tools. In July 2023, the U.S. Senate introduced the No Robot Bosses Act, which aims to “protect and empower workers by preventing employers from relying exclusively on [AI] or bots to make employment decisions.” A companion bill, the Exploitative Workplace Surveillance and Technologies Task Force Act,3 would establish “an interagency taskforce on employer surveillance and workplace technologies” to support this initiative.

Several state and local laws, such as Colorado’s Concerning Consumer Protections in Interactions with [AI] Systems Act and New York City’s Automated Employment Decision Tools law, may impact employers’ use of automated decision tools. Given the evolving AI regulatory environment, employers must stay informed about a complex web of federal, state and local laws.

Notably, employers cannot escape legal risk simply because an AI tool was designed and/or administered by a third-party vendor. The Equal Employment Opportunity Commission (EEOC) has made it clear that “employers may be held responsible for the actions of their agents, which may include entities such as software vendors if the employer has given them authority to act on the employer’s behalf.”4

Even when an employer plays no role in creating or administering an AI tool, it is not shielded from liability for unlawful discrimination that results from the tool’s use. The employer could still be liable if the tool results in disparate impact or disparate treatment discrimination.5

In addition to its role in the claims process, AI is commonly leveraged in other aspects of employment, such as pre-employment screenings. Employers could be liable for a third-party vendor’s failure to provide a reasonable accommodation to an applicant with a disability when administering and scoring a pre-employment test.

The EEOC states that “if an applicant were to tell the vendor that a medical condition made it difficult to take the test, which qualifies as a request for a reasonable accommodation and the vendor did not provide an accommodation required under the ADA, the employer likely would be responsible even if it was unaware that the applicant reported a problem to the vendor.”4

The EEOC suggests that employers that rely on vendors to develop or administer an algorithmic decision-making tool should ask the following questions as part of their evaluation:

  • Have steps been taken to evaluate whether using the tool causes a substantially lower selection rate for individuals with a characteristic protected by Title VII?
  • If the tool requires applicants or employees to engage with a user interface, did the vendor make it accessible to as many individuals with disabilities as possible?
  • Are the materials presented to job applicants or employees in alternative formats? If so, which formats? Are there disabilities for which the vendor will be unable to provide accessible formats, in which case employers may have to provide them absent undue hardship?
  • Did the vendor attempt to determine whether the use of the algorithm would put individuals with disabilities at a disadvantage? For example, did the vendor determine whether any of the traits or characteristics measured by the tool are correlated with certain disabilities?4

Before using AI tools, employers should consider taking the following actions:

  • Perform appropriate due diligence on AI tool vendors, including asking what process was used to determine whether the tool might adversely impact applicants or employees.
  • Provide notice to individuals about the AI tools being used and the availability of reasonable accommodations.
  • Confirm with the vendor that any adverse leave decision will have a human review before it is communicated to an employee (for example, a system-generated flag to the claim examiner and/or the employee’s supervisor to assess the adverse decision before it is finalized).
  • Only develop and select tools that measure abilities or qualifications essential for the job, even for people entitled to reasonable accommodation.
    • For example, in a recent case, the EEOC sued an employer, alleging it violated the Age Discrimination in Employment Act of 1967 because its AI hiring program automatically rejected applicants who were older than 55, eliminating more than 200 applicants.6
  • Develop AI usage policies that consider how employees use AI in the workplace, not just how employers think employees may use it.
  • Regularly conduct bias audits of AI tools and validate and compare the results with those of human decision-makers. This includes ensuring that AI tools have been assessed across diverse demographic groups and comparing the average results for each group (a minimal sketch of this kind of audit follows this list).
  • Identify responsible parties to oversee quality assurance to help ensure the models are learning and performing as intended.
  • Confirm compliance with applicable processes and laws, certify that data is protected from a security perspective, and ensure no bias within the system influences outcomes and decisions.
  • Stay current on existing and pending legislation related to AI to ensure tools are consistent with federal, state, and local laws and update policies and practices to reflect legal developments.
  • Consider adding AI monitoring to the framework used to monitor other employment law developments, drawing on internal legal resources, external legal resources, brokers/consultants and vendors.
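
The bias-audit bullet above lends itself to a short worked example. One widely referenced benchmark for a “substantially lower selection rate” is the four-fifths rule discussed in EEOC guidance:4 a group whose selection rate falls below 80% of the highest group’s rate is commonly flagged for further review. The Python sketch below shows the basic arithmetic using invented outcome counts; note that the EEOC describes the four-fifths rule as a rule of thumb rather than a legal safe harbor, so a flag (or its absence) is a starting point for analysis, not a conclusion.

```python
# Minimal bias-audit sketch using hypothetical outcome data.
# Compares each group's selection (approval) rate against the
# highest-rate group, applying the four-fifths (80%) rule of thumb.
outcomes = {
    # group label: (number selected, number considered) -- invented data
    "group_a": (48, 100),
    "group_b": (30, 100),
}

rates = {group: sel / total for group, (sel, total) in outcomes.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    status = "flag for review" if impact_ratio < 0.8 else "ok"
    print(f"{group}: rate {rate:.0%}, impact ratio {impact_ratio:.2f} -> {status}")
```

In practice, the same comparison would also be run against human decision-makers on a comparable population and repeated whenever the model is retrained.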

As the use of AI expands in human resources and claims management functions, employers must keep a watchful eye on litigation and regulatory activity and how it may affect their businesses.

1 PBS. “Crash Course: Artificial Intelligence.” Public Broadcasting Service, Aug. 9, 2019. Retrieved from www.pbs.org/video/what-is-artificial-intelligence-1-hptal6/
2 IBM. “What Is a Chatbot?” Oct. 15, 2021. Retrieved from www.ibm.com/topics/chatbots
3 Congress.gov. S.2440, Exploitative Workplace Surveillance and Technologies Task Force Act of 2023. Retrieved from https://www.congress.gov/bill/118th-congress/senate-bill/2440
4 U.S. Equal Employment Opportunity Commission. “Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964.” May 18, 2023. Retrieved from www.eeoc.gov/laws/guidance/select-issues-assessing-adverse-impact-software-algorithms-and-artificial
5 U.S. Equal Employment Opportunity Commission. “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees.” May 12, 2022. Retrieved from www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence
6 American Bar Association, Business Law Today. “Navigating the AI Employment Bias Maze: Legal Compliance Guidelines and Strategies.” April 10, 2024. Retrieved from www.americanbar.org/groups/business_law/resources/business-law-today/2024-april/navigating-ai-employment-bias-maze/

Melanie Payton, CLMS, AVP

Absence Consulting and Audit Practice

Chrissy Theiss, JD, SHRM-SCP

Senior Compliance Consultant