This state is allowing AI to help rule on its unemployment claims
Google and the state of Nevada are partnering on a brand-new gen AI use case, in which the tech could help decide unemployment claims and benefits for thousands.
Nevada will become the first state to pilot a generative AI system designed to make unemployment claim decisions, marketed as a way to speed up appeals and tackle the nation's overwhelming backlog of cases. It's a risky, first-of-its-kind experiment in integrating AI into higher-level decision-making.
Google is behind the program's tech, which runs transcripts of unemployment appeals hearings through Google's AI servers, analyzing the data in order to provide claim decisions and benefit recommendations to "human referees," Gizmodo reported. Nevada's Board of Examiners approved the contract on behalf of its Department of Employment, Training and Rehabilitation (DETR) in July, despite broader legal and political pushback against integrating AI into bureaucracy.
Christopher Sewell, director of DETR, told Gizmodo that humans will still be heavily involved in unemployment decision-making. "There’s no AI [written decisions] that are going out without having human interaction and that human review. We can get decisions out quicker so that it actually helps the claimant," said Sewell.
But Nevada legal groups and scholars have argued that any time saved by gen AI would be canceled out by the time required for a thorough human review of each claim decision. Many have also raised concerns that private, personal information (including tax information and Social Security numbers) could leak through Google's Vertex AI studio, even with safeguards in place. Some are also wary of the type of AI itself, known as retrieval-augmented generation (RAG), which has been found to produce incomplete or misleading answers to prompts.
Across the country, AI-based tools have been quietly rolled out or tested at various social services agencies, pushing gen AI deeper into the administrative ecosystem. In February, the federal Centers for Medicare and Medicaid Services (CMS) ruled against using AI (including generative AI or algorithms) as a decision maker in determining patient care or coverage. The ruling followed a lawsuit from two patients who alleged their insurance provider used a "fraudulent" and "harmful" AI model (known as nH Predict) that overrode physician recommendations.
Axon, a police technology and weapons manufacturer, introduced its first-of-its-kind Draft One — a generative large language model (LLM) that assists law enforcement in writing "faster, higher quality" reports — earlier this year. Still in a trial period, the technology has already sounded alarms over the AI's ability to parse the nuance of tense police interactions, and over the risk that it could further erode transparency in policing.