Technology

New York City Moves to Regulate How AI Is Used in Hiring

European lawmakers are finishing work on an AI act. The Biden administration and congressional leaders have their own plans for reining in artificial intelligence. Sam Altman, the chief executive of OpenAI, maker of the AI sensation ChatGPT, recommended the creation of a federal agency with oversight and licensing powers in Senate testimony last week. And the topic came up at the Group of 7 summit in Japan.

Amidst far-reaching plans and commitments, New York City has emerged as a modest pioneer in AI regulation.

The city government passed a law in 2021 and adopted specific rules last month for one high-stakes application of the technology: hiring and promotion decisions. Enforcement begins in July.

The city's law requires companies using AI software in hiring to notify candidates that an automated system is being used. It also requires companies to have independent auditors check the technology annually for bias. Candidates can request and be told what data is being collected and analyzed. Companies that violate the law will be fined.

New York City’s focused approach represents an important frontier in AI regulation. Experts say the broad principles formulated by governments and international organizations will at some point need to be translated into details and definitions. Whom does the technology affect? What are the benefits and harms? Who can intervene, and how?

“Without a specific use case, I am not in a position to answer these questions,” said Julia Stoyanovich, an associate professor at New York University and director of its Center for Responsible AI.

But even before it takes effect, the New York City law has drawn criticism. Public interest advocates say it does not go far enough, while business groups say it is impractical.

The complaints from both camps point to the challenge of regulating AI, which is advancing at a breakneck pace with unknown consequences, stirring both enthusiasm and anxiety.

An uneasy compromise is inevitable.

Stoyanovich is concerned that the city’s law has loopholes that could weaken it. “But it’s much better than no law,” she said. “And you can’t learn how to regulate it until you try.”

The law applies to companies with employees in New York City, but labor experts expect the law to affect practice nationwide. At least four states—California, New Jersey, New York, Vermont—and the District of Columbia are also working on legislation to regulate AI in employment. Illinois and Maryland have also enacted laws restricting the use of certain AI technologies, often for workplace surveillance and candidate screening.

New York City’s law emerged from a clash of sharply opposing views. The City Council passed the bill toward the end of Mayor Bill de Blasio’s administration. Public hearings and written comments, more than 100,000 words of them, followed, overseen by the city’s Department of Consumer and Worker Protection, the rule-making agency.

The result, some critics say, is a law that is overly sympathetic to business interests.

Alexandra Givens, president of the Center for Democracy & Technology, a policy and civil rights group, said that what could have been a landmark piece of legislation was watered down until it lost its effectiveness.

That’s because the law defines an “automated employment decision tool” as technology used “to substantially assist or replace discretionary decision-making,” she said. The rules adopted by the city appear to interpret that language narrowly, so that an audit is required only when AI software is the sole or primary factor in a hiring decision, or when it is used to overrule a human, Givens said.

That leaves out the main way the automated software is used, she said, with a hiring manager invariably making the final choice. The potential for AI-driven discrimination, she said, typically arises when screening hundreds or thousands of candidates down to a handful, or in targeted online recruiting to generate a pool of candidates.

Givens also criticized the law for limiting the kinds of groups it measures for unfair treatment. It covers bias by sex, race and ethnicity, but not discrimination against older workers or people with disabilities.

“My biggest concern is that this becomes a national template when we should be asking policymakers to do more,” Givens said.

City officials said the law was narrowed to make it focused and enforceable. The council and the worker protection agency heard from many voices, including public interest activists and software companies. The goal, officials said, was to weigh the trade-offs between innovation and potential harm.

“This is a huge regulatory success in terms of ensuring AI technology is used ethically and responsibly,” said Robert Holden, who was chair of the council’s technology committee when the law was passed and is still a member of the committee.

New York City is trying to address new technology within the framework of federal workplace laws whose hiring guidelines date to the 1970s. The Equal Employment Opportunity Commission’s main rule states that no selection practice or method used by employers may have a “disparate impact” on a legally protected group such as women or minorities.

Businesses have criticized the law. The Software Alliance, an industry group that includes Microsoft, SAP and Workday, said in a filing this year that the requirement for independent audits of AI was not feasible because “the auditing landscape is nascent,” lacking standards and professional oversight bodies.

But a nascent field is a market opportunity. The AI auditing business, experts say, is only going to grow, and it is already attracting law firms, consultants and start-ups.

Companies selling AI software to help with hiring and promotion decisions have generally embraced regulation. Some have already undergone external audits. They see this requirement as a potential competitive advantage, providing evidence that their technology broadens a company’s pool of job seekers and increases opportunities for workers.

“We believe we can comply with the law and show what good AI looks like,” said Roy Wang, general counsel of Eightfold AI, a Silicon Valley start-up that makes software to assist recruiters.

New York City’s law also takes an approach to regulating AI that could become the norm. The law’s key measurement is an “impact ratio,” a calculation of the effect of using the software on a protected group of job candidates. It does not delve into how an algorithm makes its decisions, a concept known as “explainability.”
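The arithmetic behind an impact ratio is simple. As a rough sketch, and not the city’s prescribed audit methodology, an auditor might compare each group’s selection rate with that of the most-selected group and flag ratios below the four-fifths (0.8) threshold long used as a rule of thumb under EEOC guidance. The groups, counts and threshold handling below are hypothetical:

```python
# Illustrative sketch only (not the city's official methodology): an
# "impact ratio" in the spirit of the EEOC's four-fifths rule of thumb.
# All group names and numbers here are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

# Hypothetical screening results for two groups of candidates.
groups = {
    "group_a": {"applicants": 400, "selected": 120},  # rate = 0.30
    "group_b": {"applicants": 300, "selected": 60},   # rate = 0.20
}

rates = {name: selection_rate(g["selected"], g["applicants"])
         for name, g in groups.items()}
highest = max(rates.values())

for name, rate in rates.items():
    ratio = rate / highest  # each group's rate vs. the most-selected group
    # Under the common four-fifths convention, a ratio below 0.8 is
    # often treated as evidence of adverse ("disparate") impact.
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{name}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```

In this made-up example, group_b’s ratio of roughly 0.67 falls below the 0.8 rule of thumb and would warrant scrutiny; the law’s audits report such output measures rather than examining the algorithm’s inner workings.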

Critics say that in life-affecting applications like hiring, people deserve an explanation of how a decision was made. But ChatGPT-style AI software is becoming ever more complex, perhaps putting the goal of explainable AI out of reach, some experts say.

“The focus becomes the output of the algorithm, not the working of the algorithm,” said Ashley Casovan, executive director of the Responsible AI Institute, which is developing certifications for the safe use of AI applications in the workplace, health care and finance.
