May 6, 2022

AI That Works on Behalf of Workers


Individuals have grown used to invasive data collection and processing practices that benefit businesses but offer them little to no value in return. But what if employees could turn the AI engine around and use it for protection from an unfair employer? That’s the gist of a new AI project at Northeastern University.

Saiph Savage, an assistant professor at Northeastern and director of its Civic AI Lab, is leading the development of an “intelligent nudging” application that helps gig workers avoid unfair reviews.

“Just having one bad review can lead to the termination of the worker,” Savage says. “But the problem is that many times workers can be evaluated on things that they do not control.”

An Uber driver, for example, may be late to pick up a rider because of unexpectedly bad traffic, Savage says. Or a data labeler may have submitted incomplete work because their computer crashed in the middle of a task. The worker didn’t do anything wrong in either case, but all too often, the worker gets the blame anyway.

Through her work with NLP systems, Savage knew there was a potential solution.

“What we did was we trained machine learning models that could detect when a person, a customer, or a manager was writing an evaluation about a worker that was considering things that were outside the scope of the work of the worker,” Savage tells Datanami. “And we started nudging the employer, the boss, the manager, or the client to reconsider their evaluation so that they could be fairer with workers.”

The system is based on a deep learning model trained on the written terms of service for a specific gig platform. Those terms of service define the responsibilities of the gig workers. Through a process of elimination and some NLP reasoning, the system can identify instances when the content of a written review veers beyond the responsibilities defined in the terms of service. When this happens, the system pops up an alert and makes a suggestion to the writer.
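The article doesn’t publish the model’s details, but the general mechanism can be sketched: embed the platform’s responsibility clauses and the review text, then flag review sentences that don’t match anything the worker is actually responsible for. The snippet below is a minimal illustration of that idea, assuming the open-source sentence-transformers library and an arbitrary similarity threshold; it is not the Civic AI Lab’s implementation.

```python
# Hypothetical sketch: flag review sentences that fall outside the
# responsibilities defined in a platform's terms of service.
# The embedding model and threshold are illustrative assumptions,
# not details published by the Civic AI Lab.
from sentence_transformers import SentenceTransformer, util

# Responsibility clauses extracted from a (hypothetical) terms of service.
responsibilities = [
    "The driver will arrive at the pickup location and transport the rider to the destination.",
    "The driver will keep the vehicle clean and safe.",
    "The driver will treat riders courteously.",
]

review_sentences = [
    "The driver was polite and the car was spotless.",
    "Traffic on the highway made the trip 20 minutes late.",  # outside the driver's control
]

model = SentenceTransformer("all-MiniLM-L6-v2")
resp_emb = model.encode(responsibilities, convert_to_tensor=True)
rev_emb = model.encode(review_sentences, convert_to_tensor=True)

# For each review sentence, find its closest responsibility clause.
# A low best-match score suggests the sentence judges something out of
# scope, so the tool would surface a nudge rather than block the review.
THRESHOLD = 0.35  # illustrative value
for sentence, sims in zip(review_sentences, util.cos_sim(rev_emb, resp_emb)):
    if sims.max().item() < THRESHOLD:
        print(f"Nudge: '{sentence}' may be outside the worker's responsibilities.")
```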


“It’s a little nudge,” says Savage, who, according to her bio, recently won a $300,000 NSF grant to study human-centered AI to empower rural workers. “We’re helping the manager to start to rethink how they’re writing about their workers.”

Savage’s team has made the system available as a browser plug-in, which allows anybody to use it. “If you’re a client that’s worried about being unfair to workers, you can just download our tools and start to use them,” she says. “We have them available in our GitHub. The idea is that this way, developers can also take tools and start to build their own things.”
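The plug-in’s internals aren’t described in the article, but one plausible shape, offered purely as an illustration, is a small scoring service that a browser extension calls while a review is being written. The Flask route, payload, and placeholder keyword check below are assumptions for the sketch, not details from the project’s GitHub repository.

```python
# Hypothetical scoring endpoint a review-writing plug-in could call.
# The route, payload shape, and keyword placeholder are illustrative only.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for the trained model: factors typically outside a worker's control.
OUT_OF_SCOPE_TERMS = ["traffic", "weather", "power outage", "internet outage"]

@app.post("/score-review")
def score_review():
    review_text = request.get_json()["review"].lower()
    flagged = [term for term in OUT_OF_SCOPE_TERMS if term in review_text]
    message = (
        "This review may judge the worker on factors outside their control: "
        + ", ".join(flagged)
        if flagged
        else ""
    )
    return jsonify({"nudge": bool(flagged), "message": message})

if __name__ == "__main__":
    app.run(port=5000)
```

A real deployment would swap the keyword list for a trained model along the lines of the earlier sketch.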

The model could also be used to detect instances of bias in a review, according to Savage, who was named one of MIT Technology Review’s 35 Innovators Under 35.

“It could possibly be that, for instance, maybe for a woman worker or a person of color worker, they wrote a review that was a little bit harsher,” Savage says. “And this helps them to rethink, ‘Oh wait, I might have been a little bit too harsh on this person unintentionally. I’m going to reconsider it.’”

Savage has been working with data labeling firm Toloka, West Virginia University, and the National Autonomous University of Mexico (UNAM) to help train rural workers to become data labelers. The idea for the intelligent nudging application originated in part from that work.

Assistant professor Saiph Savage runs the Civic AI Lab at Northeastern University

“One thing that we have been working a lot on is providing data transparency to workers,” Savage says. “A lot of the gig platforms currently have information asymmetries where the clients might have a lot of information about the workers, but the workers do not have a lot of information about the clients. So I think that it is important to rethink how we can empower everyone to have access to information about the different stakeholders so that they can make good decisions.”

One example of this information asymmetry is the practice of rideshare companies incentivizing their drivers to continue to work with the lure of earning a lot of money. In reality, the drivers have a poor chance of hitting that kind of payday, but they just don’t know it.

Toloka has taken this point of view to heart and has designed its workspace to produce fairer interactions for the workers, Savage says. But more can be done.

“I think it’s a matter of maybe holding companies more accountable,” says Savage, who earned her Ph.D. in computer science from the University of California, Santa Barbara. “You could argue that maybe what we need is even having labels similar to the [food] labels. ‘Oh, this food was produced under fair trade.’ Maybe we need something similar for gig platforms.”

It will be difficult for companies to give up their informational advantage as long as it remains profitable. But if enough people become aware of the situation, that could someday change.

Related Items:

Fighting Harmful Bias in AI/ML with a Lifelong Approach to Ethics Training

New DataKind CEO Sees More Than Dollar Signs in Data Science

It’s Time to Implement Fair and Ethical AI
