Recently, 63-year-old US Army veteran Stephen Normandin was fired unceremoniously from his job as an Amazon contract driver. The algorithm tracking him decided the Phoenix man was not fit for the job and sent the pink slip by email. No sit-downs, no memos, no second chances: fired by a bot.
Stephen swears by his work ethic. He said he never shirked his duties and that Amazon sacked him for things beyond his control, such as being unable to deliver packages to locked apartment complexes.
Bots take over
Amazon has become the world’s largest online retailer in part by delegating much of its workload to algorithms. The company leverages AI to manage workers in its warehouses, oversee contract and independent delivery drivers, and appraise the performance of its employees.
The tech giant also uses AI to manage millions of third-party merchants on its marketplace. Jeff Bezos implemented this strategy for quick turnarounds and cost efficiency. According to a Bloomberg report, Amazon was aware of the fallibility of machine intelligence but carried on anyway to save millions in labour costs.
Amazon started its Flex delivery service in 2015 to handle packages missed by Amazon vans. Drivers can sign up and upload their documents via an app, after which the Flex algorithms monitor their every move.
Amazon’s algorithm calls the shots here, from driver hiring and performance reports to firing, with minimal human intervention. The Flex algorithm scans incoming data and rates the drivers. The ratings fall into four categories: Fantastic, Great, Fair or At Risk.
Flex drivers don’t have the benefit of an employment contract or paid leave during an appeal. Their only option is to pay $200 to take the dispute to arbitration. For workers who earn between $18 and $30 per hour, it’s simply not worth it.
According to Bloomberg’s report, the livelihoods of millions of independent contractors depend on an unfair and opaque algorithmic system.
Flex drivers’ forums also report accounts being terminated because the selfie verification system failed: image-recognition algorithms rejecting photos when drivers have lost weight, shaved their beards, gotten a haircut or taken the picture in low light.
Uber
The App Drivers & Couriers Union (ADCU) has been fighting cases challenging the company’s practice of robo-firing. In an interview with Wired, a former Gold-tier Uber driver with a customer rating of 4.96 reported waking up one morning unable to log in to work, barred from the system for improper use. He was unable to appeal the decision either. Similar legal claims have been brought before courts in the Netherlands as well.
While AI and automation make managing employees easier, they come with the danger of bringing out the worst in humans. With companies incorporating biometrics, facial recognition and constant surveillance into the workflow, the question arises: how much is too much? Left unchecked, the future workplace could end up as an AI-fueled Orwellian dystopia.
State of play
At present, we have software like Hubstaff that records employees’ keyboard strokes, mouse movements and browsing history. Time Doctor uses webcams to monitor employees. Meanwhile, Enaible’s ‘AI Productivity Platform’ runs in the background, tracking each employee’s data trail and task-completion times, and comes with built-in performance nudges. The software gives each person a productivity score for appraisal purposes. IBM has introduced a system that uses sensors to track pupil dilation and facial expressions. This data, along with the employee’s sleep quality and meeting schedule, is used to deploy drones that deliver a jolt of caffeinated liquid to drowsy workers.
Interestingly, Canon has installed AI-enabled “smile recognition” cameras in its Chinese offices that only let smiling workers enter rooms or book meetings. With more and more computers making decisions for humans, experts are calling for regulations forcing companies to make their algorithms transparent. However, legislation has been slow. In 2020, Senator Chris Coons, Democrat of Delaware, introduced the Algorithmic Fairness Act, which would require the Federal Trade Commission to create rules ensuring the equitable use of algorithms. It has not been signed into law yet.
Lina Khan, the chair of the FTC, has proposed an antitrust policy based on her 2017 paper “Amazon’s Antitrust Paradox”. The paper argued that current antitrust laws do not address the harm caused by tech monopolies, especially Amazon. Her policy proposes restoring traditional antitrust and competition policy principles or applying common-carrier obligations.