How Automated Surveillance Impacts Remote Workers with Disabilities

Lex Huth
5 min read · Apr 15, 2024

I remember how nervous I was when I found out my freelance gig was going to require timed recordings of my screen to ensure I was working. I understood the reason behind it, but I also worried that the employer would see my screen reader running. Would they fire me? Would it look like I wasn’t being productive?

ID: Lex Huth wearing a green shirt and smiling at the camera while holding her white cane. Image credit: Arlo Boutique and KNZ Photography.

In recent years, telework has increased, bringing benefits such as eliminated or reduced commute times, the ability to live in a broader range of locations, and the chance to customize work setups. As of FY2021, a reported 47% of Federal employees participated in telework. What's more, a recent survey found that 45% of government employees would look for another job if remote work flexibility were taken away.

While telework has many benefits, employers’ use of automated surveillance tools to monitor the activities of teleworking employees has also increased, which can create a stressful work environment and potentially lead to workplace inequalities. Automated workplace surveillance tools can log keystrokes, track eye gaze and facial expression, trigger screenshots, monitor location, and more. According to the Partnership on Employment & Accessible Technology (PEAT), automated surveillance tools can make employees vulnerable to discrimination, particularly those with disabilities. I am a visually impaired worker with obsessive-compulsive disorder (OCD), and the use of these tools has been a barrier for me in past positions.

Before procuring automated surveillance tools, employers should define their goals and the outcomes they want to achieve by using the tools. Automated surveillance might not provide the data needed to achieve those goals and may come with unanticipated costs. As employers continue to evolve their workplace and telework policies, here are some key considerations to keep in mind when evaluating automated surveillance tools.

Me talking passionately about inclusive AI at work during my DEAMcon 2024 session with John Robinson. Image credit: DirectEmployers Association.

Limitations of Automated Surveillance Tools

Automated surveillance tools claim to measure employee metrics like the number of messages sent, perceived productivity, time away from the computer, and attentiveness. While this data may seem useful to assess performance or hold workers accountable, it does not reflect the quality of an employee’s work or take into account the different ways employees successfully complete their tasks.

For example, does the number of emails a person sends relate to the quality of those emails? From the perspective of an employee with a disability, does eye gaze tracking prove I am engaged in my work? I cannot control the gaze of my right eye, so those tracked metrics are not relevant to my work performance. This type of monitoring could also force skilled workers with disabilities out of positions because taking necessary breaks or using assistive technology could hurt their surveillance metrics despite them excelling in their roles.

Impact on Work Environment

Automated surveillance tools can create a stressful work environment and have a negative impact on employee morale. For example, when I freelanced, hiring managers could opt to automatically take screenshots of my computer screen to ensure I was only working on their contract; I was constantly aware of the surveillance happening in the background. It created additional stress because whenever I needed to look something up or use my screen reader to listen to what I wrote, I wondered if I would need to justify that time. I also avoided using my screen reader because I did not want certain hiring managers to know I was visually impaired. Disclosing a disability is a personal choice, and fear of a negative surveillance rating should not force anyone to disclose their disability or avoid using workplace accommodations. Employees may choose not to disclose for many reasons, including the sensitive nature of medical information and the stigma that unfortunately comes with certain disclosures.

Legal Implications

Given the many ways workers with disabilities can be affected by automated surveillance, employers should review their legal obligations under the Americans with Disabilities Act (ADA). In 2022, the Equal Employment Opportunity Commission (EEOC) released guidance on how the ADA applies to software, algorithms, and artificial intelligence (AI) used to monitor employees. Lawmakers are also starting to examine the broader impact of automated surveillance in the workplace. Connecticut, for example, requires employers to notify employees about electronic monitoring.

In May 2023, the White House Office of Science and Technology Policy released a public request for information to learn more about automated surveillance tools, noting their potential for privacy risks and discrimination. In addition, the October 2023 Fact Sheet for the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence specifically calls out supporting our nation's workers as AI changes jobs and workplaces. As agencies implement this piece of the Executive Order, it is imperative that they consider the unique harms AI can pose to people with disabilities.

Evaluation and Management of Risks

Using automated surveillance to monitor teleworkers can seem enticing. However, these technologies may not effectively capture either the quantitative or the qualitative aspects of an employee's contributions to the workplace. They can also add stress and complexity for employees, particularly those with disabilities, and alienate valuable members of the team.

Agencies should seriously consider the implications of automated surveillance before integrating these systems into their workplaces. Steps such as conducting algorithmic impact assessments and standards-based risk evaluations using frameworks like the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) are excellent places to start. They can help agencies understand, mitigate, and manage risks related to bias and discrimination. As with any technology, we must ensure that AI-enabled tools are developed, used, and refined by and with people with disabilities and intersectional identities. Only then can we truly begin to experience the benefits that AI can bring.
