Algorithms also need management training

The European Union is expected to finalize the Platform Work Directive, its new legislation governing digital labor platforms, this month. It is the first proposed EU-level law to directly regulate “algorithmic management”: the use of automated monitoring, evaluation, and decision-making systems to make or inform decisions, including recruiting, hiring, assigning tasks, and firing.

However, the scope of the Platform Work Directive is limited to digital labor platforms, that is, to “platform work.” While algorithmic management first became widespread on the labor platforms of the gig economy, the past few years, partly in the context of the pandemic, have also seen the rapid adoption of algorithmic management technologies and practices in traditional employment relationships.

Some of the most controlling, harmful, and highly publicized uses have been in warehousing and call centers. For example, warehouse workers have reported quotas so strict that they don’t have time to use the bathroom, and say they were fired, by the algorithm, for failing to meet them. Algorithmic management has also been documented in retail and manufacturing; in software development, marketing, and consulting; and in public-sector work, including healthcare and policing.

HR professionals often refer to these algorithmic management techniques as “people analytics.” But some observers and researchers have developed a more accurate name for the monitoring software installed on employees’ computers and phones, on which these techniques often depend: “bossware.” It has added a new level of surveillance to working life: location tracking; keystroke logging; screenshots of workers’ screens; and even, in some cases, video and photos taken through the webcams on work computers.

As a result, a new position is emerging among researchers and policymakers: the Platform Work Directive is not enough, and the European Union should also develop a directive specifically regulating algorithmic management in the context of traditional employment.

It is not difficult to understand why traditional organizations use algorithmic management. The most obvious benefits are the increased speed and scale of information processing. In recruiting and hiring, for example, companies can receive thousands of applications for a single job opening; resume-screening software and other automated tools can help make sense of this vast amount of information. In some cases, algorithmic management can improve an organization’s efficiency, for example by matching workers to tasks more intelligently. And there are some potential, though mostly untapped, benefits: carefully designed algorithmic management could reduce bias in recruitment, evaluation, and promotion, or improve employee well-being by detecting needs for training or support.
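To make the speed-and-scale argument concrete, here is a minimal sketch, in Python, of the kind of keyword-based scoring that resume-screening tools perform. The skill list, weights, and cutoff are invented for illustration and are not taken from any real product.

# A toy, hypothetical resume screener: score each application by
# weighted keyword matches, then shortlist the top candidates.
# Skills, weights, and the cutoff are invented for illustration only.
REQUIRED_SKILLS = {"python": 3, "sql": 2, "communication": 1}

def score_resume(text: str) -> int:
    """Naive keyword scoring: fast and scalable, but blind to context."""
    text = text.lower()
    return sum(w for skill, w in REQUIRED_SKILLS.items() if skill in text)

def shortlist(resumes: dict[str, str], top_n: int = 2) -> list[str]:
    """Rank thousands of applications in milliseconds; any bias encoded
    in the keyword list is applied at exactly the same speed and scale."""
    return sorted(resumes, key=lambda n: score_resume(resumes[n]), reverse=True)[:top_n]

applications = {
    "candidate_a": "Five years of Python and SQL experience.",
    "candidate_b": "Strong communication skills; self-taught SQL user.",
    "candidate_c": "Warehouse logistics background.",
}
print(shortlist(applications))  # ['candidate_a', 'candidate_b']

The same property that makes this attractive, sorting thousands of candidates instantly, is what makes any flaw in the scoring rule systemic rather than occasional.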

But there are clear harms and risks, for both workers and organizations. The systems are not always very good at their jobs, and they sometimes make decisions that are clearly erroneous or discriminatory. They require a lot of data, which means they often entail constant and close monitoring of workers, and they are often designed and deployed with relatively little worker involvement. As a result, they sometimes produce biased or otherwise poor managerial decisions; they harm privacy; they expose organizations to regulatory and public-relations risks; and they can undermine trust between workers and management.

The current regulatory situation regarding algorithmic management in the EU is complex. Many bodies of law already apply: data protection law, for example, provides some rights to workers and job applicants, as do national systems of labor and employment law, discrimination law, and occupational health and safety law. But there are still missing pieces. For example, while data protection law requires employers to ensure the “accuracy” of the employee and job-applicant data they hold, it is not clear that there is any obligation for decision-making systems to draw reasonable inferences or decisions from those data. If a service worker is fired because of a bad customer review, but that review was driven by factors beyond the worker’s control, the data may be “accurate” in the sense that it reflects a poor customer experience. A decision based on it may therefore be legal, yet still unreasonable and inappropriate.
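The gap between accurate data and reasonable inference is easy to state in code. The following sketch, whose threshold and field names are invented for illustration, shows an automated rule that is entirely faithful to its data yet still reaches an unjustified conclusion.

from dataclasses import dataclass

@dataclass
class Review:
    rating: int          # 1-5 stars, accurately recorded
    late_delivery: bool  # a cause outside the worker's control

def flag_for_dismissal(reviews: list[Review]) -> bool:
    """Hypothetical rule: flag any worker whose average rating drops below 3.
    The ratings are 'accurate', but the rule never asks why a rating is low,
    so circumstances beyond the worker's control count against them."""
    return sum(r.rating for r in reviews) / len(reviews) < 3.0

history = [Review(rating=1, late_delivery=True),   # traffic jam, not the worker
           Review(rating=2, late_delivery=True)]
print(flag_for_dismissal(history))  # True: accurate data, unreasonable decision

Nothing in this rule offends the accuracy requirement: the stored ratings faithfully record poor customer experiences. The missing piece is an obligation to make the inference, not just the data, reasonable.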

This leads to a curious paradox. On the one hand, additional protection is needed. On the other, the patchwork of pre-existing laws creates unnecessary complexity for organizations trying to use algorithmic management responsibly. Adding to the confusion, the algorithmic management provisions of the new Platform Work Directive mean that platform workers, long under-protected by law, are likely to have more protection from intrusive monitoring and error-prone algorithmic management than regular employees.
