Legal mobilisation and data-driven technologies: a multidimensional and participatory approach (part two)

Second of a two-part blog post in the IER series on “Labour, Strategy, and legal mobilisation”.

5 May 2023 | Comment

Aude Cefaliello

Unit on health and safety, working conditions at the European Trade Union Institute

Antonio Aloisi

Assistant professor of European and Comparative Labour Law at IE University Law School, Madrid

This is the second of two blog posts in the IER series on “Labour, Strategy, and legal mobilisation”. We explore the potential of legal mobilisation to restore workers’ sovereignty over workplace technology and strengthen their capability to co-design data-driven company practices, overcoming the limitations of the fragmented legal framework.

Unboxing workplace tech through regulation and case law

Workers are accustomed to contesting the capricious power exerted by human bosses through a wealth of statutory safeguards and collectively negotiated checks. However, when it comes to extending these countervailing capacities to artificial intelligence (AI)-driven tools, several obstacles emerge. For example, the contingent and compartmentalised nature of some legal responses means they fail to capture multi-purpose technologies that exhibit self-reinforcing capabilities across the entire life cycle of managerial functions. At the same time, the predominantly individualised exercise of certain categories of rights clashes with how the relevant instruments are programmed to juggle community data, team dynamics and population statistics. Moreover, the inherent opacity of these tools makes grievances more complex and has a chilling effect that inhibits voice mechanisms and social dialogue. Despite this, workers have inaugurated a new season of mobilisation, leveraging existing rights to challenge, curb, influence and reshape algorithmic management (AM) tools and practices.

Concomitantly, the EU institutions have started to recalibrate the available legislation and deploy responsive regulations that directly address AM and high-risk AI systems. However, the resulting legal framework is highly fragmented. This second blog post therefore focuses on blending strategies from seemingly unconnected fields, such as data protection, non-discrimination and occupational safety and health (OSH), thereby moving beyond field-specific approaches. Such an all-encompassing model makes it possible to tackle the entire panoply of workplace decisions delegated to or supported by AI- and algorithm-based tools.

Activism at the institutional level has been significant, although its outcomes have been partly inconsistent or overlapping. Only in the last two years has the European Commission proposed a Regulation on AI (the AI Act) and a Directive on platform work (PWD). While the legislative trilogue unfolds, the currently available version of the former identifies fully or partially automated decisions that significantly affect workers in the areas of ‘employment, workers management and access to self-employment’ as high risk and therefore mandates a risk-based model to foster compliance with a long list of essential requirements on the part of AI system providers. By contrast, the proposed PWD fleshes out robust transparency and information duties, in addition to human oversight requirements, the right to an explanation and other redress mechanisms intended to benefit both employed and self-employed platform workers. A common denominator can be found in the modular framework, which to varying degrees outsources responsibility for compliance (and self-assessment) to those who develop or alter high-risk AI systems or run digital platforms.

The risk-based model has some parallels with the ‘safety by design’ approach in the OSH field, whereby providers or manufacturers evaluate and address safety issues during the research and development (R&D) phase, before a new product or machine is introduced into the workplace. Yet many AI-related risks are ‘dynamic’ and evolve over time, meaning that they require constant oversight and iterative mitigation, as proposed in the PWD. Moreover, it is doubtful that eminently procedural rules will be suitable for pursuing substantive regulatory goals in the field of social rights. Indeed, minimising risk seems to imply the calculated infringement of labour rights. Leaving providers such a margin of discretion to tick all of the compliance boxes is unconvincing, especially when the technologies in question are acknowledged to significantly affect individuals’ rights and freedoms. There is no denying that risk assessment methods allow stakeholders to be engaged in the evaluation and mitigation phases, although the legal basis of the AI Act risks watering down national frameworks that mandate stronger participatory rights when employers consider introducing workplace technology. To give ownership back to workers, the evaluation of any AM tool should include the perspectives of workers and their representatives, not merely self-certification by providers or toothless audits.

To overcome the limitations of the current legal schema, a model that combines multiple resources must be adopted. By mobilising the information and access rights enshrined in the EU General Data Protection Regulation (GDPR), workers and their representatives can learn more about the logic of AM and AI-driven practices. Thanks to the safeguards contained in art. 22(3) GDPR, data subjects in professional contexts can exercise the right to obtain human intervention on the part of the data controller (the employer), to express their point of view and to contest a decision. Read in conjunction with the Preamble, this catalogue also includes the right to obtain an explanation (rec. 71 GDPR). To this must be added a wealth of national legal traditions in countries such as Germany, Italy and Spain, which treat prior information and consultation of workers’ representatives as preconditions for the lawfulness of data collection and processing. From a procedural standpoint, borrowing from equality law, workers can also rely on the evidentiary simplification that, on the basis of statistical or testimonial evidence, shifts the burden of demonstrating that algorithms are not discriminatory onto the alleged perpetrators.

Data rights are currently being mobilised before data protection authorities (DPAs) and tribunals to enhance transparency, legibility and access concerning the logic of intricate systems. DPAs have established that companies operating via digital tools must comply with all of the GDPR principles and, importantly, with the national provisions that strengthen the EU framework through more protective rules. Similarly, anti-discrimination provisions have been tested in court against ‘blind’ algorithms that end up putting persons with protected characteristics at a particular disadvantage. Interestingly, the procedural rules of equality law favour the alleged victim (e.g. through their emphasis on effects rather than intentions). The collective mobilisation of these rights to defend multiple interests represents a new frontier for workers’ representatives, trade unions and civil society organisations. Collective bodies are uniquely positioned and equipped to reduce information asymmetries, accrue knowledge, conduct fact-finding and bring claims.

‘In this together’: relational data requiring collective strategies

Workers often bear the brunt of a company’s uncritical adoption of third-party technologies supposedly aimed at streamlining production, boosting efficiency and increasing productivity. Workplace data are collected and processed at the level of populations and communities, even when the inputs come from individuals. Mobilising data, equality and OSH rights in their eminently collective dimension to reshape algorithm-based company practices is therefore an effective strategy against the prevailing individualised approach. A key advantage of this inventive blend is that traditional safeguards of an ‘instrumental’ nature can be repurposed to achieve goals different from those for which they were conceived. More specifically, data accrued through the exercise of information, access or explanation rights, or even in the context of a data protection impact assessment under art. 35 GDPR, can be used to learn and assess the parameters of automated decisions. Having revealed the substance of the model, workers can challenge its rationale and start meaningful negotiations to improve their working conditions by eradicating risks to equality and OSH.

This collaborative attitude promises to benefit all stakeholders, including companies, while safeguarding fundamental rights. When the providers of high-risk AI systems and digital labour platforms are left alone to govern workplace tech, they are prone to deploying organisational models that are highly dysfunctional because they are designed without the involvement of those directly affected by them. Workers’ engagement through information, consultation and co-design processes allows companies to identify useful technologies, meaningful organisational patterns and valuable data metrics. However, this remains a neglected facet of mobilisation, one that involves a decisive paradigm shift. Rather than acting in a remedial or retrospective way, restoring victims only after damage has materialised, the mercurial nature of digital tools requires anticipatory and participatory responses to achieve people-centred workplaces. Union-led attempts have paved the way for a deeper understanding of the shortcomings and potential of automated decision-making. Still, much remains to be done. To be at the forefront of this shift, trade unions are called upon to build capacity and master digital literacy.
