A data protection impact assessment (DPIA), also referred to as a privacy impact assessment (PIA), is compulsory under the new EU General Data Protection Regulation (GDPR). But ensuring an organization's or business's data protection compliance, now and in the future, is the bare minimum. In a world of data networks that increasingly shape and define everything from our identities to our economies and politics, we also need to consider the broader ethical and social implications of data processing.

Why an ethical impact analysis in addition to the PIA?

Here's an example from Denmark. Since 2015, digital wellbeing tests have been performed in Danish schools. Children are asked about everything from bullying and loneliness to stomach aches. Recently it came out that although the tests were presented as anonymous, they were not. Data was stored with the children's social security numbers, correlated with other test data and even used in case management by some municipalities (please read the legal analysis by DataEthics' Catrine Byrne Søndergaard here). The privacy and data protection implications of a system like this are evident, and a proper PIA or DPIA would probably have caught many of the core legal issues that have now caused the Ministry of Education to pause the tests.

But is compliance with the basic provisions of data protection legislation really the only issue here? No, and this points to an evolution we can't ignore. A digital test is not just a different type of tool; it is social engineering. Data systems like this are increasingly integrated into our social realities. They are part of the very social structure of everything from our local communities to our economies and our politics. The data from these digital tests are, for example, analysed and used by schools and teachers in their work with the wellbeing of the children. An ethical impact assessment would take the analysis of impacts one step further than mere legal compliance. It would consider factors such as community influence, social risks and the distribution of responsibilities. How does a data system like this influence the role of the teacher, the school, the municipality, the parent and the child? Which decisions can a teacher make based on these data sets? Which decisions can a school make? How are the tests perceived? Are conclusions combined with experienced input, or are these data analytics and derived conclusions perceived as objective truth? Which conclusions are derived by data analytics? Based on which criteria? With which consequences for the local community?

Ethics is a value choice

These are just a few of the questions that a social and ethical impact analysis would highlight. But we might want to take this even further. What is ethics? Ethics is culture, that is, a set of shared meanings that are produced informally within a given society and represented formally (in our regional and national laws, for example). Ethics is not a given. Our choice of ethics is a value choice. With our choice of ethics we choose to focus on specific risks, we prioritise interests, and we distribute roles and responsibilities. Interestingly, the choice to combine the digital tests in the Danish schools with the children's social security numbers was based, among other things, on recommendations from experts who saw this method's value in terms of scientific analysis. This was indeed a prioritisation of one specific value. But was it a conscious one? Was it balanced against other values and interests? Those of the individual or the community?

The first step in a data ethics impact analysis would be to make transparent the ethical rationale behind the choices we make: Which risks are prioritized? Whose interests? Whose responsibility?
