
2023 | Book

Privacy and Identity Management

17th IFIP WG 9.2, 9.6/11.7, 11.6/SIG 9.2.2 International Summer School, Privacy and Identity 2022, Virtual Event, August 30–September 2, 2022, Proceedings


About this Book

This book contains selected papers presented at the 17th IFIP WG 9.2, 9.6/11.7, 11.6/SIG 9.2.2 International Summer School on Privacy and Identity Management, held online in August/September 2022.

The 9 full papers and 5 workshop and tutorial papers included in this volume were carefully reviewed and selected from 23 submissions. As in previous years, one of the goals of the IFIP Summer School was to encourage the publication of thorough research papers by students and emerging scholars. The papers combine interdisciplinary approaches to bring together technical, legal, regulatory, socio-economic, social or societal, political, ethical, anthropological, philosophical, and psychological perspectives.

Table of Contents

Frontmatter

Keynote Paper

Frontmatter
How to Build Organisations for Privacy-Friendly Solutions
Abstract
Personal data is a cornerstone of a broad variety of business models. But collecting and keeping this kind of data is risky and expensive for organisations, and poses potential privacy risks for the persons whose personal data is processed. This tension between risks and benefits is investigated with a focus on organisational aspects, based on the perspective of the startup polypoly. The proposed approach is to split up the organisation to avoid conflicting interests.
This paper summarizes a keynote speech held on this topic at the 17th IFIP Summer School on Privacy and Identity Management.
Christian Buggedei

Workshop and Tutorial Papers

Frontmatter
Privacy-Enhancing Technologies and Anonymisation in Light of GDPR and Machine Learning
Abstract
The use of Privacy-Enhancing Technologies in the field of data anonymisation and pseudonymisation raises many questions about legal compliance under the GDPR and current international data protection legislation. In particular, the use of innovative technologies based on machine learning may increase or decrease risks to data protection. A workshop held at the IFIP Summer School on Privacy and Identity Management showed the complexity of this field and the need for further interdisciplinary research on the basis of an improved joint understanding of legal and technical concepts.
Simone Fischer-Hübner, Marit Hansen, Jaap-Henk Hoepman, Meiko Jensen
From Research to Privacy-Preserving Industry Applications
Workshop Summary
Abstract
This paper summarizes the contents and presentations held at a workshop at the IFIP Summer School on Privacy and Identity Management 2022, focusing on privacy-preserving industry applications developed within the H2020 CyberSec4Europe project. In this document, we provide a short introduction to the project, and then explain three out of the seven vertical demonstrator cases considered within CyberSec4Europe, focusing on fraud detection within the banking sector, job applications, and smart cities. For each of the selected demonstrators, we motivate the need for privacy and research in the domain, and then summarize the achievements made within the project.
Jesús García-Rodríguez, David Goodman, Stephan Krenn, Vasia Liagkou, Rafael Torres Moreno
What is There to Criticize About Voice, Speech and Face Recognition and How to Structure the Critique?
Abstract
In view of a multitude of rapidly spreading applications based on voice, speech, and facial recognition technologies, there is a danger of criticism becoming fragmented and narrowed down to a few aspects. The workshop developed critiques of three application areas and collected initial suggestions on how these critiques could be categorized across multiple application areas.
Murat Karaboga, Frank Ebbers, Greta Runge, Michael Friedewald
Raising Awareness for Privacy Risks and Supporting Protection in the Light of Digital Inequalities
Abstract
Despite legal improvements in privacy protection such as the EU GDPR, most applied conceptions of privacy are individualistic and thus still put the responsibility for privacy management onto the users of digital technologies. A major problem with this approach is that it ignores obvious differences between user groups in their ability to manage their privacy online. Recent studies show that factors like socio-demographic and socio-economic status create digital inequalities in people's digital behaviors, which also holds, and is particularly concerning, for their privacy behaviors. Empirical works investigating means to assist users in managing their own data, however, barely address these digital inequalities in their proposed solutions. The present chapter therefore briefly summarizes the empirical status quo of research on privacy and digital inequalities and identifies gaps in currently proposed solutions. In conclusion, although initial research reveals digital inequalities in people's privacy awareness, literacy, and behaviors, there appear to be neither empirical nor regulatory solutions to balance these inequities. I therefore recommend that future research more actively address the particular needs of vulnerable groups and study how to assist them in better managing their privacy on the internet.
Yannic Meier

Open Access

The Hitchhiker’s Guide to the Social Media Data Research Galaxy - A Primer
Abstract
This short paper is a primer for early career researchers who collect and analyze social media data. It provides concise, practical instructions on how to address personal data protection concerns and implement research ethics principles.
Arianna Rossi

Selected Student Papers

Frontmatter
Valuation of Differential Privacy Budget in Data Trade: A Conjoint Analysis
Abstract
Differential privacy has been proposed as a rigorous privacy guarantee for computation mechanisms. However, it is still unclear how data collectors can correctly and intuitively configure the value of the privacy budget parameter ε for differential privacy, such that the privacy of the involved individuals is protected. In this work, we investigate the trade-offs between differential privacy valuation, scenario properties, and the preferred differential privacy level of individuals in a data trade. Using a choice-based conjoint analysis (N = 139), we mimic the decision-making process of individuals under different data-sharing scenarios. We found that, as hypothesized, individuals required lower payments from a data collector for sharing their data as more substantial perturbation was applied as part of a differentially private data analysis. Furthermore, respondents selected scenarios with lower ε values (requiring more privacy) for indefinitely retained data used for profit generation than for temporarily retained data with a non-commercial purpose. Our findings may help data processors better tune the differential privacy budget for their data analysis based on individual privacy valuation and contextual properties.
Michael Khavkin, Eran Toch
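The privacy budget ε discussed in the abstract above directly controls how much noise a differentially private mechanism adds. As a minimal sketch (not from the paper itself), the standard Laplace mechanism makes the trade-off concrete: the noise scale is sensitivity/ε, so a lower ε (more privacy) means stronger perturbation of the released value.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Return true_value perturbed with Laplace noise of scale sensitivity/epsilon.

    A smaller epsilon (a tighter privacy budget) yields a larger noise scale,
    i.e., stronger perturbation of the released value.
    """
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) via the inverse-CDF transform of a uniform draw.
    u = random.random() - 0.5
    if u == -0.5:  # guard against the measure-zero draw that would give log(0)
        u = 0.0
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise
```

For example, releasing a count with ε = 0.1 spreads answers roughly 100 times wider than with ε = 10, which is the kind of perturbation difference the study's respondents were trading payments against.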
Promises and Problems in the Adoption of Self-Sovereign Identity Management from a Consumer Perspective
Abstract
Online identification is a common problem that has so far been resolved unsatisfactorily, as consumers cannot fully control how much data they share and with whom. Self-Sovereign Identity (SSI) technology promises to help by making use of decentralized data repositories as well as advanced cryptographic algorithms and protocols. This paper examines the effects of SSIs on responsible, confident, and vulnerable consumers in order to develop the missing understanding of consumer needs in SSI adoption and to define preconditions and necessary considerations for the development of SSI-based platforms and applications.
Marco Hünseler, Eva Pöll
Usability Evaluation of SSI Digital Wallets
Abstract
Self-sovereign identity (SSI) is a new decentralized ecosystem for secure and private identity management. In contrast to most previous identity management systems, where the service provider was at the center of the identity model, SSI is user-centric and eliminates the need for a central authority. It allows users to own their identity and carry it around in the form of a digital identity wallet, for example on their mobile device or through a cloud service. The digital wallet supports mechanisms for key generation, backup, credential issuance and validation, as well as selective disclosure, which protects users from unintentionally sharing their personal data. In this article we evaluate the usability of existing SSI digital wallets: Trinsic, Connect.me, Esatus and Jolocom Smartwallet. We study how these early experiments with SSI address usability challenges, aiming to identify potential obstacles and usability issues that might hinder wide-scale adoption of these wallets. Applying the analytical cognitive walkthrough usability inspection method, we analyse common usability issues with these mobile-based wallets. Our results reveal that the wallets lack good usability in performing some fundamental tasks, leaving significant room for improvement. We summarize our findings and point out the aspects where issues exist, so that improving those areas can lead to better user experience and adoption.
Abylay Satybaldy
Influence of Privacy Knowledge on Privacy Attitudes in the Domain of Location-Based Services
Abstract
In our daily life, we make extensive use of location-based services when searching for a nearby restaurant, an address we want to visit, or the best route to drive. Location information is highly sensitive personal information that users share without being aware of being continuously tracked by various apps on their smartphones or smart devices. Privacy knowledge and overall privacy literacy help users gain control over sharing personal information and adjusting privacy settings online. This research examines the influence of privacy literacy on privacy attitudes in the domain of location-based services. Privacy literacy is measured along four dimensions, asking participants about their knowledge of institutional practices, technical aspects of data protection, data protection law and privacy policies, and possible data protection strategies. The overall privacy literacy score is examined in relation to various privacy attitudes such as tolerance of sharing personal information, perceived intrusion when using location-based services, and their perceived benefits. Overall, 155 participants took part in the questionnaire. A significant difference in overall privacy literacy score was found between German participants and those from other countries, with German participants having a higher score. Furthermore, privacy literacy positively correlates with trust in the GDPR and with privacy concern about the secondary use of location information, indicating that participants with higher privacy literacy levels tend to be more concerned.
Vera Schmitt
Privacy and Data Protection in the Era of Recommendation Systems: A Postphenomenological Approach
Abstract
Privacy and data protection are two fundamental rights. As complex concepts, they lend themselves to various interpretations aimed at protecting individuals. In this paper, I explore the concepts of 'privacy' and 'data protection' as directly related to the protection of 'identity'. I argue that the ability of privacy and data protection law to protect identity is being challenged by recommendation systems. In particular, I explore how recommendation systems continuously influence people based on what can be predicted about them, while the legal tools that we have do not fully protect individuals in this regard. This paper aims at bridging this gap by focusing on the study of Porcedda, who examines four different notions of privacy related to identity under Article 7 of the European Charter of Fundamental Rights. Given the huge capacity for analytics that draws on a lawful combination of consent and non-personal data, this paper examines why data protection regulation does not, in fact, fully protect individuals. It explores how the notion of privacy, understood as the protection of identity, is especially relevant for understanding the limitations of data protection law, and draws on postphenomenology to better contextualize the relationship between identity and recommendation systems.
Ana Fernández Inguanzo
The DMA and the GDPR: Making Sense of Data Accumulation, Cross-Use and Data Sharing Provisions
Abstract
The Digital Markets Act (DMA) aims to fix inherited problems of the digital markets by imposing obligations on large online platforms, also known as gatekeepers. These obligations include prohibitions on data accumulation and data cross-use as well as data sharing obligations that heavily interplay with data protection rules, hence the GDPR. All three of these provisions are closely linked to data subject consent in the sense of the GDPR, and the academic literature has heavily criticised consent as a legal basis, especially in the context of the digital markets. This article first criticises the legal policy choice of consent as a means of keeping the digital markets contestable and then analyses the risks the aforementioned provisions pose for EU data protection law, especially from the angle of the GDPR principles. It also focuses on the security of data transfers. It then evaluates possible legal frameworks for minimising the risks, such as data sharing agreements. It finally calls for a sector-specific approach to the general "per se" mentality of the DMA, supported by core-platform-service-specific guidelines to be issued in order to minimise the risks for effective data protection in the digital markets.
Muhammed Demircan

Open Access

Towards Assessing Features of Dark Patterns in Cookie Consent Processes
Abstract
There has been a burst of discussion about how to characterize and recognize online dark patterns, i.e., web design strategies that aim to steer user choices towards what favours service providers or third parties like advertisers rather than what is in the best interest of users. Dark patterns are common in cookie banners, where they are used to influence users to accept being tracked for more purposes than a data-protection-by-default principle would dictate. Despite all the discussion, an objective, transparent, and verifiable assessment of dark patterns' qualities is still missing. We contribute to bridging this gap by studying several cookie consent processes, in particular their multi-layered information flow, which we represent as message sequence charts, and by identifying a list of observable and measurable features that we believe can help describe the presence of dark patterns in digital consent flows. We propose thirty-one such properties that can be operationalised into metrics and therefore into objective procedures for the detection of dark patterns.
Emre Kocyigit, Arianna Rossi, Gabriele Lenzini
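The paper's thirty-one properties are not reproduced here, but the idea of operationalising an observable feature into a metric can be sketched with one hypothetical example: measuring how many layers of a multi-layered cookie banner a user must traverse before a "reject all" control appears. The `ConsentLayer` structure and `reject_depth` metric below are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ConsentLayer:
    """One layer (screen) of a multi-layered cookie consent flow."""
    has_accept_all: bool
    has_reject_all: bool
    purposes_shown: int

def reject_depth(layers: List[ConsentLayer]) -> int:
    """Number of layers a user must traverse before a 'reject all' control
    appears; returns len(layers) if no such control exists anywhere.
    A depth greater than 0 while 'accept all' sits on the first layer is
    one measurable asymmetry often associated with dark patterns."""
    for depth, layer in enumerate(layers):
        if layer.has_reject_all:
            return depth
    return len(layers)
```

A metric of this shape is objective and verifiable: two auditors inspecting the same banner should compute the same number, which is the property the paper argues such features need.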
Accessibility Statements and Data Protection Notices: What Can Data Protection Law Learn from the Concept of Accessibility?
Abstract
Terms and conditions, legal notices, banners and disclaimers are not new for websites, applications and products. However, with the ambitious legislative plan in the European Union to introduce accessibility features to several websites, applications, services and products, it becomes imperative to also consider the accessibility of the information included in such statements. In my paper, I specifically investigate data protection notices as opposed to accessibility statements and aim to answer whether the concept of accessibility can help us redefine the principle of transparency in EU data protection law. I conclude that accessibility can benefit the principle of transparency, by contextualizing it. However, at the same time it introduces a number of new challenges.
Olga Gkotsopoulou
SeCCA: Towards Privacy-Preserving Biclustering Algorithm with Homomorphic Encryptions
Abstract
Massive amounts of newly generated gene expression data have been used to further enhance personalised health predictions. Machine learning algorithms provide techniques for exploring groups of genes with similar profiles. Biclustering algorithms were proposed to resolve key issues of traditional clustering techniques and are well adapted to the nature of biological processes. Moreover, genome data access should be socially acceptable for patients, who can then be assured that the analysis of their data will not harm their privacy and will ultimately achieve good outcomes for society [1]. Homomorphic encryption has shown considerable potential in securing complicated machine learning tasks. In this paper, we show that homomorphic encryption operations can be applied directly to a biclustering algorithm (the Cheng and Church algorithm) to process gene expression data while keeping private data encrypted. This Secure Cheng and Church algorithm (SeCCA) comprises nine steps, each providing encryption for a specific section of the algorithm. Because of the current limitations of homomorphic encryption operations in real applications, only four steps of SeCCA are implemented and tested, with adjustable parameters, on a real-world data set (yeast cell cycle) and a synthetic data collection. As a proof of concept, we compare the biclusters from the original Cheng and Church algorithm with those from SeCCA to clarify the applicability of homomorphic encryption operations in biclustering algorithms. As the first study in this domain, our work demonstrates the feasibility of homomorphic encryption operations in gene expression analysis to achieve privacy-preserving biclustering algorithms.
Shokofeh VahidianSadegh, Lena Wiese, Michael Brenner
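The core property that makes such a scheme possible is additive homomorphism: sums over encrypted values (e.g., the row and column means in the Cheng and Church residue computation) can be evaluated without decrypting individual entries. The toy Paillier sketch below illustrates only that property; it uses tiny, insecure parameters supplied by the caller and is not the encryption scheme or library used in the paper.

```python
import math
import random

def lcm(a, b):
    return a * b // math.gcd(a, b)

def paillier_keygen(p, q):
    # Toy key generation with caller-supplied small primes (insecure; demo only).
    n = p * q
    g = n + 1                      # standard simple choice of generator
    lam = lcm(p - 1, q - 1)
    # mu = (L(g^lam mod n^2))^{-1} mod n, with L(x) = (x - 1) // n
    x = pow(g, lam, n * n)
    mu = pow((x - 1) // n, -1, n)  # modular inverse (Python 3.8+)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    x = pow(c, lam, n * n)
    return ((x - 1) // n * mu) % n

def add_encrypted(pk, c1, c2):
    # Homomorphic addition: multiplying ciphertexts adds the plaintexts mod n.
    n, _ = pk
    return (c1 * c2) % (n * n)
```

With this property, an untrusted party can aggregate encrypted expression values and return an encrypted sum, and only the key holder learns the result; this is the kind of operation SeCCA applies step by step to the Cheng and Church algorithm.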
Backmatter
Metadata
Title
Privacy and Identity Management
Edited by
Felix Bieker
Joachim Meyer
Sebastian Pape
Ina Schiering
Andreas Weich
Copyright Year
2023
Electronic ISBN
978-3-031-31971-6
Print ISBN
978-3-031-31970-9
DOI
https://doi.org/10.1007/978-3-031-31971-6
