The IEEE Global Initiative published an enlightening report on the Ethics of Extended Reality (XR).
Here are the main insights:
- Devices will increasingly pack sophisticated hardware to sense the world around them:
  - LiDAR sensing, camera arrays, microphone arrays, and directional microphones.
- Simultaneous Localization and Mapping (SLAM) algorithms allow XR devices to position themselves in the world and render experiences from the user perspective.
- Worldscraping: Niantic's Planet-Scale AR consortium, together with AR APIs and services from Microsoft (Mixed Reality Toolkit), Apple (ARKit), and Google (ARCore), offers capabilities for topological mapping, scene understanding, classification, world positioning, and geometry generation/capture.
- At the XR user level:
  - Movements and physical actions: Optical and inertial tracking of head/body/limb movements, EMG neuromotor input, sensing of facial expressions, and auditory sensing of speech and non-speech activity.
  - Neural activity: EEG for brain-computer interfaces.
  - Context: Location tracking, SLAM, and machine learning-driven analysis of optical data.
  - Physiology: Eye/gaze tracking, HRV sensing, and other biometrics.
- Combined with cloud computing and machine learning, both the benefits and drawbacks of this technology will be unleashed on a societal scale.
- Machine learning algorithms and AI-driven approaches can be trained to predict or infer information about identity, behavior, activity, and internal state, and to make decisions based on the computed data.
- Decision-making behaviors could be constructed on top of XR's requisite sensing: brain activity, optical tracking (body language, facial expressions, micro-gestures), contextual information (instrumenting everyday actions and behavior), and physiological sensing (arousal, fatigue) can together yield a detailed multi-sensory model of a user's mental state and personality.
- Such algorithms are offered to developers as services that can trivially enhance an application's capability to process sensed data (Microsoft Cognitive Services, Apple's Core ML, Amazon's AWS AI/ML services, Google Cloud).
- The algorithms are subject to significant issues such as algorithmic bias and false positives.
- This could instigate digital harms for both users and bystanders, from violations of anonymity, privacy, and identity to mass distributed surveillance and behavioral nudging.
- Bystander privacy: XR applications and platforms will be able to instrument the actions, attitudes, and emotions not just of the wearer, but of everyone within their sight or within the sensing range of their equipment and its networks. This covers not only identity recognition (violating the right to anonymity) but also physical and mental privacy (heart-rate monitoring, audio capture, etc.).
- Differential privacy addresses the problem of sharing aggregate data about a group publicly while withholding information about individuals: noise is added so that including or removing any single individual's data barely changes the published result, and thus nothing about that individual can be inferred from it.
- Mental privacy: in biometric psychography, biometric data is used to identify a person’s interests. As a consequence, mental privacy is eroded as well, from low-level brain activity data to inferred behavior and intent.
- Surveillance: The natural limits of human memory once ensured a degree of privacy. Persistent, ubiquitous recording by electronic devices, XR applications, and platform owners, however, can accumulate perfect memories in a centralized database that corporate and state actors could exploit, enabling cybersurveillance (in VR) and surveillance/sousveillance (in AR).
- Manipulation: User behaviors or thoughts could be anticipated and manipulated to the benefit of a third party (the XR platform, apps on the platform, governments), for instance by reverse-engineering fixed action patterns; this undermines the right to agency. It could also reinforce existing bias toward "othered" groups or shape how we think about a politician or political party.
- Super-sensory attacks: Super-sensory capabilities (super-hearing, super-sight) and perfect memories can help overcome impairments, but they can also support sophisticated shoulder-surfing attacks.
- Future use: If this data has been captured by said third parties, further processing and insight into users' lives and behaviors might be generated far into the future, constantly refining a digital twin of their identity.
- Neuro-rights refer to human rights set within neuro-technologies, aiming to enshrine protections regarding identity, agency, mental privacy, exposure to algorithmic bias, and access to augmented intelligence/mental augmentation.
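The SLAM-based positioning described above can be illustrated with a toy sketch. The function below implements only the motion-prediction (dead-reckoning) half of the problem; a real SLAM system would continuously correct this drift-prone estimate against observed landmarks. All names and values here are illustrative, not from the report.

```python
import math

def integrate_pose(pose, v, omega, dt):
    """Advance a 2D pose (x, y, heading) from odometry readings.

    This is only the prediction step of SLAM: it estimates where the
    device is from its own motion, and accumulates drift over time.
    """
    x, y, theta = pose
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return (x, y, theta)

pose = (0.0, 0.0, 0.0)
for _ in range(10):  # drive straight for 1 s at 1 m/s, 10 Hz updates
    pose = integrate_pose(pose, v=1.0, omega=0.0, dt=0.1)
print(pose)  # roughly (1.0, 0.0, 0.0)
```

The "mapping" half, matching sensed geometry (e.g., LiDAR returns) against a map to cancel this drift, is what lets an XR headset render experiences that stay locked to the world.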
Reco #1: XR stakeholders should actively develop and/or support efforts to standardize differential privacy and/or other privacy protocols that provide for the protection of individual identities and data.
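The differential privacy named in Reco #1 can be sketched with the standard Laplace mechanism. The sensor values, predicate, and `dp_count` helper below are illustrative assumptions, not from the report; only the mechanism itself is standard.

```python
import random

def dp_count(values, predicate, epsilon=1.0):
    """Release a count with epsilon-differential privacy.

    Adding or removing one individual changes a count by at most 1
    (sensitivity 1), so adding Laplace(1/epsilon) noise masks any
    single person's presence in the data set.
    """
    true_count = sum(1 for v in values if predicate(v))
    # The difference of two iid Exponential(epsilon) draws is
    # Laplace-distributed with scale 1/epsilon (stdlib only).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

heart_rates = [62, 71, 88, 95, 104, 77, 69]  # toy biometric data
noisy = dp_count(heart_rates, lambda hr: hr > 90, epsilon=0.5)
print(round(noisy, 1))  # true count is 2; output is 2 plus noise
```

Smaller `epsilon` means more noise and stronger privacy; the published aggregate stays useful while no individual reading is recoverable.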
Reco #2: XR platforms should seek to adopt voluntary proposals such as neuro-rights to help ensure that the mental privacy of users is not violated.
Reco #3: XR platforms should disclose (in plain language) and give users agency over what personal data is being captured, how this data is processed and to what ends, and for how long it (and its processed outputs) is retained.
Reco #4: Individuals should have the right to decide how their identity (or representations such as digital twins or augmented appearance) is perceived and appropriated by others in XR.
Reco #5: Where some aspect of bystander data is legally permissible to be captured and processed, bystanders should be made aware that this capture is occurring and should have the capacity to revoke implicit or assumed consent for capture.
Reco #6: Platforms should refrain from enabling the persistent pseudo-anonymous identification or tracking of bystanders and their associated data. Where there is a risk that requested sensor streams enable such tracking and violation of bystander privacy, such streams should be obfuscated by default (making bystanders unrecognizable).
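The obfuscation-by-default that Reco #6 calls for can be sketched as a pixelation pass over a detected bystander region. This is a minimal illustration on a grayscale image held as a list of rows; in a real pipeline the bounding box would come from a person/face detector rather than being supplied by hand.

```python
def pixelate_region(image, box, block=4):
    """Obfuscate a bounding box (x0, y0, x1, y1) in a grayscale image
    by replacing each block-by-block tile with its average intensity,
    making the region unrecognizable while keeping the rest intact."""
    x0, y0, x1, y1 = box
    for by in range(y0, y1, block):
        for bx in range(x0, x1, block):
            tile = [image[y][x]
                    for y in range(by, min(by + block, y1))
                    for x in range(bx, min(bx + block, x1))]
            avg = sum(tile) // len(tile)
            for y in range(by, min(by + block, y1)):
                for x in range(bx, min(bx + block, x1)):
                    image[y][x] = avg

# Toy 16x16 "camera frame"; the box stands in for a detected bystander.
frame = [[(x * y) % 256 for x in range(16)] for y in range(16)]
pixelate_region(frame, box=(4, 4, 12, 12))
```

Applying this before any sensor stream leaves the device would satisfy the "obfuscated by default" posture: applications only ever see the degraded region unless explicit consent lifts it.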
Reco #7: The right to privacy should be extended to protect homes, businesses, and public spaces against real-time surveillance.
Reco #8: Capture and processing of non-personal real-time data regarding public and private spaces needs to be regulated in the same way that personal data is under GDPR.
Reco #9: Where there is a risk of infringing on the privacy of others, any augmented intelligence or perception application should require the consent of the sensed others or provide mechanics such that others in the environment are made aware of, or can automatically opt-out of, such activity.
Reco #10: Where there is a genuine need (such as compensating for an impairment) for a powerful augmented-perception capability that introduces a privacy concern, use of this capability should be sufficiently visible to bystanders that it cannot trivially be misused or abused.
Reco #11: XR platforms need to adopt rigorous control over which sensor APIs applications can use, and over how that data is protected from unintended or unanticipated processing. Where risky access requests occur (e.g., requests for data that, in composite, could enable additional biometric processing), these risks should be mitigated (e.g., by informing users or denying access).
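The composite-risk review in Reco #11 can be sketched as a gatekeeper that denies sensor grants when individually benign sensors combine into a biometric-inference risk. The sensor names and risky combinations below are illustrative assumptions, not a real platform policy.

```python
# Hypothetical sensor-grant gatekeeper: each combination listed here is
# deemed to enable biometric inference and is denied by default.
RISKY_COMPOSITES = [
    {"eye_tracking", "heart_rate"},        # arousal/interest inference
    {"camera", "microphone", "location"},  # bystander identification
]

def review_request(requested):
    """Return ('deny', offending_sensors) if the requested sensor set
    contains a risky composite, else ('grant', [])."""
    requested = set(requested)
    for combo in RISKY_COMPOSITES:
        if combo <= requested:  # all sensors of a risky combo requested
            return ("deny", sorted(combo))
    return ("grant", [])

decision, reason = review_request(["camera", "imu"])
print(decision)  # grant
decision, reason = review_request(["eye_tracking", "heart_rate", "imu"])
print(decision, reason)  # deny ['eye_tracking', 'heart_rate']
```

A production gatekeeper would also cover Reco #11's other mitigation path, surfacing the offending combination to the user for informed consent instead of denying outright.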
Reco #12: Users should be given the tools they need to retain agency over their device, its sensing activity, and client applications using this data. This includes requiring informed consent for risky sensor data and providing continual awareness and feedback regarding device activity.
Reco #13: Companies should strive to adopt leading guidelines regarding XR privacy protections and standards and enforce those standards on their app stores and platforms.
Reco #14: Industry, legislators, and researchers need to define an Extended Reality Privacy Rights Framework that can inform future legislation and provide voluntary standards for XR privacy protections as a stopgap.
Reco #15: Given there will be shortcomings in legislation and guidelines, the rights of victims of digital harms and privacy violations should also be addressed.