Derived Digital Identities (DDIs)

Oscar Okwero
8 min read · Nov 17, 2021
(Image courtesy of nextgov.com)

Introduction.

The use of digital platforms has led to the creation of new forms of identification. Through user interactions with digital media content (likes, shares, tweets, and retweets), social media companies have accumulated large volumes of metadata about the users on their platforms. From the digital trail these interactions leave, it has become possible to derive new digital representations of different groups, interests, and demographics. Using artificial intelligence, big data, and methodologies such as social media analysis, researchers and threat actors alike have been able to construct digital ‘publics’ from the digital exhaust of social media users. These derived digital identities carry enough distinguishing characteristics to identify an individual or a group, much as physical identification does, and can then be used to influence the activities of that entity more effectively.
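To make the derivation concrete, here is a minimal, purely illustrative sketch of the idea: each user's engagement events are tallied per topic, and users are grouped into coarse ‘publics’ by the topic that dominates their digital exhaust. All names, events, and topics below are invented for illustration; real systems use far richer metadata and models.

```python
from collections import Counter, defaultdict

# Hypothetical engagement events: (user, topic, interaction type).
events = [
    ("alice", "climate", "like"), ("alice", "climate", "share"),
    ("alice", "sports", "like"),
    ("bob", "politics", "retweet"), ("bob", "politics", "like"),
    ("carol", "climate", "share"), ("carol", "climate", "like"),
]

def derive_publics(events):
    """Group users into 'publics' by their dominant engagement topic."""
    profiles = defaultdict(Counter)   # user -> per-topic engagement counts
    for user, topic, _interaction in events:
        profiles[user][topic] += 1
    publics = defaultdict(set)        # dominant topic -> set of users
    for user, counts in profiles.items():
        dominant_topic, _ = counts.most_common(1)[0]
        publics[dominant_topic].add(user)
    return dict(publics)

print(derive_publics(events))
# e.g. {'climate': {'alice', 'carol'}, 'politics': {'bob'}}
```

Even this toy grouping shows why such profiles are sensitive: a handful of interaction records is enough to place a user inside an addressable audience segment.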

Official identities issued by governments or humanitarian agencies such as the UNHCR confer legal status, grant access to services, and are accompanied by physical representations of the identity. The derived digital identity (DDI), by contrast, exists only in the virtual realm, with its existence and use fully under the control of the AI algorithms that generate the desired ‘publics’ from big social media data in pursuit of particular information operations goals. And unlike its physical counterparts, which have well-defined rules of use as set out in the World Bank paper “Principles on Identification for Sustainable Development: Toward the Digital Age” 1, the DDI has no formal recognition yet as a representation of identity, even though several data privacy acts implemented around the world allude to the potential misuse of characteristics derived from digital user data 2.

Users are rarely in a position to know that these identities exist or how they might be misused: few read platform terms of service, and even those who do are never told explicitly what is collected and for what purposes. This raises concerns about the privacy and digital safety of users 3. The Cambridge Analytica case exposed how private companies exploit this grey area in digital privacy law to carry out information operations that are technically legal but operationally amount to misuse of digital user metadata 4. Limited only by the capabilities of the AI algorithms and the dimensionality of the user metadata they process, these derived digital identities have the potential to disrupt, both positively and negatively, the processes of identification, the digital rights of the subjects, and the responsibilities of the actors generating or using them.

Potential use cases of DDIs.

· The ability to track communications between threat actors such as terrorist groups by tying every mobile phone registration to a physical identity. This is also useful for following illicit money trails by linking account opening to government-issued ID cards 5.

· Government security agencies can use social media analysis to detect active information operations targeting their citizens, which could easily destabilise internal security 6. With this information, they can then take protective steps.

· Governments can also use social media analysis tools to communicate government policy to the public and gather feedback from the populace by analysing engagement 7.

· Commercial enterprises have been able to use social media analytics to identify potential target markets for their products and position them effectively for optimum gains 8.

· Financial investment firms have harnessed the power of AI and analytics to allocate investments, develop new market derivatives, and even predict market performance 9.

· Civil society and individuals have taken advantage of these derived identities to campaign for causes such as climate change, racial equality, and religious tolerance, among others 10, 11.
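The policy-feedback and market-research use cases above both rest on simple engagement analysis. As a toy sketch of the idea (the lexicon and comments are invented; production systems use trained sentiment models), public comments can be scored against word lists to summarise sentiment:

```python
# Toy lexicon-based sentiment tally over public feedback (illustrative only).
POSITIVE = {"good", "great", "support", "helpful"}
NEGATIVE = {"bad", "unfair", "oppose", "harmful"}

def sentiment_summary(comments):
    """Count positive and negative lexicon hits across all comments."""
    score = {"positive": 0, "negative": 0}
    for comment in comments:
        for word in comment.lower().split():
            word = word.strip(".,!?")
            if word in POSITIVE:
                score["positive"] += 1
            elif word in NEGATIVE:
                score["negative"] += 1
    return score

comments = [
    "Great policy, very helpful!",
    "This is unfair and harmful.",
    "I support the new rules.",
]
print(sentiment_summary(comments))  # {'positive': 3, 'negative': 2}
```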

The risks of DDIs.

· The creation and spread of misinformation and disinformation, which has compromised public health campaigns such as Covid-19 vaccination 12, 13, 14. Misinformation has also destabilised states through targeted political information operations, as occurred in the 2016 US elections.

· Increased polarization through the use of highly customised digital content, based on highly granular DDIs, to achieve information operation objectives 15, 16.

· Expanded digital surveillance by the state, both official and unofficial, under the guise of national security 17.

· Increased censorship of opposing views on social media, or limiting of displayed content based on the recommendations of optimization algorithms. This curtails users’ right to access all available information, as enshrined in Article 19 of the Universal Declaration of Human Rights 18, 19, 20.

· Data breaches, identity fraud, and function creep can put people — especially vulnerable groups — at serious risk of harm 21.

Sample interventions by states to minimise the adverse effects of DDIs.

1. The recent push to repeal Section 230 of the Communications Decency Act in the US would transfer responsibility for published content to the platforms, thereby holding them responsible for maintaining an information sphere favourable to governments 22.

2. In order to have visibility into the information sphere of their territories, countries such as China and Russia have implemented national alternatives to Western social media. VKontakte 23, the Russian version of Facebook, and Weibo 24, the Chinese equivalent of Twitter, give citizens of these countries an experience similar to that offered by their Western counterparts while also providing the state with visibility into the info-sphere.

3. The EU has established the East StratCom Task Force, whose main task is to counter Russia’s ongoing disinformation campaigns 25.

4. Civil society groups have also joined hands to fight misinformation. These include the Poynter Institute’s International Fact-Checking Network 26 and the Kyiv Mohyla Journalism School’s StopFake.org project 27, which has proved very effective at exposing false narratives online, and in particular at detecting Kremlin-backed misinformation activities. Others include pro-democracy projects such as the Alliance for Securing Democracy’s Hamilton 68 dashboard 28 and the Atlantic Council’s Digital Forensic Research Lab 29.

Policy controls in the generation, use and destruction of DDIs.

1) Governments and technology companies should set minimum thresholds for the sharing and processing of digital data to ensure it is not abused to build digital ammunition that compromises the freedoms and rights of users. This can include setting technical standards for the APIs and algorithms that process this data, making misuse harder 30.

2) The digital platforms can also set strict terms of use for their publicly accessible data to prevent its use in designing discriminatory models, which would go against the principle that an identity system be free from discrimination, as stated by the World Bank consortium on identification for sustainable development.

3) Unlike traditional identity systems, which have defined methods of acquisition and processing of personal data and defined rights assigned to the identity holder, DDIs are held mainly by private entities that access big data from the social media firms and process it according to their needs. It is hence difficult to enforce controls on how much metadata is accessed and processed. This can be mitigated by governments prohibiting social media sites from acquiring highly sensitive user data, such as biometrics or genomic data, which could be gravely misused given the lack of oversight 31.

4) Social media sites should explicitly tell their users what data is collected, what metadata could be derived from it, and what threats this processing poses. They should also allow users to withdraw consent while retaining access to the platforms, which have become critical extensions of social interaction, as Facebook did after the Cambridge Analytica debacle.

5) To prevent an upsurge of harmful information operation activities based on DDIs, nations and international organizations such as the UN should set acceptable standards of use in cyber warfare to protect civilians from harmful activities such as cyberbullying, cognitive manipulation, and discrimination 32.

6) Search engines and social media platforms should have a technical ‘kill switch’ that can be used to halt active information operations on their platforms that target users based on DDIs generated from user metadata.
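One way such a halt could work, sketched here with entirely invented names and interfaces: every content-targeting path checks a shared kill-switch flag before serving DDI-targeted content, so operators can stop an active operation platform-wide with a single action.

```python
import threading

class KillSwitch:
    """Platform-wide flag that targeting pipelines must check (illustrative)."""
    def __init__(self):
        self._halted = threading.Event()

    def activate(self):
        self._halted.set()      # halt all DDI-based targeting

    def is_active(self):
        return self._halted.is_set()

kill_switch = KillSwitch()

def serve_targeted_content(user_ddi, content):
    """Refuse DDI-based targeting whenever the kill switch is active."""
    if kill_switch.is_active():
        return None             # caller falls back to non-targeted content
    return f"{content} -> {user_ddi}"

print(serve_targeted_content("public:climate", "ad-123"))  # ad-123 -> public:climate
kill_switch.activate()
print(serve_targeted_content("public:climate", "ad-123"))  # None
```

A thread-safe flag is used because real targeting pipelines serve requests concurrently; the design choice that matters is that the check sits in the serving path itself, not in a periodic audit.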

References.

1. Anderson, M. (August 2020). Most Americans Think Social Media Sites Censor Political Viewpoints. Available at: <https://www.pewresearch.org/internet/2020/08/19/most-americans-think-social-media-sites-censor-political-viewpoints/>.

2. Auxier, B. (December 2020). Social media continue to be important political outlets for Black Americans. Available at: <https://www.pewresearch.org/fact-tank/2020/12/11/social-media-continue-to-be-important-political-outlets-for-black-americans/>.

3. Barrett, P. et al. (September 2021). How tech platforms fuel U.S. political polarization and what government can do about it. Available at: <https://www.brookings.edu/blog/techtank/2021/09/27/how-tech-platforms-fuel-u-s-political-polarization-and-what-government-can-do-about-it/>.

4. Bosquete, C. (2018). Mining social media data for policing, the ethical way. Available at: <https://www.govtech.com/public-safety/mining-social-media-data-for-policing-the-ethical-way.html>.

5. Centola, D. (October 2020). Why Social Media Makes Us More Polarized and How to Fix It. Available at: <https://www.scientificamerican.com/article/why-social-media-makes-us-more-polarized-and-how-to-fix-it/>.

6. Digital Forensic Research Lab. Available at: <https://www.atlanticcouncil.org/programs/digital-forensic-research-lab/>.

7. Doffman, Z. (November 2019). Your Social Media Is (Probably) Being Watched Right Now, Says New Surveillance Report. Available at: <https://www.forbes.com/sites/zakdoffman/2019/11/06/new-government-spy-report-your-social-media-is-probably-being-watched-right-now/?sh=29be94dc4f99>.

8. Enderle, R. (2011). Internet ‘Kill’ Switch: Balancing Security and Freedom. Available at: <https://www.darkreading.com/risk/internet-kill-switch-balancing-security-and-freedom>.

9. EU East StratCom Task Force. Available at: <http://www.tepsa.eu/wp-content/uploads/2015/12/Kimber.pdf>.

10. Gladston, I. & Wing, T. (August 27, 2019). Social Media and Public Polarization over Climate Change in the United States. Available at: <https://climate.org/social-media-and-public-polarization-over-climate-change-in-the-united-states/>.

11. Hamilton 68 Dashboard. Available at: <https://securingdemocracy.gmfus.org/hamilton-dashboard/>.

12. Harris, M. (July 2017). Impact of Artificial Intelligence and Machine Learning on Trading and Investing. Available at: <https://towardsdatascience.com/impact-of-artificial-intelligence-and-machine-learning-on-trading-and-investing-7175ef2ad64e>.

13. Hubler, J. (2020). Free speech and the internet. Available at: <https://www.brookings.edu/blog/techtank/2018/09/21/regulating-free-speech-on-social-media-is-dangerous-and-futile/>.

14. Human Rights Watch. (April 2018). US should create laws to protect social media users’ data. Available at: <https://www.hrw.org/news/2018/04/05/us-should-create-laws-protect-social-media-users-data#>.

15. IBM. What is social media analytics? Available at: <https://www.ibm.com/topics/social-media-analytics>.

16. International Fact-Checking Network. Available at: <https://www.poynter.org/ifcn/>.

17. Menczer, F. & Hills, T. (2020). Information overload helps fake news spread, and social media knows it. Available at: <https://www.scientificamerican.com/article/information-overload-helps-fake-news-spread-and-social-media-knows-it/>.

18. Meserole, C. (2018). How misinformation spreads on social media and what to do about it. Available at: <https://www.brookings.edu/blog/order-from-chaos/2018/05/09/how-misinformation-spreads-on-social-media-and-what-to-do-about-it/>.

19. Norton. Social media identity theft. Available at: <https://us.norton.com/internetsecurity-crm-identitytheftprotection-social-media-identity-theft-how-to-protect-yourself.html>.

20. OECD. Social media use by governments. Available at: <https://read.oecd-ilibrary.org/governance/social-media-use-by-governments_5jxrcmghmk0s-en#page30>.

21. Patterson, D. (December 2020). What is Section 230 and why do lawmakers want it repealed? Available at: <https://www.cbsnews.com/news/what-is-section-230-and-why-do-so-many-lawmakers-want-to-repeal-it/>.

22. Siripurapu, A. & Merrow, W. (February 2021). Social Media and Online Speech: How Should Countries Regulate Tech Giants? Available at: <https://www.cfr.org/in-brief/social-media-and-online-speech-how-should-countries-regulate-tech-giants>.

23. StopFake project. Available at: <https://www.stopfake.org/en/about-us/>.

24. Suciu, P. (August 2021). Spotting misinformation on social media is increasingly challenging. Available at: <https://www.forbes.com/sites/petersuciu/2021/08/02/spotting-misinformation-on-social-media-is-increasingly-challenging/?sh=5b68893e2771>.

25. Thacker, D. Biometrics in Social Media Apps: Opportunities and Risks. Available at: <https://www.bayometric.com/biometrics-in-social-media-apps/>.

26. The GDPR. Available at: <https://gdpr-info.eu/>.

27. The Guardian. (March 2017). Click to agree with what? No one reads terms of service, study confirms. Available at: <https://www.theguardian.com/technology/2017/mar/03/terms-of-service-online-contracts-fine-print>.

28. The RAND Corporation. Monitoring Social Media: Lessons for Future DoD Social Media Analysis in Support of Information Operations. Available at: <https://www.rand.org/pubs/research_reports/RR1742.html>.

29. United Nations. (2019). Protecting People in Cyberspace: The Vital Role of the United Nations in 2020. Available at: <https://www.un.org/disarmament/wp-content/uploads/2019/12/protecting-people-in-cyberspace-december-2019.pdf>.

30. VKontakte. Available at: <https://vk.com/?lang=en>.

31. Weibo. Available at: <https://weibo-com.translate.goog/login.php?_x_tr_sl=zh-CN&_x_tr_tl=en&_x_tr_hl=en&_x_tr_pto=nui,sc>.

32. Wired. (2019). How Cambridge Analytica sparked the great privacy debate. Available at: <https://www.wired.com/story/cambridge-analytica-facebook-privacy-awakening/>.

33. World Bank. Principles on Identification for Sustainable Development: Toward the Digital Age. Available at: <https://documents1.worldbank.org/curated/en/213581486378184357/pdf/Principles-on-Identification-for-Sustainable-Development-Toward-the-Digital-Age.pdf>.
