Abstract

The lack of prioritization and synergy between international leaders and governments in developing policies and programs that support child online protection results in a significant increase in vulnerabilities and risks in children’s lives. A 2016 UNICEF survey estimated that one in three children globally is already an internet user and that one in three internet users is a child (under 18 years of age). The Global Kids Online comparative report describes an increasing risk to children’s safety and outlines their vulnerability to unsuitable material, abuse, and exploitation. The latest statistics from the International Telecommunication Union (ITU) show that young people aged 15–24 are the main users of the internet worldwide.

Even where background studies of the vulnerability and risk to children in cyberspace exist, policies and ethical standards still overlook this clear evidence. In this article, we analyze the relevant literature on this topic and introduce recommendations as possible means to address the current challenges of early risk identification. We show that existing data, theories, and methods contain sufficient information to answer two challenging questions:

  • Have the proposed international initiatives fostered sufficient coordination and synergy to regulate ethical cyberspace?
  • Are tech giants applying ethical standards in the use of Artificial Intelligence (AI)?

Risks in Children’s Cyberspace

A UNICEF study, Children’s Reporting of Online Risks, states that up to a third of children had been exposed to some risk online and had been upset by it. Children have encountered hate speech and other risks, including sexual content, manipulation, indoctrination, self-harm and suicide content, violent content, being treated in a hurtful way, meeting an online contact face to face, theft, and, in particular, sexual exploitation. Each risk is rising sharply. More details about online risk content indicators are shown in Figure 1.

Figure 1 – Children (%) who have been exposed to online risks, by country. Source: Global Kids Online comparative report, UNICEF (2019), p. 51. Some countries, including Argentina, are omitted from this graph because of missing data.

A policy brief entitled “Fostering a Safer Cyberspace for Children”, proposed to the G20, outlines a useful classification of cyber risks into three main areas – the three Cs: Content risks, Contact risks, and Conduct risks. This classification reaffirms the areas already researched by the OECD in 2011 and organizes risks into categories, adding Consumer Risks, as shown in Figure 2.

Figure 2 – The types of risk described, including those that cut across all risk categories (“Cross-cutting Risks”). Source: OECD and the Berkman Klein Center for Internet and Society at Harvard University.

It is important to emphasize that risks from advanced technologies such as AI cut across all identified categories. These are considered very problematic, significantly affecting children’s lives and well-being in many ways. If a child sees one type of harmful content, that child is more likely to report seeing other types of harmful content as well.

Increasing Risks in Children’s Cyberspace in Recent Years

In the first six months of 2020, 44% of all the child sexual abuse content that the Internet Watch Foundation (IWF) dealt with involved images filmed by the victims themselves, a sharp increase from the 29% reported in 2019. According to the IWF, much of this abuse is happening in children’s own homes while their parents or caregivers are in another room. September 2020 was a record month, with 15,000 reports from the public, 5,000 more than in the same month of 2019.

Also in 2020, tech companies in the US reported a sharp increase in online photos and videos of children being sexually abused: a record 45 million illegal images were flagged in that year alone, exposing a system at a breaking point, unable to protect children or to keep up with offenders who exploit technology to harm them (investigation by The New York Times). Apple’s response followed a 2019 investigation that revealed a global criminal underworld exploiting flawed and inadequate efforts to control the explosion of child sexual abuse imagery. However, AI well used by tech giants can protect children online: an AI-based system, combined with other specialized tools, can sort through this data and identify the images and videos showing child sexual abuse.
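To make the matching step concrete: in practice, systems such as Microsoft’s PhotoDNA compare fingerprints of uploaded images against databases of hashes of previously identified abuse imagery maintained by bodies like the IWF and NCMEC. The sketch below is a minimal, hypothetical version using exact cryptographic hashes; the hash list and function name are illustrative assumptions, not any vendor’s actual API, and real deployments use perceptual hashes that still match cropped or re-encoded copies.

```python
import hashlib

# Hypothetical hash list standing in for the databases of known abuse
# imagery maintained by organizations such as the IWF or NCMEC.
KNOWN_HASHES = {hashlib.sha256(b"previously identified image").hexdigest()}

def flag_known_image(image_bytes: bytes) -> bool:
    """Return True if the upload exactly matches a known hash.

    Production systems use perceptual hashing (e.g., PhotoDNA) so that
    resized or re-encoded copies still match; SHA-256 here only catches
    byte-identical files.
    """
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

# An upload pipeline would run this check before storing user content.
print(flag_known_image(b"previously identified image"))  # True
print(flag_known_image(b"an unrelated photo"))           # False
```

The design choice worth noting is that the platform never needs to hold the abusive material itself, only the hash list, which is why this approach is widely shared across companies.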

The lack of prioritization by governments and law enforcement agencies in the fight against child sexual exploitation online, in the jurisdictions surveyed by the UNICEF Office of Research, is highlighted in Investigating Risks and Opportunities for Children in a Digital World. This report identifies increases in the live streaming of abuse from home environments, which may be due to financial hardship and the restricted opportunities for criminals to travel during strict lockdowns. In addition, a comparative study has shown that many parents feel unsure about, or not competent to manage, internet use by digitally savvy children in the face of complex and rapidly evolving technologies.

Despite the reports published by ECPAT, the current initiatives and proposals of international organizations are insufficient in the context of a hyper-connected environment where children interact online through web browsing, education, social networks, games, and entertainment websites and applications. As a result, vulnerabilities and risks have increased, and the lack of coordination and synergy between international organizations and governments has been exposed. Therefore, all appropriate agencies must play a part; these include global leaders’ summits (such as the G20), international organizations including NGOs, tech giants, online platforms, governments, law enforcement agencies, academic experts, policymakers, religious figures, and civil society. To ensure children’s safety online, we must promote ethical behavior at the strategic, tactical, and operational levels.

Safe use of digital technology is a key factor in establishing children’s rights, as outlined in the United Nations Convention on the Rights of the Child and consistent with the United Nations Sustainable Development Goals. Developing policy and program responses requires a robust understanding of how children use the internet and of the risks of harm they may face online. The findings of such studies can significantly contribute to policies and programs that protect children and support teachers, law enforcement officers, and others. This is summed up in the review Investigating Risks and Opportunities for Children in a Digital World: A rapid review of the evidence on children’s internet use and outcomes. In other words, a framework that simply limits children’s access to the internet would be ethically wrong. Children have the right to explore the digital environment and to be educated in the rules and protocols of safe cyberspace.

A Global Framework of Tech Inequalities Needs Digital Transformation

Technology has brought, and will continue to bring, enormous benefits, but technological developments also create new harms. The investigation Towards the Digital Stability Board for a digital Bretton Woods identifies five areas of “technological divide”: the access, knowledge, trust, market-power, and distribution divides. All of these raise fundamental questions about the global framework of inequalities.

Disinformation related to false cures, ransomware attacks, and social engineering was found to be rampant. Sifting through the barrage of information online and determining what is accurate can be extremely difficult, and bad actors use this confusion to their advantage. This is what is called the trust gap. Regulation is so sorely lacking in these areas that a comprehensive rethink by all parties is needed.

Artificial Intelligence Algorithms for Preserving Children’s Privacy

Children constantly use digital applications that utilize AI, such as social media and face filters. There is growing concern that the algorithms used by the tech giants are not always fair in the decisions they make; errors in facial recognition systems are one example. AI algorithms used by social media platforms must be designed to preserve children’s privacy, encourage appropriate behavior, and limit online tracking, so that the commitments of those developing the algorithms align with international conventions and regulations on child protection online. These algorithms must conform to clearly defined ethical standards. Because of their global reach, intergovernmental organizations have a crucial role in leading efforts to create a safer cyberspace for children. A policy brief presented to the G20 makes a number of recommendations aimed at coordination strategies and synergies among the member states to achieve these stated aims.
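One way to make the design goal concrete is data minimization by default for minors. The sketch below is a hedged illustration of such a policy, turning off behavioral tracking and ad personalization when an account belongs to a child; the Profile fields, the age threshold, and the function name are illustrative assumptions, not any platform’s real settings model.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    user_id: str
    age: int
    behavioral_tracking: bool = True  # hypothetical adult defaults
    personalized_ads: bool = True

def apply_child_safeguards(profile: Profile) -> Profile:
    """Illustrative policy: minors get tracking and ad targeting off by default."""
    if profile.age < 18:
        profile.behavioral_tracking = False
        profile.personalized_ads = False
    return profile

child = apply_child_safeguards(Profile(user_id="u1", age=12))
print(child.behavioral_tracking, child.personalized_ads)  # False False
```

Making the safe configuration the default, rather than an opt-in, is the core idea behind age-appropriate design codes: the burden of action falls on the platform, not on the child.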

Concern has been raised about the absence of clear regulation and protection with reference to the effects of COVID-19 on children’s privacy, as noted in a report by the OECD. This report highlights that, because of the acquisition, usage, reuse, and exposure of personal data, the widespread use of e-learning platforms (typically privately managed) can jeopardize children’s privacy. Online platforms that offer video conferencing services and are extensively used for education may acquire personal data in an unauthorized manner, resulting in privacy violations.

In addition, the increase in improvised student-teacher interactions on social media platforms and applications, without prior rules or accepted protocols, leaves open vulnerabilities regarding personal data and privacy protection. This may disproportionately and negatively affect children born into already difficult economic and adverse social conditions. Evidence of children’s experiences of the permanent risks of cyberspace underlines the urgency of policy development and decision-making that strengthen the best interests of children as defined in the Convention on the Rights of the Child, the principal legal framework on children’s rights.

The next generation of children will be significantly affected by advanced technologies, as they will be born and raised in an era of big data, machine learning systems and deep learning robots that will make decisions related to everything to do with their lives: education, entertainment activities, social interactions and so forth.

According to a study by the Human Rights Center at the UC Berkeley School of Law, “seventy percent of the global economic impact of AI by 2030 will be gained by North America and China, while developing countries in Asia, Latin America and Africa will see less than six percent of the overall gain”.

In this context, to implement any proposed rules and protocols, multiple stakeholders need effective evidence-based coordination and synergy on how children use the Internet and how much safety multilateral regulations currently offer. The safety of children in cyberspace must cross borders.

Strategies to Build Security in Internet Architecture

In an interesting article entitled “Getting beyond Norms: New Approaches to International Cyber Security Challenges”, the authors ask how we should address the challenges of today’s cybersecurity environment. The piece shows the need to clean up network infrastructure by means such as “name and shame”. It underscores that internet service providers must better protect their networks and, by working with international standards bodies, build security into the core technology and architecture of the internet. The risks posed by vulnerable IoT devices could be mitigated through regulations and standards or through the introduction of product liability.

Unfortunately, many countries continue to prefer a kind of “digital sovereignty”, which takes different forms according to local circumstances and cultural traditions. As always, there are powerful vested interests at play at the institutional, national, and regional levels, which continue to resist international collaboration. The US focuses its efforts on the private sector, particularly tech giants such as Facebook and Google, and these platforms set their own terms, conditions, and procedures for enforcement. The EU’s General Data Protection Regulation (GDPR), by contrast, focuses on essential laws governing personal data protection, market domination by digital platforms, and the promotion of individual rights.

Analyzing International Laws, Conventions and Regulations on Ethical Standards in Cybersecurity

The application of strict regulations and sanctions to the platforms of technology giants that do not implement international laws, conventions and regulations would make for a healthier and more ethical cyberspace.

We already know the benefits that technology offers. What remains uncertain is how to protect our privacy when entering virtual spaces governed by different international laws and regulations. These regulatory frameworks offer safe guidelines to avoid risks and vulnerabilities, but they are limited by inconsistencies between the laws and protocols of different countries. The following table summarizes international rules and regulations governing data protection in cyberspace:

Table 1. International laws, rules and regulations in the cyberspace

Laws, Rules and Regulations | Jurisdiction
GDPR – General Data Protection Regulation | European Union (EU) and the European Economic Area (EEA)
ICO – Information Commissioner’s Office | United Kingdom
COPPA – Children’s Online Privacy Protection Rule | United States
PDPB – Personal Data Protection Bill | India
CCPA – California Consumer Privacy Act | California, United States
Oversight Board – Facebook Content Oversight Board | United States
CIGI – Centre for International Governance Innovation | Canada
Ley 25.326 – Protection of Personal Data | Argentina

Empowering Multiple Stakeholders through Collaboration

As UNICEF notes, children and young adults are already interacting with AI technologies in many different ways: using virtual assistants, video games with embedded AI, Facial Recognition Technology (FRT) apps, adaptive learning software, and chatbots (e.g., GPT-3-based bots that can analyze, understand, and respond to customer questions, predict needs based on a single word, and enhance their human-like responses with every interaction). Algorithms recommend what videos to watch next, what news to read, what music to listen to, and who to be friends with. In addition, real-time technology collects data and images that are stored in the cloud.

However, there is a positive trend in the application of recommended ethical standards and of advanced technologies that help reduce vulnerabilities; many of these come from academic sources. Fostering multidisciplinary spaces for discussion and collaboration, in schools or in informal settings such as international workshops, would heighten awareness of cyberspace according to the context, age, and profile of the user. Building ideas and discussion strategies led by children and young students will allow them to participate, share their own experiences, and promote their own learning and needs on the subject of cybersecurity. Moreover, we should foster spaces to listen to children’s perspectives on the ethics of particular AI systems, such as automated screening of applications or chatbots, and ask them how they feel about the impact of AI systems on their lives.

Recently, IEEE published a standard (IEEE 2089) that addresses age-appropriate design for children’s digital services. The standard creates a framework for organizations to recognize and respond to the needs of children and youth. It is the first in a series of guidelines that will allow enterprises, such as social media platforms, to design age-appropriate digital products and services. It was produced by a working group of professionals from academia, industry, design science (with a focus on interface design and children), software engineering, law enforcement, government agencies, and social policy, among other fields. TikTok has already implemented a number of policy changes, along with plans for new features and technologies, aimed at making its video-based social network safer and more secure for kids. California has just introduced the Age-Appropriate Design Code Act to protect children online, demonstrating the need for and importance of this standard. While these efforts are commendable, they are insufficient to address the issues children face in online environments, and more international cooperation is needed.

The Way Forward

It is recommended that intergovernmental organizations, including the UN and the G20, work in collaboration to address the ethical challenges in technologies built for children and young people. The set of child-centered cyberspace policies presented in previous works, including Fostering a safer cyberspace for children and Preliminary Policy Recommendations, should be adopted as a reference. If these policies are implemented properly, they would create a trustworthy and safe online environment for children.

There is also a dire need to strengthen helplines for children and adolescents with more resources and tools, including those provided by UNICEF and Child Helpline International. UNICEF, UNESCO, and other international organizations, universities, and laboratories should be supported in coordinating work on the ethical implications of AI technologies, to preserve children’s privacy online and their rights over data use by tech companies.

It is imperative to establish a global governance architecture for big data, artificial intelligence, and digital platforms, especially as the world fragments into data domains, as suggested by the proposed Digital Stability Board (DSB). Spreading knowledge about ethical standards such as IEEE 7000-2021 and IEEE 2089 will also play a pivotal role in minimizing ethical risks for children.

Conclusion

The existence of different international regulatory frameworks aimed at protecting children’s rights on digital platforms is undeniable. While tech giants apply these ethical standards to varying degrees, the increased use of cyberspace at an early age needs a stronger framework coordinated by the international community. We have confidence in the use of advanced technologies such as AI: used ethically, they can be a useful resource for facilitating complex validation tasks. Raising awareness in society is not enough in itself; the effort must also be grounded in technological architecture. The role of international leaders and institutions is therefore essential to promote and encourage existing initiatives that enable the prioritization and synergy of cybersecurity experts, so that they can take immediate action against cyber threats and prevent their further spread, especially for children and young users.


Muhammad Khurram Khan is the Editor-in-Chief of Cyber Insights Magazine. He is founder and CEO of the Global Foundation for Cyber Studies and Research, USA. He is also a professor of Cybersecurity at the Center of Excellence in Information Assurance at KSU.


Silvia Lanza Castelli holds a Master’s in Strategic Management in Software Engineering and is an Information Systems Engineer. She is a teacher-researcher at UTN Córdoba.


Paul Grainger is an Honorary Senior Research Associate at UCL, London, a member of the G20 Task Force on Education for the Digital Age, Co-Chair of TF 6 on Economy, Education, Employment, and the Digital Age, and Co-Chair of TF 4 on Digital Transformations.
