
Much that is written in the cyber field concerns rapid developments in technical capacity and the urgent need for regulations and protocols to keep pace with them. To have traction, such regulations must be based not only on efficiency and technical dexterity, but on sound moral principles. This piece is a plea to go beyond the knee-jerk patching of abuses, and to encourage a global debate around the ethical issues raised by the enhanced human behaviour that this splendid new technology enables. How can we support the aspirations of organisations such as UNESCO, the United Nations and the G20? There is no magic in cyberspace, and there are no new moral dimensions, but there is an enhanced capacity to amplify the virtues and vices which have always been a consistent presence in the human condition.

Analogies can be drawn from major technological advances of the past. From the mid-15th century, the development of moveable type and the growing circulation of broadsheets greatly extended the capacity for the circulation of gossip and defamation. During the early 17th century, the laws of libel were progressively developed to counter this. These laws formed the basis of defamation law in the USA. In 1735 John Peter Zenger, of New York, was cleared of libel in a landmark case that revolved around the issue of ‘truth’ (Hudson, F., 1873, Journalism in the United States, from 1690 to 1872, New York: Harper & Bros, p. 82). By the early 19th century, in the first industrial revolution, the new power of steam enabled ever bigger factories, which then exploited vulnerable populations including children. In the UK, it took 142 years, from the Cotton Mills and Factories Act of 1819 to the consolidating Factories Act of 1961, to finally combat such mistreatment as society struggled to control exploitation by the unscrupulous. As new technologies emerge, the capacity for exploitation is enhanced until society, at some point, is able to develop a new ethical framework. This is now urgent. Constants such as truth and exploitation, and questions such as ‘what is a good action?’ and ‘what is justice?’, are not new, but the cyber world has introduced a greatly enhanced potential for their abuse and neglect.

My colleague and co-author, Silvia, undertook a quick survey of her, admittedly not representative, network of friends, colleagues and students. She posed the question: Do you consider ethical aspects important in the development of artificial intelligence?

We were disappointed to find that 20% neither agreed nor disagreed. In a further, more general, survey, which was opened by 83 contacts, only 5 responded, indicating a general level of apathy. This is lamentable, and it points to a weakness in the training of those who design, build and maintain our cyberspace. Once again, in an unscientific and random way, we conducted a review of technical training courses at a sample of universities. Academic training opportunities in the field of Artificial Intelligence (AI) have grown, but unfortunately little space in the curriculum is dedicated to social and ethical issues. Our research indicates that AI developers do not value the ethical implications of their work; these are crowded out by technical and business concerns. Given the expansion of AI, with increasingly autonomous systems (robots and software bots) in the processes of our daily lives, we argue that ethical considerations are paramount and that we should, in our colleges and universities, dedicate more space to research into, and discussion of, these issues. There is still a deficiency in linking ethical principles with the practice of AI, and such grey areas favour cybercrime. AI encompasses several areas that simulate human intelligence but remain unregulated and vulnerable to manipulation. The pandemic has accelerated our dependence on AI without promoting a proper ethical framework for the protection of citizens.

This concern is identified in a Policy Brief to the 2020 G20 Summit:

One of the biggest unexpected consequences of the COVID-19 pandemic – and the large number of business and school closures associated with efforts to curb the virus – has been the speeding up of technological transformation. (”Heightening cybersecurity to promote safety and fairness for citizens in the post-covid-19 digital world”, Muhammad Khurram Khan, Global Foundation for Cyber Studies and Research, USA; Paul Grainger, UCL, UK; Bhushan Sethi, PwC US; Stefanie Goldberg, PwC US)

Our G20 colleague, Muhammad Khurram Khan, noted an enhanced vulnerability due to the increased working from home:

“Among many other possibilities in the Twitter hack, the work-from-home policy could be one of the reasons in the security breach as it is easier for hackers to exploit vulnerabilities and launch social engineering attack in less-controlled environments.

“According to a survey of 6,000 employees conducted by the cybersecurity company Kaspersky, 73% of employees working remotely have not yet received any cybersecurity awareness guidance or training from their employers” (July 17, 2020).

Among many areas of vulnerability concern are:

  • Data integrity, security and confidentiality
  • Privacy of the individual in cyberspace, and intrusion into personal lives
  • Databases in the cloud
  • Banking transactions
  • E-commerce
  • Videoconference platforms
  • Guidelines and protocols on safety issues
  • The cost of security

For example, cloud data applications have increased, so databases in the cloud must meet proper standards if their structures are to be secure. As Cloud Security Standards: What to Expect & What to Negotiate, Version 2.0 (August 2016) puts it, such standards ‘and their support by prospective cloud service providers and within the enterprise should be a critical area of focus for cloud service customers’.

As a further example, our data is being harvested all the time: our phones report what we say, and Alexa shares our gossip and indiscretions with her masters in the ether. This intrusion lies well outside any ethical framework.

In 2019, the European Commission produced its Ethics Guidelines for Trustworthy AI (April 2019).

This is a ground-breaking and thorough piece of work which makes a major contribution to the process of developing trustworthy AI.

It identifies three components which should be met throughout a system’s entire lifecycle:

  1. it should be lawful;
  2. it should be ethical, and
  3. it should be robust.

Of necessity this involves human agency and oversight. Given the complexity of recent systems, this is something we are in danger of losing sight of. This is a matter of great concern. The Ethics Guidelines identify four significant ethical principles:

  1. Respect for human autonomy
  2. Prevention of harm
  3. Fairness
  4. Explicability

The Task Force “Communiqué” (T20 summit season, 17 September – 1 November 2020) indicates areas that can be further developed within the G20 process. Two paradigms have emerged:

  1. the need to work together, globally, is greater than ever; and
  2. coordinated action must embrace structural reform.

This need for multilateral action is not seen as a justification for authoritarianism. What is called for is decentralised but coordinated action, and this needs to be cross-sectoral. Previous attempts at regulation have been localised and sectoral. Global structural reform should not be seen simply in terms of institutional arrangements, but must be underpinned by new forms of social contract.

Two splendid contributions to the 2020 G20 help to propel us in this direction:

Paul Twomey and Kirsten Martin’s “A Step to Implementing the G20 Principles on Artificial Intelligence: Ensuring Data Aggregators and AI Firms Operate in the Interests of Data Subjects“ (April 3, 2020; last updated April 6, 2020) calls for coordinated global action to enhance cyber security.

So too does “Fostering a Safer Cyberspace for Children” (Task Force 6: Economy, Employment, and Education in the Digital Age; Muhammad Khurram Khan, Omaimah Bamasag, Abdullah Ayed Algarni, Mohammad Alqarni), which stresses the particular vulnerability of children to online abuse.

In 2019, UNESCO, in an influential report, “Preliminary study on the technical and legal aspects relating to the desirability of a standard-setting instrument on the ethics of artificial intelligence”, laid down some parameters for achieving more ethical AI.

It is most important to apply Ethically Aligned Design in AI and other autonomous, intelligent systems (AIS) because this makes it possible to address ethical issues at a moment when the technology can still be adapted.

On the basis of its analysis of the potential implications of Artificial Intelligence for society, the Working Group would like to suggest a number of elements that could be included in an eventual Recommendation on the Ethics of AI. These suggestions embody the global perspective of UNESCO, as well as UNESCO’s specific areas of competence.

Most recently, the “Carbis Bay G7 Summit Communiqué“ (G7, Cornwall, UK, June 2021) indicated the leaders’ concern to promote ethical AI, committing to:

a sustained strategic priority to update our regulatory frameworks and work together with other relevant stakeholders, including young people, to ensure digital ecosystems evolve in a way that reflects our shared values.

It would be unfortunate, and undermining to our democratic and social institutions, if AI continued to develop without a suitable ethical framework. The pandemic has accelerated the application of AI and, with it, the potential for human wellbeing. But it is important that we encourage the embedding of ethics in the analytical capacity of engineering. Artificial intelligence and digitisation must be ethically based, and this should be observed by those who work in the area. It is important that they are encouraged to follow the instruments, frameworks and standards established by organisations such as UNESCO and the G20. These principles have been established to accelerate the development of an ethical perspective that acts within the scope of the United Nations Sustainable Development Goals.


Paul Grainger is an Honorary Senior Research Associate, G20 Task Force on Education for the Digital Age, Co-Chair of TF 6 on Economy, Education, Employment, and the Digital Age, and Co-Chair of TF 4 on Digital Transformations.


Silvia Lanza Castelli holds a Master’s in Strategic Management in Software Engineering and is an Information Systems Engineer. She is a teacher-researcher at UTN Córdoba.
