The Moral Compass of AI: Navigating the Ethics of Artificial Intelligence
Amid the rapid digital transformation across industries, AI is reshaping workflows and competences in sectors from healthcare and finance to manufacturing and transportation. However, as AI becomes more advanced and widespread, concerns about the ethical implications of its use are growing. In recent years there has been a significant focus on the current state of AI and ethics, with many experts highlighting the need for ethical guidelines and standards to ensure that AI is used in a responsible and fair manner. Unsurprisingly, Europe has been at the forefront of this global initiative, and Slovenia has led the way by chairing the Ad hoc Committee on Artificial Intelligence (CAHAI), which investigated a legal framework for the development, design and application of AI.
When I first started working in Healthcare IT in 2016, it was striking to see how different a world it is, largely due to the heavy weight of ethical assessment. From the IT developer's perspective, this meant significant delays and some demotivation on most fronts. Perhaps because of that, when in 2019 (just before the pandemic and the digital transformation it pushed through the healthcare sector) I had a publication accepted at the prestigious European Public Health Conference, I was quite surprised to find I was one of the few presenting AI-related topics among the thousands of participants attending in Marseille that year. Yes, ethics can be demotivating for hot-blooded IT developers, but it is surely a small price to pay for a fair and sustainable future. Don't you agree?
The European Commission (EC) has also given great importance to ethics in the context of its granted projects: already during the Horizon 2020 programme, projects were requested to provide official statements on their ethical aspects, and often to add a dedicated work package ensuring progress on these matters. It was thus to be expected that the recently released EU AI Act would have something to say on the subject, advising the AI community to ensure privacy and personal data are protected, both when building and when running an AI system, and to allow citizens full control over their own data, ensuring it is not used to harm or discriminate against them. One of the core principles driving the EU guidelines on the ethics of AI is that the EU must develop a 'human-centric' approach to AI, respectful of European values and principles. And shouldn't we reserve the centre place for the Human in this equation?
Indeed, the topic of Ethics & AI is a world of its own. The most relevant points, fairly well known, publicly discussed and central to current political priorities, are as follows:
- Surveillance: the use of AI for facial recognition and predictive policing can infringe on individual privacy and civil liberties, highlighting the need for ethical frameworks and standards in these areas.
- Autonomous weapons (already in use in military operations): the ethical implications of AI must be carefully considered, with guidelines and standards to ensure that AI is used in a responsible and fair manner.
- Bias: a major ethical concern, as algorithms can perpetuate and even amplify existing inequalities if trained on biased data, affecting judicial and legislative procedures or even employment pipelines.
One of the key challenges in the current state of AI and ethics is the potential for bias in AI algorithms, which are only as unbiased as the data they are trained on: if the training data is biased, the resulting algorithm will be biased too. Such bias can have significant ethical ramifications, especially in fields like hiring and lending, where biased algorithms can uphold and even amplify existing inequalities (a classic example is Amazon's recruiting tool, reported in 2018 to show bias against women). Addressing this challenge requires a focus on building datasets that are diverse and representative of different genders, races, and ethnicities, as well as balanced between positive and negative examples. Additionally, there has been a push for more transparent and explainable AI algorithms, so that biases can be identified and reduced. Overall, it is crucial for businesses and policymakers to proactively address the ethical challenges of AI to ensure it benefits society as a whole.
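To make the bias point concrete, here is a minimal, purely illustrative sketch of one common fairness audit: comparing positive-outcome rates across demographic groups (the "demographic parity" gap). All names and data below are hypothetical, not taken from any real hiring system.

```python
# Hypothetical bias audit: demographic parity gap between two groups.
# Data and group labels are toy values for illustration only.

def selection_rate(predictions, group_mask):
    """Fraction of positive predictions within one demographic group."""
    group = [p for p, g in zip(predictions, group_mask) if g]
    return sum(group) / len(group) if group else 0.0

def demographic_parity_gap(predictions, group_a_mask, group_b_mask):
    """Absolute difference in positive-outcome rates between two groups.
    A gap near 0 suggests similar treatment on this (narrow) criterion;
    a large gap is a signal to investigate, not proof of discrimination."""
    return abs(selection_rate(predictions, group_a_mask)
               - selection_rate(predictions, group_b_mask))

# Toy hiring example: 1 = recommended, 0 = rejected
preds   = [1, 0, 1, 1, 0, 0, 1, 0]
group_a = [True, True, True, True, False, False, False, False]
group_b = [not g for g in group_a]

gap = demographic_parity_gap(preds, group_a, group_b)
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50
```

One metric alone never settles the question; audits in practice combine several criteria (equalised odds, calibration, etc.) with scrutiny of how the training data was collected.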
Another significant challenge is the possibility of AI being used to violate privacy and individual rights, for instance through surveillance, facial recognition, and predictive policing, with serious implications for personal privacy and civil liberties. To address this, there has been a growing focus on developing ethical frameworks and standards for the use of AI in these areas, as well as on increasing transparency and accountability in how AI algorithms are used. Alongside privacy concerns, there are ethical implications for AI in areas such as healthcare, finance, and autonomous weapons. AI-powered weapons could make decisions that violate international humanitarian law, and could be hacked or malfunction. In healthcare, AI-powered devices and diagnostics could lead to incorrect diagnoses and treatment, or be used for non-medical purposes. In finance, AI-powered trading algorithms could contribute to market instability and volatility, or be used for illegal activities such as money laundering.
The Ethics of AI has been one of the pillars of IRCAI's activities since its inception, and the institute runs a lively AI & Ethics committee headed by Vanessa Nurock, holder of the UNESCO EVA Chair (Éthique du Vivant et de l'Artificiel / Ethics of the Living and the Artificial). This is reflected in several activities promoting the use of ethical, humanistic, and effective AI to help address the UN's Sustainable Development Goals (SDGs) and advance UNESCO's agenda, for example by helping prepare the Recommendation on the Ethics of AI adopted by the UNESCO General Conference in November 2021. Also, the first IRCAI award in 2021 went to Adriana-Eufrosina Bora, at the Queensland University of Technology (Australia), for her work on Project AIMS (Artificial Intelligence against Modern Slavery). AIMS used AI to read and benchmark companies' statements produced under mandatory reporting requirements, in order to improve transparency and help eradicate modern slavery. The prototype is based on state-of-the-art Natural Language Processing (NLP) methods to produce a census analysis of the statements that companies have published in response to the UK's Modern Slavery Act. What is more, IRCAI has initiated the development of a global observatory that analyses news and publications through NLP methods, observing topics such as bias, the common good, democracy, equity, fairness, gender, and human rights, to generate an exploratory capacity, mostly based on open data, that empowers researchers to further investigate the ethical use of AI. Contributing to a more AI-aware and responsible future. Shouldn't we all?
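As a flavour of what such an observatory's simplest building block might look like, here is a hypothetical sketch of lexicon-based topic tagging over a text. The topic lexicons and the document are invented for illustration; a real system would use far richer NLP models than keyword matching.

```python
# Hypothetical sketch: counting how often ethics-related topics are
# evoked in a document. Lexicons below are illustrative, not official.
import re
from collections import Counter

TOPIC_LEXICON = {
    "bias": {"bias", "biased", "discrimination"},
    "fairness": {"fairness", "fair", "equity", "equitable"},
    "human rights": {"rights", "freedom", "liberty"},
}

def tag_topics(text):
    """Return a Counter of topic mentions based on simple word lookup."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for topic, lexicon in TOPIC_LEXICON.items():
        counts[topic] = sum(1 for w in words if w in lexicon)
    return counts

doc = "Reports warn that biased algorithms undermine fairness and equity."
print(tag_topics(doc))  # bias: 1, fairness: 2, human rights: 0
```

Aggregating such counts across thousands of articles over time is what turns a toy tagger into an exploratory observatory signal.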