Ethics in the age of AI
12 Oct 2023
“Do unto others as you would have them do unto you.” This golden rule exemplifies a directive instilled in all humans, regardless of ethnicity or religion, implying a unilateral commitment to regard others as one regards oneself.
This rule can steer AI clear of the bare reciprocity of exchange, often termed “do ut des”, and it sets the bedrock upon which we can train AI to serve humanity so that it prospers in harmony and mutual understanding. It is akin to equipping its algorithms with an inner compass capable of locating the righteous path and yielding a serene conscience, for the latter is often a pendulum that swings to signal when the road taken is devious.
Such an AI ethical add-on summons our morals and lays the groundwork for the standards people owe themselves and one another to preserve self-respect and dignity. It can encompass the beliefs that underpin a shared view of morality, and it denotes the behavioral canons by which we judge ourselves and one another. For instance, AI would process the prompt ‘betraying a friend’ as unethical, granting a higher weight to ‘means’ than to ‘ends’, and the language model would translate all wrongdoings as acts of ‘wickedness’.
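To make the weighting idea concrete, here is a minimal, hypothetical sketch; the weights, function names, and labels are illustrative assumptions of my own, not the API of any existing model:

```python
# Hypothetical sketch: grant more weight to 'means' than to 'ends' when
# judging an action. All weights, names, and labels are illustrative
# assumptions, not part of any real model or library.

MEANS_WEIGHT = 0.7  # assumption: how the action is carried out matters more
ENDS_WEIGHT = 0.3   # assumption: what the action achieves matters less


def ethical_score(means_score: float, ends_score: float) -> float:
    """Combine two judgments in [0, 1] into one score, favoring means."""
    return MEANS_WEIGHT * means_score + ENDS_WEIGHT * ends_score


def judge(action: str, means_score: float, ends_score: float) -> str:
    """Label an action from its combined score."""
    score = ethical_score(means_score, ends_score)
    verdict = "ethical" if score >= 0.5 else "unethical"
    return f"'{action}' -> {verdict} ({score:.2f})"


# Betraying a friend scores poorly on means even if the ends look attractive.
print(judge("betraying a friend", means_score=0.1, ends_score=0.8))
# -> 'betraying a friend' -> unethical (0.31)
```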
Nothing captures this better than an AI that incorporates the curse and mark of Cain for Abel’s slaying, or Ambrose Bierce’s satirical work (The Devil’s Dictionary, 1911), to grasp how the law can be the lowest common denominator, the minimum level of acceptable behavior, and therefore to push our societies toward higher ethical standards rather than settling for the lowest legal criteria.
We can achieve this through an AI that reflects on the dos and don’ts, one that considers the law but transcends its legislative whims to reach a faithful illustration of right and wrong. When laws and ethics align, AI sees our actions as a matter of law; when the two collide, AI processes our actions as a matter of ethics, for each one’s life is not about “me, myself, and I” but a question of “is it the right thing to do?”.
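The decision rule itself is simple enough to state as a sketch; the boolean inputs and return labels below are illustrative assumptions, not a real system’s interface:

```python
# Hypothetical sketch of the law-versus-ethics rule described above.
# The boolean inputs and return labels are illustrative assumptions.

def classify_action(legal: bool, ethical: bool) -> str:
    """Frame an action as a matter of law or a matter of ethics."""
    if legal == ethical:
        # Law and ethics align: the law already captures the judgment.
        return "a matter of law"
    # Law and ethics collide: ethics takes precedence over the legal minimum.
    return "a matter of ethics"


print(classify_action(legal=True, ethical=False))  # -> a matter of ethics
print(classify_action(legal=True, ethical=True))   # -> a matter of law
```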
Hippocrates once said, “be of benefit, do not harm,” and I believe this maxim best exemplifies what should shape our AI algorithms. It means that every step of language-model training must carry a benefit-to-risk ratio in which the benefit outweighs the risk; otherwise, we should revise the step until it does. Similarly, AI should respect confidentiality and seek informed consent to maintain beneficence and integrity and to promote rigor in its handling of data. We should clearly explain every aspect of its research process to all stakeholders, along with a clear delineation of expected outcomes and of the contribution to the accumulation of knowledge.
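One way to picture such a ratio is as a gate applied before each training step is accepted; the threshold and the benefit and risk estimates in this sketch are assumptions for illustration, not part of any real training framework:

```python
# Hypothetical sketch of a benefit-to-risk gate on a training step.
# The threshold and the benefit/risk estimates are assumptions for
# illustration; they are not part of any real training framework.

MIN_RATIO = 1.0  # assumed threshold: benefits must outweigh risks


def benefit_risk_ratio(benefit: float, risk: float) -> float:
    """Ratio of estimated benefit to estimated risk (risk must be > 0)."""
    return benefit / risk


def approve_training_step(benefit: float, risk: float) -> bool:
    """Proceed only if benefit outweighs risk; otherwise the step is revised."""
    return benefit_risk_ratio(benefit, risk) > MIN_RATIO


print(approve_training_step(benefit=0.4, risk=0.6))  # False -> revise the step
print(approve_training_step(benefit=0.9, risk=0.3))  # True  -> proceed
```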
Overall, AI algorithms must put the interest of our societies above the interest of those who scale them. This way the will of the people will always prevail over that of interest groups, and we can all enjoy the benefit of an AI that encompasses the norms guiding our behavior: an AI that puts trust, accountability, mutual respect, and fairness first, the values of cooperation and collaboration needed to promote our society’s key aims of knowledge, truth, and improving the world.
The case for ethics in AI should set the norms on misconduct and conflicts of interest, restore our trust in large language models, and keep our integrity intact. Putting ethics first will safeguard our functioning as a society and separate us from the algorithm, for it enables us to attend to questions that data analysis and experiments cannot answer.
Dr. Yassine Talaoui is Assistant Professor of Strategic Management at the Center for Entrepreneurship and Organizational Excellence, College of Business and Economics, Qatar University.