Tessa Verhoef from the Leiden Institute of Advanced Computer Science and Eduard Fosch-Villaronga from eLaw - Center for Law and Digital Technologies wrote an article on how affective computing should be inclusive, diverse, and work for everyone.

Diversity and inclusion are critical aspects of the responsible development of artificial intelligence (AI) technologies, including affective computing. Affective computing, which focuses on recognizing, interpreting, and responding to human emotions, could revolutionize domains such as healthcare, education, and human-machine interaction. Capturing subjective states through technical means is challenging, however, and errors occur, as seen with lie detectors that do not work adequately or gender classifiers that misgender users. If such inferences feed into downstream decision-making processes, they could have disastrous consequences for people, with impacts that vary depending on the context of the application, e.g., flagging innocent people as potential criminals in border control or detrimentally affecting vulnerable groups in mental health care.

Following this line of thought, Tessa Verhoef from the Creative Intelligence Lab at Leiden University and Eduard Fosch-Villaronga from eLaw - Center for Law and Digital Technologies wrote an article highlighting that systems trained on the most widely used datasets currently available may not work equally well for everyone. Such systems are likely to exhibit racial bias, bias against users with (mental) health problems, and age bias, because the datasets derive from limited samples that do not fully represent societal diversity.

Eduard Fosch-Villaronga and Tessa Verhoef

Tessa and Eduard presented the paper 'Towards affective computing that works for everyone' (LINK) at the Affective Computing + Intelligent Interaction conference (ACII '23), held at the Massachusetts Institute of Technology (MIT) Media Lab. ACII, the annual conference of the Association for the Advancement of Affective Computing (AAAC), is the premier international forum for research on affective and multimodal human-machine interaction and systems.

In their paper, they argue that the missing diversity, equity, and inclusion elements in affective computing datasets directly affect the accuracy and fairness of emotion recognition algorithms across different groups. Their literature review reveals how affective computing systems may work differently for different groups due to, for instance, mental health conditions affecting facial expressions and speech, or age-related changes in facial appearance and health. Analyzing existing affective computing datasets, they highlight a disconcerting lack of diversity in race, sex/gender, age, and (mental) health representation. The paper emphasizes the need for more inclusive sampling strategies and standardized documentation of demographic factors in datasets, provides recommendations, and calls for greater attention to inclusivity and to the societal consequences of affective computing research, in order to promote ethical and accurate outcomes in this emerging field.

Acknowledgement

The authors thank Joost Batenburg for providing support through the SAILS Program, a Leiden University-wide AI initiative. They would also like to thank the Gendering Algorithms project, which received funding from the Global Transformations and Governance Challenges Initiative at Leiden University. This paper has also been partly funded by the Safe and Sound project, which received funding from the European Union's Horizon-ERC program (Grant Agreement No.

(C) 2023 Electronic News Publishing, source ENP Newswire