Championing Ethical AI Development
Adrian opens the conversation with a discussion of what ethical AI entails. Both Elena and Karol suggest that the ways in which AI is used matter far more than the AI tools themselves. For instance, the biases that human creators inevitably transfer into AI tools create the ethical risk of biased outcomes that can harm certain groups, because automated decisions leave little room for the nuance that human judgement can provide.
The creation of biased and unequal outcomes by AI ultimately raises the question: where do we draw the line between human judgement and AI? Should decisions be made largely by AI systems and verified by humans, or vice versa? Karol introduces the concept of the 'uncanny valley', noting that the rate of AI advancement has made it increasingly difficult to distinguish AI from humans. He explains that the narrowing of this gap creates distrust and fear surrounding the use of AI. Elena adds that people don't place their trust in companies themselves but in the people behind them, because it is those people's judgement they rely on. Building trust is therefore largely about communicating the thought processes behind decisions and establishing transparency about where AI is and isn't being used in business processes. She cites a recent failure in which AI was used to calculate credit scores without the affected individuals being notified that such a process was in use.