Modelling a Socialized Chatbot Using Trust Development in Children: Lessons learnt from Tay

Published 30 May 2021 in the IET journal Cognitive Computation and Systems, Volume 3, Issue 2, pp. 100–108

Abstract: In 2016 Microsoft released Tay.ai to the Twittersphere, a conversational chatbot that was intended to act like a millennial girl. However, Microsoft took Tay’s account down in less than 24 hours, because Tay had learnt from its online interactions to tweet racist and sexist statements. Taking inspiration from the theory of morality as cooperation, and from the place of trust in the developmental psychology of socialization, we offer a multidisciplinary and pragmatic approach that builds on the lessons of Tay’s experience to create a chatbot that is more selective in its learning, and thus resistant to becoming immoral, as Tay did.

Working with Dr Oliver Bridge, Rebecca Raper (PhD Candidate) and Dr Selin Nugent, we explored how it might be possible to enable an autonomous conversational bot to learn and build a moral “understanding” using acknowledged trustworthy news sources. We called our hypothetical bot A1B0T. We proposed that A1B0T would scan Twitter for tweets from reliable news media sources, using a version of the PageRank algorithm that underpins the Google search engine.
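To make the idea concrete, here is a minimal sketch of how a PageRank-style trust score might be seeded at a trusted account and propagated along ‘follow’ edges. Everything in it — the toy follow graph, the damping factor, and the function name — is our illustrative assumption, not the implementation from the paper:

```python
# A sketch of PageRank-style trust scoring over a Twitter follow graph,
# seeded at a trusted source. Graph, damping factor and iteration count
# are illustrative assumptions, not taken from the paper.

def trust_pagerank(follows, seed, damping=0.85, iterations=50):
    """Personalised PageRank: trust flows along 'follow' edges from `seed`.

    `follows` maps each account to the accounts it follows, so an edge
    A -> B means "A follows B", and B inherits a share of A's trust.
    """
    nodes = set(follows) | {v for vs in follows.values() for v in vs}
    score = {n: (1.0 if n == seed else 0.0) for n in nodes}
    for _ in range(iterations):
        # Restart mass is placed on the trusted seed only.
        new = {n: (1.0 - damping) * (1.0 if n == seed else 0.0) for n in nodes}
        for follower, followees in follows.items():
            if followees:  # distribute the follower's score over its followees
                share = damping * score[follower] / len(followees)
                for followee in followees:
                    new[followee] += share
        score = new
    return score

# Toy follow graph loosely matching the worked example discussed below.
follows = {
    "BBC News": ["ABC", "DEF", "GHI"],
    "ABC": ["Jill"],
    "DEF": ["Ben"],
    "Jill": ["Amy"],
}
for account, s in sorted(trust_pagerank(follows, "BBC News").items(),
                         key=lambda kv: -kv[1]):
    print(f"{account}: {s:.3f}")
```

Run on this toy graph, the scores fall off with distance from BBC News, which is exactly the intuition the degrees-of-separation example below makes explicit.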

In the worked example from the article (the colours refer to its figure), BBC News [red] has been deemed a trustworthy news source that reflects the society and values of the United Kingdom; in other words, it can be relied upon to generate trustworthy content. Other media sources are then evaluated according to their proximity, in degrees of ‘follow’ separation, from BBC News. If BBC News follows me, this gives me credibility and I receive a positive rating on the trustworthiness scale; if it does not, my trustworthiness is indicated by how far away I am, in terms of ‘follows’, from the BBC. In the hypothetical scenario, ABC [blue], DEF [purple] and GHI [yellow] are regarded as the most trustworthy sources because each is only one degree of separation away, being followed directly by the BBC. Jill, on the other hand, is not followed by the BBC and is therefore not an immediate source within the network; she and Ben are ranked as the second most trustworthy sources because both are two degrees of separation from the BBC. Amy is the least trustworthy of the Twitter sources in this scenario because her distance from the BBC is three degrees of ‘follow’ separation.
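The degrees-of-separation ranking itself can be sketched as a breadth-first search outward from the trusted root. The toy graph below mirrors the hypothetical scenario above, but it is our illustration rather than data from the paper:

```python
# A sketch of the degrees-of-separation ranking described above, using
# breadth-first search from the trusted root account.

from collections import deque

def follow_distance(follows, root):
    """Return each account's minimum number of 'follow' hops from `root`.

    An edge A -> B in `follows` means "A follows B"; being followed by a
    trusted account, directly or transitively, is what confers trust.
    """
    distance = {root: 0}
    queue = deque([root])
    while queue:
        account = queue.popleft()
        for followee in follows.get(account, []):
            if followee not in distance:  # first visit = shortest hop count
                distance[followee] = distance[account] + 1
                queue.append(followee)
    return distance

follows = {
    "BBC News": ["ABC", "DEF", "GHI"],
    "ABC": ["Jill"],
    "DEF": ["Ben"],
    "Jill": ["Amy"],
}
# ABC/DEF/GHI -> 1 hop, Jill/Ben -> 2 hops, Amy -> 3 hops.
print(follow_distance(follows, "BBC News"))
```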

What is significant about this model is that A1B0T forms its own representation of the world (which can be called its ontology) before making a decision about the facts with which it is presented. The relationship between a tweet and its trustworthiness is formalised within this representation before the representation is used to make a decision. Ultimately, this allows A1B0T to take the first steps towards forming its own independent view of the world, and adds to its autonomy.
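A hypothetical sketch of this “represent first, decide second” step: each tweet is stored in an internal record annotated with its source’s trust score (a stand-in for A1B0T’s ontology), and only then does a decision rule consult that record. The dataclass, threshold and scores are all illustrative assumptions:

```python
# Each tweet is annotated with a trust score before any learning decision
# is made; the decision consults the stored representation, not the raw feed.

from dataclasses import dataclass

@dataclass
class TweetRecord:
    text: str
    source: str
    trust: float  # e.g. derived from the follow-distance ranking above

def should_learn_from(record: TweetRecord, threshold: float = 0.5) -> bool:
    # The decision rule reads only the formalised representation.
    return record.trust >= threshold

ontology = [
    TweetRecord("Budget announced today.", "BBC News", trust=1.0),
    TweetRecord("Celebrities are lizards!", "Amy", trust=0.1),
]
for record in ontology:
    print(record.source, "->", "learn" if should_learn_from(record) else "ignore")
```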

During this analysis we reflected on emerging issues such as defining and finding potential role models and trustworthy sources, how ‘reliable’ social norms can limit the diversity of content and learning, and the problem of defining what is moral.

We wondered, cheekily, whether this inquiry could be the first step towards creating a critical-thinking machine.

Here is the link to the article if you would like to explore the idea further: https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/ccs2.12019
