Children, Teenagers, and Artificial Intelligence: Fostering Human Agency

Franciska Nowak

Children and teenagers encounter new technologies with curiosity, so it is not surprising that they were among the early adopters of OpenAI's ChatGPT. Access to smartphones and tablets opens the door to an AI-infused digital world.

My friend is a chatbot: Snapchat’s MyAI

The range of AI-supported applications made for and used by young target groups is wide. Beyond the common voice assistants, ChatGPT, and the recommendation algorithms of TikTok and Instagram, the app Snapchat offers its young users MyAI, a chatbot that presents itself as a constantly available friend, ready to answer questions of all kinds. The chatbot appears at the very top of each user's friend list and thereby evokes trustworthiness. Snapchat is among the most popular smartphone apps, especially among female users and pre-teens; in Germany, 52% of teenagers between the ages of 12 and 19 use the app to chat and exchange snapshots.

Snapchat tries to tailor the chatbot's answers to each user's age but may still provide inappropriate ones. In one test, MyAI advised a teenager on creating a romantic ambiance by dimming the lights and setting up candles, even though the user had revealed that they were meeting a man 18 years their senior. This is just one of several examples. Snapchat continuously improves the chatbot, and users can report messages and give feedback within the app. Nevertheless, a certain fallibility will remain, as generative language models are based on statistical probabilities.
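Why statistical probabilities imply fallibility can be made concrete with a toy sketch. The following bigram model is an invented illustration, not how MyAI or any production system works: it picks each next word purely from counted frequencies in a tiny corpus, so it produces fluent output without any notion of truth.

```python
import random
from collections import defaultdict

# Tiny invented corpus for illustration only.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which words follow which.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def predict(word):
    """Sample a next word from the observed follow-up distribution."""
    return random.choice(following[word])

# After "the", the model may emit "cat", "dog", "mat", or "rug".
# It chooses by probability, not by correctness -- which is why a
# plausible-sounding but wrong answer can never be fully ruled out.
print(predict("the"))
```

Real generative language models are vastly larger and predict from context rather than a single word, but the principle is the same: the output is a probable continuation, not a verified fact.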

Why not ban AI chatbots for children and teenagers?

Given the vulnerability of children and teenagers, it is important to develop strategies that address the fallibility of applications based on machine learning. Even beyond AI gadgets, children and teenagers do not yet fully understand the mechanisms behind media and need support in reflecting on their media use. Public discourse demands protection, sometimes resulting in prohibitions of certain applications for certain age groups. Australia recently decided to ban children under 16 from using social media applications to counter security and privacy issues, including the spread of misinformation by automated bots. The effectiveness of such prohibitions is uncertain: the next potentially harmful application is just around the corner, age restrictions are difficult to enforce, and being older does not necessarily equip users to protect themselves better.

The need for human agency

I suggest focusing on the promotion of human agency in children and teenagers. Human agency encompasses notions of empowerment, control, and self-determination, and aims to explain human action. Within Albert Bandura's Social Cognitive Theory, the anticipated effectiveness of one's own actions is the central driving force: if we do not expect our own action to produce a desired outcome, we may refuse to act at all. A perceived lack of agency can lead to feelings of resignation and loss of control. For chatbot use, this would mean accepting the application's fallibility because we do not believe we can do anything about it. A teenager who receives inappropriate answers while chatting with Snapchat's MyAI might end up in a state of resigned acceptance, even though the app offers feedback options.

To prevent this loss of human agency, fostering AI literacy is essential. AI literacy describes a set of skills that includes understanding how AI works, knowing where AI is used, and interacting with AI effectively. AI is understood here as a general term referring primarily to machine learning techniques. AI literacy helps to demystify AI and aims to enable self-determined use. There is a good chance of raising children and teenagers to be AI-literate if schools, media, and family, as agents of socialization, manage to create spaces in which AI can be negotiated. Such spaces could include regular school lessons on AI, social media scouts who offer peer-to-peer counseling, or a family visit to an AI museum that sparks joint discussion. Which skills and methods matter most for building AI literacy is still a subject of current research. Two promising candidates are an understanding of the role data play in machine learning (something children and teenagers often underestimate) and hands-on approaches to building, coding, and experimenting with small-scale AI applications.
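To show how small such a hands-on project can be, here is a sketch of a rule-based chatbot of the kind a child might code in a workshop. The keywords and replies are invented for this example; no particular curriculum or tool is implied.

```python
# A minimal rule-based chatbot: each keyword maps to a canned reply.
# All rules here are invented for illustration.
RULES = {
    "hello": "Hi there! What would you like to talk about?",
    "homework": "I can try to help. Which subject is it?",
    "bye": "Goodbye! Chat again soon.",
}

def reply(message):
    """Return a canned answer if a keyword matches, else admit the limit."""
    for keyword, answer in RULES.items():
        if keyword in message.lower():
            return answer
    # No rule matched -- a limitation the young builder sees directly
    # and can fix by adding a new keyword-reply pair.
    return "I don't know that yet. Can you teach me a new rule?"

print(reply("Hello!"))
print(reply("Can you help with my homework?"))
print(reply("What's the weather like?"))
```

Debugging why the bot fails on an unmatched message, and extending the rule table to fix it, is exactly the kind of experience that lets a child feel in control of the technology rather than at its mercy.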

We can assume that a child who has coded a small-scale chatbot experiences AI from the perspective of someone able to influence the situation. If the bot does not behave as intended, the child will look for, and with assistance find, solutions. When Snapchat's MyAI provides a wrong answer, such a child will be better able to contextualize it and, ideally, feel motivated to report the fault, aware that this feedback can contribute to the bot's improvement. Rather than being left with a feeling of resignation, the child will have experienced themselves as an agent in control.

About the Author

Franciska Nowak

Researcher, University of Erfurt