
Using play to understand artificial intelligence

By: Maryse Guénette
Photo credit: Brooke Cagle (Unsplash)

In a new study, 22 young people played a series of games to learn how artificial intelligence works and how it could affect their lives. The teens tell us what they think of AI and how they believe governments and businesses should handle it.

Instagram, Snapchat, YouTube — all these platforms are extremely popular with young Canadians. But while they’re voluntarily disclosing personal information on these platforms, do they know what’s happening to their data? Do they understand that the platforms are learning more about them than they say? Probably not — because the platforms use artificial intelligence (AI), a technology most of us barely understand.

The study, Algorithmic Awareness: Conversations with Young Canadians about Artificial Intelligence and Privacy, conducted by MediaSmarts, examined young people’s thoughts on how artificial intelligence is used by social media platforms. “We talked a lot with young people in our previous projects,” says Kara Brisson-Boivin, Director of Research at MediaSmarts, an organization that has been studying youth behaviour for over 20 years. “But we didn’t know what they were experiencing when they were on the platforms, or how much they understood about the platforms.”


A playful journey

Since the goal of the study was to find out what young people thought of AI and the algorithms used by different social media platforms, interviewees needed to have enough information to get a good grasp of the topic and be able to discuss it in an informed manner.

“We knew that young people’s knowledge was limited. If we had just asked them questions, the results of our research would not have been as rich.”

Kara Brisson-Boivin, Director of Research at MediaSmarts and co-author of the study

To educate the young participants, MediaSmarts organized them into focus groups that would play a “card-matching game” in which each participant created content for a popular video site. The game was played in three phases. In each phase, participants were presented with scenarios designed to help them understand the role of algorithms and what happens to users’ personal information once it’s provided to the platform.

For example, in Phase 1, participants were instructed to retain their audience: encourage viewers to watch, share, like and comment on as many videos as possible, and keep coming back to the platform. In Phase 2, participants were told to monetize their videos, which meant using advertising to make sure the videos would be seen by their target audience rather than just the widest audience possible. To do this, participants were told to collect and analyze information on platform users. In Phase 3, participants were given data cards with information on users’ gender, race, health status and sexual orientation. Participants could link the cards together or exchange them with other players to form hypotheses about users.

The longer the game went on, the better the participants understood artificial intelligence and what it can do. At the same time, they gradually came to see how disturbing these practices were.

“By the end of the game, the young people were able to see how the algorithms make assumptions based on three things: what people do online, such as ‘liking’ certain things and watching certain videos; how they appear when they’re online, based on the information they’ve supplied; and what the platform can find out or infer about them. This really made them worry.”

Kara Brisson-Boivin
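To make the quote concrete, here is a minimal Python sketch of the kind of profiling the participants uncovered. Everything in it (the function, the data, the inference rules) is a hypothetical illustration, not how any real platform works:

```python
# A toy illustration of the three signals the participants identified:
# what users do, what they declare, and what the platform infers.
# Every name and rule below is hypothetical; real platforms rely on far
# more complex machine-learned models.

def infer_profile(declared: dict, watch_history: list, likes: list) -> dict:
    """Combine declared data with behavioural signals to guess new attributes."""
    inferred = dict(declared)  # start from what the user volunteered

    # Signal 1: what people do online (watching, liking)
    if any("fitness" in item for item in watch_history + likes):
        inferred["likely_interest"] = "health and fitness"

    # Signal 2: how they appear online, based on information they supplied
    age = declared.get("age")
    if age is not None and 13 <= age <= 17:
        inferred["ad_segment"] = "teen"

    # Signal 3: what the platform infers by cross-referencing the first two
    if inferred.get("ad_segment") == "teen" and "late_night_stream" in likes:
        inferred["guessed_trait"] = "irregular sleep schedule"  # a sensitive guess

    return inferred


profile = infer_profile(
    declared={"age": 15, "city": "Ottawa"},
    watch_history=["fitness_challenge_video"],
    likes=["late_night_stream"],
)
print(profile)
# {'age': 15, 'city': 'Ottawa', 'likely_interest': 'health and fitness',
#  'ad_segment': 'teen', 'guessed_trait': 'irregular sleep schedule'}
```

Note that the most sensitive guess comes from cross-referencing two harmless-looking signals, which is precisely what alarmed the participants.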

A hopeful discussion

In the discussions that followed Phase 1, participants complained that they wound up with repetitive, boring content that they couldn’t change, which left them feeling they had no say in the matter. Participants also noted that, in some cases, the artificial intelligence seemed to stir up controversy simply to boost search engine optimization (SEO) on the platform.

After Phase 2, participants were concerned that the recommendations they were receiving were based on assumptions drawn from data collected about people like them or about their friends, rather than from their own data. They were also worried that data brokers were selling their data to other platforms, a practice they considered unethical.
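What such “people like you” recommendations can look like might be sketched as follows, with invented data and a deliberately crude similarity rule:

```python
# A hypothetical sketch of "people like you" recommendations: the suggestion
# comes from what a similar user watched, not from your own history alone.
# The data and the similarity rule are invented for illustration.

def recommend(user, histories):
    """Suggest videos watched by the most similar other user."""
    mine = histories[user]
    # Similarity here is simply the number of videos watched in common,
    # a crude stand-in for the learned similarity measures real platforms use.
    most_similar = max(
        (other for other in histories if other != user),
        key=lambda other: len(histories[other] & mine),
    )
    return histories[most_similar] - mine  # their videos you haven't seen


histories = {
    "you":    {"skate_tricks", "exam_tips"},
    "peer_a": {"skate_tricks", "exam_tips", "energy_drink_ad"},
    "peer_b": {"cat_videos"},
}
print(recommend("you", histories))  # {'energy_drink_ad'}, inferred from a peer
```

Even in this toy version, the suggestion reaches the user through a peer’s behaviour rather than anything they did themselves, which is exactly what worried the participants.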

After Phase 3, participants expressed concerns about how sharing sensitive information could potentially contribute to marginalization and discrimination. They also wondered about users’ rights, what protections exist and who was responsible when problems arose.

“Young people were very responsive. They were aware of the kinds of abuse that can occur. They were worried that a large portion of the population would end up being stigmatized.”

Kara Brisson-Boivin

Some of the participants’ discoveries were especially concerning. “They wondered if the cross-referencing done by AI might lead to assumptions being made that could put them at a disadvantage when they were applying to college or seeking employment,” says Brisson-Boivin.


Finding solutions

Based on their observations, the young participants proposed several solutions. In the report, these took the form of recommendations organized under five key terms: awareness, transparency, protection, control and engagement.

For example, participants mentioned the need for more awareness measures to help both young people and adults, including the less affluent, develop critical thinking skills. They wanted companies to be more transparent: to make clear to users how their data is collected, held and traded, and to disclose both how the recommendation process works and the recommendations it produces. Participants also asked for security and privacy measures to be clearly disclosed.

Participants expressed concerns about the potential impact that storing and sharing information about them could have on their lives, about the trust that companies place in artificial intelligence, and about the marginalization and discrimination that could result from the increasing use of AI. They called for enhanced protection measures.

Along the same lines, they wanted users to have more control over the personal information they provide to companies, and asked that mechanisms be developed to help them file a complaint if they’re unhappy about an issue.

Finally, participants called for more youth engagement in consultations designed to develop tools to improve algorithmic literacy for young and old alike.

While these proposals may sound overly idealistic, Brisson-Boivin believes they’re actually quite achievable. “Young people are very clear about what they want,” she says. “I personally believe that their wishes will be fulfilled, and that the government will improve its legislation on AI.”

Brisson-Boivin also believes that the prototype game developed for the study could be used over the long term by a variety of audiences. “We shared the prototype of the game with adults at several presentations, and they all found it instructive. Our plan for the next year is to turn this into an educational game that can be used by all Canadians.” The tool will join initiatives across the country and around the world that help young people and adults improve their algorithmic literacy, such as Kids Code Jeunesse and the Canadian Commission for UNESCO, AI4ALL (US), and AI and Children (UNICEF). This is a huge step in the right direction.

The study

Algorithmic Awareness: Conversations with Young Canadians about Artificial Intelligence and Privacy (MediaSmarts, 2021) was written by Kara Brisson-Boivin and Samantha McAleese, respectively Director of Research and Research and Evaluation Associate at MediaSmarts. The report includes a literature review but, more importantly, gives voice to the concerns of 22 teens aged 13 to 17 who took part in eight online focus groups held across Canada from November 2020 to January 2021. The prototype game was designed and facilitated jointly by MediaSmarts’ Director of Education and a media literacy specialist from the organization. The game, #ForYou: A Game About Artificial Intelligence and Privacy, is designed to help participants learn about artificial intelligence, algorithms, online privacy and data security, and to discuss the issues involved at a deeper level.