The list of potential dangers for our children as they navigate the complexities of digital life, and its crossover and connections with ‘real’ life, is long. How can we help them? How can we engage them in the conversation? BCS held a roundtable to discuss this as part of the launch of Project PEEL, designed to give young people a voice on self-identity. Brian Runciman MBCS reports.

While adults struggle to keep up with the breakneck pace of change in the digital world, the simplistic view is that children can cope better because they have grown up with technology and know how to work the devices - a derivative of the 1980s stand-up comedy trope of adults needing their child to program the video recorder for them.

However, technical knowledge is not a substitute for adult experience. Issues of self-esteem and growing up are not changed by knowledge of capacitive screen operation. In the film that PEEL used to introduce the event, a young person neatly drew attention to the complexity of issues around ‘self’ with a simple declaration on social media: ‘There is always someone prettier than you, better than you, smarter than you.’

BCS, as it makes IT good for society, asks: What does the world we want to live in look like? In this context, the vital question is: What do our children need from the digital world? We want our children to be creative, capable, safe and empowered in the digital world. The starting point is the rights our children should expect.

Matthew Taylor, Chief Executive of the Royal Society of Arts, hosted the event. In his introduction, with tongue in cheek but also in the interests of engendering as broad a discussion as possible, he commented that while these conversations are often controlled by ‘do-gooders’, we need ideas from all corners.

He played devil’s advocate with these thoughts: ‘there isn’t really a problem, we just identify mental health problems better now and, in fact, everyone is more inclusive these days.’ And on solutions: ‘things will solve themselves anyway, because if things don’t work they will fail eventually.’ These are not necessarily majority views, he said, but need to be in the conversation.

From the initial discussions, the participants were asked to come up with three broad types of idea: principles that underpin the subject; policies that we could focus on; and innovations that could help.

Discussion 1: How do we update and make real the rights of the child in the digital age?

There are rights in existence, namely the United Nations Convention on the Rights of the Child (UNCRC). The UNCRC was mentioned as a good vehicle for updating rights because, as a widely adopted convention, any amendments to it would have a similarly wide impact.

In fact, are the UN children’s rights already correct in principle, needing just a little tweaking or reframing for the digital age? Do digital rights cross over with existing children’s rights?

Previously the child was largely a passive recipient of rights, but in the digital world they need to take a more active role. And as children are full participants in the digital environment, the whole of society needs to be involved in their safe participation.

Schools and parents took the lion’s share of responsibility in the past because they had most control of what children’s lives touched upon. Now children’s lives have a wider range of touchpoints, digitally enabled, beyond the confines of the home and school.

The view of the participants was that digital and ‘real’ rights should be indivisible, so we need to take the existing rights and rigorously apply them in the online world. But some specific issues, for example the right to remove content, are not in the UNCRC.

Digital rights could become a subset of the main UNCRC - perhaps in the procedures that support the existing rights: how they are implemented in the digital context and who is responsible for doing that.

Analogising

In these discussions a number of useful analogies emerged. For example, we do not let our children play on building sites, but the sites still need to be secured. So, for those who run websites inappropriate for children - not just ‘adult-only’ ones - what are the analogous responsibilities?

Is the principle that children have the right to find the environments they enter safe for them?

Another good analogy is that of brand safety. Advertisers demand that their ads do not appear next to inappropriate content, such as extremist material; strangely, children’s online experience is not protected in the same way.

In the physical world, we try to limit risk. Playgrounds, for example, evolved from iron equipment set in concrete through to better-designed areas, complete with soft surfaces. This came about through council responsibility, the involvement of equipment manufacturers, parental concern and so on, all coming together to produce better solutions.

There are still risks in playgrounds, but they are mitigated at the design stage and, of course, still demand parental oversight.

Along similar lines, swimming pools play a part in education: schools facilitate and promote education and participation, the pool provider meets safety standards, and parents support participation and act as positive role models. The risks associated with swimming are known and understood - and we can draw lessons for the digital environment.

And what about the issue of the right to one’s own identity? Identity can develop and change as we grow, which shows the importance of the right to remove information, content or images - and the vital role of age verification.

Producing policies in these areas was agreed to be notoriously difficult. Research to evidence issues is a key factor in raising awareness that change is needed, with news stories cited as a good way to back this up. Any policies which are drafted need to be user-centred, and created in constant dialogue with young people.

Often older people assume things on behalf of young people, but we need to consult them more to find out the reality of different situations.

The larger context is that many of these issues are societal problems, not purely digital ones. Describing them purely as digital issues has an isolating effect, whereas the conversation should be broader.

So, what came up...?

Principles:

  • How we socialise the use of digital is not a digital problem; it is a societal one.
  • Young people’s voices need to be heard.
  • We want children who flourish, taking into account that children at different ages and experience levels have different issues and needs.
  • Social media should make you happy, healthy and safe.
  • We should all have the same digital rights as we enjoy in the physical world.
  • Whatever is created needs to be human-centred.
  • There should be a right to be forgotten.
  • Children must come before profit.
  • We need to enable children to have control and be safe digitally - to have agency.
  • Young people should be able to shape their education rather than be passive recipients of it.
  • Training should be positive and used for advantage, not just about risks and ‘what not to do’.
  • Young people need to understand that they own their own data and that they can withdraw their consent if they don’t like what’s being done with it.
  • Children need to understand the issues raised when someone else has their data, and what it can be used for.

Policy:

  • Education - schools should invest in projects to address issues, perhaps by providing a personal development curriculum.
  • Parents need education on the digital environment as it pertains to children, ideally starting pre-natally.
  • We need better, memorable public education messages. Like the five-a-day healthy eating message, or a version of the well-known ‘stranger danger’ campaign for the digital space.
  • A debate is needed about what young people are doing versus how much time they spend doing it (for example: smoking is never a good thing, and that’s a joined-up message in public life. However, sex education is designed to support young people in having agency).
  • The PSHE curriculum should be more about lived experience to help with thinking and decision making - providing a space for children to be able to think about resilience.
  • The role of social media companies needs examining. They started as enablers of content and, indeed, profit from it, so do they have a role in regulating its use?

Innovation:

  • Every choice needs to be an informed one, and delivered at the decision-making point: what about emojis that convey terms and conditions?
  • Perhaps a kitemark, analogous to the HTTPS indicator, could be developed to mark worthwhile content - maybe via a W3C standard?
  • We need a shift in the language we use. We’re currently using concepts/language that suits the ‘real life’ world (for example, ‘stranger danger’ is widely taught and effective, but the concepts don’t translate neatly to the online world).
  • Large corporates already know a lot about us - can we use this insight to enhance children’s digital lives?
  • We need to encourage self-organising communities (e.g. young mums on Mumsnet, local support groups, advice). This could give people a richer knowledge of their neighbourhood, enlivening places and communities and providing support networks.
  • We need better, more advanced age verification - we wouldn’t expect to turn on the TV in the middle of the day and see pornography, so how is that OK online?
  • Support for some of these changes will require algorithm adjustment.
  • To address issues around the length of time children spend in the digital world, what about an ‘app forest’ that grows the more time a child spends off-screen (e.g. during exam periods)? This could produce groups supporting each other to stay offline, with rewards for this behaviour. There is an app called Moment which will show you how much time you spend on your phone.
  • We need to help young people to be more self-reflective, critical thinkers with resilience through a re-activation of human principles. Some technologies already offer local support services to someone if they search for terms like ‘suicide’ and ‘self-harm’ or allow users to block trigger words. This data could be used to monitor someone for support and early intervention.

Discussion 2: Are we making sure that child-safe design is happening? What are the principles that should inform it?

Long before the digital world beckoned, children were risk-takers. And even if we de-risk a particular set of websites, children will try to find a way to use others. Nevertheless, children don’t normally try to harm themselves intentionally, so they could be taught what is good for them and then reminded from time to time.

An example approach is the Fitbit, which encourages people to exercise and gives them periodic prompts.

Because of the vast number of ‘bedroom designers’ and different countries producing online content, imposing safety standards at the design stage seems to be a logistical impossibility. Therefore, we need to impose safety at the point of use.

The problem here is two-fold: first, who is the regulating authority, designating what standards exist and which websites are adhering to them; and second, how could that regulation practically be carried out, given the vast amount of online content being created?

Here are some ideas:

  • A rating system to help parents and young people navigate the choice of products and services - for example, like the red/amber/green rating on food for salt, sugar and fat. Ratings could cover three simple categories: age monitoring, time monitoring (with ‘nudges’ to remind the user to take time out) and content control. These ratings could be crowdsourced.
  • How about a version of BBFC ratings?
  • Ratings systems could be combined in some way with a device - which knows its user’s behaviour better.
  • What about ‘junior versions’ - is there an argument for creating, for example, a cordoned-off ‘Safe Facebook’, or should we, as a matter of course, make all of Facebook child-safe?
  • Digitally-driven toolkits could be created so that families can choose age appropriate content together.
  • Age verification is a fundamental issue.
  • Design standards should be backed by legislation.
  • How about appealing to the principle of ethical investment? The government could use monetary incentives to drive standards and make ethical digital investment more appealing.
  • Should this discussion be more about education than restrictions? We could show the effects of poor online hygiene with a video when you first go into a social media site. For example, it could show a young person posting something online and how it affects other people - so that it becomes clear that what you say online has effects on others even though you might not directly be able to see them.
  • Models of good behaviour - talking to young people in a language that’s theirs.
  • Blacklisting can be an important tool at certain ages.
  • What if terms and conditions were displayed through a video?
  • Ethical and child-centred design should be taught in universities.
  • What about a software designer accreditation in child-safe design?

Discussion 3: What should we be teaching our children to help them thrive in the digital world?

Niel McLean, who is heavily involved in Computing at School, introduced this discussion. He noted an example from another industry that can be applied: car safety campaigner Ralph Nader spotted in the 1960s that it was the design of cars that was killing people - it wasn’t a fault with the education of the driver.

Unfortunately, a lot of the design work in the digital world, as with cars, has already been done. What can we do? Niel commented that knowledge comes first, with understanding next, but what we often need to understand are the motivations, the incentives, that underpin behaviour.

A good goal would be to create the ‘conscious consumer’, who is aware of being manipulated by ‘nudges’ or the feeds that they’re shown, without scaring them. There is something to be said for ensuring young people feel OK about making ‘mistakes’ and living with the consequences - as adults do - but with relevant knowledge of those consequences.

So, in this section, the participants were asked to suggest three things for the digital sphere: one key thing young people should know; one key competency they need; and a motivator that could promote that behaviour. The results are listed below.

Where now?

This is the start of a conversation - BCS wants to encourage it, and it needs to include policy makers; parents; politicians; digital providers; philosophers; thinkers... children! To get involved, follow BCS as it pursues making IT good for society.

One thing young people should know

  • Know what kind of digital space you are in
  • Realise that every click has an impact
  • Online use is progressive
  • Online is not a static environment, not bad, not good (like a bus it can take you to good or bad places)
  • Be active digital participants not just consumers
  • Get a greater understanding of algorithms - how they narrow choices and affect engagement - but without driving fear
  • Be able to find information on yourself and know how it is generated. Learn how to find reliable information

One key competency

  • Decide how to behave in this space
  • Know how to be responsible and the implications of actions
  • Practise self-regulation
  • Recognise how feelings are affected by online behaviour
  • Learn to be creative online
  • Bring individuality to your online persona
  • Critical thinking (e.g. know the library index metaphor; How do I find information? How do search algorithms work, and what is their bias? Where do I find help?)

A motivation

  • In order to be who you want to be
  • To have a holistic view of being a good citizen (underpinned by a ‘digital baccalaureate’?)
  • Role models, more self-identity material
  • The ability to shape your own experience
  • To create value in oneself, validating the individual beyond the pursuit of ‘likes’
  • Seeing why data matters when it’s gathered online - its impact on your life
  • To build self-esteem and related life skills
