• Smart toys are proliferating rapidly across the globe, presenting new risks and opportunities for the youngest generation.

  • Several emerging regulations set minimum requirements for smart toy manufacturers in areas such as cybersecurity and online safety.

  • Where regulations fall short, initiatives such as the Smart Toy Awards actively incentivize smart toy developers to make children’s wellbeing and developmental needs their top design priority.

Copyright: weforum.org – “Why AI companies should develop child-friendly toys and how to incentivize them”


AI-enabled smart toys come in an exponentially growing diversity and inhabit children’s most familiar social environments. The Market Research Future group, for instance, projects that the global market for such toys will grow by 26% and reach 107.02 billion USD by 2030. Through their growing interaction with children, their transferability across contexts, their connectivity to other AI-enabled devices, and the ways in which children unknowingly become entangled with them, AI-enabled toys regularly impact the upbringing of the youngest generation.

Exposure to online safety risks

Smart toys, as highlighted by the Generation AI Initiative of the World Economic Forum, can have highly positive effects on children’s development when designed responsibly. However, complaints received by the Federal Trade Commission (FTC), an investigation by the Norwegian Consumer Council, and a rich body of scholarly research also point to the severe impacts that smart toys can have on children’s development.

The non-transparent ways in which some smart toys exchange data for algorithmic analysis with other AI-enabled devices, e.g. via Bluetooth, demonstrate how weak cybersecurity features can violate children’s privacy and jeopardize their safety. Smart toys can exacerbate children’s exposure to online safety risks, such as “‘content’ risks (e.g. exposure to harmful or age-inappropriate material); ‘contact’ risks (e.g. exposure to unsolicited contact from adults); ‘conduct’ risks (e.g. cyberbullying); and ‘contract’ risks (e.g. data harvesting, commercial pressure and exhortations to gamble)” as defined by the 5Rights Foundation.

Legal mechanisms internationally govern the proliferation of AI-powered toys and the mitigation of such risks. The EU’s Cybersecurity Act sets minimum requirements for smart toy developers, who must design toys with strong cybersecurity features when products are marketed in the EU. The European AI Act, which is expected to go into effect in 2024, introduces a four-tiered risk framework to evaluate artificial intelligence technologies and also requires smart toy innovators to assess the impact of their AI systems against “reasonably foreseeable misuse”. AI-enabled smart toys that exploit children are outright banned by the regulation. The EU’s Digital Services Act also protects young people by banning AI-enabled targeted advertising based on the profiling of children. In the United States, the Children’s Online Privacy Protection Act (COPPA), a federal law, requires websites to enforce age-appropriate criteria for access and the exchange of content. The FTC has also recently introduced a cybersecurity labeling programme to protect consumers from breaches of AI systems.[…]

Read more: www.weforum.org