Author Bio

Nomisha Kurian is a Teaching Associate at the Faculty of Education, University of Cambridge. She is currently researching how technology impacts children’s rights and wellbeing and exploring AI ethics at the Leverhulme Centre for the Future of Intelligence. She spoke at the 2022 UNESCO Forum on Artificial Intelligence and Education and recently became the first Education researcher to win the University of Cambridge Applied Research Award for her work to widen participation in higher education. Her research has most recently been published in the Oxford Review of Education, the British Educational Research Journal, and the Journal of Pastoral Care in Education.


Rapid advances in Artificial Intelligence (AI) pose new ethical questions for human rights educators. This article uses Socially Assistive Robots (SARs) as a case study. SARs, also known as social robots, are AI systems designed to interact with humans, often to enhance human wellbeing or provide companionship, and are typically built to mimic human behaviors. They may look endearing, friendly, and appealing, and well-designed models interact with humans in ways that feel trustworthy, natural, and intuitive. As one of the fastest-growing areas of AI, social robots raise pressing questions for human rights specialists; when used with young children with disabilities, these include surveillance, data privacy, discrimination, and the socio-emotional impact of technology on child development. The article delves into some of these ethical questions, taking into account the unique vulnerabilities of young children with disabilities and reflecting on the long-term societal implications of AI-assisted care. While not aiming to be comprehensive, it explores the ethical implications of social robots as technologies that sit at the boundary of the human and nonhuman. What pitfalls and possibilities arise from this liminal space for children’s rights?
