Cambridge experts warn: AI Deadbots could digitally stalk loved ones from beyond the grave


Cambridge researchers warn of the psychological dangers of “deadbots”, artificial intelligence that mimics dead individuals, and call for ethical standards and consent protocols to prevent misuse and ensure respectful interaction.

Artificial intelligence that allows users to hold text and voice conversations with lost loved ones risks causing psychological harm and even digitally “haunting” those left behind in the absence of design safety standards, according to University of Cambridge researchers.

‘Deadbots’ or ‘griefbots’ are AI chatbots that simulate the language patterns and personality traits of the dead using the digital footprints they leave behind. Some companies already offer these services, providing an entirely new kind of “postmortem presence.”
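To make that mechanism concrete, here is a minimal sketch of how a persona can be conditioned on fragments of someone’s digital footprint using an off-the-shelf chat model. It assumes the OpenAI Python client with an API key in the environment; the sample messages and model name are hypothetical stand-ins, not any real deadbot product.

```python
# Minimal sketch: conditioning a general-purpose chat model on a person's
# digital footprint so it imitates their language patterns. Hypothetical
# illustration only, not any vendor's actual "deadbot" service.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Fragments of the deceased's digital footprint (invented samples).
footprint = [
    "Don't fuss over me, love. A cup of tea fixes most things.",
    "The garden knows the seasons better than we ever will.",
]

# Build a system prompt asking the model to mimic the samples' style.
system_prompt = (
    "Imitate the writing style, vocabulary, and tone of the person whose "
    "messages appear below. Stay in character.\n\nSamples:\n"
    + "\n".join(f"- {m}" for m in footprint)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "I've had a rough week. What would you say?"},
    ],
)
print(response.choices[0].message.content)
```

Commercial services may layer fine-tuning, voice cloning, and far larger corpora on top, but the core conditioning step is roughly this simple, which is part of why the consent and dignity questions the researchers raise are so pressing.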

Artificial intelligence experts at Cambridge’s Leverhulme Centre for the Future of Intelligence outline three design scenarios for platforms that could emerge as part of the developing ‘digital afterlife industry’ to show the potential consequences of careless design in an area of AI they describe as “high risk.”

Misuse of AI Chatbots

The research, published in the journal Philosophy & Technology, highlights the potential for companies to use deadbots to surreptitiously advertise products to users in the manner of a departed loved one, or to distress children by insisting that a dead parent is still “with you.”

When the living sign up to be virtually recreated after they die, companies could use the resulting chatbots to spam surviving family and friends with unsolicited notifications, reminders, and updates about the services they offer, akin to being digitally “stalked by the dead.”

Even those who take initial comfort from a “deadbot” may be drained by daily interactions that become an “overwhelming emotional weight,” the researchers argue, yet they may also be powerless to have an AI simulation suspended if their now-deceased loved one signed a lengthy contract with a digital afterlife service.


A visualization of a fictitious company called MaNana, one of the design scenarios used in the paper to illustrate potential ethical issues in the emerging digital afterlife industry. Credit: Dr Tomasz Hollanek

“Rapid advances in generative AI mean that almost anyone with internet access and some basic know-how can revive a deceased loved one,” said Dr Katarzyna Nowaczyk-Basińska, co-author of the study and a researcher at Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI). “This area of AI is an ethical minefield. It’s important to prioritize the dignity of the deceased and ensure that it isn’t encroached on by the financial motives of digital afterlife services, for example. At the same time, a person may leave an AI simulation as a farewell gift for loved ones who are not ready to process their grief in this manner. The rights of both data donors and those who interact with AI afterlife services should be safeguarded equally.”

Existing services and hypothetical scenarios

Platforms offering to recreate the dead with AI for a small fee already exist, such as ‘Project December’, which started out harnessing GPT models before developing its own systems, and apps including ‘HereAfter’. Similar services have also begun to emerge in China. One of the new paper’s potential scenarios is “MaNana”: a conversational AI service allowing people to create a deadbot simulating their deceased grandmother, without the consent of the “data donor” (the dead grandparent).

The hypothetical scenario sees an adult grandchild who is initially impressed and comforted by the technology start to receive advertisements once a “premium trial” ends. For example, the chatbot suggests ordering from food delivery services in the voice and style of the deceased. The relative feels they have disrespected their grandmother’s memory and wishes to have the deadbot switched off, but in a meaningful way, something the service providers have not considered.


A visualization of a fictional company called Paren’t. Credit: Dr Tomasz Hollanek

“People could develop strong emotional attachments to these simulations, which will make them particularly vulnerable to manipulation,” said co-author Dr Tomasz Hollanek, also of Cambridge’s LCFI. “Methods and even rituals for retiring deadbots in a dignified way should be considered. This may mean a form of digital funeral, for example, or other types of ceremony depending on the social context. We recommend design protocols that prevent deadbots from being used in disrespectful ways, such as for advertising or having an active presence on social media.”

While Hollanek and Nowaczyk-Basińska say that designers of re-creation services should actively seek consent from data donors before they pass away, they argue that a ban on deadbots based on non-consenting donors would be unfeasible.

They suggest that design processes should involve a series of prompts for those seeking to ‘resurrect’ their loved ones, such as ‘Have you ever spoken with X about how they would like to be remembered?’, so that the dignity of the departed is foregrounded in deadbot development.

Age restrictions and transparency

Another scenario featured in the paper, an imagined company called “Paren’t,” highlights the example of a terminally ill woman leaving a deadbot to assist her eight-year-old son with the grieving process.

While the deadbot initially helps as a therapeutic aid, the AI starts to generate confusing responses as it adapts to the needs of the child, such as depicting an impending in-person encounter.


A visualization of a fictional company called Stay. Credit: Dr Tomasz Hollanek

The researchers recommend age restrictions for deadbots, and also call for “meaningful transparency” to ensure users are consistently aware that they are interacting with an AI. These could be similar to current warnings for content that may cause seizures, for example.

The final scenario explored by the study, a fictitious company called “Stay,” shows an older person secretly committing to a deadbot of themselves and paying for a twenty-year subscription, in the hope that it will comfort their adult children and allow their grandchildren to know them.

After death, the service kicks in. One adult child does not engage, and receives a barrage of emails in the voice of their dead parent. Another does, but ends up emotionally drained and wracked with guilt over the deadbot’s fate. Yet suspending the deadbot would violate the terms of the contract their parent signed with the service company.

“It is vital that digital afterlife services consider the rights and consent not just of those they recreate, but of those who will have to interact with the simulations,” Hollanek said.

“These services risk causing people great distress if they are subjected to unwanted digital hauntings by alarmingly accurate AI recreations of those they have lost. The potential psychological effect, especially at an already difficult time, could be devastating.”

The researchers urge design teams to prioritize opt-out protocols that allow potential users to end their relationships with deadbots in ways that provide emotional closure.

Nowaczyk-Basińska added: “We need to start thinking now about how to mitigate the social and psychological risks of digital immortality, because the technology is already here.”

Reference: “Griefbots, Deadbots, Postmortem Avatars: On Responsible Applications of Generative AI in the Digital Afterlife Industry” by Tomasz Hollanek and Katarzyna Nowaczyk-Basińska, 9 May 2024, Philosophy & Technology.
DOI: 10.1007/s13347-024-00744-w

