Matt Shipman

An interdisciplinary team of researchers has developed a blueprint for creating algorithms that more effectively incorporate ethical guidelines into artificial intelligence (AI) decision-making programs. The project was focused specifically on technologies in which humans interact with AI programs, such as virtual assistants or “carebots” used in healthcare settings.

“Technologies like carebots are supposed to help ensure the safety and comfort of hospital patients, older adults and other people who require health monitoring or physical assistance,” says Veljko Dubljević, corresponding author of a paper on the work and an associate professor in the Science, Technology & Society program at North Carolina State University. “In practical terms, this means these technologies will be placed in situations where they need to make ethical judgments.

“For example, let’s say that a carebot is in a setting where two people require medical assistance. One patient is unconscious but requires urgent care, while the second patient is in less urgent need but demands that the carebot treat him first. How does the carebot decide which patient is assisted first? Should the carebot even treat a patient who is unconscious and therefore unable to consent to receiving the treatment?”

“Previous efforts to incorporate ethical decision-making into AI programs have been limited in scope and focused on utilitarian reasoning, which neglects the complexity of human moral decision-making,” Dubljević says. “Our work addresses this and, while I used carebots as an example, is applicable to a wide range of human-AI teaming technologies.”

Utilitarian decision-making focuses on outcomes and consequences. But when humans make moral judgments they also consider two other factors. The first factor is the intent of a given action and the character of the agent performing the action.
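To make the contrast concrete, here is a minimal toy sketch, not the researchers' actual algorithm, of how a triage choice might weigh the article's three factors: the agent's intent/character, the action itself, and the utilitarian outcome. All names, scores, and weights below are hypothetical, invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate action the carebot could take (illustrative only)."""
    name: str
    agent_score: float    # hypothetical rating of the agent's intent/character (-1..1)
    deed_score: float     # hypothetical rating of the action itself (-1..1)
    outcome_score: float  # hypothetical utilitarian value of the consequences (-1..1)

def moral_value(action: Action, weights=(1.0, 1.0, 1.0)) -> float:
    """Combine the three factors into one score.

    A purely utilitarian system would use weights=(0, 0, 1); the article's
    point is that human judgment also weighs intent/character and the deed.
    """
    wa, wd, wc = weights
    return (wa * action.agent_score
            + wd * action.deed_score
            + wc * action.outcome_score)

def choose(actions, weights=(1.0, 1.0, 1.0)) -> Action:
    """Pick the action with the highest combined moral score."""
    return max(actions, key=lambda a: moral_value(a, weights))

# Toy version of the article's triage scenario; scores are made up.
treat_urgent_first = Action("treat unconscious urgent patient first", 0.8, 0.6, 0.9)
treat_demander_first = Action("treat demanding patient first", 0.2, 0.3, 0.4)

best = choose([treat_urgent_first, treat_demander_first])
print(best.name)  # → treat unconscious urgent patient first
```

Changing the weights changes the verdict, which is one way to see why a system limited to the outcome term alone, as the article says earlier efforts were, captures only part of human moral reasoning.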