Emory Responsible AI promotes the ethical use of Artificial Intelligence
Learn to use new and powerful AI technology responsibly within organizations
Businesses are ramping up their use of Artificial Intelligence, and there’s going to be big demand for people who understand the ethical implications of this powerful technological trend.
With this in mind, Dr. Anne-Elisabeth Courrier has designed the upcoming Emory Center for Ethics program Responsible AI: Ethical Strategies for Your Organization.
When computers start making decisions, people may rightly worry about “the loss of control, the loss of human agency,” said Dr. Courrier, who was a scholar of law and ethics in France before coming to teach at Emory.
There’s good reason to be concerned about the responsible use of AI technologies. For one thing, AI will soon be pervasive: 45 percent of companies are exploring its use, 33 percent have begun limited implementations, and 22 percent are “aggressively pursuing the integration of AI across a wide variety of technology products and business workflows,” according to the Computing Technology Industry Association (CompTIA).
And this has people worried. Some 82 percent of Americans say they care whether AI is ethical or not, and 68 percent are “somewhat or very concerned about the negative impact of AI on the human race,” according to a recent study from Santa Clara University.
In the Emory course, learners will explore the risks. They’ll take a look at how AI systems can be biased or misguided, how they may expose personal data, and how they can generate outcomes that lead us astray.
Then the six-week program will go on to explore some of the ways in which modern organizations can navigate these novel challenges. Designed for executives, boards, and managers who will be implementing AI programs, this online, asynchronous course will feature videos presented by international experts, along with outside readings.
“The idea is to give a path to approaching the risks related to the development of an AI system,” Dr. Courrier said. “It will be hands-on, meaning learners will choose an AI system — a real one or a virtual one — and we will go through the journey of exploring the different ethical risks.”
They will reflect on the issues of responsibility that arise within that AI system, “and at the end of the course they will gather all of the exercises and the assignments that they have been doing and turn that into a blueprint of an ethical impact assessment,” she said.
When they take these skills into the workplace, “they will be able to replicate this approach on other AI systems,” she said.
In addition to upper management and others in leadership roles, the course offers a career advantage to anyone whose work is likely to incorporate AI capabilities in the near future. The people building AI-informed systems, as well as those using them in their everyday work, will all need to know how to ensure these systems perform responsibly.
“The end users, even ordinary citizens — a very wide range of people will need to be educated about this,” Dr. Courrier said. And those people will be in high demand in the job market: As businesses ramp up their use of AI, they will be looking for people who can help to establish and deploy strong guardrails.
“People need to be informed. They need to at least know what the questions are” around the potential ethical risks in AI use, she said. “We need more than the law. We need more than regulation. We need to educate.”
The clock is already ticking, and there’s an urgency around getting this right. “AI is developing so fast, that innovation is just behind the door, and this is great,” Dr. Courrier said. But given that rapid acceleration, “we need to understand how we are making the decisions with AI, how we can keep control.”
As machine-driven decision-making becomes ever more common across the business landscape, “the only way to ensure trustworthy and safe AI is to be informed and aware about the ethical risks, and about the way in which we can mitigate them,” she said.
What strategies are available to identify the hazards, to avoid the harms, and to define responsibilities when designing and using AI systems? The people who can answer these questions “are the ones who will make the difference at the end of the day,” she said.
Learn more about the Responsible AI: Ethical Strategies for Your Organization program, offered by Emory’s Center for Ethics with support from ECE and Emory’s Center for AI Learning.