Responsible AI: Ethical Strategies for Your Organization


Why take a course about AI ethics?


No recent topic in technology or in ethics has captured the imagination, or spread as quickly, as the ethics of artificial intelligence (AI). And while the remarkable advances of AI hold unprecedented promise, its development is also fraught with challenges. AI systems can be biased, misguided, expose personal data, lack transparency, lead us astray, and raise questions about the very nature of humankind. Integrating AI into organizations raises questions of job displacement, retraining, human deskilling, and the proper balance of human and artificial decision-making.

How should modern organizations navigate these unprecedented challenges? What strategies should we use to identify the risks, understand the scope, avoid the harms, and define responsibilities when designing and using AI systems? To help answer these difficult questions, the Emory University Center for Ethics is offering the online certificate Responsible AI: Ethical Strategies for Your Organization.

This six-week program is intended for executives, boards, and general managers who face tough decisions about implementing AI programs in their organizations. Delivered through videos featuring international experts, reading lists, and transcripts, the program invites learners to choose an AI system and apply each lesson to it. It is designed for six to eight hours of personal study time per week and is mainly asynchronous (self-paced online); successful students earn a certificate of completion from Emory University.

At the end of the program, you will be able to assess the ethical risks of, and define strategies for, the AI system you select and work on during the program. Specifically, you will have developed practical skills to:

  • Identify the types of ethical risks in AI systems
  • Sketch an outline of a privacy impact assessment
  • Recognize the various types of bias and write a bias mitigation plan
  • Name the different components of trustworthy AI
  • Create an outline of a cybersecurity incident response plan
  • Deliver a practical blueprint of an ethical impact assessment of your selected AI system

Register


Featuring world-renowned experts:

Dr. Paul Root Wolpe   
Emory University 

Dr. Anne-Elisabeth Courrier
Nantes University and Emory University

Dr. Ifeoma Ajunwa
University of North Carolina

Dr. John Banja
Emory University
 
Dr. Céline Castets-Renard
University of Ottawa
 
Dr. Ryan Calo
University of Washington

Dr. Jinho Choi
Emory University

Dr. Maarten Lamers
Leiden University

Dr. Naomi Lefkovitz
National Institute of Standards and Technology 

Dr. Eugène Ndiaye
Georgia Institute of Technology 

Dr. Edward Queen
Emory University

Dr. Christoph Rasche 
University of Potsdam

Dr. Max Van Duijn
Leiden University

Dr. Carissa Veliz
Oxford University

Dr. Serena Villata
Institut 3IA Côte d'Azur
 
Dr. Gloria Washington
Howard University

Dr. Lance Waller
Emory University 

Dr. Bryn Williams-Jones 
University of Montreal


This certificate is offered by the Emory Center for Ethics with support from Emory Continuing Education and the Emory Center for AI Learning.

The Emory Center for Ethics, as one of the largest, most comprehensive ethics centers in the United States, is a leader in advancing humanity with ethical AI. The Center helps students, professionals, and the public confront and understand ethical issues in health, technology, the arts, the environment, and the life sciences. We teach, publish, and offer programming that includes public engagement as part of our commitment to respond to issues and concerns important not just to the academic community but to the Atlanta community and beyond.

Register



Learning outcomes


After completing this certificate, participants will be able to:

  • Identify the ethical issues raised by the use of AI at their organization and elsewhere
  • Prepare an ethical impact assessment of their organization’s use of AI
  • Suggest how to align the use of AI with the mission and values of their organization
  • Prepare to respond to ethical challenges to their organization’s use of AI

Certificate Highlights

Duration
7 weeks

Cost
$2,500

Time commitment
6-7 hours per week


Frequently Asked Questions

Do I need prior knowledge of AI or ethics?

No, you don't need any specific knowledge about AI systems or ethics. Module 1 is dedicated to introducing these topics.

How much time does each module require?

Each module is designed for six to seven study hours, including the assignments.

How is each module structured?

The internal structure of each module is the same and comprises five units:

  • Unit 1: Introduction, with a self-assessment of your proficiency in the topic
  • Units 2, 3, and 4: Content, videos, and reading lists
  • Unit 5: Assignments and self-assessments

What are the assignments?

  • In each module, you must participate in asynchronous discussions to interact with other students and apply your knowledge and skills in a specific context.
  • Depending on the module, you are also expected to prepare individual short essays, slide presentations, and short question exercises, all related to the AI system you selected in Module 1.

Which AI system should I choose?

You should choose an AI system that has a direct connection with human beings (for example, not one used for weather prediction or agriculture).

How are the reading lists organized?

The reading lists are organized at two levels: a compulsory reading list and an optional reading list.

How will I keep track of deadlines?

The platform includes a calendar with friendly reminders of the deadlines.

Can I get an extension on an assignment?

You can request up to two extensions, provided you contact the Program Manager.

Does this program carry academic credit?

No, this program is non-credit. You will receive a certificate of completion.