Document Type: Symposia Article

Businesses and consumers increasingly use artificial intelligence (“AI”)— and specifically machine learning (“ML”) applications—in their daily work. ML is often used as a tool to help people perform their jobs more efficiently, but increasingly it is becoming a technology that may eventually replace humans in performing certain functions. An AI recently beat humans in a reading comprehension test, and there is an ongoing race to replace human drivers with self-driving cars and trucks. Tomorrow there is the potential for much more—as AI is even learning to build its own AI.

As the use of AI technologies continues to expand, and especially as machines begin to act more autonomously with less human intervention, important questions arise about how best to integrate this new technology into our society, particularly within our legal and compliance frameworks. These questions differ from those raised by earlier technologies because AI itself is different. Most previous technologies functioned as tools operated by a person, and for legal purposes we could usually hold that person responsible for actions that resulted from using the tool. For example, an employee who used a computer to send a discriminatory or defamatory email could not have done so without the computer, but the employee would still be held responsible for creating the email.

While AI can function as merely a tool, it can also be designed to act after making its own decisions, and in the future it will act even more autonomously. As AI becomes more autonomous, it will be more difficult to determine who, or what, is making decisions and taking actions, and to determine the basis of and responsibility for those actions. These are the challenges that must be overcome to ensure AI's integration for legal and compliance purposes.
