Friday, 09 August 2024

Professor Maura R. Grossman: I believe that all computer and data scientists should be required to receive ethics training

Maura R. Grossman is a Research Professor in the School of Computer Science at the University of Waterloo and an Adjunct Professor at Osgoode Hall Law School. She is an expert in the field of eDiscovery, with extensive experience in civil litigation, white-collar criminal investigations, and regulatory matters. Maura is well known for her work on technology-assisted review (TAR) and has been recognized as a leading figure in the eDiscovery field. She has served as a court-appointed special master, mediator, and expert in high-profile federal and state court cases. Maura has also provided eDiscovery training to judges and has taught courses at various law schools. She holds degrees in Psychology from Adelphi University and a J.D. from Georgetown University Law Center.

As artificial intelligence (AI) continues to shape various aspects of our lives, it becomes crucial to explore its potential impact on critical sectors such as healthcare, transportation, and public services. Alongside the benefits, however, there arise complex ethical considerations and challenges that must be addressed to ensure the responsible and safe deployment of AI technologies. In this interview, we speak with Professor Maura R. Grossman, an esteemed expert in the field, to delve into these pressing issues and shed light on the path forward.

Gulan Media: What are the most significant ethical, societal, and existential implications of advancing artificial intelligence, and how can we ensure that AI development and deployment align with our values, promote human welfare, and mitigate potential risks to our collective future?

Professor Maura R. Grossman: Possibly the greatest threat posed by recent AI developments is the proliferation of deepfakes—AI-generated video, audio, and images. As generative AI has improved, we are quickly approaching the point at which computer-generated content is nearly indistinguishable from human-generated content.  Setting aside the harm caused by non-consensual intimate images—which have even reached school-age children—and the more sophisticated forms of fraud that generative AI can facilitate—often targeted at the elderly, who are more vulnerable to these threats—deepfakes have profound implications for the justice system, election integrity, and perhaps even democracy itself.  Soon, judges and juries, voters, and the general public will no longer be able to rely on their eyes and ears to tell fact from fiction, and that’s an incredibly dangerous place to be.

Right now, I am less concerned with existential risk than with the problems we already see today, including AI that is biased and harms marginalized groups, thereby exacerbating discrimination, and the ever-increasing power, surveillance, and privacy violations perpetrated by Big Tech and some authoritarian governments as a result of the accumulation of massive amounts of personal data about citizens.  I also believe that the economic and labor risks posed by AI are potentially serious as AI replaces humans in the job market.

Voluntary ethical guidelines have not been sufficient to keep things in check; we need thoughtful regulation and government agencies with expertise in AI to intervene to protect the public and ensure a safe future.

Gulan Media: How can we strike a balance between harnessing the transformative power of artificial intelligence to drive innovation and progress and safeguarding against its potential misuse, while ensuring equitable access and preserving human autonomy and dignity in an increasingly AI-driven world?

Professor Maura R. Grossman: This boils down to proper regulation, which is difficult to achieve because technological development is moving so quickly and laws are already outdated by the time they go into effect.  Presently, companies are rushing to market with tools that have been insufficiently vetted, with little thought as to who is benefitted and who is harmed by them, or to their unintended consequences.

It remains to be seen whether the European Union’s Artificial Intelligence Act—the first comprehensive law addressing AI—which will go into effect in the near future, will help by placing guardrails on high-risk AI systems, or will stifle innovation, as some commentators fear.  Achieving the right balance between the two is challenging enough, but implementing regulations with respect to AI is made all the more difficult in a context where there is no global agreement on direction.  No country feels safe regulating when its enemies or competitors may continue to build more powerful models or gain economic advantage.  Without some consensus on what is off limits, this seems like an unsolvable problem.

Gulan Media: How can we harness the potential of artificial intelligence to enhance national security and defense capabilities, while responsibly managing the risks associated with autonomous weapons systems, cyber warfare, and the ethical implications of AI-enabled decision-making in conflict scenarios?

Professor Maura R. Grossman: There is very little gray area in discussions about military uses of AI.  We know that lethal autonomous weapons and cyber warfare are already proliferating; we’ve seen them deployed in recent conflicts around the world.  We don’t really know whether AI increases or decreases civilian casualties because we haven’t done that research; it isn’t realistic to do it.  Again, there is no global consensus about whether machine-made decisions about killing are acceptable in the first place, and without a global agreement on limits on the use of AI in the military context, there is no way to control the use of AI in war.

Gulan Media: In the realm of medicine, a critical question is: How can artificial intelligence revolutionize healthcare delivery, diagnosis, and treatment, while ensuring patient privacy, equity in access, and the preservation of the doctor-patient relationship?

Professor Maura R. Grossman: Applications of AI in healthcare strike me as some of the more promising directions for the deployment of AI.  AI has the potential to reshape a field that has primarily been reactive into one that is more proactive.  By that I mean that AI in medicine may help us move from a focus on the treatment of diseases to one of prevention, as we learn more about what factors cause illness.

By way of example, I see tremendous promise for AI in the ability to diagnose disease from various imaging sources (e.g., X-rays, CT scans, and ultrasound), the ability to decipher protein structure from the sequence of amino acids (e.g., AlphaFold), the development of new drugs and other treatments, and in the use of synthetic data for medical research, decreasing the privacy risks caused by research using the sensitive personal information of actual human subjects.  These developments may transform the practice of medicine as we know it.  The key will be to make sure that all groups benefit equally from these AI developments.
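To make the synthetic-data point concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn) of one way such research can proceed without circulating real patient records: fit a simple generative model to the sensitive data, sample synthetic records from it, and train the downstream diagnostic model only on those samples. Everything here is an illustrative assumption (the simulated patients, the Gaussian-mixture generator, the logistic-regression classifier), not a description of any system Professor Grossman refers to; serious projects would use far stronger generators and formal privacy guarantees such as differential privacy.

```python
# Hypothetical sketch: train a diagnostic model on synthetic records so the
# real patient data never has to be shared. Nothing here models a real
# clinical dataset; make_classification is a stand-in for sensitive records.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

# Stand-in for sensitive patient records: 2,000 "patients", 10 features,
# and a binary label (e.g., disease present / absent).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_real, X_test, y_real, y_test = train_test_split(X, y, test_size=0.5,
                                                  random_state=0)

# Fit one simple generative model per class on the real records, then draw
# synthetic records from it. (A real project would use a stronger generator
# and add formal privacy guarantees such as differential privacy.)
syn_X, syn_y = [], []
for label in (0, 1):
    gm = GaussianMixture(n_components=3, random_state=0)
    gm.fit(X_real[y_real == label])
    samples, _ = gm.sample(1000)
    syn_X.append(samples)
    syn_y.append(np.full(1000, label))
X_syn, y_syn = np.vstack(syn_X), np.concatenate(syn_y)

# The downstream model is trained only on synthetic records and evaluated
# on held-out real records it has never seen.
clf = LogisticRegression(max_iter=1000).fit(X_syn, y_syn)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"AUC of model trained only on synthetic data: {auc:.3f}")
```

The pattern matters more than the particular models: the diagnostic classifier never touches the real records at training time, which is one route to the kind of privacy-preserving medical research described above.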

Gulan Media: How should we address accountability for privacy and other violations resulting from AI-generated images and other content, given recent instances of unethical misuse? Should responsibility lie with the individuals commanding the AI or with the developers, who may lack sufficient constraints? What penalties should be considered for such violations, and should computer scientists receive legal education alongside their technical training to promote the ethical use of AI?

Professor Maura R. Grossman: There is an inherent tension between the ever-increasing need for data to train AI systems and the maintenance of personal privacy.  As a society, we have not reached agreement on whether we prefer privacy or convenience, or on whether we are willing to accept the costs to personal privacy of targeted advertising.  Different cultures and different countries have valued these things differently, and it matters greatly how the information that is collected from citizens is used.  In some places, increased surveillance has resulted in greater public safety; in others, it has led to greater authoritarian control.

Similarly, we have not yet reached a consensus on how to address the needs of large language models (“LLMs”) and other generative AI tools for massive amounts of data, and the consequences of scraping copyrighted content from the Internet.  We need to find the right balance between fostering innovation and the development of these useful generative AI tools, on the one hand, and the proper protection of writers and artists, on the other.  Courts have not yet resolved the issue of what constitutes “fair use,” and different countries have viewed this issue differently.

One thing that is clear at the present time is that there is little accountability for unethical behavior.  Even what seemed like substantial fines for privacy violations imposed by regulations such as the European Union’s General Data Protection Regulation (“GDPR”) have come to be seen as the cost of doing business for many Big Tech companies.  The fines have simply not deterred bad conduct. 

Who we hold accountable has implications for who can enter a given market.  For example, if privacy protection requirements are onerous, only big, entrenched players can enter the field; newer startups that are unable to afford expensive compliance efforts are precluded.

I believe that all computer and data scientists should be required to receive ethics training; indeed, I teach such courses at the University of Waterloo.  But that is not sufficient if there aren’t proper incentives for good behavior.  We have seen many Big Tech companies fire their ethics groups when they dislike their recommendations or when economics necessitate downsizing.  As long as the public continues to use services that violate privacy and other ethical norms, nothing will change.
