Inside AI at the UW: A conversation with Noah Smith (Part 2)

In a three-part interview, Noah Smith, vice provost and endowed chair for artificial intelligence, addresses the University of Washington’s role and responsibility in the development and ethical use of AI, how the UW is preparing students for careers, and funding support for projects that explore how artificial intelligence can enhance teaching and learning.

In this second installment, Smith addresses concerns around privacy, ethics and the impact of AI on fields such as the humanities.

How do we, as a University, address concerns around privacy and the ethical use of AI—and our responsibility to lead the conversation in these areas?
The primary mechanism for institutional responsibility will be the provost’s AI Governance Committee, which I will co-chair. Once formed, the committee’s aim will be to ensure that all policy development and program implementation stay aligned with the UW’s values. The committee will craft recommendations that directly address institutional data and privacy, ethics and bias, and areas of uncertainty that require deeper study.

My faculty colleagues, especially, will appreciate the tension between responding promptly to the now-widespread presence of AI in our lives and work, and deliberating inclusively and comprehensively. This is a complicated matter because AI isn’t one thing, and it’s changing all the time. The use cases (and potential use cases) envisioned across the UW are likely too broad for anyone to have the full picture anytime soon. So I expect that after the committee makes initial recommendations and we establish a framework for ensuring ethics, equity, safety, fairness, transparency and social responsibility, our policies will need to continually co-evolve with the technology and our understanding of it.

When we think of AI, we tend to think of it in terms of fields such as STEM and health care, where it is being leveraged to advance innovation, improve diagnoses and treatments and find cures. But what about the humanities? The arts? Is AI a threat to those fields? Will the use of generative AI lead to the loss of critical thinking and creativity?

The short answer is, I don’t know. And I don’t think anyone has the answer. My reading of history and human nature is that human critical thinking and creativity at their best are never at risk, under even the worst circumstances. But at their worst, our critical thinking and creativity can be deeply disappointing.

I would like for AI to offer us new tools to promote both critical thinking and creativity, and some of what I’ve read seems to share that vision (e.g., “Will the humanities survive artificial intelligence?” by D. Graham Burnett, a Princeton historian of science, in the New Yorker in April).

Though it wasn’t always called AI, I’ve always been excited about building software tools that help us with tedious but important tasks. An example is explaining a term I don’t know in a language or field I’m still learning, or transcribing a musical performance so I can try to reproduce something in it myself (“good artists copy, great artists steal,” many have said).

There’s also a scale element; a language model can read a lot faster than I can and may help me locate an answer to a technical question among the thousands of papers being published each year in my field. I don’t think there’s a serious danger that AI will do scholars’ thinking for us. I’ve been working among scholars my whole professional life, and they cannot not do their own thinking.

At the same time, I won’t write off the concerns some on campus have about the effects of AI on the state of our thinking or our society. We can’t adopt this stuff uncritically! We must take responsibility for helping our students prepare to lead and innovate ethically in a world where technology is moving fast.

We can do a lot to mitigate the risks by first recognizing that our students are no less intelligent, resilient, or well-intentioned than any other generation in human history. Then, we can engage openly and honestly about the challenge ahead of us.

I believe there are philosophical questions raised specifically by language models: automated fluency that is not grounded in human intellect and experience. Humans are grappling with something new, and the language we have for talking about it is not yet satisfying or rigorous. But I believe the minds we have at the UW are up to the challenge of making sense of what’s going on and finding constructive ways forward.

In light of state, federal and university budget constraints — and the decentralized nature of the UW — how can we provide faculty and staff with the AI-related resources and infrastructure they need to advance their work?
Resources are tight, so we have to be efficient by leveraging shared infrastructure, shared expertise, and shared learning. If we can centralize what’s hardest to get right—such as access to good tools, a pool of expertise to help projects that need it, foundational AI literacy for all, and governance—then the experts in each unit can focus on what makes their work and study unique.

I think this is where AI expertise can help; there’s an interesting push-and-pull between the idea of “general purpose” AI and customization. Right now the latter doesn’t seem to be a priority in commercial tech solutions, but I think it’s essential in the work of a university, where every individual researcher or learner faces a micro-universe of questions.

I want each scholar to be empowered to advance AI for their own purposes, with low barriers to entry, small environmental footprint, and the same complete privacy we have for our own thoughts. The bloated, one-size-fits-all solutions currently on the market give a glimmer of what may be possible, but we have more work to do to reach the kind of individual empowerment that I think is possible.

Missed part one of this series? Please visit our News page to read the vision, purpose and scope of the AI@UW initiative that Smith is leading as vice provost for AI, and how faculty and staff can become involved.