Expectations and Trust in AI Systems

Blaine Hoak PhDx’25 knows that AI has tremendous possibilities. But to keep AI safe, she looks at the ways that it can fail.

Blaine Hoak PhDx’25 is interested in artificial intelligence because she’s interested in biological intelligence. “I actually got my bachelor’s degree in biomedical engineering,” she says. “And after that, I took some classes in AI because obviously, there’s this tie between human systems and artificial intelligence. After learning about that and working more with AI, I got really interested, and I thought there are just so many possibilities here. There are so many different things that AI can do. It was really inspiring.”

As her understanding of AI deepened, so did her curiosity about the field’s strengths and limitations. In graduate school, she began studying AI systems’ weaknesses and failures. “Obviously, this is being widely adopted into our society, and it has all these great capabilities,” she says. “But ensuring that it’s actually safe and secure and trustworthy for people to use is really important.”

Hoak started out at Penn State, but when her adviser, Patrick McDaniel, took a position in UW–Madison’s School of Computer, Data & Information Sciences, Hoak followed him to Wisconsin — “an absolutely amazing school,” she says.

On March 14, The UW Now Livestream will look at AI fact and fiction, and Hoak will provide insight into security and reliability.

My Main Area of Research Is:

My main area of research is in trustworthy AI. More specifically, my work focuses on evaluating and advancing the security of AI systems. By identifying and measuring the extent of the vulnerabilities in these systems, and by understanding how, when, and why failure modes happen, we can develop new techniques to help overcome them and make AI systems more secure and more trustworthy for everyone.
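
To make “vulnerabilities” a little more concrete: one of the best-studied failure modes in this field is the adversarial example, where a tiny, deliberately chosen change to an input flips a model’s prediction. The sketch below shows the fast gradient sign method in PyTorch; the model, input, and epsilon value are illustrative assumptions, not details from Hoak’s research.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, label, epsilon=0.03):
        # Track gradients with respect to the input, not the model weights.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        # Nudge the input in the direction that most increases the loss;
        # with a small epsilon, the change is imperceptible to a person.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0, 1).detach()

A classifier that labels the original image correctly will often mislabel the perturbed one, even though the two look identical to a human. That gap between how AI systems and people perceive the world is exactly the kind of failure this research measures.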

Tonight on The UW Now Livestream, I’ll Discuss:

I’m going to talk about expectations, trust, and shortcomings in AI systems. We’ve all witnessed the incredible capabilities of AI; the impressive performance that AI systems achieve on a variety of tasks can leave people feeling that AI is human-like or even superhuman. However, there are still many, many ways in which AI systems fail. These issues not only highlight fundamental flaws in these systems, but they also showcase the discrepancies between how AI behaves and learns and how humans behave and learn. It’s really important that we reshape our trust in and expectations of these systems and not rely on them to learn or behave as humans do.

The One Thing I Want Viewers to Remember Is:

Setting realistic expectations about the systems that people are using is important. AI is capable of extraordinary things, but there’s still a lot of work to be done toward making these systems trustworthy. I definitely don’t want to send the message that people shouldn’t use AI. I think that we should be using AI, but we should not expect AI systems to be perfect.

To Get Smart Fast, Read:

The news usually has pretty extensive coverage: the New York Times, the Washington Post, and places like that cover the latest and greatest AI systems, and also stories of crazy failures. For people who are interested in learning a little more about how these systems work, it’s amazing how much high-level content you can find just by Google searching some of the questions you have.
