Deep Learning

Sharon Li looks at what happens when AI systems meet a messy reality.

“Almost all machine-learning models make an assumption,” says Sharon Li, a UW–Madison professor of computer sciences. “They assume the data that they will see in the future will look a lot like the data that they had seen in the past, during the time they were [being developed]. [But] if you think about what AI systems would encounter in reality, it often involves unfamiliar data [that the model has never seen before]. And unfortunately, these modern artificial intelligence systems can struggle with these unfamiliar situations.”

Li joined the UW faculty in 2020, and her field — machine learning, a subset of artificial intelligence — has been booming in recent years. On March 14, Li will be one of several UW experts separating AI fact from AI fiction in a UW Now Livestream conversation.

My Main Area of Research Is:

I broadly work in the area of machine learning and deep learning. More specifically, my research focuses on the aspect of AI safety, especially when it comes to deploying machine learning models in the real world. In machine learning, there often exists a major gap between how the model was trained versus what it will encounter in reality. Our goal is to develop a generic algorithm based on a generalized formulation that hopefully can be plugged into different use cases [to help machine learning models deal with unfamiliar data].
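To make the idea of "dealing with unfamiliar data" concrete, here is a minimal sketch of one common approach from the out-of-distribution detection literature: score each prediction by how familiar the input looks to the model (here, an energy score computed from the classifier's logits) and flag low-scoring inputs for extra caution. The function names, example logits, and threshold below are illustrative assumptions, not Li's specific algorithm.

```python
# Minimal sketch: flagging "unfamiliar" (out-of-distribution) inputs at prediction time.
# Assumes a classifier that outputs per-class logits; logits and threshold are made up.
import numpy as np

def energy_score(logits, temperature=1.0):
    """Energy-based familiarity score: higher means the input looks more in-distribution."""
    return temperature * np.log(np.sum(np.exp(logits / temperature), axis=-1))

def flag_unfamiliar(logits, threshold):
    """Return True for inputs whose score falls below the chosen threshold."""
    return energy_score(logits) < threshold

# Toy example: one confident prediction and one near-uniform (uncertain) one.
logits = np.array([
    [9.0, 0.5, 0.2],   # model is confident -> treated as familiar
    [0.4, 0.3, 0.5],   # model is unsure -> flagged for human review
])
threshold = 2.0        # in practice, tuned on held-out in-distribution data
print(flag_unfamiliar(logits, threshold))  # [False  True]
```

In a deployed system, inputs flagged this way might be routed to a human or to a fallback behavior rather than acted on automatically.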

Tonight on The UW Now Livestream, I’ll Discuss:

A lot of our conversation will revolve around clarifying what artificial intelligence really entails, what it is capable of versus what it is incapable of. What I really want to bring to this conversation is to encourage the audience to look at these technological advances from a more well-rounded perspective, not just in terms of what AI can do, but also what it cannot do yet.

The One Thing I Want Viewers to Remember Is:

The point that I want to make is despite the fact that artificial intelligence models have achieved remarkable success, they don’t necessarily know what they don’t know. That is the big motivation for my work. AI is really expanding its role in our lives in a lot of different application areas. Therefore, this need for safe and reliable decision-making is really, really critical.

To Get Smart Fast, Read:

A lot of researchers in machine learning nowadays like to announce and share their research over Twitter. People tend to write in a very digestible, accessible way. I personally like to tweet about our research. There are also some nice blogs. Google has some really nice blog posts around AI safety. An organization that’s dedicated to promoting AI safety and alignment is lesswrong.com.
