TRIADS Speaker Series: Large Neural Models' Self-Learning Symbolic Knowledge

Speaker: Heng Ji, University of Illinois Urbana-Champaign


Recent large neural models have shown impressive performance across various data modalities, including natural language, vision, programming languages, and molecules. However, they still exhibit surprising deficiencies (near-random performance) in acquiring certain types of knowledge, such as structured knowledge and action knowledge.

In this talk, Heng Ji proposes a two-way knowledge acquisition framework in which symbolic and neural learning approaches mutually enhance each other. In the first stage, explicit symbolic knowledge is elicited and acquired from large neural models. In the second stage, the acquired symbolic knowledge is leveraged to augment and enhance those large models. Ji will present three recent case studies demonstrating this framework.

Co-sponsored by the Digital Intelligence & Innovation Accelerator

Speaker Bio:

Heng Ji is a professor in the Computer Science Department, and an affiliated faculty member in the Electrical and Computer Engineering Department and the Coordinated Science Laboratory, at the University of Illinois Urbana-Champaign. She is an Amazon Scholar and the Director of the Amazon-Illinois Center on AI for Interactive Conversational Experiences (AICE). She received her B.A. and M.A. in Computational Linguistics from Tsinghua University, and her M.S. and Ph.D. in Computer Science from New York University. Her research interests focus on Natural Language Processing, especially Multimedia Multilingual Information Extraction, Knowledge-enhanced Large Language Models, Knowledge-driven Generation, and Conversational AI.

RSVP