Title: Pretrained Language Model and Semantic Textual Understanding
Speaker: Dr. David Seunghyun Yoon, Research Scientist @ Adobe Research
Time: Sep 13th, 2022, 13:00 ~ 14:00
Location:
Online: https://us02web.zoom.us/j/81170319980 (PW: 0913)
After the talk, we will have an informal Q&A session with Dr. Yoon. Please bring any questions you have about research or careers.
Abstract:
Semantic textual understanding is a fundamental research topic underpinning many text-related tasks (e.g., semantic search and word/phrase/sentence-level similarity and representation). Recently, we have seen major advances in this area by leveraging pre-trained language models (PLMs). However, it remains uncertain whether PLMs can understand the semantic meaning of phrases.
In this talk, I'll introduce recent techniques for computing textual representations, their strengths and limitations, and a new research resource.
Further, I'll present recent research that leverages a pre-trained textual encoder to retrieve textual information and to generate a synthetic training dataset for a named entity recognition task.
Bio:
David Seunghyun Yoon is a Research Scientist at Adobe Research. He received his M.S. and Ph.D. in Electrical and Computer Engineering from Seoul National University in 2017 and 2020, respectively. His research interests are in the areas of machine learning and natural language processing (NLP). He is particularly interested in learning language representations, multimodal representations (i.e., text, visual, and audio), and their practical use cases (e.g., semantic information retrieval). His work has been published in top NLP/AI conferences, including ACL, NAACL, EMNLP, and AAAI.