
Are hallucinations in text generation always undesirable? A perspective from text elaboration

Prof. Yue Dong, Department of Computer Science and Engineering, UCR

Recent developments in deep learning have led to substantial improvements in Natural Language Generation (NLG), particularly in fluency and coherence. At the same time, deep learning-based text generation is susceptible to producing unintended text that is not directly supported by the source document. Such unsupported text is called a hallucination and is often perceived as undesirable. In this talk, we investigate whether hallucinations are always undesirable through the lens of text summarization and simplification. Surprisingly, we show that many of these unsupported generations provide background knowledge that aids comprehension. In fact, although the primary tool of summarization and simplification is content reduction, content addition (also known as elaborative generation) may also be beneficial for many real-world applications.

