Portfolio item number 1
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in Proceedings of the 30th ACM International Conference on Information & Knowledge Management (CIKM), 2021
In this work, we present a lightweight yet accurate deep learning method for predicting taxicabs' next locations, enabling targeted advertising based on the demographic information of those locations.
Published in Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021
We introduce HyperCLOVA, an 82B-parameter Korean variant of GPT-3 trained on a Korean-centric corpus of 560B tokens. Enhanced by our Korean-specific tokenization, HyperCLOVA with our training configuration achieves state-of-the-art in-context zero-shot and few-shot learning performance on various downstream tasks in Korean.
Published in Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL), 2022
In this work, we study the challenge of imposing roles on open-domain dialogue systems, with the goal of making the systems maintain consistent roles while conversing naturally with humans.
Published in Findings of the Association for Computational Linguistics: EMNLP 2022, 2022
We present a novel task and a corresponding dataset of memory management in long-term conversations, in which bots keep track of and bring up the latest information about users while conversing through multiple sessions.
Published in Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
We propose a novel alignment learning framework with synthetic feedback that does not depend on extensive human annotations or proprietary LLMs.
Published in arXiv preprint, 2024
We introduce HyperCLOVA X, a family of large language models (LLMs) tailored to the Korean language and culture, with competitive capabilities in English, math, and coding. HyperCLOVA X was trained on a balanced mix of Korean, English, and code data, then instruction-tuned with high-quality human-annotated datasets under strict safety guidelines reflecting our commitment to responsible AI.
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.