The advent of large language models (LLMs) offers transformative opportunities for personalization, enabling systems to predict and adapt to individual user needs through natural language. However, these innovations also bring significant challenges, particularly the opaque user representations found in traditional information-seeking systems, which raise concerns about users' trust, comprehension, and control over their data and the decision-making process. This workshop addresses the timely and critical need to confront these transparency challenges by exploring how LLMs can be leveraged to achieve transparent user engagement and personalization.
Our focus spans several research topics, including building transparent user profiles, supporting natural-language preference elicitation, and creating feedback loops centered on explanation-driven interactions. We are also interested in addressing the ethical and bias-related issues that arise from opaque representations. By advancing transparency, this workshop aims to improve the trustworthiness, accountability, and usability of systems employing LLMs, thereby reshaping user interaction paradigms across diverse domains such as e-commerce, education, writing assistants, and content streaming. To this end, we seek to provide a forum for discussing the challenges, opportunities, and future directions in leveraging LLMs and generative AI for more personalized, transparent, and ethical user engagement.