LLM-TiP @ KDD 2025

The 1st Workshop on Large Language Models for Transparency in Personalization

Overview

The advent of large language models (LLMs) offers transformative opportunities for personalization, enabling systems to predict and adapt to individual user needs through natural language. However, these innovations also bring significant challenges, particularly the opaque user representations inherited from traditional information-seeking systems, which raise concerns about user trust, comprehension, and control over data and decision-making processes. This workshop addresses the timely and critical need to tackle these transparency challenges by exploring how LLMs can be leveraged to achieve transparent user engagement and personalization.
Our focus spans several research topics, including building transparent user profiles, fostering natural language-driven preference elicitation, and creating feedback loops centered on explanation-driven interactions. Additionally, we are interested in addressing ethical and bias-related issues arising from opaque representations. By advancing transparency, this workshop aims to improve the trustworthiness, accountability, and usability of systems employing LLMs, thereby reshaping user interaction paradigms across diverse domains such as e-commerce, education, writing assistants, and content streaming. To this end, we seek to provide a forum for discussing the challenges, opportunities, and future directions in leveraging LLMs and generative AI for more personalized, transparent, and ethical user engagement.

Research Questions

Our workshop aims to answer key questions, such as:
How can LLMs enable more transparent user models, including profile building?
How can we train LLMs to generate accurate yet transparent user profiles?
How can users express and modify preferences through natural language for improved steerability?
What mechanisms are necessary to empower users to retain control and autonomy in utilizing their data?
How can we identify and address ethical and bias-related issues in LLM-driven personalization systems?
What are the best practices for fostering user trust and accountability in LLM-powered applications for personalization (e.g., recommender systems, dialogue systems)?

Topics of Interest

We welcome papers on all topics related to LLMs and generative AI for transparent user engagement and personalization. These topics include, but are not limited to:
Transparent and Scrutable User Representation: LLMs can generate textual summaries of user profiles that capture their preferences over time. These summaries provide users with a transparent view of how the system perceives their preferences, facilitating a better understanding of the recommendation process. These scrutable user representations can be edited by users to control the recommendation.
Natural Language Interaction for Preference Elicitation: Beyond static user profiles, LLMs can engage in natural language dialogue to elicit user preferences in a conversational manner. This allows for a more nuanced understanding of user preferences, including context, mood, subjective features, and specific needs at the moment, which can be difficult to infer from behavior alone. This also includes the development of conversational policies powered by reinforcement learning.
Personalized Explanation-driven Feedback Loop: By providing explanations for decision-making, LLM-based systems enable a feedback loop where users can directly interact with the explanations (e.g., by expressing agreement or disagreement). This feedback can refine user models in real-time, enhancing the accuracy of decision-making as well as the user's trust and engagement in the system. Recognizing that different users have different preferences for how explanations are presented, LLMs can also tailor the style, complexity, and detail level of explanations to satisfy diverse user needs, enhancing the personalization and effectiveness of the recommendation system.
User Autonomy, Ethical, and Bias Considerations: Empowering users to edit, audit, and personalize AI behaviors can promote automation with human oversight for ethical AI deployment. On the flip side, introducing user autonomy has the potential to introduce new types of biases (e.g., biases present in a model's pre-training data that are amplified during training). Better understanding these considerations will be essential, notably to obtain transparent and trustworthy user models.

Call for Papers

All the accepted submissions will be presented at the workshop, either in oral sessions or the poster session.
We invite high-quality research contributions and application studies in the following formats:
Original research papers, both long (limited to 8 content pages) and short (limited to 4 content pages);
Extended abstracts for vision, perspective, and research proposal (4 content pages);
Posters or demos on user modeling and interaction in recommendation systems through LLMs (4 content pages).
Papers that have been previously published or are currently under review for another journal, conference, or workshop will not be considered. Workshop papers should not exceed 12 pages in length (maximum 8 pages of main content + maximum 2 pages of appendices + maximum 2 pages of references). Papers must be submitted in PDF format according to the ACM template published in the ACM guidelines, using the generic “sigconf” sample. PDF files must have all non-standard fonts embedded. Workshop papers must be self-contained and written in English. The reviewing process is double-blind.
At least one author of each accepted workshop paper must register for the main conference. Workshop attendance is granted only to registered participants.

Important Dates

Time zone:
Submission deadline: 5/8/2025
Acceptance notification: 6/8/2025
Camera-ready: 6/22/2025

Invited Speakers

U of Toronto
Stanford
MSR
UMich, Ann Arbor
CMU




The Team

Mila, Stanford
Mila, HEC Montreal, CIFAR AI Chair
Mila, University of Montreal
Google Deepmind
UC San Diego
Cornell
Cornell
Cornell