-------------------------------------------------------------------
1st International Workshop on Multi-Method Evaluation of Personalized Systems (MuMe 2018)
https://multimethods.info
held in conjunction with UMAP 2018 (User Modeling, Adaptation and Personalization)
http://www.um.org/umap2018/
8-11 July 2018 at Nanyang Technological University, Singapore
-------------------------------------------------------------------
Intro
-------------------------------------------------------------------
The MuMe 2018 workshop aims to raise awareness in the user modeling community of the importance of using multiple methods when evaluating recommender systems and other personalized systems.
A multi-method evaluation integrates several individual methods (e.g., a think-aloud study combined with an open-ended survey, or an offline prediction simulation on an open dataset combined with a survey containing closed- and open-ended questions) and thereby yields a richer, more integrated picture of the user experience and the quality drivers of personalized systems.
The primary goal of this workshop is to build a community around multi-method evaluation and to develop a long-term research agenda for the topic.
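To make the idea of combining methods concrete, here is a minimal, purely illustrative Python sketch (all data, column names, and the choice of metric are hypothetical, not taken from any submitted work) of how per-user results from an offline prediction simulation might be joined with survey responses so that both can be analyzed together:

import pandas as pd

# Hypothetical per-user accuracy from an offline prediction simulation,
# e.g., RMSE on a held-out split of an open dataset.
offline = pd.DataFrame({
    "user_id": [1, 2, 3],
    "rmse": [0.82, 1.10, 0.95],
})

# Hypothetical survey responses: a closed-ended satisfaction rating
# (1-5 Likert scale) and a free-text answer to an open-ended question.
survey = pd.DataFrame({
    "user_id": [1, 2, 3],
    "satisfaction": [4, 2, 5],
    "comment": ["good variety", "too repetitive", "spot on"],
})

# Joining on user_id lets accuracy and experience measures be
# analyzed together rather than in isolation.
combined = offline.merge(survey, on="user_id")
print(combined[["rmse", "satisfaction"]].corr())

Such a joint view is precisely what a single-method evaluation misses: neither the offline metric nor the survey alone shows how prediction accuracy relates to perceived quality.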
-------------------------------------------------------------------
Topics
-------------------------------------------------------------------
We solicit position and research papers (4 pages excluding references, UMAP 2018 Extended Abstracts Format) that address challenges in the multi-method evaluation of recommender systems and other personalized systems. This includes:
- "lessons learned" from the successful application of multi-method evaluations,
- "post mortem" analyses describing specific evaluation strategies that failed to uncover decisive elements,
- "overview papers" analyzing patterns of challenges or obstacles to multi-method evaluation, and
- "solution papers" presenting solutions towards identified challenges.
Questions addressed may include (but are not limited to):
- How can we select evaluation methods that help identify blind spots in user experience? What criteria can be used to compare and evaluate the suitability of methods for given evaluation objectives, and how can we develop such criteria?
- How can we integrate and combine the results of multiple methods into a comprehensive picture of user experience?
- What are the challenges and limitations of single- or multi-method evaluation of recommender systems? How can we overcome such hurdles?
- What are viable user-centric multi-method study designs (and guidelines) for evaluating recommender systems? What are the lessons learned from successful or unsuccessful user-centric multi-method study designs?
-------------------------------------------------------------------
Important Dates
-------------------------------------------------------------------
Submission deadline: April 17, 2018
Notification: May 15, 2018
Deadline for camera ready version: May 27, 2018
Workshop date: July 8, 2018
(all deadlines are Anywhere on Earth, AoE)
-------------------------------------------------------------------
Organizers
-------------------------------------------------------------------
Christine Bauer, Johannes Kepler University Linz, Austria
Eva Zangerle, University of Innsbruck, Austria
Bart P. Knijnenburg, Clemson University, USA
For details, visit the workshop's website: https://multimethods.info
-------------------------------------------------------------------