Over the last six decades, information retrieval scholars and practitioners have put enormous effort into developing an arsenal of evaluation techniques and frameworks, ranging from collection-based approaches to user studies and in-situ methods, to assess the effectiveness of information access systems (mainly search and recommender systems). In an era of enormous digital data availability, information access systems have become ubiquitous in all kinds of environments, while the number and heterogeneity of tasks people perform through them is ever increasing. Beyond search and recommendation, companies nowadays offer all kinds of information access services, or components of them: identifying entities, extracting relations, summarizing documents, creating timelines of news, predicting outbreaks of diseases, diversifying online product offers, mining business reputation in online text, and finding experts are only a few examples.
Small and medium-sized enterprises (SMEs) often lack the resources needed to develop proper evaluation infrastructures, as well as to follow research developments in the field of evaluation. Similarly, academics lag behind in (a) understanding the practical issues raised by the evaluation of real systems (e.g., even depth-k pooling is often infeasible when an SME has developed only a single ranking algorithm), and (b) sensing the breadth of applications and tasks on which systems require evaluation, and the challenges they pose.
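To illustrate the pooling issue mentioned above, a minimal sketch of depth-k pooling (the function name is hypothetical): the judgment pool is the union of the top-k documents returned by several systems, so with a single system the pool collapses to that system's own top k, and the resulting relevance judgments cannot support a fair comparison against alternatives.

```python
def depth_k_pool(rankings, k):
    """Build a judgment pool: the union of the top-k documents
    from each system's ranked list."""
    pool = set()
    for ranking in rankings:
        pool.update(ranking[:k])
    return pool

# With several systems the pool covers a diverse set of candidates:
systems = [["d1", "d2", "d3"], ["d4", "d1", "d5"], ["d2", "d6", "d7"]]
print(sorted(depth_k_pool(systems, 2)))  # ['d1', 'd2', 'd4', 'd6']

# With a single system the pool is just that system's own top-k,
# so judgments are biased toward the documents it already retrieves:
print(sorted(depth_k_pool([["d1", "d2", "d3"]], 2)))  # ['d1', 'd2']
```

This is a sketch, not a prescription; in practice pooling depth and the number of contributing runs are chosen per test collection.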
The goal of this workshop is to bring together, on the one hand, SMEs that have (novel) evaluation problems they need to solve and, on the other, academic and industrial researchers who have done important theoretical and practical work on evaluation, with the aim of offering both groups hands-on experience with practical issues and their solutions. We plan to reach out to SMEs throughout Europe, collect the evaluation problems they face beforehand, and spend a day discussing and tackling these problems during the workshop.