Call for papers

The primary goal of this workshop is to synthesize existing research in ubiquitous crowdsourcing and crowdsensing in order to establish guidelines and methodologies for evaluating crowd-based algorithms and systems. This goal will be achieved by bringing together researchers from the community to discuss and disseminate ideas for comparative analysis and evaluation on shared tasks and data sets. A variety of views on the evaluation of crowdsourcing has emerged across research communities, but so far there has been little effort to clarify key differences and commonalities in a single forum. The aim of this workshop is to provide such a forum, creating the time and engagement required to subject the different views to rigorous discussion. We expect the workshop to result in a set of short papers that clearly argue positions on the issue. These papers will serve as a base resource for consolidating research in the field and moving it forward. We further expect the discussions at the workshop to provide basic specifications for metrics, benchmarks, and evaluation campaigns that can then be considered by the wider research community.

Scope:

We invite the submission of short papers that identify and motivate comparative analysis and evaluation approaches for crowdsourcing. We encourage submissions that identify and clearly articulate problems in evaluating crowdsourcing approaches or algorithms designed to improve the crowdsourcing process. We welcome early work, and we particularly encourage position papers that propose directions for improving the validity of evaluations and benchmarks. Topics include but are not limited to:

  • Domain- or application-specific datasets for the evaluation of crowdsourcing/crowdsensing techniques
  • Cross-platform evaluation of crowdsourcing/crowdsensing algorithms
  • Generalized metrics for task aggregation methods in crowdsourcing/crowdsensing
  • Generalized metrics for task assignment techniques in crowdsourcing/crowdsensing
  • Online evaluation methods for task aggregation and task assignment
  • Simulation methodologies for testing crowdsourcing/crowdsensing algorithms
  • Agent-based modeling methods for using existing simulation tools
  • Benchmarking tools for comparing crowdsourcing/crowdsensing platforms or services
  • Mobile-based datasets for crowdsourcing/crowdsensing
  • Data sets with detailed spatio-temporal information for crowdsourcing/crowdsensing
  • Methodologies for using data collected online for offline evaluation

Submission Guidelines:

Each submitted paper should focus on one dimension of evaluation and benchmarks in crowdsourcing/crowdsensing. Multiple submissions per author are encouraged in order to articulate distinct topics for discussion at the workshop. Papers are welcome to argue the merits of an approach or problem already published in earlier work by the author (or anyone else). Papers should clearly identify the analytical and practical aspects of evaluation methods and their specificity in terms of crowdsourcing tasks, application domains, and/or type of platform. During the workshop, papers will be grouped into tracks, with each track elaborating on a particular critical area that merits further work and study.

Submitted papers must be original contributions that are unpublished and are not currently under consideration for publication by other venues. All submissions will be reviewed by the Technical Program Committee for relevance, originality, significance, validity, and clarity. We are also exploring opportunities to publish extended versions of workshop papers as journal articles and book chapters.