Conference Contributions

Comparative Evaluation of Recommendation Systems for Digital Media

Authors: Domonkos Tikk, Roberto Turrin, Martha Larson, David Zibriczky, Davide Malagoli, Alan Said, Andreas Lommatzsch, Sandor Szeely

TV operators and content providers use recommender systems to connect consumers directly with content that fits their needs, their various devices, and the context in which the content is consumed. Choosing the right recommender algorithms is critical, and becomes more difficult as content offerings continue to expand rapidly. Because algorithms perform differently depending on the use case, including the content catalogue and the consumer base, theoretical estimates of performance are not sufficient. Rather, evaluation must be carried out in a realistic environment. The Reference Framework described here is an evaluation platform that enables TV operators to impartially compare not only the recommendation quality of algorithms but also the non-functional properties of complete recommendation solutions. The Reference Framework is being created by the CrowdRec project, which brings together innovative recommender system vendors and university researchers specializing in recommender systems and their evaluation. It currently provides batch-based evaluation modes; support for stream-based modes is planned for the future. It is able to encapsulate open source recommenders and evaluation frameworks, making it suitable for a wide range of evaluation needs.
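To make the idea of batch-based evaluation behind a common wrapper concrete, the following minimal Python sketch shows how a framework of this kind might encapsulate a recommender behind a shared interface and score both recommendation quality (precision@k) and a non-functional property (response time). All names here (Recommender, batch_evaluate, fit, recommend) are hypothetical illustrations, not the Reference Framework's actual API.

    import time
    from typing import Protocol

    class Recommender(Protocol):
        """Hypothetical interface a wrapped recommender would expose."""
        def fit(self, interactions: list[tuple[str, str]]) -> None: ...
        def recommend(self, user: str, k: int) -> list[str]: ...

    def batch_evaluate(rec: Recommender,
                       train: list[tuple[str, str]],
                       test: dict[str, set[str]],
                       k: int = 10) -> dict[str, float]:
        """Train on a fixed batch of (user, item) interactions, then
        measure precision@k and mean per-request latency on held-out users."""
        rec.fit(train)
        hits, latency = 0.0, 0.0
        for user, relevant in test.items():
            start = time.perf_counter()
            ranked = rec.recommend(user, k)       # top-k recommendations
            latency += time.perf_counter() - start
            hits += len(set(ranked) & relevant) / k
        n = len(test)
        return {"precision@k": hits / n, "mean_latency_s": latency / n}

Because every candidate algorithm is driven through the same interface on the same batch data, quality and latency figures become directly comparable across vendors, which is the point of an impartial evaluation platform.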