Abstract

Existing recommender systems have mainly focused on recommending individual items by utilizing user-item interactions, while little attention has been paid to recommending user generated lists (e.g., playlists and booklists). On one hand, user generated lists contain rich signals about item co-occurrence, as the items within a list are usually gathered around a specific theme. On the other hand, a user's preferences over a list also indicate his/her preferences over the items within the list, and vice versa. We believe that 1) if the rich relevance signal within user generated lists is properly leveraged, better recommendations for individual items can be provided, and 2) if user-item and user-list interactions are properly utilized, and the relationship between a list and its contained items is uncovered, the performance of user-item and user-list recommendation can be mutually reinforced.

Towards this end, we devise embedding factorization models, which extend the traditional factorization method by incorporating item-item (and item-item-list) co-occurrence information via embedding-based algorithms. Specifically, the factorization model is employed to capture users' preferences over items and lists, while embedding-based models are utilized to discover the co-occurrence information among items and lists. The gap between the factorization model and the embedding-based models is bridged by sharing the items' latent factors. Moreover, our proposed framework is capable of solving the new-item cold-start problem, where items have never been consumed by users but do exist in user generated lists. Overall performance comparisons and micro-level analyses demonstrate the promising performance of our proposed approaches.
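
To make the shared-latent-factor idea concrete, below is a minimal NumPy sketch (not the paper's exact formulation): a BPR-style preference term over user-item interactions and a skip-gram-style co-occurrence term over item pairs are optimized jointly, with the item factors V shared between the two. The loss form, sampling scheme, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 100, 500, 16

# Toy inputs: observed user-item pairs and item-item co-occurrence pairs
# (in EFM-Side the co-occurrences would be mined from user generated lists).
ui_pairs = [(rng.integers(n_users), rng.integers(n_items)) for _ in range(2000)]
cooc_pairs = [(rng.integers(n_items), rng.integers(n_items)) for _ in range(2000)]

U = 0.1 * rng.standard_normal((n_users, dim))  # user latent factors
V = 0.1 * rng.standard_normal((n_items, dim))  # item latent factors, shared by both terms
C = 0.1 * rng.standard_normal((n_items, dim))  # context vectors for the embedding term
lr, lam = 0.05, 0.01                           # assumed learning rate and regularization

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for epoch in range(10):
    # Preference term: BPR-style update on (user, positive item, sampled negative item).
    for u, i in ui_pairs:
        j = rng.integers(n_items)              # sampled negative item
        u_f, vi, vj = U[u].copy(), V[i].copy(), V[j].copy()
        g = sigmoid(-(u_f @ (vi - vj)))
        U[u] += lr * (g * (vi - vj) - lam * u_f)
        V[i] += lr * (g * u_f - lam * vi)
        V[j] += lr * (-g * u_f - lam * vj)
    # Co-occurrence term: pull items that co-occur in lists together, reusing V.
    for i, c in cooc_pairs:
        k = rng.integers(n_items)              # sampled negative context item
        vi, cc, ck = V[i].copy(), C[c].copy(), C[k].copy()
        g_pos, g_neg = sigmoid(-(vi @ cc)), sigmoid(vi @ ck)
        V[i] += lr * (g_pos * cc - g_neg * ck - lam * vi)
        C[c] += lr * (g_pos * vi - lam * cc)
        C[k] += lr * (-g_neg * vi - lam * ck)
```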

Datasets

We constructed the dataset by crawling data from Netease Cloud Music, which enables users to find individual songs or user generated playlists by entering keywords or browsing genres. We further processed the dataset by retaining playlists possessing at least 10 songs, songs appearing in at least 5 playlists, and users consuming at least 10 songs and 10 playlists. Based on these criteria, we ultimately obtained 18,528 users, 123,628 songs, 22,864 playlists, 1,128,065 user-song interactions, and 528,128 user-playlist interactions. In the experiments evaluating EFM-Side, user-playlist interactions were discarded, and we denote this dataset as User-Song. In the experiments on EFM-Joint, user-playlist interactions were included, and we denote this dataset as User-Song-Playlist.
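
For illustration, the sketch below shows how this kind of filtering could be applied with pandas, assuming DataFrames user_song, playlist_song, and user_playlist with the column names used in the code; the iterative re-filtering loop is our assumption, since removing songs or playlists can push other entities below the thresholds.

```python
import pandas as pd

def filter_dataset(user_song, playlist_song, user_playlist):
    """Keep playlists with >=10 songs, songs in >=5 playlists, and users with
    >=10 song interactions and >=10 playlist interactions (column names assumed)."""
    while True:
        n_before = len(user_song) + len(playlist_song) + len(user_playlist)

        # Playlists possessing at least 10 songs.
        ok_pl = playlist_song.groupby("playlist_id")["song_id"].nunique()
        ok_pl = set(ok_pl[ok_pl >= 10].index)
        playlist_song = playlist_song[playlist_song["playlist_id"].isin(ok_pl)]
        user_playlist = user_playlist[user_playlist["playlist_id"].isin(ok_pl)]

        # Songs appearing in at least 5 playlists.
        ok_sg = playlist_song.groupby("song_id")["playlist_id"].nunique()
        ok_sg = set(ok_sg[ok_sg >= 5].index)
        playlist_song = playlist_song[playlist_song["song_id"].isin(ok_sg)]
        user_song = user_song[user_song["song_id"].isin(ok_sg)]

        # Users consuming at least 10 songs and 10 playlists.
        s_cnt = user_song.groupby("user_id")["song_id"].nunique()
        p_cnt = user_playlist.groupby("user_id")["playlist_id"].nunique()
        ok_us = set(s_cnt[s_cnt >= 10].index) & set(p_cnt[p_cnt >= 10].index)
        user_song = user_song[user_song["user_id"].isin(ok_us)]
        user_playlist = user_playlist[user_playlist["user_id"].isin(ok_us)]

        # Stop once a full pass removes nothing.
        if len(user_song) + len(playlist_song) + len(user_playlist) == n_before:
            return user_song, playlist_song, user_playlist
```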

Research Questions

RQ1: How does our designed EFM-Side approach perform as compared with other state-of-the-art competitors?
RQ2: How does EFM-Side perform in handling the new-item cold-start problem?
RQ3: Does EFM-Side consistently beat other algorithms on items with different numbers of ratings?
RQ4: How do user-item and user-list recommendations perform under the EFM-Joint framework?
RQ5: Are the items within a list equally important? Is EFM-Joint able to find the most representative item within a list?

Individual Items Recommendation (RQ1)

To demonstrate the overall effectiveness of our proposed EFM-Side as introduced in Section 5.1.3, we compared EFM-Side with two state-of-the-art recommendation approaches: 1) BPR and 2) CoFactor. Users' interactions with lists were ignored in this stage. EFM-Side acquires the co-occurrence relationships among items from user generated lists, while CoFactor obtains them from users' item consumption sequences.
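
To make the difference between the two co-occurrence sources concrete, here is a small Python sketch that counts item co-occurrences either from user generated lists (as EFM-Side does) or from item consumption sequences with a sliding context window (in the spirit of CoFactor); the window size and exact counting scheme are illustrative assumptions rather than the settings used in the paper.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_from_lists(lists):
    """Count unordered item pairs that appear together in the same list."""
    counts = Counter()
    for items in lists:
        for a, b in combinations(sorted(set(items)), 2):
            counts[(a, b)] += 1
    return counts

def cooccurrence_from_sequences(sequences, window=5):
    """Count item pairs that appear within a sliding window of a consumption sequence."""
    counts = Counter()
    for seq in sequences:
        for pos, a in enumerate(seq):
            for b in seq[pos + 1: pos + 1 + window]:
                if a != b:
                    counts[tuple(sorted((a, b)))] += 1
    return counts

# Toy usage: two playlists vs. one listening sequence.
print(cooccurrence_from_lists([[1, 2, 3], [2, 3, 4]]))
print(cooccurrence_from_sequences([[1, 2, 3, 2, 4]], window=2))
```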

New-Item Cold-Start Problem (RQ2)

As introduced in Section 4.1.2, the new-item cold-start problem refers to recommending items that have never been consumed by users but exist in user generated lists. We compared the performance of EFM-Side with that of 1) Random and 2) BPR-map. Random selects items for recommendation at random and serves as a lower bound on ranking performance; if nothing is known about the new items, random guessing is the most reasonable baseline. BPR-map utilizes a two-step strategy to cope with the cold-start problem: in the first step, the latent factors of items are obtained by factorizing the item-item co-occurrence matrix; in the second step, the personalized ranking is optimized by fixing the item latent factors obtained in the first step and adjusting the user latent factors.
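
Below is a minimal NumPy sketch of a two-step strategy along these lines: item factors are first obtained by factorizing a (toy) item-item co-occurrence matrix, here with a truncated SVD, and are then frozen while user factors are fit with BPR-style updates. The factorization method, dimensionality, and learning rate are assumptions for illustration, not BPR-map's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 50, 200, 16

# Step 1: factorize a (toy) item-item co-occurrence matrix to get item factors.
cooc = rng.poisson(0.05, size=(n_items, n_items)).astype(float)
cooc = cooc + cooc.T                               # symmetric co-occurrence counts
u_svd, s, _ = np.linalg.svd(cooc, full_matrices=False)
item_factors = u_svd[:, :dim] * np.sqrt(s[:dim])   # fixed from here on

# Step 2: fix item factors and fit user factors with BPR-style updates.
user_factors = 0.1 * rng.standard_normal((n_users, dim))
ui_pairs = [(rng.integers(n_users), rng.integers(n_items)) for _ in range(2000)]
lr, lam = 0.05, 0.01

for epoch in range(10):
    for u, i in ui_pairs:
        j = rng.integers(n_items)                  # sampled negative item
        x = user_factors[u] @ (item_factors[i] - item_factors[j])
        g = 1.0 / (1.0 + np.exp(x))                # sigmoid(-x)
        user_factors[u] += lr * (g * (item_factors[i] - item_factors[j])
                                 - lam * user_factors[u])

# New items that appear only in lists already have factors from step 1,
# so they can be scored for any user as user_factors @ item_factors.T.
```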

Performance Analysis w.r.t. Items (RQ3)

As revealed in Figure 4c, the data sparsity problem is extremely serious for items: the majority of items have only a few ratings. Moreover, although the results in Section 5.2 show that EFM-Side outperforms the other algorithms overall, it remains unclear whether EFM-Side consistently beats the other competitors on items with different numbers of accumulated ratings. To answer this question, we further broke down the results obtained in Section 5.2 by grouping items whose number of training ratings falls within specific ranges (i.e., 1-10, 11-20, ..., >50).
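
A small sketch, assuming per-item metric values keyed by item id and a dict of training-rating counts (both hypothetical names), of how items can be grouped into these rating-count ranges:

```python
from collections import defaultdict

def bin_by_rating_count(n_ratings):
    """Map an item's number of training ratings to a range label."""
    if n_ratings > 50:
        return ">50"
    low = (n_ratings - 1) // 10 * 10 + 1       # 1-10, 11-20, ..., 41-50
    return f"{low}-{low + 9}"

def performance_per_bin(item_scores, item_rating_counts):
    """Average a per-item metric (e.g., recall contribution) within each bin."""
    buckets = defaultdict(list)
    for item, score in item_scores.items():
        buckets[bin_by_rating_count(item_rating_counts[item])].append(score)
    return {label: sum(v) / len(v) for label, v in buckets.items()}

# Toy usage: item 7 has 3 training ratings, item 9 has 64.
print(performance_per_bin({7: 0.2, 9: 0.8}, {7: 3, 9: 64}))
```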

Jointly Recommend Items and Lists (RQ4)

In the EFM-Joint framework, the recommendation performance for items and lists can be mutually reinforced. We compared the performance of EFM-Joint with other state-of-the-art algorithms: 1) BPR; 2) LIRE; and 3) CoFactor. LIRE treats a list as a combination of items, so that user preferences over items and lists reinforce each other. CoFactor discovers the co-occurrence among items and lists from users' interactions with both items and lists.

Importance of Items within Lists (RQ5)

As discussed in work on word embedding algorithms [15,24], syntactic and semantic analogies among words and sentences can be found by computing the cosine distance between their embeddings. Once we have obtained the embedding representations of both items and lists, we can compute the similarity (similarity is equal to $1$ - distance) between a list and its contained items. In our datasets, we only know the rank of each item within a list, but we do not know whether the items within a list are equally important. To answer this question, we explored the similarity between the embedding representation of a list and the embedding representations of its contained items. If the similarity between a list and an item within it is relatively high, that item is relatively representative of and important to the list.
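
As a brief sketch of this computation, assuming we already have the learned embedding of a list and of its contained items, the items can be ranked by cosine similarity to the list embedding; the function and variable names below are ours.

```python
import numpy as np

def rank_items_by_representativeness(list_vec, item_vecs, item_ids):
    """Rank a list's items by cosine similarity (= 1 - cosine distance) to the list embedding."""
    list_vec = list_vec / np.linalg.norm(list_vec)
    item_vecs = item_vecs / np.linalg.norm(item_vecs, axis=1, keepdims=True)
    sims = item_vecs @ list_vec
    order = np.argsort(-sims)
    return [(item_ids[k], float(sims[k])) for k in order]

# Toy usage with random embeddings; in practice these come from the trained model.
rng = np.random.default_rng(0)
print(rank_items_by_representativeness(rng.standard_normal(8),
                                        rng.standard_normal((4, 8)),
                                        ["song_a", "song_b", "song_c", "song_d"]))
```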

Conclusion and Future Work

This paper presents novel embedding factorization models for jointly recommending user generated lists and their contained items. The embedding factorization models combine factorization models with embedding-based algorithms. In particular, EFM-Side exploits lists as side information to discover the co-occurrence relationships among items. As a byproduct, it is capable of solving the new-item cold-start problem, where items are not yet consumed by users but do exist in lists. By utilizing user interactions with items and lists simultaneously, the EFM-Joint framework allows user-item and user-list recommendation to mutually reinforce each other. To validate the effectiveness of our proposed approaches, we constructed two benchmark datasets. Experimental results on these two datasets demonstrate the effectiveness of our work. We also performed micro-level analyses to examine whether the items within a list are equally important and whether EFM-Joint is able to find the most representative item in a list.


In the future, we plan to extend our work in two directions: 1) Modeling the sequential patterns in users' sequential behaviors. Although sequential information is ignored in user generated lists, it is extremely important in some real-world scenarios (e.g., music listening and visiting tourist attractions). 2) Realizing item and list recommendation in an online setting. Users' personal interests evolve over time, and so do user generated lists and their contained items. It would be helpful to utilize users' reviews to capture these dynamic changes.
