## WSDM 2011 Paper Reading

In this post, I would like to review several papers from WSDM 2011 that are, in my opinion, worth reading.

- “Personalizing Web Search using Long Term Browsing History” by
**Nicolaas Matthijs** and **Filip Radlinski**

This paper investigated the possibility of incorporating a user’s entire browsing history into the personalization framework so that ranking performance can be significantly improved. A user is represented as a “user profile”, which consists of terms extracted from visited web pages, queries and some meta information (e.g., clicks, time-stamps). These features are weighted under several weighting schemes, such as TF-IDF, modified BM25 and raw term frequencies. In addition, several re-ranking strategies are discussed. The experiments are rich: the authors conducted both offline and online experiments against non-trivial baselines, including the non-personalized Google ranking and a previously published personalization method. All results show significant ranking improvements from using long-term browsing history. This paper is interesting and also surprising in that the approach is straightforward and simple while the results are strong. I’m really impressed that no one did this before.
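
To make the idea concrete, here is a toy sketch of profile-based re-ranking in this spirit: build a term-weight profile from pages in the browsing history (raw term frequencies, the simplest of the weighting schemes mentioned), then re-order a result list by how well each snippet matches the profile. All function names and data here are my own illustration, not the authors’ implementation.

```python
from collections import Counter

def build_profile(visited_pages):
    """Aggregate raw term frequencies over all visited pages."""
    profile = Counter()
    for page in visited_pages:
        profile.update(page.lower().split())
    return profile

def profile_score(snippet, profile):
    """Score a result snippet by summing the profile weights of its terms."""
    return sum(profile[t] for t in snippet.lower().split())

def rerank(results, profile):
    """Stable sort: ties keep the original (non-personalized) order."""
    return sorted(results, key=lambda r: -profile_score(r, profile))

history = ["python tutorial generators", "python decorators tutorial"]
results = ["jaguar the animal", "python generators explained"]
profile = build_profile(history)
print(rerank(results, profile)[0])  # prints "python generators explained"
```

A real system would of course use IDF or BM25 weights and richer profile sources, but the re-ranking skeleton stays the same.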

- “Quality-Biased Ranking of Web Documents” by
**Michael Bendersky**, **W. Bruce Croft** and **Yanlei Diao**

This paper introduced a way to incorporate quality factors into ranking functions. It is a little unexpected that quality factors have rarely been considered in ranking models before. The proposed method is straightforward and builds on the Markov Random Field IR model, which is claimed to be one of the state-of-the-art IR models. It is surprisingly easy to embed these new features into the ranking model. The experiments demonstrated a significant improvement over baselines, including one that uses PageRank. Overall, this is a paper worth reading.
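
The core idea, as I read it, can be sketched as a weighted linear mix of a query-document relevance score with query-independent document-quality features. The feature names and weights below are assumptions for illustration only, not those learned in the paper.

```python
def quality_biased_score(relevance, quality_features, weights):
    """Linear combination of query-document relevance and document quality."""
    score = weights["relevance"] * relevance
    for name, value in quality_features.items():
        score += weights.get(name, 0.0) * value
    return score

# Made-up weights: pages dominated by stopwords are penalized,
# pages with a table of contents get a small boost.
weights = {"relevance": 1.0, "stopword_fraction": -0.5, "table_of_contents": 0.2}
doc_a = quality_biased_score(2.0, {"stopword_fraction": 0.8}, weights)
doc_b = quality_biased_score(1.9, {"stopword_fraction": 0.1, "table_of_contents": 1.0}, weights)
print(doc_b > doc_a)  # prints True: lower relevance, but higher quality, wins here
```

This also shows why embedding such features into an MRF-style ranker is easy: the model is already a weighted sum of feature functions, so quality features just extend the sum.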

- “Mining Named Entities with Temporally Correlated Bursts from Multilingual Web News Streams” by
**Alexander Kotov et al.**

This paper provides a novel approach to a hotly discussed research topic: mining bursty events, or named entities in this paper, from correlated text streams. Many previous methods are based on topic models. Here, the authors proposed a method based on the Markov Modulated Poisson Process (MMPP). Theirs is a two-stage approach: the first stage fits the MMPP model, and the second uses the fitted model to align correlated bursts via dynamic programming. The approach is computationally much simpler than topic models. Although it is interesting overall, the paper lacks comparisons with similar methods, so we do not know how well the approach performs in practice.

- “Everyone’s an Influencer: Quantifying Influence on Twitter” by
**Eytan Bakshy et al.**

This paper analyzed how users influence their followers on Twitter for a particular type of message: messages containing URLs. More specifically, they focused only on “bit.ly” URLs. The authors conducted three kinds of experiments. First, using several simple features, they found that past influence and the number of followers provide the most informative features. The second set of experiments was conducted on a small dataset where the authors asked Amazon Mechanical Turk workers to classify messages into topical categories (including spam). Unfortunately, they found that predictive performance decreased as content-based features were added; they concluded that content features are noisy for detecting the influence of an individual post. The third set of experiments concerned “targeting strategies”, namely how a hypothetical marketer might optimize the diffusion of information by systematically targeting certain classes of users. A simple cost function is proposed, and the authors demonstrated how different assumptions may lead to different costs. Overall, I feel this part is a little shallow and premature to be included in the paper. Note that in this paper, “influencer” is not pre-defined, but rather a function of several features.

- “Identifying Topical Authorities in Microblogs” by
**Aditya Pal** and **Scott Counts**

This paper has a similar goal to the previous one, but proposes a totally different approach. At a high level, they applied a clustering algorithm (a Gaussian Mixture Model) over a wide range of features. However, the authors only kept the clusters with high average values of three features (Topical Signal, Retweet Impact and Mention Impact), discarding all other clusters. In addition, they proposed a ranking mechanism that maps each feature through the CDF of a Gaussian and combines all features by multiplication. In the experiments, they first compared their method with graph-based ranking methods (e.g., PageRank) and found that, empirically, their method discovers users who are more interesting and more authoritative. Their later experiments focused on comparing human judges’ ratings of different methods, which I find less intuitive. Overall, the paper does not really demonstrate anything new or truly surprising.
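
The CDF-and-multiply ranking idea can be sketched in a few lines: normalize each feature through a Gaussian CDF (using the population mean and standard deviation of that feature), then multiply the per-feature scores. The feature names and all numbers below are made up for illustration; the exact feature definitions are in the paper.

```python
import math

def gaussian_cdf(x, mu, sigma):
    """CDF of a Gaussian with mean mu and std sigma, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def authority_score(features, stats):
    """Multiply the Gaussian-CDF-normalized values of all features."""
    score = 1.0
    for name, value in features.items():
        mu, sigma = stats[name]
        score *= gaussian_cdf(value, mu, sigma)
    return score

stats = {"topical_signal": (0.2, 0.1), "retweet_impact": (5.0, 3.0)}
strong = authority_score({"topical_signal": 0.5, "retweet_impact": 12.0}, stats)
weak = authority_score({"topical_signal": 0.1, "retweet_impact": 2.0}, stats)
print(strong > weak)  # prints True
```

Multiplication makes the score conjunctive: a user must be above average on every feature to rank highly, since any near-zero CDF value drags the whole product down.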

- “#TwitterSearch: A Comparison of Microblog Search and Web Search” by
**Jaime Teevan**, **Daniel Ramage** and **Meredith Ringel Morris**

This paper explored whether information-seeking behavior on Twitter differs from Web search. The authors began with a user study conducted within Microsoft. Although it is interesting to learn how people use Twitter search, the small scale of the study and the highly biased sample prevent us from generalizing its conclusions. They then turned to a large query log from the Bing Toolbar, with several interesting findings: 1) Twitter search is more focused on celebrities and temporal events. 2) Users issue the same queries to both Web search and Twitter search, implying shared underlying information needs. 3) Users repeat the same queries on Twitter search, indicating re-finding needs. The paper suggested several directions that future search tools can improve upon: 1) enhancing temporal queries, 2) enriching people search, 3) leveraging hashtags, 4) employing user history and 5) providing query disambiguation.

- “Using Graded-Relevance Metrics for Evaluating Community QA Answer Selection” by
**Tetsuya Sakai et al.**

This paper introduced graded-relevance metrics for the community QA answer retrieval task. The authors hired a number of human judges to evaluate the relevance of answers and adapted several graded-relevance metrics, such as nDCG, to answer retrieval. Two major conclusions are made in the paper: 1) graded-relevance metrics can detect many substantial differences between systems that would be overlooked by Best-Answer-based evaluation; 2) they can better identify hard questions than Best-Answer-based evaluation. Although these two points are novel to the CQA community, it is not totally surprising that graded relevance works better than binary relevance, given the experience of the IR community.

- “Recommender Systems with Social Regularization” by
**Hao Ma et al.**

The idea of the paper is pretty straightforward: social connections influence the results of recommendation. Two models are proposed on top of the basic matrix factorization framework. The first model assumes that a user’s tastes should be similar to the average of his or her friends’; the second puts the regularization on individual user pairs. Both models can utilize user-defined similarity functions; in the paper, the authors show results with cosine similarity and the Pearson correlation coefficient. The paper is easy to understand.

- “Low-order Tensor Decompositions for Social Tagging Recommendation” by
**Yuanzhe Cai et al.**

This paper is interesting. It improves the popular tensor factorization method for the problem of social tagging recommendation. The basic idea is that the 3rd-order decomposition of a tensor can be improved by adding zero-order, 1st-order and 2nd-order terms. These lower-order terms can be seen as averages over the corresponding dimensions; for instance, the zero-order term is essentially the average over all elements. Thus, these order statistics effectively add biases to the model, a popular technique in matrix-factorization-based recommender systems. The paper also discussed how to handle the missing-value problem. Overall, this paper is worth reading.

- “eBay: An E-Commerce Marketplace as a Complex Network” by
**Zeqian Shen** and **Neel Sundaresan**

This paper is a good reference for understanding eBay as a complex network. There is nothing strikingly new here, but it confirms many findings from other types of media, such as the Web and Wikipedia. Two interesting facts: 1) in terms of the Bow-Tie structure, the Strongly Connected Component (SCC) and IN parts of eBay are very small; 2) among popular triad structures, the most significant motif is “two sellers sell to the same buyer and they also sell products to each other”. In addition, they found that sellers have more interactions than buyers. The paper is worth a skim.
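
To make that motif concrete, here is a toy counter for the triad described above on a directed transaction graph, where an edge u -> v means “u sold to v”. This is my own illustration of the pattern, not code or data from the paper.

```python
from itertools import combinations

def count_motif(edges):
    """Count triads where two sellers share a buyer and also sell to each other."""
    edge_set = set(edges)
    sellers_of = {}
    for seller, buyer in edges:
        sellers_of.setdefault(buyer, set()).add(seller)
    count = 0
    for buyer, sellers in sellers_of.items():
        for s1, s2 in combinations(sorted(sellers), 2):
            # The two sellers must trade with each other in both directions.
            if (s1, s2) in edge_set and (s2, s1) in edge_set:
                count += 1
    return count

# a and b both sell to c, and a and b also sell to each other.
edges = [("a", "c"), ("b", "c"), ("a", "b"), ("b", "a"), ("a", "d")]
print(count_motif(edges))  # prints 1
```

Motif-analysis papers typically compare such counts against randomized graphs with the same degree sequence to decide whether a motif is significant.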

- “Let Web Spammers Expose Themselves” by
**Zhicong et al.**

This paper is interesting. It provides another view of identifying spammers. The basic idea is to mine web forums about SEO, since spammers may seek link exchanges in those forums. The authors formulated the problem in an optimization framework with semi-supervised learning techniques and demonstrated a substantial performance improvement.

- “Improving Social Bookmark Search Using Personalized Latent Variable Language Models” by
**Morgan Harvey**, **Ian Ruthven** and **Mark J. Carman**

This paper has at least two interesting points. First, the two models proposed in the paper are essentially like probabilistic tensor factorization. Second, the superiority of the second model suggests that it might be better to “generate” response variables from latent factors rather than the other way around. Thus, it would be nice to see a third model in which the user is also generated from latent factors, fully resembling tensor factorization. The drawback of the paper is the evaluation: it does not use any standard datasets.

- “Topical Semantics of Twitter Links” by
**Michael J. Welch et al.**

There are several interesting points in this paper, though it is straightforward overall and sometimes lacks rigorous experiments to support its claims. First, the authors demonstrated that PageRank on the follow graph is significantly different from PageRank on the retweet-induced graph; they would do better to provide Kendall’s tau or a similar statistic to quantify this. They showed that there is a drop in both rankings, although the paper did not go deep into this point. I personally think it may be due to the core of the Twitter network being extremely dense compared to the other parts. The second part of the paper tried to demonstrate that retweet-induced links carry much stronger semantic meaning, although the results are not very convincing.

- “A Framework for Quantitative Analysis of Cascades on Networks” by
**Rumi Ghosh** and **Kristina Lerman**

This paper is interesting and worth studying more thoroughly. First, the authors proposed a “cascade generating function”, which characterizes the cumulative effect on the current node, gathered from its connections. This simple function can be used to calculate a number of cascade properties, such as size, diameter, number of paths and average path length. Then, the authors introduced a computational framework for the generating function. More specifically, they construct a cascade graph from observed cascades and then generate two matrices, the contagion and length matrices, which encode all the information of all cascades; the original cascades can be reconstructed or approximated from these matrices. Equipped with these tools, the authors examined cascades from Digg and showed some interesting results.
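
To convey the flavor of a “cascade generating function”, here is a simplified recurrence of my own: each newly activated node receives a value of one plus a weighted sum of its already-activated parents’ values, and aggregating per-node values yields cascade-level properties such as size. This is a sketch of the general idea, not the paper’s exact definition.

```python
def cascade_values(activation_order, parents, alpha=1.0):
    """value(node) = 1 + alpha * sum of the values of its activated parents."""
    values = {}
    for node in activation_order:
        values[node] = 1.0 + alpha * sum(values[p] for p in parents.get(node, []))
    return values

# Toy cascade: r seeds it, a and b join from r, then c joins from both a and b.
parents = {"a": ["r"], "b": ["r"], "c": ["a", "b"]}
values = cascade_values(["r", "a", "b", "c"], parents)
print(len(values))  # cascade size: prints 4
print(values["c"])  # prints 5.0, i.e. 1 + (2.0 + 2.0)
```

Nodes reached through many paths accumulate large values, which is why such a function can expose properties like the number of paths, not just the raw size.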

- “Linking Online News and Social Media” by
**Manos Tsagkias**, **Maarten de Rijke** and **Wouter Weerkamp**

The idea of the paper is interesting, though I find the experiments a little confusing. The authors tried to link social media to news articles. The original research question posed by the authors is: given a news article, find social media utterances that implicitly reference it. In the end, however, the task becomes retrieving blog posts using news articles as queries and exploring which query representation works best. It turns out that the query model based on the article itself outperforms all others.
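
A minimal sketch of the article-as-query setup: treat the news article text as one long query and rank blog posts by a simple term-overlap score. The real paper compares much richer query models; this toy version (with made-up data) only illustrates the basic retrieval framing.

```python
from collections import Counter

def score(query_text, doc):
    """Sum, over distinct query terms, the term frequency in the document."""
    doc_tf = Counter(doc.lower().split())
    return sum(doc_tf[t] for t in set(query_text.lower().split()))

article = "volcano eruption disrupts european flights"
posts = [
    "my cat photos",
    "flights cancelled after the volcano eruption over europe",
]
ranked = sorted(posts, key=lambda p: -score(article, p))
print(ranked[0])  # prints the post about the eruption
```

The interesting modeling question in the paper is which parts of the article (title, body, entities) make the best query; in this skeleton, that choice simply changes what is passed as `query_text`.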

- “Predicting Future Reviews: Sentiment Analysis Models for Collaborative Filtering” by
**Noriaki Kawamae**

This paper introduced a fairly complicated extension of topic models to collaborative filtering and sentiment analysis. It adds a number of new latent variables to the model to explain words, items and ratings. One interesting point is that this model also uses the formalism in which response variables are fully explained by latent factors. To be honest, as with many topic models, the evaluation is not totally convincing.