CIKM’10 Paper – Term Necessity Prediction

Posted on November 13, 2010

Query modeling — finding an optimal representation of a query — has been a hot topic in IR research (at least at UMass) in recent years. In addition to the classic theme of query expansion, new papers on query reduction, query-term weighting, and query structuring (based on document fields) have been published. We even had a lab seminar on this topic last year.

While each of these papers proposes a new representation of the user's query, this paper by Le Zhao and Jamie Callan is interesting in that they go back to a fundamental concept: term necessity. In their definition, term necessity — P(t|R) — is the probability that a term t occurs in documents relevant to a given query.

The odds-ratio derivation of the probabilistic ranking principle (PRP) says that optimality is achieved by ranking a document higher the more its terms show up in the relevance distribution (the distribution of terms in the relevant documents) and the less they show up in the non-relevance distribution. You can see that term necessity P(q_i|R) is central here, while the non-relevance term reduces to IDF, since the non-relevance distribution can be approximated by the whole collection.
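For reference, here is a sketch of that derivation in the usual binary-independence-model notation (the symbols follow the standard Robertson–Spärck Jones presentation, not necessarily the paper's exact notation):

$$
\text{score}(D) \;=\; \log \frac{P(R \mid D)}{P(\bar{R} \mid D)} \;\stackrel{\text{rank}}{=}\; \sum_{q_i \in Q \cap D} \log \frac{P(q_i \mid R)\,\bigl(1 - P(q_i \mid \bar{R})\bigr)}{P(q_i \mid \bar{R})\,\bigl(1 - P(q_i \mid R)\bigr)}
$$

Approximating the non-relevance distribution by the collection, $P(q_i \mid \bar{R}) \approx df_i / N$, the factors involving $\bar{R}$ collapse to roughly $\log(N / df_i)$, i.e. IDF, leaving $P(q_i \mid R)$ — the term necessity — as the only unknown quantity.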

Since P(t|R) can be estimated directly only when we know which documents are relevant to a given query (unlike P(t|NR), which reduces to IDF), the authors suggest techniques for estimating the quantity from many features, including whether a term is central to the topic, whether it can be replaced by synonyms, whether it is abstract, and so on.
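To make the target quantity concrete, here is a minimal sketch of the ground-truth necessity estimate: P(t|R) is simply the fraction of judged-relevant documents that contain term t. The documents and query below are made-up toy data, not from the paper.

```python
def term_necessity(term, relevant_docs):
    """P(t|R): fraction of relevant documents containing `term`."""
    if not relevant_docs:
        return 0.0
    hits = sum(1 for doc in relevant_docs if term in doc)
    return hits / len(relevant_docs)

# Toy relevant set for an imaginary query "prognosis of viral hepatitis";
# each document is represented as its set of terms.
relevant = [
    {"hepatitis", "prognosis", "liver", "viral"},
    {"hepatitis", "treatment", "liver"},
    {"hepatitis", "prognosis", "outcome"},
    {"cirrhosis", "liver", "prognosis"},
]

print(term_necessity("hepatitis", relevant))  # 0.75 -> nearly necessary
print(term_necessity("viral", relevant))      # 0.25 -> often replaceable
```

A high-necessity term like "hepatitis" deserves a high weight; a low-necessity term like "viral" can be down-weighted even though its IDF may be high.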

To calculate these features in a lower-dimensional term-document space, the paper employs singular value decomposition (SVD) on the top-K result documents for the initial query. This use of SVD for feature calculation is something I haven't seen in recent papers. I wonder what would happen if they used another concept-space projection technique (e.g., Latent Dirichlet Allocation).

The experimental results are summarized in the paper as follows:

Predicted necessity used as user query term weights significantly improves ad-hoc retrieval of verbose queries. On 6 standard TREC ad-hoc retrieval and web search collections of different sizes and judgment depths, predicted necessity brings a consistent and significant 10% to 25% improvement on verbose queries. Results also show that weighting terms by their ground truth necessity estimated from relevance judgments gives a significant 30% to 80% improvement over state-of-the-art ad-hoc retrieval models.

Compared with recent papers [1] [2] that study supervised training of term weights, I think the formal foundation (on the PRP) of this paper is a strength, yet they all ultimately seem to be heading in the same direction: finding the optimal weight for each query term. In fact, as the authors point out, much of the performance advantage of pseudo-relevance feedback techniques stems from the effective term weighting they bring in, as well as from the expansion terms.

Although the popularity of learning-to-rank methods seems to marginalize research on term weighting in general, which has been a holy grail of traditional IR research, I believe much work remains to be done in this space. Another paper, ‘Examining the Information Retrieval Process from an Inductive Perspective’ (by Ronan Cummins et al.), also derives a new term-weighting function by revisiting core principles of IR.

 

Posted in: IR