HCIR’11 — an eye-opening experience

Posted on October 25, 2011


HCIR’11 was like a great school to me. I’ve always felt that a good one-day workshop can be better than a week-long conference, and this one was a special opportunity for me as I try to extend my horizons toward the more user-conscious side of IR research.

Gary’s keynote was particularly enchanting, providing a good overview of HCIR research as well as its ongoing challenges. I went on to look up his HCIR overview lecture at MIT and found it interesting as well. There he describes the HCIR model as follows:

Think of IR from the perspective of active human with information needs, information skills, powerful IR resources (including other humans), situated in global and connected communities — all of which evolve over time.

What an engaging description! He also outlines several challenges for HCIR research, some of which include:

  • How can we shift the focus of research from retrieval to understanding and problem solving?
  • What are sensible ways to combine several different modes of evaluation? (lab + field study + simulation)
  • How can we evaluate a user’s whole interaction with the system, beyond query-based effectiveness measures?

In the following sessions, we saw a great diversity of work. The tasks were many and varied — health information finding, search over e-government documents, known-item finding in personal collections, exploratory search in paper archives. Most people employed user studies to examine various aspects of a given task, although there were log-based and simulation studies as well.

These studies are certainly different from what I’ve seen at SIGIR or CIKM, where people focus on improving well-known performance metrics in a few ‘major’ search domains, such as web search, and the role of the user is minimized, if considered at all. While I certainly think that the field of IR can benefit from a richer understanding and modeling of users in general, I also think HCIR can benefit from traditional IR’s emphasis on comparative evaluation over standard tasks.

From this perspective, I found this year’s HCIR Challenge an interesting effort in the right direction. We saw demos of several systems that tackled the information availability problem over the CiteSeer corpus. While traditional demos just show the operation of a single system, the focus here was on evaluating a set of systems over a set of possible tasks. Another important point was that people (the workshop participants) did the evaluation, instead of relying on some automated metric.

While coast-to-coast travel to attend a one-day workshop wasn’t a choice I was sure about, it certainly provided me with new perspectives on the field. Kudos to the organizers, who did a great job! Hopefully we’ll see more interesting research that addresses some of the challenges above.

p.s. Rob Capra at UNC posted a SIGIR Forum report, which gives a nice summary of the workshop.

Posted in: HCI, IR