Abstract
Keyword-based search engines often return an unexpected number of results. Zero hits are naturally undesirable, while too many hits are likely to be overwhelming and of low precision. We present an approach for predicting the number of hits for a given set of query terms. Using word frequencies derived from a large corpus, we construct random samples of combinations of these words as search terms. We then fit a regression function relating the computed probabilities of these search terms to their observed hit counts. This regression function is used to predict the hit counts for a user's new searches, with the intention of avoiding information overload. We report the results of experiments with Google, Yahoo! and Bing to validate our methodology. We further investigate the monotonicity of the result counts returned by these three search engines for negated search terms.
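The pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the unigram frequencies and the (log-probability, log-hit-count) sample pairs below are hypothetical stand-ins, the independence assumption for multi-word queries is our simplification, and the log-log linear fit is one plausible choice of regression function.

```python
import math

# Hypothetical relative unigram frequencies from a large corpus (made-up values).
word_prob = {"neural": 1e-4, "network": 3e-4, "search": 5e-4, "engine": 2e-4}

def query_log_prob(terms):
    # Independence assumption: P(query) is the product of term probabilities,
    # so its log is the sum of the term log-probabilities.
    return sum(math.log10(word_prob[t]) for t in terms)

# Hypothetical training pairs: (log10 query probability, log10 observed hit count)
# gathered by submitting randomly sampled term combinations to a search engine.
samples = [(-7.5, 3.1), (-6.8, 4.0), (-6.0, 4.9), (-5.2, 5.8)]

def fit_line(points):
    # Ordinary least-squares fit y = a*x + b.
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

a, b = fit_line(samples)

def predict_hits(terms):
    # The regression maps a query's log-probability to a predicted log hit count.
    return 10 ** (a * query_log_prob(terms) + b)

print(f"Predicted hits for 'neural network': {predict_hits(['neural', 'network']):.0f}")
```

A prediction far below a usability threshold could warn the user of a likely zero-hit query, while a very large prediction could prompt query refinement before submission.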
Original language | English (US) |
---|---|
Title of host publication | 2010 IEEE/WIC/ACM International Conference on Web Intelligence, WI 2010 |
Pages | 162-166 |
Number of pages | 5 |
Volume | 1 |
DOIs | |
State | Published - Dec 13 2010 |
Event | 2010 IEEE/WIC/ACM International Conference on Web Intelligence, WI 2010 - Toronto, ON, Canada. Duration: Aug 31 2010 → Sep 3 2010 |
Other
Other | 2010 IEEE/WIC/ACM International Conference on Web Intelligence, WI 2010 |
---|---|
Country | Canada |
City | Toronto, ON |
Period | 8/31/10 → 9/3/10 |
All Science Journal Classification (ASJC) codes
- Artificial Intelligence
- Computer Networks and Communications
- Software