Efficient data reconciliation

Munir Cochinwala, Verghese Kurien, Gail Lalk, Dennis Shasha

Research output: Contribution to journal › Article › peer-review

80 Scopus citations

Abstract

Data reconciliation is the process of matching records across different databases; it requires "joining" on fields that have traditionally been non-key fields. Operational databases are generally of sufficient quality for the purposes for which they were originally designed, but because the data in the different databases lack a canonical structure and may contain errors, approximate matching algorithms are required. Such algorithms can have many parameter settings, and the number of parameters affects the complexity of the algorithm through the number of comparisons needed to identify matching records across datasets. For the large datasets prevalent in data warehouses, this increased complexity can make matching impractical. In this paper, we describe an efficient method for data reconciliation. Our main contribution is the use of machine learning and statistical techniques to reduce the complexity of the matching algorithms by identifying and eliminating redundant or useless parameters. Experiments on actual data demonstrate the validity of our techniques: they reduced complexity by 50% while significantly increasing matching accuracy.
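The paper itself presents no code; the following Python sketch only illustrates the general idea described in the abstract, namely approximate matching on non-key fields and the elimination of matching parameters that add no discriminative value. The sample records, the difflib-based similarity, the variance-based pruning rule, and the thresholds are all assumptions made for illustration, not the authors' actual technique.

```python
# Illustrative sketch only: the field names, similarity measure, pruning rule,
# and thresholds are assumptions for illustration, not the paper's method.
from difflib import SequenceMatcher
from itertools import product
from statistics import pstdev

# Records from two databases with no shared key; field values may be dirty.
DB_A = [
    {"name": "John Smith", "street": "12 Oak Ave", "city": "Newark", "state": "NJ", "phone": "973-555-0101"},
    {"name": "Mary Jones", "street": "7 Elm St",   "city": "Edison", "state": "NJ", "phone": "732-555-0199"},
]
DB_B = [
    {"name": "Jon Smith",  "street": "12 Oak Avenue", "city": "Newark", "state": "NJ", "phone": "9735550101"},
    {"name": "M. Jones",   "street": "7 Elm Street",  "city": "Edison", "state": "NJ", "phone": "7325550199"},
]

FIELDS = ["name", "street", "city", "state", "phone"]   # candidate matching parameters


def sim(a: str, b: str) -> float:
    """Approximate string similarity in [0, 1] after light normalization."""
    norm = lambda s: "".join(ch.lower() for ch in s if ch.isalnum())
    return SequenceMatcher(None, norm(a), norm(b)).ratio()


def field_scores(fields):
    """Per-field similarity scores for every record pair across the two databases."""
    return {f: [sim(ra[f], rb[f]) for ra, rb in product(DB_A, DB_B)] for f in fields}


def prune_redundant(fields, min_spread=0.05):
    """Drop fields whose scores barely vary across pairs: they cannot separate
    matches from non-matches, so comparing them is wasted work."""
    scores = field_scores(fields)
    return [f for f in fields if pstdev(scores[f]) >= min_spread]


def match(fields, threshold=0.75):
    """Declare a pair a match if its mean field similarity clears the threshold."""
    pairs = []
    for ra, rb in product(DB_A, DB_B):
        score = sum(sim(ra[f], rb[f]) for f in fields) / len(fields)
        if score >= threshold:
            pairs.append((ra["name"], rb["name"], round(score, 2)))
    return pairs


if __name__ == "__main__":
    kept = prune_redundant(FIELDS)        # fewer fields means fewer comparisons per pair
    print("fields kept:", kept)           # "state" is identical everywhere and is dropped
    print("matches:", match(kept))
```

In this toy run the "state" field is constant across all pairs, so it is pruned before matching; the remaining fields are enough to pair the dirty records correctly with fewer comparisons, which is the flavor of parameter elimination the abstract describes.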

Original language: English (US)
Pages (from-to): 1-15
Number of pages: 15
Journal: Information Sciences
Volume: 137
Issue number: 1-4
DOIs
State: Published - Sep 2001
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Software
  • Control and Systems Engineering
  • Theoretical Computer Science
  • Computer Science Applications
  • Information Systems and Management
  • Artificial Intelligence

