179 Variability Assessment of Nationally Reportable Conditions Across State Jurisdictions

Sunday, June 14, 2015: 3:00 PM-3:30 PM
Exhibit Hall A, Hynes Convention Center
Denisha N. Abrams, Northrop Grumman Information Systems, Atlanta, GA
Shu McGarvey, Northrop Grumman Information Systems, Atlanta, GA
Catherine Staes, University of Utah, Salt Lake City, UT

BACKGROUND: In the United States, selected diseases and conditions must be reported to public health authorities by physicians, hospitals, and laboratories to support the control of diseases and poisonings. Currently, health care personnel must manually review various sources to determine the applicable reporting criteria. The Reportable Condition Knowledge Management System (RCKMS) project is developing an infrastructure to present reporting criteria in both human-readable and electronic formats to support automated reporting. The objectives of the research described in this poster were to: a) describe variation in reporting specifications across jurisdictions, and b) identify opportunities to harmonize specifications to support automated event detection, particularly for health care settings and laboratories that report to more than one jurisdiction.
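
As an illustration of what reporting criteria expressed in an electronic format might look like, the sketch below encodes a single laboratory reporting criterion as structured data. It is a minimal, hypothetical example: the field names, values, and structure are invented for illustration and do not represent the actual RCKMS data model.

```python
# Hypothetical sketch of a reporting criterion expressed as structured data
# rather than prose. All field names and values are illustrative only and do
# not reflect the actual RCKMS representation.

pertussis_lab_criterion = {
    "condition": "Pertussis",
    "jurisdiction": "Example State",
    "criterion_category": "laboratory",       # clinical | laboratory | epidemiologic
    "logic": {
        "any_of": [
            {"test": "Bordetella pertussis culture", "result": "positive"},
            {"test": "Bordetella pertussis serology", "result": "positive"},
        ]
    },
    "report_within": "24 hours",               # timeliness requirement (hypothetical)
    "report_to": "state health department",    # reporting destination (hypothetical)
}
```

Expressing criteria in a structured form like this is what could allow an electronic health record or laboratory system to evaluate reportability automatically, rather than relying on manual review of narrative requirements.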

METHODS: The variability assessment was conducted using data collected from jurisdictions that currently participate or previously participated in RCKMS pilot projects, including seven states (Colorado, Delaware, Illinois, New York, Utah, Virginia, and Washington) and four local jurisdictions (Houston, New York City, Southern Nevada, and San Diego). A workgroup composed of pilot participants and subject matter experts met weekly to assess variability and to determine where it could be reduced and specifications harmonized across jurisdictions. We consolidated the jurisdictional reporting specifications and used the Council of State and Territorial Epidemiologists (CSTE) Position Statements for reporting pertussis and blood lead levels as a guide for comparison. We identified differences in the clinical, laboratory, and epidemiologic criteria and recorded the reasons for the deviations across jurisdictions. Two additional reportable events will be analyzed as part of this pilot.

RESULTS: Participating jurisdictions identified complexities in interpreting reporting requirements concerning what should be reported, when, and to whom. Among the three categories of criteria (clinical, laboratory, and epidemiologic), no category showed complete agreement in the logic across jurisdictions. For pertussis, for example, we found multiple names for the same detection test, and the methods of detection varied across jurisdictions (e.g., culture versus serologic testing). During the weekly workgroup sessions, participants examined the differences in reporting logic and collaboratively developed a proposed logic set that met each jurisdiction's reporting requirements.
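
The sketch below illustrates, in simplified form, the kind of laboratory-criteria variability described above and one way a harmonized logic set could satisfy every jurisdiction's requirement, namely by accepting any detection method that any participating jurisdiction requires. The jurisdiction names, test-name synonyms, and rules are hypothetical and are not the pilot's actual logic.

```python
# Hypothetical illustration of jurisdictional variability in pertussis
# laboratory criteria and of a harmonized logic set built as the union of
# jurisdictional requirements. Names and rules are invented for illustration.

# Accepted detection methods per jurisdiction (hypothetical).
JURISDICTION_LAB_CRITERIA = {
    "Jurisdiction A": {"culture"},              # accepts culture only
    "Jurisdiction B": {"culture", "serology"},  # accepts culture or serology
}

# Different local names that can refer to the same underlying test method.
TEST_NAME_SYNONYMS = {
    "B. pertussis culture": "culture",
    "Bordetella pertussis isolation": "culture",
    "pertussis antibody titer": "serology",
}

def is_reportable(test_name: str, jurisdiction: str) -> bool:
    """Return True if a positive result for this test meets the
    jurisdiction's laboratory reporting criteria."""
    method = TEST_NAME_SYNONYMS.get(test_name)
    return method in JURISDICTION_LAB_CRITERIA.get(jurisdiction, set())

# One simple harmonization: accept any method required by any jurisdiction,
# so a single rule set triggers a report whenever any jurisdiction needs one.
HARMONIZED_LAB_CRITERIA = set().union(*JURISDICTION_LAB_CRITERIA.values())

if __name__ == "__main__":
    print(is_reportable("pertussis antibody titer", "Jurisdiction A"))  # False
    print(is_reportable("pertussis antibody titer", "Jurisdiction B"))  # True
    print(sorted(HARMONIZED_LAB_CRITERIA))                              # ['culture', 'serology']
```

A harmonized logic set would also need to account for differences in timeliness and routing (when and to whom to report), which this sketch omits.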

CONCLUSIONS: Variation does exist among jurisdictions, and collective data harmonization efforts across states can reduce variability and streamline requirements, thus improving the quality of data and reports. Additionally, this research revealed the need to consider how reporting criteria could be expressed differently to support automation, which could also influence how position statements are expressed in the future.