
An intervention to improve notifiable disease reporting using ambulatory clinics

Published online by Cambridge University Press:  09 May 2008

M. J. TREPKA*
Department of Epidemiology and Biostatistics, Stempel School of Public Health, Florida International University, FL, USA; Office of Epidemiology and Disease Control, Miami-Dade County Health Department, FL, USA
G. ZHANG
Office of Epidemiology and Disease Control, Miami-Dade County Health Department, FL, USA
F. LEGUEN
Office of Epidemiology and Disease Control, Miami-Dade County Health Department, FL, USA
*Author for correspondence: Dr M. J. Trepka, Florida International University, University Park, HLS II 595, 11200 SW 8th Street, Miami, FL 33199, USA. (Email: trepkam@fiu.edu)

Summary

Strong notifiable disease surveillance systems are essential for disease control. We sought to determine whether a brief informational session between clinic and health department employees, followed by reminder faxes and a newsletter, would improve reporting rates and timeliness in a notifiable disease surveillance system. Ambulatory clinics were randomized either to an intervention group, which received the informational session, faxed reporting reminders, and a faxed newsletter, or to a control group. In both the intervention and control clinics, the number of cases reported and the timeliness of reporting improved; however, none of the changes in either group was statistically significant. Despite improved communication between the health department and the clinics, the intervention did not significantly improve the level or the timeliness of reporting. Other types of interventions, such as simplifying the reporting process, should be considered to improve reporting.

Original Papers
Copyright © 2008 Cambridge University Press

INTRODUCTION

Public health surveillance systems are essential for monitoring rates and distributions of infectious diseases so that outbreaks can be identified and controlled [1]. In addition to naturally occurring outbreaks, the identification of biological terrorism events also depends on local and state surveillance systems [2]. Communicable disease surveillance in the United States is primarily based on a passive, notifiable disease surveillance system, in which laboratories and health-care providers report cases of notifiable diseases to local or state health departments [3]. In addition to specific diseases, the majority of states require reporting of suspected outbreaks, and many also require reporting of unusual conditions [3]. The early identification of the hantavirus outbreak in New Mexico was partially due to an effective surveillance system and a willingness of providers to report [4], and the rapid identification of the anthrax attack of 2001 in Palm Beach, Florida was due to an infectious disease physician immediately notifying the Palm Beach Health Department of a possible anthrax case [5]. Although the identification of the hantavirus outbreak and the anthrax attack are examples of traditional public health surveillance systems working well, systems often perform sub-optimally because of delays in reporting or underreporting. A review of studies evaluating underreporting in the United States between 1970 and 1999 indicated that on average 79% of sexually transmitted disease, tuberculosis, and acquired immunodeficiency syndrome cases were reported, compared with 49% for other conditions [6], a group that includes agents that could be used in a biological terrorist attack. Several studies have been conducted to better understand underreporting [7–14]. Reasons for underreporting offered by clinicians in these studies include believing that laboratories or someone else reports for them [7–10, 14], not knowing how or what to report [7, 9–11, 14], not knowing that they had to report [9], not feeling responsible for reporting [8], believing that the data are not acted upon [11, 13], having confidentiality concerns [7, 14], and finding reporting too time-consuming [7, 11, 12]. All but the last two reasons are primarily related to knowledge deficits.

Syndromic surveillance systems, which monitor the number of people presenting with disease syndromes rather than specific diagnosed conditions, are being developed to identify outbreaks earlier [15]. Although these systems may prove effective in identifying illnesses in their early stages or non-reportable conditions such as influenza, they complement rather than replace traditional reportable disease surveillance because of their own limitations, such as not being useful for identifying isolated cases of rare diseases or small clusters of illness. The index case of the anthrax attack of 2001 would not have been identified by syndromic surveillance. Similarly, the West Nile virus outbreak in New York City was identified because an infectious disease practitioner reported four cases of an uncommon condition, encephalitis with severe muscle weakness, rather than because of an increase in meningitis or encephalitis cases [16]. Therefore, in addition to developing new surveillance systems such as syndromic surveillance, existing notifiable disease surveillance systems should be improved.

Miami-Dade County, Florida (2002 estimated population 2·3 million) [17] had 6108 licensed physicians in 2003. In Florida all cases of notifiable diseases must be reported by laboratories, hospitals, and physicians [18]. Every year physicians are sent a list of the current notifiable diseases and given reporting forms so that they can report notifiable diseases by fax or mail.

The extent of underreporting in Miami-Dade County is unknown. However, there have been several situations in which cases were reported by laboratories but not by physicians, and in which physicians did not report cases in a timely manner, leading to delayed control measures. Compared with ambulatory clinics, underreporting and reporting delays by hospitals seem to occur less frequently, possibly because of the strong relationship between infection control practitioners and the Miami-Dade County Health Department (MDCHD) and because reportable conditions are frequently diagnosed at hospitals. When MDCHD employees have contacted providers who did not report cases (e.g. cases identified through laboratory reporting only), providers' explanations included: (1) not knowing that they needed to report, (2) believing that laboratories report for them, (3) the case having inadvertently not been reported, or (4) not knowing how to report; these explanations are similar to those reported in other areas [7–11, 14].

To have an effective surveillance system, it is also important that clinicians receive regular training about disease reporting [16] and that public health authorities disseminate surveillance results relevant to clinicians on an ongoing basis [16, 19]. At the time of the study, MDCHD's contact with most providers was limited to an annual mailing advising physicians about reporting requirements and to the instances when additional information was needed to complete a case investigation. A monthly surveillance report had been disseminated since 1999, but primarily to hospitals because of the lack of clinic contact information.

Thus, in order to improve reporting, we believed that we had to communicate reporting requirements in another way, in addition to the mailing, and provide regular feedback to health-care providers. Furthermore, studies have indicated that reporting can be improved by active surveillance, in which health department employees actively solicit cases [20–25]. Active surveillance, however, can be labour-intensive and thus costly. Therefore, we conducted a randomized controlled trial of ambulatory clinics in Miami-Dade County to determine whether a brief one-on-one informational session between clinic and MDCHD employees, followed by a bi-weekly faxed reporting reminder and a monthly faxed surveillance newsletter, would improve reporting rates and timeliness of reporting compared with control clinics, which continued to receive only the annual mailed disease reporting instructions.

MATERIALS AND METHODS

Ambulatory clinic selection

Ambulatory clinics in the Miami-Dade County area were identified manually from the ‘physicians and surgeons’ sections of the Miami-Dade County Yellow Pages. These included the ‘physicians and surgeons – MD’, ‘physicians and surgeons – gynecology’, ‘physicians and surgeons – gynecology and obstetrics’, ‘physicians and surgeons – general practice’, ‘physicians and surgeons – infectious diseases’, ‘physicians and surgeons – internal medicine’, ‘physicians and surgeons – obstetrics’, and ‘physicians and surgeons – pediatrics’ sections. Clinics were called by MDCHD employees to obtain contact information, including the clinic fax number; the number, names, and specialities of physicians practising at the clinic; the estimated number of patients seen in the clinic during an average week; and the estimated number of patients seen in the clinic during a year. Ambulatory clinics affiliated with hospitals were not included because area hospitals use their infection control staff to report notifiable diseases diagnosed among in-patients or outpatients in their affiliated clinics. Health department clinics were also excluded. Duplicate listings and clinics in which the majority of providers were not primary-care providers (e.g. family practitioners, paediatricians, obstetricians/gynaecologists, or internists) or infectious disease specialists were excluded from the study. The remaining clinics were randomized to the intervention or control group by using SAS version 9.0 (SAS Institute, Cary, NC, USA) to generate 503 random numbers so that each clinic had a number; the numbers were then sorted, and the clinics with the first 278 numbers were assigned to the intervention group and the remainder to the control group. In some cases a physician worked at two or more clinics; however, randomization was at the clinic and not the physician level.
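For illustration, the sort-on-random-numbers allocation described above can be sketched as follows. This is a minimal Python sketch, not the authors' code (the study used SAS 9.0); the seed and function names are assumptions made for exposition.

```python
import random

def randomize_clinics(clinic_ids, n_intervention=278, seed=2003):
    """Assign clinics to intervention/control by sorting on random numbers.

    Mirrors the described procedure: one random number per clinic, sort,
    first n_intervention clinics go to the intervention arm.
    """
    rng = random.Random(seed)
    # Pair each clinic with a random number (one per clinic).
    keyed = [(rng.random(), clinic) for clinic in clinic_ids]
    keyed.sort()  # sort on the random key
    intervention = [clinic for _, clinic in keyed[:n_intervention]]
    control = [clinic for _, clinic in keyed[n_intervention:]]
    return intervention, control

# Example: 503 eligible clinics; the first 278 after sorting form the intervention group.
intervention, control = randomize_clinics([f"clinic_{i}" for i in range(503)])
assert len(intervention) == 278 and len(control) == 225
```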

Intervention

The intervention clinics were divided among four MDCHD employees by geographic area. Each of the four employees was a person whose usual job involved obtaining information for cases reported to the surveillance system and performing contact investigations. During the last two weeks of May and the month of June 2003, he/she met once with the person in the clinic who was responsible for reporting to explain the reporting rules, provide a list of reportable diseases and reporting forms, and share timely public health information. During the subsequent 6-month period, the MDCHD employee faxed a bi-weekly reporting reminder to the clinic reporter. If the clinic did not respond to the fax with at least one report or a notification stating that there was nothing to report, the clinic reporter was contacted by phone. The control clinics received no intervention except for the annual reporting instructions mailing in December 2003.

Description of surveillance system

All cases of illness reported to the MDCHD are entered into the Florida Department of Health's Intranet-based surveillance database. For each case, an MDCHD employee calls providers to obtain any missing patient contact and illness-related information in order to determine whether the case meets the case definition for the reportable condition, the likely location of exposure (in-state, out-of-state but within the United States, or outside the United States), and whether the patient attends a child-care facility or has an occupation that may pose a risk to others (e.g. food handler in the case of an enteric illness). The MDCHD employee subsequently calls the patient if there is any remaining missing information or if a contact investigation or treatment monitoring is needed. In addition to several other fields, the surveillance database fields include the name of the initial reporter, the onset of illness date, the laboratory report date, the date the case was reported to the health department, clinic or physician name, and clinic address. These fields were used for the outcome measures.
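The fields used for the outcome measures can be represented schematically as below. This is an illustrative Python sketch only; the field names are assumptions for exposition and do not describe the actual Florida Department of Health database schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class CaseRecord:
    """Illustrative subset of the surveillance fields used as outcome measures."""
    condition: str
    initial_reporter: str             # e.g. 'laboratory', 'clinic', 'hospital'
    clinic_name: Optional[str]
    clinic_address: Optional[str]
    onset_date: Optional[date]        # onset of illness
    lab_report_date: Optional[date]   # date of the laboratory report
    report_date: date                 # date the case was reported to the health department
```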

Outcome measures

Six measures were evaluated, comparing changes in each of the two clinic groups between the pre-intervention and intervention time periods. The pre-intervention period was defined as January–June 2003, and the intervention period as July–December 2003. The first two measures assessed the amount of reporting. The first measure was the number of clinics in each group that reported a case of a notifiable disease (including cases that may have been first reported by laboratories). The second was the number of cases of reportable diseases reported by control and intervention clinics. All reportable conditions were included except chronic hepatitis B and C, because these are non-acute conditions, and sexually transmitted diseases, HIV/AIDS, and tuberculosis, because these were reported to other health department units. The third, fourth, and fifth measures assessed timeliness of reporting. The third measure was the proportion of cases reported to MDCHD within 2 days of the laboratory report to the physician. The fourth measure was the median number of days between the date the case was reported by the clinic to the health department and the date it was reported by the laboratory to the clinic. The fifth measure was the percentage of cases reported to MDCHD that were initially reported by the clinic as opposed to a laboratory. The sixth measure was a measure of non-reporting: the percentage of cases not reported by a clinic at least 14 days after the laboratory result date, which assumes that the clinic had received the laboratory result but was not intending to report it. In addition, the cost per additional reported case was calculated by dividing the costs associated with the intervention by the number of additional reported cases.
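As a rough illustration, the timeliness and non-reporting measures might be computed from case-level records along the following lines. This hedged Python sketch uses the hypothetical CaseRecord fields above; it is not the authors' analysis code, and the handling of cases without a laboratory date is an assumption.

```python
from statistics import median

def reporting_measures(cases):
    """Compute measures 3-6 for one clinic group in one time period.

    `cases` is a list of CaseRecord objects (hypothetical schema above).
    """
    with_lab = [c for c in cases if c.lab_report_date is not None]
    delays = [(c.report_date - c.lab_report_date).days for c in with_lab]

    within_2_days = sum(d <= 2 for d in delays) / len(delays)           # measure 3
    median_delay = median(delays)                                       # measure 4
    clinic_first = sum(c.initial_reporter == "clinic" for c in cases) / len(cases)  # measure 5
    not_reported_14d = sum(d > 14 for d in delays) / len(delays)        # measure 6 (approximation)
    return within_2_days, median_delay, clinic_first, not_reported_14d
```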

Analysis

Analyses were conducted on an intention-to-treat basis. Univariate analyses were conducted using SAS version 9.0. Changes between the pre-intervention and intervention periods were compared for the intervention and control groups using χ2 or Wilcoxon rank-sum tests, as appropriate. This project was deemed not to be human subjects research by the Florida Department of Health Institutional Review Board (IRB).
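For example, the two kinds of tests could be run as follows. This is an illustrative Python/SciPy sketch with made-up placeholder counts (the study itself used SAS 9.0).

```python
import numpy as np
from scipy.stats import chi2_contingency, ranksums

# chi-square test: clinics reporting at least one case, before vs during, in one group.
# The counts below are placeholders, not the study data.
table = np.array([[42, 202],   # before: reported / did not report
                  [45, 199]])  # during: reported / did not report
chi2, p_chi2, dof, expected = chi2_contingency(table)

# Wilcoxon rank-sum test: reporting delays (days), before vs during (placeholder values).
delays_before = [2, 3, 5, 1, 0, 7]
delays_during = [1, 2, 4, 0, 0, 3]
stat, p_wilcoxon = ranksums(delays_before, delays_during)

print(p_chi2, p_wilcoxon)
```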

RESULTS

Ambulatory clinic selection and randomization

From Yellow-Page listings, 1388 ambulatory clinics were identified. MDCHD employees were unable to obtain reliable estimates of the number of patients seen at 65% of the clinics due to clinic employees being unable or unwilling to provide the information. Clinics in which the majority of providers were not family practitioners, paediatricians, internists, or infectious disease specialists (n=885) were excluded from the study. Of the remaining 503 clinics, 278 were randomized to the intervention group and 225 to the control group. However, later analyses indicated that 34 intervention group and 26 control group clinics were actually duplicate listings of other clinics leaving 244 intervention clinics and 199 control clinics. The median number of physicians was 1 (range 1–8) in the intervention group clinics and 1 (range 1–6) in the control group clinics.

Intervention

Of the 244 clinics in the intervention group, 41 (16·8%) declined a visit from a health department employee or asked for no further contact at the time of the visit. Of the remaining 203 clinics, 13 (6·4%) dropped out, primarily during the first month of the intervention, by asking not to receive the faxed reporting reminders. Thus 190 (77·9%) received the full intervention. MDCHD employees who visited the clinics reported that they encountered many questions about which diseases were reportable, what information had to be reported, and when and how the information had to be reported. During their follow-up calls, MDCHD employees usually interacted with clinic managers and continued to encounter questions about reporting and about communicable diseases in the county.

Number of reported cases

Prior to the intervention (January–June 2003), there were 963 non-duplicated cases of notifiable conditions reported by health-care providers in Miami-Dade County. Of these, 394 (40·9%) were initially reported by laboratories, 493 (51·2%) by non-study health-care providers, 32 (3·3%) by control clinics, and 44 (4·6%) by intervention clinics. During the intervention (July–December 2003), there were 911 non-duplicated cases of notifiable conditions reported by health-care providers in Miami-Dade County. Of these, 430 (47·2%) were initially reported by laboratories, 385 (42·2%) by non-study health-care providers, 39 (4·3%) by control clinics, and 57 (6·3%) by intervention clinics.

Comparing the periods before and during the intervention, the percentage of clinics reporting a case (including those that may have been first reported by laboratories or other sources) increased from 17·2% to 18·6% in the intervention group and decreased from 18·6% to 18·1% in the control group; neither change was statistically significant (Table 1). The total number of cases reported by all intervention clinics (including those that may have been first reported by laboratories or other sources) was 110 prior to the intervention and 121 during the intervention. The total number of cases reported by control clinics was 78 prior to the intervention and 80 during the intervention. In both the intervention and control clinics, the number of cases reported per 100 clinics increased, but neither change was statistically significant (Table 1).

Table 1. Reporting outcome measures by ambulatory care clinic intervention status before and during the intervention, Miami-Dade County, Florida, 2003

* Any reported cases even if first reported by a laboratory.

† χ2 test.

‡ Wilcoxon rank-sum test.

§ Defined as cases not reported by a clinic within 14 days of the laboratory result date. Assumes that clinics had the result but did not report it.

Prior to the intervention 14 cases reported from intervention clinics and 10 cases reported from control clinics were associated with outbreaks. During the intervention, five cases reported from intervention clinics and two cases reported from control clinics were associated with outbreaks. However, no outbreaks were identified as a result of the cases reported from the intervention clinics.

Timeliness

The percentage of cases reported to the MDCHD within 2 days from the date of the laboratory report did not change significantly among intervention clinics or control clinics (Table 1). The median number of days between the date when the clinic reported the cases to MDCHD and the date of the laboratory report to the clinic decreased non-significantly from 2 days (range 0–180 days) to 1 day (range 0–180 days) in the intervention group and from 3 days (range 0–180 days) to 2 days (range 0–180 days) for the control group.

It was possible to determine the source of the initial report for 96·7% of the cases. The percentage of ambulatory clinic cases that were initially reported by clinics rather than by laboratories increased in both the intervention and control groups, but these changes were not statistically significant (Table 1). The percentage of cases that were not reported by a clinic within 14 days of the laboratory result date decreased in both the intervention and control groups, but neither change was statistically significant (Table 1).

The cost of this intervention was primarily in employee time. The cost of locating the clinics in the Yellow Pages and entering the clinic information into a database was US$539, and the cost of calling the clinics to obtain contact information and eliminate duplicate listings was US$4289. Contacting these clinics also allowed the MDCHD to create a database of all clinics and their contact numbers for communication purposes in the event of a public health emergency. The intervention-specific costs, which included the time for employees to arrange a visit with the clinic, visit the clinic, and send the reminder and newsletter faxes, were US$15 036 over 6 months. The travel costs, which included mileage, tolls and parking, were US$1584. Thus the total cost of the intervention plus creation of the clinic database was US$21 448, of which US$16 620 was for the intervention itself. The cost per additional case identified was US$1650 if all costs were included and US$1278 excluding the costs related to creating the clinic contact information database.
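The cost arithmetic can be checked as below. This is a hedged sketch: treating the 13 extra cases initially reported by intervention clinics (57 − 44) as the 'additional reported cases' denominator is our assumption, since the paper does not state the denominator explicitly.

```python
# Component costs reported in the text (US$).
yellow_pages = 539            # locating clinics and entering them into a database
calling_clinics = 4289        # calling clinics for contact information, removing duplicates
intervention_specific = 15_036  # arranging/visiting clinics, reminder and newsletter faxes
travel = 1_584                # mileage, tolls, parking

total_cost = yellow_pages + calling_clinics + intervention_specific + travel   # 21 448
intervention_only = intervention_specific + travel                             # 16 620

additional_cases = 57 - 44    # assumption: increase in cases initially reported by intervention clinics

print(round(total_cost / additional_cases))         # ~1650, all costs included
print(round(intervention_only / additional_cases))  # ~1278, excluding database-creation costs
```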

DISCUSSION

We found that during the study period only a minority of cases (20·8%) were reported from the ambulatory clinics in the study, because most cases were reported by hospitals or laboratories. Although our intervention had no statistically significant effect on the number of cases reported, the timeliness of reporting, or the percentage of cases that were not reported, all changes were in the desired direction. The intervention also seemed to improve communication between the health department and the clinics. It may be that the intervention was not intensive enough. Other, more intensive active surveillance interventions conducted many years ago led to increases in the number of reported cases. Beginning in 1965, Israeli clinics within one district were visited every 2 weeks by a nurse; this intervention led to two- to threefold higher rates of viral hepatitis being reported in the intervention district [20]. In 1975 in Denver several interventions were compared, including letters and telephone contact with a clinic nurse, and the only intervention group with an increase in cases was the telephone contact group, which reported twice as many cases of gonorrhoea as during the year prior to the intervention [21]. In 1980 in Vermont a weekly telephone call from a nurse was made to clinics; this resulted in twice as many cases of hepatitis, measles, rubella and salmonellosis per patient attending the clinic being reported by the group receiving the telephone call compared with the passive surveillance group [22]. A 1980 study in Monroe County, New York found that among private physicians, telephone contact increased reporting of hepatitis, measles, rubella and salmonellosis 4·6-fold compared with 1·8-fold for a weekly letter [23]. In 1983 in Kentucky, weekly telephone calls to physicians resulted in a 2·8-fold increase in reported hepatitis A cases among physicians randomly assigned to receive weekly telephone calls compared with those in the passive surveillance group [24]. However, comparisons are limited by the small number of diseases evaluated in the other studies and by the fact that the most recent study was conducted over 20 years ago, when the clinical practice environment was different. In each of the five studies evaluating active surveillance, the increase in reporting was greater than we found. Our intervention did not involve regular visits or telephone calls but was conducted primarily by fax or e-mail. An enhanced surveillance system for pertussis and varicella reporting in Canada, which was more similar to ours, involved a mailing, a hotline, and a monthly newsletter; it resulted in significantly improved reporting for varicella but not for pertussis [25]. Another possible explanation for the lack of significant improvement in reporting is that our study did not have enough power to detect the changes, given the small percentage of cases reported by the ambulatory clinics.

No additional outbreaks were identified through cases reported from the intervention or control clinics during the period of enhanced surveillance. Over a period of several years, a similar active surveillance system in Los Angeles County, which involved volunteer physicians whose clinics were called weekly, did not result in an increase in the number of reported cases but led to the identification of a number of outbreaks [26].

Only the New York and Vermont studies evaluated timeliness [22, 23]. As in our study, neither of these studies found an improvement in timeliness. Timely data are needed to enable identification, investigation and control of clusters and outbreaks [27].

The cost of the enhanced surveillance was substantial at US$16 620 for 6 months for the intervention itself, or US$1278 per additional case. This is less than the cost of the Vermont active surveillance experiment, which resulted in a cost per additional case found of US$1922 (in 2003 dollars) [22], probably because the MDCHD employees who visited the clinics were not nurses. However, the cost was substantially more than that of an intervention in Kentucky, which cost US$883 (2003 dollars) per additional hepatitis A case found [24]. Our cost per additional case was high because of the small increase in the number of cases.

One attribute that should be considered in a surveillance system is how acceptable it is to clinics [28]. Our system was relatively simple, with one visit and a weekly, primarily electronic, contact. If we use acceptance of the visit and contact throughout the 6-month period as a measure of acceptability, 190 (78%) of the 244 clinics found the system acceptable, making it slightly more acceptable than the telephone-based intervention in Kentucky, in which 68% of physicians participated [24].

Physicians question why they should report if the laboratory also reports. Locally, we find that reporting is timelier when physicians report, because of delays in obtaining laboratory results directly from the laboratory. These delays are usually due to out-of-state laboratories reporting at the state level, with the state health office then having to distribute the cases by county. However, as laboratory reporting becomes electronic, these delays will probably shorten. Furthermore, we find that even when a case is reported by a laboratory, there is usually not enough information in the report to determine whether the case meets the case definition or to conduct any necessary contact investigations; thus, the physician's office is called anyway. However, given the number of reportable conditions (over 90 in Florida), consideration should be given to improving the timeliness of reporting from laboratories. This would relieve physicians from reporting non-urgent cases, which are more likely to have a laboratory report, so that physicians can concentrate on reporting suspected clusters, unusual conditions, cases potentially related to bioterrorism, and cases for which there is no laboratory diagnosis (e.g. possible rabies exposure). Although it is crucial to have close relationships between physicians and health departments, simplifying the work of physicians may improve reporting.

There are several limitations to this study. First, our sampling frame included only clinics identified through the Yellow Pages; some clinics, particularly smaller ones, may not be listed there. Second, we were unable to determine patient volume in the clinics. Randomization should have resulted in clinics with similar patient volumes, but we could not verify this because the majority of clinics did not provide patient volume information. Because the total number of cases reported by the control clinic group was lower than that reported by the intervention clinic group during the pre-intervention period, it is possible that the control clinic group had a lower total patient volume. There may be other differences between the two groups that we were unable to measure; however, the pre-intervention time period served as an additional control for both the intervention and control clinics. A third limitation is that 41 clinics assigned to the intervention group refused the initial visit and thus did not receive any of the intervention, and 13 additional clinics asked not to be contacted during the 6-month follow-up period. Because of the intention-to-treat analysis, this group, which had less timely reporting and reported fewer cases, was included with the intervention group. When this group was excluded from the intervention group, the effect of the intervention was greater; however, including it gives a better indication of how the intervention would work in a real-life setting. Another limitation is the relatively short follow-up period of 6 months, although it is unlikely that the intervention effects would have increased over time. Finally, our two comparison time periods cover different seasons, and some reportable diseases have seasonal changes in incidence. There may also have been undetected outbreaks during either of the two time periods. However, both the seasonal changes and any sizable undetected outbreaks would probably have affected the intervention and control groups similarly. Although the number of outbreak-related cases was higher in the period prior to the intervention than during the intervention, this difference was seen in both the intervention and control groups.

In conclusion, we found that an intervention of increased contact with ambulatory clinics resulted in small improvements in the percentage of cases reported and the timeliness of reporting; however, none of these changes was statistically significant. It may be that more intensive interventions are needed and/or that reporting needs to be simplified for ambulatory clinics. Given the small percentage of cases that were actually reported from ambulatory clinics compared with laboratories and hospitals, it would be useful to assess the importance of reporting by ambulatory clinics, as opposed to laboratories, in surveillance, outbreak identification, and control efforts for each of the reportable diseases. Furthermore, it should be explored whether other measures, such as shortening the list of reportable diseases for ambulatory clinics to those requiring contact investigations or immediate control efforts, would increase compliance.

ACKNOWLEDGEMENTS

This study was funded by the Florida Department of Health. We thank the following persons for their assistance in data collection and data management: Raul Garcia, Antonio Gonzalez, Jennifer Lawrence, Alvaro Mejia-Echeverry, and Donnamarie Milazzo.

DECLARATION OF INTEREST

All three authors were employees of the Florida Department of Health at the time of the study. The authors and the Florida Department of Health have no financial interests that are affected by the material in the manuscript.

REFERENCES

1. Thacker SB, Choi K, Brachman PS. The surveillance of infectious diseases. Journal of the American Medical Association 1983; 249: 1181–1185.
2. Hearings Before the Subcommittee on National Security, Veterans Affairs, and International Relations, Committee on Government Reform, U.S. House of Representatives (22 September 1999) (testimony of Scott R. Lillibridge, MD, National Center for Infectious Diseases) 106th Congress, 1st Session. Washington, DC.
3. Chorba TL, et al. Mandatory reporting of infectious diseases by clinicians. Journal of the American Medical Association 1989; 262: 3018–3026.
4. Sewell CM. Overcoming barriers and reaping the benefits of surveillance for infectious diseases: the New Mexico perspective. Journal of Public Health Management and Practice 1996; 2: 31–36.
5. Perkins BA, Popovic T, Yeskey K. Public health in the time of bioterrorism. Emerging Infectious Diseases 2002; 8: 1015–1018.
6. Doyle TJ, Glynn MK, Groseclose SL. Completeness of notifiable infectious disease reporting in the United States: an analytical literature review. American Journal of Epidemiology 2002; 155: 866–874.
7. Ktsanes VK, et al. Survey of Louisiana physicians on communicable disease reporting. Journal of the Louisiana State Medical Society 1991; 143: 27–28, 30–31.
8. Jones JL, et al. Physician and infection control practitioner HIV/AIDS reporting characteristics. American Journal of Public Health 1992; 82: 889–891.
9. Schramm MM, Vogt RL, Mamolen M. The surveillance of communicable disease in Vermont: who reports? Public Health Reports 1991; 106: 95–97.
10. Konowitz PM, Petrossian GA, Rose DN. The underreporting of disease and physicians' knowledge of reporting requirements. Public Health Reports 1984; 99: 31–35.
11. Abdool Karim SS, Dilraj A. Reasons for under-reporting of notifiable conditions. South African Medical Journal 1996; 86: 834–836.
12. Friedman SM, et al. Suboptimal reporting of notifiable diseases in Canadian emergency departments: a survey of emergency physician knowledge, practices, and perceived barriers. Canada Communicable Disease Report 2006; 32: 187–198.
13. Seneviratne SL, Gunatilake SB, de Silva HJ. Reporting notifiable diseases: methods for improvement, attitudes and community outcome. Transactions of the Royal Society of Tropical Medicine and Hygiene 1997; 91: 135–137.
14. Allen CF, Ferson MJ. Notification of infectious diseases by general practitioners: a quantitative and qualitative study. Medical Journal of Australia 2000; 172: 325–328.
15. Wagner MM, et al. The emerging science of very early detection of disease outbreaks. Journal of Public Health Management and Practice 2001; 7: 51–59.
16. Fine A, Layton M. Lessons from the West Nile viral encephalitis outbreak in New York City, 1999: implications for bioterrorism preparedness. Clinical Infectious Diseases 2001; 32: 277–282.
17. U.S. Census Bureau, State and County Quick Facts. Miami-Dade County, Florida (http://quickfacts.census.gov/qfd/states/12/12025.html). Accessed 13 August 2001.
18. Florida Statute. Section 381.0031 (1,2).
19. Krause G, Ropers G, Stark K. Notifiable disease surveillance and practicing physicians. Emerging Infectious Diseases 2005; 11: 442–445.
20. Brachott D, Mosely JW. Viral hepatitis in Israel: the effect of canvassing physicians on notifications and the apparent epidemiological pattern. Bulletin of the World Health Organization 1972; 46: 457–464.
21. Rothenberg R, Bross DC, Vernon TM. Reporting of gonorrhea by private physicians: a behavioral study. American Journal of Public Health 1980; 70: 983–986.
22. Vogt RL, et al. Comparison of an active and passive surveillance system of primary care providers for hepatitis, measles, rubella and salmonellosis in Vermont. American Journal of Public Health 1983; 73: 795–797.
23. Thacker SB, et al. A controlled trial of disease surveillance strategies. American Journal of Preventive Medicine 1986; 2: 345–350.
24. Hinds MW, Skaggs JW, Bergeisen GH. Benefit-cost analysis of active surveillance of primary care physicians for hepatitis A. American Journal of Public Health 1985; 75: 176–177.
25. Squires SG, et al. Improved disease reporting: a randomized trial of physicians. Canadian Journal of Public Health 1998; 89: 66–69.
26. Weiss BP, Strassburg MA, Fannin SL. Improving disease reporting in Los Angeles County: trial and results. Public Health Reports 1988; 103: 415–421.
27. Jajosky RA, Groseclose SL. Evaluation of reporting timeliness of public health surveillance systems for infectious diseases. BMC Public Health 2004; 4: 29 (http://www.biomedcentral.com/1471-2458/4/29).
28. Centers for Disease Control and Prevention. Updated guidelines for evaluating public health surveillance systems: recommendations from the guidelines working group. Morbidity and Mortality Weekly Report 2001; 50: 1–35.