Effect of Evidence-Based Acute Pain Management Practices on Inpatient Costs

Abstract

Objectives

To estimate hospital cost changes associated with a behavioral intervention designed to increase the use of evidence-based acute pain management practices in an inpatient setting and to estimate the direct effect that changes in evidence-based acute pain management practices have on inpatient cost.

Data Sources/Study Setting

Data from a randomized “translating research into practice” (TRIP) behavioral intervention designed to increase the use of evidence-based acute pain management practices for patients hospitalized with hip fractures.

Study Design

Experimental design and observational “as-treated” and instrumental variable (IV) methods.

Data Collection/Extraction Methods

Abstraction from medical records and Uniform Billing 1992 (UB92) discharge abstracts.

Principal Findings

The TRIP intervention cost on average $17,714 to implement within a hospital but led to cost savings per inpatient stay of more than $1,500. The intervention increased the cost of nursing services, special nonoperating rooms, and therapy services per inpatient stay, but these costs were more than offset by cost reductions within other cost categories. “As-treated” estimates of the effect of changes in evidence-based acute pain management practices on inpatient cost appear to be substantially underestimated, whereas IV estimates are statistically significant and are distinct from, but consistent with, estimates associated with the intervention.

Conclusions

A hospital treating more than 12 patients with acute hip fractures can expect to lower overall cost by implementing the TRIP intervention. We also demonstrated the advantages of using IV methods over “as-treated” methods to assess the direct effect of practice changes on cost.

Keywords: Pain management, instrumental variables, cost, as-treated

In this study we estimate cost changes associated with implementing a “translating research into practice” (TRIP) behavioral intervention designed to increase the use of evidence-based acute pain management practices for hospitalized older patients with hip fractures. This paper is a companion analysis to an assessment of the effects of the TRIP intervention on the use of acute pain management practices (Titler et al. 2008). Hospitals were randomized into intervention and comparison groups, and health care providers in intervention hospitals took part in a multifaceted, interdisciplinary intervention promoting evidence-based acute pain management practices. The first objective of this paper is to estimate the overall cost change associated with implementing the TRIP intervention from the perspective of the hospital administrator. This assessment includes both the cost of implementing the TRIP intervention and the average inpatient cost change that results from the TRIP intervention.

The second objective is to estimate the direct effect of changes in evidence-based acute pain management practices on inpatient cost. The average inpatient cost change associated with the TRIP intervention will differ from this estimate because in the TRIP study providers had ultimate discretion over the evidence-based acute pain management practices provided to each patient. This discretion led to “incomplete compliance” with each arm of the study: providers in comparison hospitals used evidence-based acute pain management practices on certain patients, and providers in intervention hospitals did not use evidence-based acute pain management practices on certain patients. Incomplete compliance breaks the link between estimates associated with the intervention and estimates associated with practice change (Sheiner and Rubin 1995; Kaufman, Kaufman, and Poole 2003).

For example, consider an acute pain management practice that in fact reduces a patient's hospital cost. Suppose that the ultimate effect of an intervention promoting use of the acute pain management practice is a higher usage rate in the intervention hospitals relative to the comparison hospitals (e.g., 60 percent of the patients in the intervention hospitals and 40 percent of the patients in the comparison hospitals end up receiving the acute pain management practice postintervention). Complete compliance would require 100 percent of the patients in the intervention hospitals and 0 percent of the patients in the comparison hospitals to receive the practice. With incomplete compliance, the treatments for many patients are not affected by the intervention, and the estimated average cost difference between the intervention and comparison hospitals is effectively diluted by these “noncompliers.” The estimated average cost difference between the intervention and comparison patients would stem only from the difference in the percentage of patients receiving the acute pain management practice between the intervention and comparison hospitals (the 20 percentage point difference in the example above). As a result, the estimated average cost reduction associated with the intervention would be less than the cost reduction available from performing the acute pain management practice for an individual patient, as the sketch below illustrates.
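To make the dilution concrete, here is a minimal arithmetic sketch in Python using the hypothetical usage rates from the example above; the per-patient cost effect is an assumed illustrative value, not an estimate from the study.

```python
# Hypothetical per-patient effect of the practice on cost (illustrative only).
true_effect = -1000.0

# Practice usage rates from the example above: 60% in intervention hospitals,
# 40% in comparison hospitals (a 20 percentage point compliance gap).
p_intervention = 0.60
p_comparison = 0.40

# With incomplete compliance, the intervention-vs-comparison cost difference
# reflects only the patients whose treatment the intervention actually changed:
diluted_effect = (p_intervention - p_comparison) * true_effect
print(diluted_effect)  # -200.0, far smaller than the -1000.0 per-patient effect
```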

With incomplete compliance it is also problematic to use the “as-treated” practice variation observed in the study to make inferences about the direct effect of evidence-based acute pain management practices on inpatient cost. Providers may have sorted patients into different practice levels independent of the intervention based on factors unmeasured by the researcher but related to patient outcomes (e.g., condition severity), which can confound estimation. Instrumental variable (IV) methods offer an alternative estimation approach that can avoid confounding problems in this scenario. In this study, we demonstrate the advantage of IV methods by contrasting “as-treated” and IV estimates of the direct relationship between practice change and inpatient cost. We also demonstrate the link between the estimated effects of the TRIP intervention itself on inpatient costs and the IV estimates of the effects of the evidence-based pain management practices on inpatient costs (Kaufman, Kaufman, and Poole 2003). The next section provides additional background on the use of IV methods to estimate the direct effect of practice changes in behavioral studies with incomplete compliance.

Use of Instrumental Variable Methods with Incomplete Compliance

Instrumental variable methods offer an approach to estimate a direct causal relationship between practice and outcomes when providers incompletely comply with a behavioral intervention designed to change practice. In the application of IV methods to this problem, the behavioral intervention becomes the “instrument” that generates the practice variation used to estimate the causal relationship. Kaufman, Kaufman, and Poole (2003) describe the simple relationship between the effect of the behavioral intervention on an outcome and the effect of a practice change on an outcome using IV methods. Define X as the practice affected by the behavioral intervention; Y as the outcome measure; Z=1 if the patient is treated by a provider that received the intervention, 0 otherwise; and u, v, and w as random error terms. We are after “c” in the relationship Y=cX+u. Kaufman et al. show that the IV estimate of “c” is equivalent to first estimating the relationships between the intervention and practice (X=bZ+v) and between the intervention and outcome (Y=dZ+w), and then solving for “c” as d/b. As such, the IV estimate reflects the average change in the outcome (Y) for a one-unit change in practice (X) that was caused by the randomized intervention (Z). IV estimates are considered causal in this case because they are estimated using only the practice variation caused by the randomized intervention. The simulation sketch below illustrates this ratio form of the IV estimator.
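A minimal simulation sketch of the d/b ratio, assuming the simple linear structure defined above; the parameter values and sample size are arbitrary illustrations, not study quantities. For a binary instrument Z, the regression slopes b and d reduce to differences in group means, so the ratio is the classic Wald estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.integers(0, 2, n).astype(float)  # randomized intervention indicator
v = rng.normal(size=n)
u = rng.normal(size=n)

b_true, c_true = 0.9, -1500.0            # illustrative "true" parameters
x = b_true * z + v                       # practice level: X = bZ + v
y = c_true * x + u                       # outcome:        Y = cX + u

# With binary Z, the slopes of X on Z and Y on Z are group-mean differences:
b_hat = x[z == 1].mean() - x[z == 0].mean()
d_hat = y[z == 1].mean() - y[z == 0].mean()

print(d_hat / b_hat)                     # IV (Wald) estimate of c, close to -1500
```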

In previous IV research in health care, the subset of patients whose treatment choices were affected by instruments has been called “marginal patients” (McClellan, McNeil, and Newhouse 1994; Harris and Remler 1998). When using data from a behavioral intervention, the marginal patients are those patients whose providers complied with the intervention, or, equivalently, those patients whose treatments were determined by the behavioral intervention. As a result, it is risky to generalize IV estimates to the full set of patients (Imbens and Angrist 1994; Sheiner and Rubin 1995; Angrist, Imbens, and Rubin 1996; Greenland 2000; Kaufman, Kaufman, and Poole 2003; Greevy et al. 2004), but it can be argued that the marginal patients are those most relevant to policy makers, as they are the patients whose practices are most likely to be affected by policy-initiated behavioral interventions. While IV methods have been used in health care to estimate the effectiveness of treatments using observational databases (McClellan, McNeil, and Newhouse 1994; McClellan and Newhouse 1997; Brooks, McClellan, and Wong 2000; Frances et al. 2000; Hadley et al. 2002; Beck et al. 2003; Brooks et al. 2003), the advantage of IV methods in compensating for incomplete compliance in randomized studies has also been recognized. IV-based methods have been used to assess the effects of pharmaceutical regimens on health outcomes in randomized controlled trials with noncompliance (Frangakis and Rubin 1999; Baker 2000) and the effects on outcomes of changes in patient behavior resulting from patient-directed behavioral interventions (Baker 1998; Mealli 1999).

In this study we theorize that, regardless of the study arm, patients with higher initial pain levels will receive higher levels of evidence-based acute pain management practices and that these patients will also require more hospital resources, resulting in higher costs. As such, we expect that estimates of the effect of higher levels of evidence-based pain management practices on inpatient costs that are obtained by comparing patients as-treated will be biased low. In contrast, IV methods use only the variation in the level of evidence-based acute pain management practices that resulted from the intervention to estimate the effect of these practices on inpatient hospital costs. If the intervention provided a true randomization over unmeasured confounders, the IV estimates will yield an unbiased estimate of the effect of evidence-based acute pain management on inpatient costs for the set of patients whose pain management choices were affected by the intervention.
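The following sketch simulates the confounding story just described, with all parameter values chosen purely for illustration: an unmeasured severity variable raises both the practice level and cost, pulling the “as-treated” slope toward zero, while the IV (Wald) estimate recovers the assumed effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
z = rng.integers(0, 2, n).astype(float)   # randomized intervention
severity = rng.normal(size=n)             # unmeasured confounder (initial pain)

# Sicker patients receive more evidence-based practices AND cost more:
x = 0.9 * z + severity + rng.normal(size=n)
y = -1500.0 * x + 2000.0 * severity + rng.normal(scale=500.0, size=n)

# "As-treated" OLS slope of cost on practice is attenuated by the confounder:
as_treated = np.polyfit(x, y, 1)[0]       # roughly -590 here, biased toward zero

# IV uses only the Z-induced variation in practice:
iv = (y[z == 1].mean() - y[z == 0].mean()) / (x[z == 1].mean() - x[z == 0].mean())
print(as_treated, iv)                     # IV is close to the assumed -1500
```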

Behavioral Intervention Background

Twelve Midwest acute care hospitals that discharged at least 30 patients per year older than age 65 with a hip fracture participated. The 12 hospitals were stratified by size and randomized within each of three strata to either an intervention or comparison group. Each hospital identified the nonintensive care units where adult hip fracture patients received inpatient services. All hospitals in the study were provided an evidence-based practice guideline on Acute Pain Management in the Elderly (Herr et al. 2000) that was developed by study investigators and reviewed by national pain experts. A multifaceted, interdisciplinary TRIP intervention that promoted the adoption of the evidence-based practices recommended in the guideline was applied to the intervention hospitals. The study intervention, based on Rogers’ model of diffusion of innovations, (a) integrated local physician and nurse opinion leaders; (b) developed nurse change champions; (c) educated nurse opinion leaders and change champions via a train-the-trainer program; (d) educated physician opinion leaders using principles of academic detailing; (e) educated nursing and medical staff via a web-based course; (f) supplied resource texts, videotapes, and training manuals; and (g) provided outreach visits every 3 weeks by an advanced practice nurse for the purposes of consultation with staff, concurrent medical record data abstraction, and data feedback (Rogers 2003). Early in the TRIP intervention (engagement phase), study investigators met with physicians and nurses at each intervention hospital to review indicators of acute pain management for the patients admitted to their hospital with a hip fracture (performance gap assessment). Subsequent to provision of these data, ongoing audit and feedback of pain data were achieved through concurrent medical record abstraction of older adult patients admitted during the implementation phase and presentation of data in graph form to nurses and physicians every 6 weeks for 10 months (six reports). The comparison hospitals were informed that they could use the guideline in any way they deemed appropriate, but the study intervention was not implemented at these hospitals. The study was approved by the Internal Human Subjects Review Board at the University of Iowa and by the corresponding human subjects review boards at participating hospitals.

Patient Sample

Study subjects included patients 65 years or older with a hip fracture in the 12 study hospitals. The implementation phase of the TRIP intervention was initiated on January 1, 2001, and its main components were completed by April 1, 2002. Medical records personnel at each study hospital submitted a list of all eligible patients admitted during a pre-TRIP implementation phase (January 1, 2000 to December 31, 2000) and a period beginning 90 days after the start of the TRIP implementation phase (April 1, 2001 to March 31, 2002), the intervention phase. Up to 75 medical records per study hospital were randomly selected for each period, and a total of 1,401 medical records were audited. We excluded from this analysis patients with missing charge data (n=7), patients with missing admission or discharge dates (n=9), and patients with lengths-of-stay greater than 1 month (n=7), whose care likely involved treatment of chronic conditions beyond the initial hip fracture that would not be responsive to acute pain management. These exclusions resulted in a final patient sample of 1,378 (720 in the pre-TRIP phase and 658 in the intervention phase).

Data

Medical record data regarding the acute pain management practices of physicians and nurses were abstracted retrospectively at each site by a single trained research assistant using a consistent medical record abstraction form. The form enabled collection of highly detailed data pertaining to evidence-based acute pain management practices. Evidence-based acute pain management is a complex set of behaviors, and the evidence-based guideline on acute pain management includes 50 recommended individual behaviors based on an extensive review and critique of research and clinical literature (Herr et al. 2000). To provide a summary measure of pain guideline adherence from these behaviors, study investigators and four nationally recognized pain experts used a Delphi approach to select the pain management practices in the guideline that were most critical to measure. The Delphi technique resulted in a list of 18 individual evidence-based pain management practices (Table 1). Every patient was assigned a binary indicator for each of the 18 individual practices: 1 if the patient received the practice, 0 otherwise. A summative index score for each patient was computed by summing the binary indicators across the 18 practices, yielding a minimum score of 0 if no individual practice indicators were met and a maximum of 18 if all individual practice indicators were met. In this study the summative index score was used to measure the overall extent to which a patient received evidence-based pain management practices; a minimal computational sketch follows Table 1 below. Detailed information on the summative index score, its development, and its content validity and construct validity is available (Titler in press).

Table 1.

Individual Evidence-Based Acute Pain Management Practices in the Summative Index

1. Patient's pain was assessed every four hours (1 miss permitted/24 hours) during the 72 hours after admission.
2. The location of the patient's pain was assessed at least every 12 hours during the 72 hours after admission.
3. Thirty percent or more of a patient's analgesic administrations were followed by a reassessment of pain within 60 minutes.
4. The pain scale used was documented for at least half of the pain assessments.
5. Patient received some pain management education during the first 72 hours after admission.
6. Patient was repositioned at least once every 24 hours during the first 48 hours after admission.
7. Patient received at least one nonpharmaceutical intervention other than repositioning at least twice during the 72 hours after admission.
8. Patient received ≥0.7 parenteral morphine-equivalent milligrams of an opioid per hour during the first 24 hours after admission.
9. Patient was assessed for opioid side effects every 24 hours that an opioid was administered.
10. The number of days that a stool softener or laxative was administered to the patient divided by the number of days during which an opioid was administered is ≥1.
11. Patient did not receive any opioid analgesics via the intramuscular route during the first 72 hours after admission.
12. Patient received between 1,500 and 4,000 mg of acetaminophen during the period 48–72 hours after admission.
13. Patient received around-the-clock administration of an opioid during any 24-hour period within 72 hours after admission.
14. Patient received PCA administration of an analgesic at least once within 72 hours after admission.
15. Patient received around-the-clock administration of a nonopioid during the period 48–72 hours after admission.
16. More than 50% of the patient's analgesic administrations within 72 hours after admission included an opioid and a nonopioid administered within 30 minutes of each other.
17. Patient received no meperidine within 72 hours after admission.
18. Patient received no propoxyphene within 72 hours after admission.
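As referenced above, a minimal pandas sketch of the summative index computation, assuming a hypothetical patient-level table with one binary column per practice; the column names are illustrative, not the study's actual abstraction-form fields.

```python
import pandas as pd

# Hypothetical abstraction data: one row per patient, one 0/1 column per
# evidence-based practice (mirroring the 18 indicators in Table 1).
records = pd.DataFrame({
    "patient_id":                    [101, 102, 103],
    "practice_01_pain_assessed_q4h": [1, 0, 1],
    "practice_08_opioid_dose_met":   [0, 1, 1],
    "practice_17_no_meperidine":     [1, 1, 1],
    # ... the remaining practice indicator columns would follow
})

practice_cols = [c for c in records.columns if c.startswith("practice_")]

# The summative index is the count of practices received (0 to 18):
records["summative_index"] = records[practice_cols].sum(axis=1)
print(records[["patient_id", "summative_index"]])
```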

Patient cost data were obtained from the Uniform Billing 1992 (UB92) claim form submitted for each patient to Medicare by each of the 12 hospitals in the study. The UB92 claim form contains total charge information and detailed charges based on individual hospital revenue centers. We converted the charge information on each claim to cost estimates using the hospital-specific cost-to-charge ratio from 2002 using data from the Centers for Medicare and Medicaid Services Payment Impact Files (Centers for Medicare and Medicaid Services 2002). Based on revenue center codes for each patient, we disaggregated costs into 14 separate categories. Table 2 contains the mapping from revenue center codes to the 14 disaggregated cost categories. We recorded each patient's length-of-stay (LOS) from the UB92 information and calculated total cost per day. We also used information from the UB92 form to compute the number of distinct procedures performed for each patient during each inpatient stay, the number of distinct diagnosis codes listed for each stay, and the patient's discharge status/destination (to home without home health care, to home with home health care, skilled nursing facility, intermediate care facility, transfer to another hospital, deceased). To compute the number of distinct procedures received by each patient and avoid overlap of similar procedures, we mapped the ICD-9 procedure codes listed on each patient's UB92 form to procedure classifications within AHRQ's Clinical Classifications Software (CCS) (Elixhauser, Steiner, and Palmer 2004). We then counted the number of distinct CCS procedure categories utilized by each patient. We performed a similar analysis for diagnoses by mapping the ICD-9 diagnosis codes listed on each patient's UB92 form to diagnosis classifications within AHRQ's CCS and counting the number of distinct CCS diagnoses for each patient. A minimal sketch of these data-preparation steps follows Table 2 below.

Table 2.

Revenue Centers Used to Define Inpatient Cost Categories

Cost Category: Revenue Codes

Room and board: 10* (all-inclusive rate), 11* (room and board, private), 12* (room and board, semiprivate), 14* (room and board, private deluxe)
Use of special nonoperating rooms: 19* (subacute care), 20* (intensive care), 21* (coronary care), 71* (recovery room), 76* (treatment or observation room)
Extra nursing services: 23* (incremental nursing charge rate)
Pharmacy: 25* (pharmacy), 26* (IV therapy), 63* (drugs requiring specific identification)
Laboratory services: 30* (laboratory), 31* (laboratory, pathological)
Radiation services: 32* (radiology, diagnostic), 33* (radiology, therapeutic), 34* (nuclear medicine), 35* (CT scan), 40* (other imaging services), 61* (MRI), 73* (EKG/ECG)
Operating room: 36* (operating room services)
Pulmonary and respiratory services: 41* (respiratory services), 46* (pulmonary function)
Therapy services: 42* (physical therapy), 43* (occupational therapy), 44* (speech/language pathology)
Anesthesia: 37* (anesthesia)
Emergency room: 45* (emergency room)
Blood: 38* (blood), 39* (blood services)
Supplies: 27* (medical/surgical supplies), 62* (medical/surgical supplies, extension)
Other costs: total cost minus the sum of costs across the specific categories above
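As referenced above, a minimal pandas sketch of the charge-to-cost conversion and the distinct-procedure count, assuming hypothetical claim-line data; the column names, ratios, and CCS values are illustrative placeholders, not the study's actual data.

```python
import pandas as pd

# Hypothetical UB92 claim lines (one row per revenue-center charge):
claims = pd.DataFrame({
    "patient_id":    [1, 1, 1, 2, 2],
    "hospital_id":   ["A", "A", "A", "B", "B"],
    "revenue_code":  ["110", "250", "360", "110", "450"],
    "charge":        [3200.0, 640.0, 2100.0, 2900.0, 410.0],
    "ccs_procedure": [153, 153, 158, 153, 160],  # CCS category after ICD-9 mapping
})

# Hypothetical 2002 hospital-specific cost-to-charge ratios:
ccr = {"A": 0.55, "B": 0.62}

# Convert each billed charge to an estimated cost:
claims["cost"] = claims["charge"] * claims["hospital_id"].map(ccr)

# Count *distinct* CCS procedure categories per patient, so similar ICD-9
# codes mapping to the same category are counted once:
n_distinct_procs = claims.groupby("patient_id")["ccs_procedure"].nunique()
print(n_distinct_procs)
```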

In addition to medical record data and billing data, costs associated with creating and implementing the intervention were collected during the project. Costs for labor and material resources were recorded throughout the development and implementation of the intervention. Key nurse leaders at each intervention hospital estimated labor hours related to the intervention and sent them to project staff via a monthly e-mail form. Published national average wage rates by labor type (e.g., administrators, nurses, physician specialty) were used to calculate the cost of staff time at intervention hospitals (Hawthorne and Rolster 2000; Medical Group Management Association 2000; Mee and Carey 2000; PAOS Job Survey 2001; Salary Wizard 2002).

Analysis

We assessed the costs of the intervention from the perspective of a representative hospital by estimating (1) the average direct intervention cost as the mean cost of implementing and providing the intervention across the six intervention hospitals; and (2) the average indirect intervention cost change as the average patient-level total cost change resulting from the intervention multiplied by an assumed number of patients treated. We then used “as-treated” and IV methods to assess the incremental effects of evidence-based acute pain management practices on costs.

We employed a consistent base model specification to estimate both the average patient-level cost changes associated with the intervention and the direct relationships between change in evidence-based pain management practices and costs. Total inpatient cost was the dependent variable on which we mainly focused. We also estimated and reported the models with 16 other resource-related dependent variables including cost in each of the 14 categories described in Table 2, LOS, and total cost per day. All model specifications included the following control variables thought to be related to both cost and the use of acute pain management practices during each stay: the number of distinct procedures performed during the stay; the number of distinct diagnoses listed during the stay; the age of the patient; a binary variable representing gender (male=1, 0 otherwise); a set of binary variables representing the patient's discharge status; a binary variable to reflect the study phase in which the stay occurred (TRIP intervention phase=1, pre-TRIP intervention phase=0); and a set of binary variables for each hospital in the study to control for unmeasured hospital-specific characteristics related to cost.

To evaluate each study objective, we estimated separate models, each characterized by a distinct independent variable added to the base model specification described above. To estimate the effect of the intervention on cost, we included an independent variable that equaled 1 if the patient was treated in an intervention hospital during the intervention phase, 0 otherwise. To contrast the two approaches to estimating the direct effect of additional evidence-based acute pain management practices on cost, we first estimated “as-treated” models by adding the patient-specific summative index score to the base model specification. Next, we applied a two-stage least squares (2SLS) variant of IV estimation (McClellan, McNeil, and Newhouse 1994; McClellan and Newhouse 1997; Angrist and Evans 1998; Brooks, McClellan, and Wong 2000; Angrist 2001; Beck et al. 2003; Brooks et al. 2003). In the first stage, we regressed the summative index score for each patient on the model control variables and the indicator variable for the intervention. In the second stage, we added the predicted summative index score for each patient from the first-stage regression to the base cost model specification. Because the control variables are specified in both the first- and second-stage regressions, the only variation in the summative index score used in the second-stage model to estimate its respective parameter is the variation in the summative index score associated with the intervention. A minimal sketch of this two-stage procedure appears below.
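A minimal simulated sketch of the two-stage procedure just described, using statsmodels; all variable names, dimensions, and parameter values are illustrative assumptions (the study itself used Stata's IVREG). One caveat worth flagging: hand-running the second stage as OLS on the fitted index reproduces the 2SLS point estimate, but its reported standard errors are not the correct IV standard errors; dedicated IV routines adjust for the generated regressor.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
controls = rng.normal(size=(n, 3))        # stand-ins for the base-model covariates
intervention = rng.integers(0, 2, n).astype(float)

# Simulated data with an assumed -1500 effect of the index on cost:
index = 8 + 0.9 * intervention + controls @ np.array([0.5, -0.2, 0.1]) \
        + rng.normal(size=n)
cost = 8000 - 1500 * index + controls @ np.array([300.0, 100.0, -50.0]) \
       + rng.normal(scale=800.0, size=n)

# Stage 1: summative index on controls plus the intervention indicator.
X1 = sm.add_constant(np.column_stack([controls, intervention]))
index_hat = sm.OLS(index, X1).fit().fittedvalues

# Stage 2: cost on controls plus the *predicted* index.
X2 = sm.add_constant(np.column_stack([controls, index_hat]))
stage2 = sm.OLS(cost, X2).fit()
print(stage2.params[-1])                  # 2SLS point estimate, near -1500
```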

Linear model specifications with cost as the dependent variable yield parameter estimates with straightforward interpretations. In the estimation model that includes the binary variable for the intervention, the parameter associated with the intervention variable is an estimate of the average cost difference between the patients in the intervention and comparison hospitals. The parameter estimate associated with the summative index score in the “as-treated” model and the IV model estimates the change in cost for a unit change in the summative index. It has been noted, though, that cost relationships within health care studies often have error terms that are not normally distributed and are generally skewed to the right (Manning 2006). When this occurs, ordinary least squares (OLS) estimates of the parameters in a linear cost model remain unbiased, but the estimated standard errors in small samples may be inaccurate and produce misleading statistical inferences (Austin, Ghali, and Tu 2003; Baum 2006). It is possible to estimate standard errors for OLS estimates that are robust to the underlying error structure, but in small samples the statistical properties of the estimates remain unclear (White 1980; Baum 2006).

To mitigate the problem of skewed error terms, researchers have suggested using nonlinear transformations of the underlying cost model. Suggestions include using the natural logarithm of cost as the dependent variable or using generalized linear models (GLM) with a log specification of cost and an error structure better able to accommodate a skewed error distribution (Manning 1998, 2006; Andersen, Andersen, and Kragh-Sorensen 2000; Austin, Ghali, and Tu 2003). However, these nonlinear approaches require additional assumptions about the relationship between cost and the independent variables that may not be warranted. For example, in models with a logarithm-transformed dependent variable, the average cost change associated with the intervention is constrained to be a function of the remaining model covariates (Austin, Ghali, and Tu 2003). In contrast, OLS parameter estimates of the linear model do not have this constraint, and with a large sample size the estimates from the linear model are normally distributed regardless of the underlying distribution of the error terms via the central limit theorem (Lumley et al. 2002; Greene 2003; Baum 2006).

Given these tradeoffs, we assessed whether our findings in the model with total cost as the dependent variable were robust to the estimation approach. We estimated models for the average total cost change associated with the intervention using (1) OLS on total cost; (2) OLS on the natural logarithm of total cost; and (3) GLM with a log specification of total cost and an assumed gamma error distribution, as suggested by earlier research (Manning and Mullahy 2001). For specifications (2) and (3), we then estimated the average total cost change of the intervention at the mean level of the other model covariates. We found that the estimated effect of the intervention on cost was robust across estimation approaches in terms of both magnitude and statistical significance. As a result, given the straightforward interpretations of the OLS estimates with total cost as the dependent variable, we report these estimates below. To be consistent with the interpretations of the OLS estimates, we also used a linear specification for our 2SLS-IV estimation. Because 2SLS is a generalized method of moments estimator, 2SLS estimates are consistent and asymptotically normally distributed regardless of the distribution of the underlying error (Greene 2003). Stata software, version 9.0, was used for estimation. The REG procedure in Stata was used for the intervention and “as-treated” models, and the IVREG procedure was used for the IV models. All models were estimated with robust standard errors. A sketch of the three-way robustness comparison appears below.
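A minimal statsmodels sketch of that robustness check on simulated right-skewed cost data; the data-generating process, variable names, and effect size are illustrative assumptions (the study's own models also included the full set of control variables).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1_000
treat = rng.integers(0, 2, n).astype(float)

# Right-skewed simulated costs, lower on average under treatment:
cost = rng.gamma(shape=2.0, scale=4000.0, size=n) * np.exp(-0.2 * treat)

X = sm.add_constant(treat)

# (1) OLS on the cost level, with robust standard errors:
ols_level = sm.OLS(cost, X).fit(cov_type="HC1")
# (2) OLS on log(cost):
ols_log = sm.OLS(np.log(cost), X).fit(cov_type="HC1")
# (3) Gamma GLM with a log link:
glm_gamma = sm.GLM(
    cost, X, family=sm.families.Gamma(link=sm.families.links.Log())
).fit()

print(ols_level.params[1], ols_log.params[1], glm_gamma.params[1])
```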

Results

Table 3 contains the mean of each dependent variable used in the analysis and the estimates of the effect of the behavioral intervention on each of the dependent variables. The patients in our sample received an average of 8.53 of the 18 evidence-based pain management practices within the summative index. The range of the summative index score in the sample was 1–18 with 80 percent of the sample having index scores between 5 and 12. Patients at intervention hospitals during the intervention phase had an average of 0.94 additional evidence-based acute pain management practices performed (p<.001) relative to patients at the comparison hospitals. The effects of the intervention on individual acute pain management practices are not reported in Table 3, but the intervention had a significant and positive impact (p<.05) on 11 of the 18 practices listed in Table 1 (practices 1 through 6, 9, 12, 13, 15, and 17).

Table 3.

The Effect of the Intervention on Pain Practices, Inpatient Costs, and Length-of-Stay

Dependent Variable: Mean; Parameter Estimate (Robust Standard Error); p Value

Summative index score: 8.53; 0.9365637 (0.23300043); <.001

Main outcome measures
Total cost: 8,050.38; −1,500.362 (341.55); <.001
Length-of-stay: 5.71; −0.497 (0.26); .055
Total cost per day: 1,476.61; −151.20 (58.96); .010

Disaggregated cost
Room and board: 1,237.04; −224.95 (59.58); <.001
“Extra” nursing: 21.57; 4.65 (1.51); .002
Use of special nonoperating rooms: 201.60; 52.63 (20.70); .011
Pharmacy: 864.01; −230.54 (62.43); <.001
Laboratory services: 584.28; −190.07 (48.97); <.001
Radiation services: 454.65; −168.72 (61.77); .006
Operating room: 1,342.69; −355.61 (70.20); <.001
Pulmonary and respiratory services: 125.87; −85.21 (29.59); .004
Therapy services: 276.12; 18.35 (19.00); .334
Anesthesia: 217.69; −33.71 (10.93); .002
Emergency room: 158.85; −36.85 (16.35); .024
Blood: 111.66; −51.44 (21.92); .019
Supplies: 2,240.79; −130.16 (153.66); .397
Other costs: 213.54; −68.73 (47.65); .149

The intervention reduced the cost of an average inpatient stay by a little over $1,500 (p<.001). Therefore, a hospital treating 100 acute hip fracture patients could expect a reduction in patient treatment costs of over $150,000 from implementing the intervention. The cost reduction per inpatient stay stemmed from a reduction in LOS of nearly half a day (p=.055) and a reduction of over $150 in cost per day (p=.010). The intervention increased “extra nursing” cost (p=.002) and the cost associated with special nonoperating rooms (p=.011). These cost increases were more than offset, though, by reductions in room and board (p<.001), pharmacy (p<.001), laboratory services (p<.001), radiation services (p=.006), operating room (p<.001), pulmonary and respiratory services (p=.004), anesthesia (p=.002), emergency room services (p=.024), and blood (p=.019). Using the intervention cost data collected throughout the study, the average direct cost to implement the TRIP intervention at intervention hospitals equaled $17,714. Therefore, a hospital treating 100 acute hip fracture patients could expect an overall cost reduction of $132,286 ($150,000 − $17,714) from implementing the TRIP intervention. A hospital would only need to treat about a dozen acute hip fracture patients to realize cost savings from the TRIP intervention, as the break-even sketch below shows.
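As referenced above, a quick break-even sketch using the two figures reported in this section (implementation cost and estimated per-stay savings):

```python
import math

implementation_cost = 17_714.0   # average direct cost of the TRIP intervention
savings_per_stay = 1_500.362     # estimated cost reduction per inpatient stay (Table 3)

ratio = implementation_cost / savings_per_stay
print(ratio)                     # about 11.8
print(math.ceil(ratio))          # 12: savings exceed implementation cost by the 12th patient
```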

Table 4 contains estimates of the direct effect of the summative index score on cost and LOS. The first three columns contain the estimates of the “as-treated” analysis. These results suggest that an increase in the number of evidence-based acute pain management practices has little effect on inpatient cost and LOS. In contrast, the IV estimates in Table 4 show that a unit change in the summative index score led to a drop of over $1,600 in total inpatient cost (p=.003), which stems mainly from a reduction of over $161 in cost per day (p=.040). LOS also fell by about half a day with a unit increase in the summative index score, but this change was not statistically significant (p=.071). The relationship between the IV estimates and the direct estimates of the intervention effect, as described by Kaufman, Kaufman, and Poole (2003), can be seen in our results in Tables 3 and 4. The parameter b, the change in the summative index level that resulted from the intervention, equals 0.9365637, and the parameter d, the change in total cost that resulted from the intervention, equals −$1,500.362. The ratio d/b is −$1,601.99, which is the IV estimate in Table 4 of the change in total cost from increased use of evidence-based acute pain management practices; the one-line check below reproduces this ratio.
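A one-line verification of the d/b relationship using the values reported in Table 3:

```python
b = 0.9365637    # change in summative index caused by the intervention (Table 3)
d = -1500.362    # change in total cost caused by the intervention (Table 3)
print(d / b)     # -1601.99..., matching the IV estimate in Table 4
```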

Table 4.

Alternative Estimates of the Average Effect of an Increase in Acute Pain Management Practices (Summative Index Score) on Cost and Length-of-Stay

Each row reports, for the as-treated (ordinary least squares) model and the instrumental variables model: parameter estimate (robust standard error), p value.

Total cost: as-treated −57.71 (39.27), p=.197; IV −1,601.99 (534.29), p=.003
Length-of-stay: as-treated −0.09 (0.03), p=.002; IV −0.531 (0.294), p=.071
Total cost per day: as-treated 17.87 (6.55), p=.006; IV −161.44 (78.37), p=.040

Discussion

The first goal of this paper was to estimate the cost change associated with a TRIP behavioral intervention directed at providers in an inpatient setting. A secondary goal was to contrast “as-treated” and IV methods for estimating the direct effect of changes in the use of evidence-based acute pain management practices on inpatient cost. In Table 3 we show that the TRIP intervention itself led to a sizable reduction in total cost per inpatient stay (a $1,500 reduction). Total cost per day and LOS were both reduced by the TRIP intervention. The intervention increased the costs per inpatient stay for extra nursing and special nonoperating rooms. The increase in the extra nursing cost appears related to the increased intensity of nursing care resulting from the intervention. Consider the evidence-based practices significantly affected by the intervention (practices 1 through 6, 9, 12, 13, 15, and 17 in Table 1). These practices, with the exception of practice 17, are nurse-labor intensive. It appears, though, that the increased nursing cost associated with the intervention was a wise investment that led to decreases in total inpatient hospital cost stemming from decreases in the cost of room and board, pharmacy, laboratory services, radiation services, operating room, pulmonary and respiratory services, anesthesia, emergency room services, and blood. We estimated that it would cost a hospital over $17,000 to implement the TRIP intervention. Given the estimated cost savings per inpatient stay, it would take the treatment of only 12 acute hip fracture patients for the TRIP intervention to reduce hospital cost.

While the estimates in Table 3 provide compelling evidence that the TRIP intervention itself reduces cost per stay, because of incomplete compliance with the intervention, these estimates do not provide direct estimates of the effect of evidence-based acute pain management practices on costs. The “as-treated” approach to estimating the direct effect of evidence-based acute pain management practices on hospital cost risks treatment selection bias. Providers may have often tailored the acute pain management practices to the needs of individual patients and not complied with their respective study arm. Biased “as-treated” estimates will result if these pain management practice choices were based on confounding variables observed by the provider but unavailable to the researcher. Our IV estimates avoid treatment selection bias by using only the practice variation induced by the randomized intervention. But, likewise, our IV estimates can only be strictly generalized to the set of patients whose pain practices were influenced by the intervention.

As seen in Table 4, the “as-treated” approach appears to understate the effect of additional evidence-based acute pain management practices on inpatient cost. Providers were likely providing more evidence-based acute pain management practices to more seriously ill and costly patients regardless of the intervention, which biases the estimated effect of these practices on inpatient cost toward zero. The IV results in Table 4 also provide average estimates of the effects of a unit change in evidence-based acute pain management practices on inpatient cost. Because IV estimation used only the practice variation stemming from the TRIP intervention, these estimates are free of treatment selection bias. However, it should be noted that these estimates describe changes in inpatient cost only for patients whose evidence-based acute pain management practices were affected by the intervention. The characteristics of patients that share this property cannot be ascertained from our analysis, but we suspect that these patients are probably not those at the extremes of pain severity, because the absence of pain or the presence of severe pain probably limits the treatment discretion providers have regardless of the study arm. Because of this limitation, it would probably not be wise to assume that cost savings similar to our IV estimates would be available from a change in the number of evidence-based acute pain management practices for patients at the extremes of pain severity.

Acknowledgments

The authors wish to thank the Agency for Healthcare Research and Quality (AHRQ) for its financial support of this project through grant R01 HS10482.

Special gratitude goes to Dr. Xian-Jin Xie and Dr. William Clark for their superb help with statistical analysis; Dr. J. Lawrence Marsh and Dr. Margo Schilling for their conceptual guidance; and Kimberly Jordan for her administrative support.

Financial and Other Disclosures: None.

Disclaimers: Any remaining errors are attributable to the authors. This paper does not represent policy of AHRQ. The views expressed herein are those of the authors and no official endorsement by AHRQ is intended or should be inferred.

Supporting Information

Additional supporting information may be found in the online version of this article:

Appendix SA1: Author Matrix.

Please note: Wiley-Blackwell is not responsible for the content or functionality of any supporting materials supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the article.

References
  1. Andersen C K, Andersen K, Kragh-Sorensen P. Cost Function Estimation: The Choice of Model to Apply to Dementia. Health Economics. 2000;9(5):397–409. doi: 10.1002/1099-1050(200007)9:5<397::aid-hec527>3.0.co;2-e. [DOI] [PubMed] [Google Scholar]
  2. Angrist J D. Estimation of Limited Dependent Variable Models with Dummy Endogenous Regressors: Simple Strategies for Empirical Practice. Journal of Business & Economic Statistics. 2001;19(1):2–16. [Google Scholar]
  3. Angrist J D, Evans W N. Children and Their Parent's Labor Supply: Evidence from Exogenous Variation in Family Size. American Economic Review. 1998;88(3):450–77. [Google Scholar]
  4. Angrist J D, Imbens G W, Rubin D. Identification of Causal Effects Using Instrumental Variables. Journal of the American Statistical Association. 1996;91(434):444–72. [Google Scholar]
  5. Austin P C, Ghali W A, Tu J V. A Comparison of Several Regression Models for Analysing Cost of CABG Surgery. Statistics in Medicine. 2003;22(17):2799–815. doi: 10.1002/sim.1442. [DOI] [PubMed] [Google Scholar]
  6. Baker S. Analysis of Survival Data from a Randomized Trial with All-or-None Compliance: Estimating the Cost-Effectiveness of a Cancer Screening Program. Journal of the American Statistical Association. 1998;93(443):929–34. [Google Scholar]
  7. Baker S. Analyzing a Randomized Cancer Prevention Trial with a Missing Binary Outcome, an Auxiliary Variable, and All-or-None Compliance. Journal of the American Statistical Association. 2000;95(449):43–50. [Google Scholar]
  8. Baum C F. An Introduction to Modern Econometrics Using Stata. College Station, TX: Stata Press; 2006. [Google Scholar]
  9. Beck C A, Penrod J, Gyorkos T W, Shapiro S, Pilote L. Does Aggressive Care Following Acute Myocardial Infarction Reduce Mortality? Analysis with Instrumental Variables to Compare Effectiveness in Canadian and United States Patient Populations. Health Services Research. 2003;38(6 Pt 1):1423–40. doi: 10.1111/j.1475-6773.2003.00186.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Brooks J M, Chrischilles E A, Scott S D, Chen-Hardee S S. Was Breast Conserving Surgery Underutilized for Early Stage Breast Cancer? Instrumental Variables Evidence for Stage II Patients from Iowa. Health Services Research. 2003;38(6 Pt 1):1385–402. doi: 10.1111/j.1475-6773.2003.00184.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Brooks J M, McClellan M, Wong H S. The Marginal Benefits of Invasive Treatments for Acute Myocardial Infarction: Does Insurance Coverage Matter? Inquiry. 2000;37(1):75–90. [PubMed] [Google Scholar]
  12. Centers for Medicare and Medicaid Services. Medicare PPS Payment Impact File. 2002. Available at http://www.cms.hhs.gov/AcuteInpatientPPS/HIF/list.asp.
  13. Elixhauser A, Steiner C, Palmer L. Clinical Classifications Software CCS. Rockville, MD: Agency for Healthcare Research and Quality; 2004. [Google Scholar]
  14. Frances C D, Shlipak M G, Noguchi H, Heidenreich P A, McClellan M. Does Physician Specialty Affect the Survival of Elderly Patients with Myocardial Infarction? Health Services Research. 2000;35(5 Pt 2):1093–116. [PMC free article] [PubMed] [Google Scholar]
  15. Frangakis C, Rubin D. Addressing Complications of Intention-to-Treat Analysis in the Combined Presence of All-or-None Treatment Noncompliance and Subsequent Missing Outcomes. Biometrika. 1999;86(2):365–79. [Google Scholar]
  16. Greene W H. Econometric Analysis. Englewood Cliffs, NJ: Prentice-Hall; 2003. [Google Scholar]
  17. Greenland S. An Introduction to Instrumental Variables for Epidemiologists. International Journal of Epidemiology. 2000;29:722–9. doi: 10.1093/ije/29.4.722. [DOI] [PubMed] [Google Scholar]
  18. Greevy R, Silber J H, Cnaan A, Rosenbaum P R. Randomization Inference with Imperfect Compliance in the ACE-Inhibitor after Anthracycline Randomized Trial. Journal of the American Statistical Association. 2004;99(465):7–15. [Google Scholar]
  19. Hadley J, Polsky D, Mandelblatt J S, Mitchell J M, Weeks J C, Wang Q, Hwang Y T. An Exploratory Instrumental Variable Analysis of the Outcomes of Localized Breast Cancer Treatments in a Medicare Population. Health Economics. 2002;12(3):171–86. doi: 10.1002/hec.710. [DOI] [PubMed] [Google Scholar]
  20. Harris K M, Remler D K. Who is the Marginal Patient? Understanding Instrumental Variables Estimates of Treatment Effects. Health Services Research. 1998;33(5):1337–60. [PMC free article] [PubMed] [Google Scholar]
  21. Hawthorne G W, Rolster C J. 10th Annual Compensation and Salary Guide. Hospital and Health Networks. 2000;74(9):38–46. [PubMed] [Google Scholar]
  22. Herr K, Titler M G, Sorofman B, Ardery G, Schmitt M, Young D. Evidence-Based Guideline: Acute Pain Management in the Elderly. Iowa City, IA: The University of Iowa Research Team on Evidence-Based Practice: Acute Pain Management in the Elderly. AHRQ R01 HS10482; 2000. [Google Scholar]
  23. Imbens G W, Angrist J D. Identification and Estimation of Local Average Treatment Effects. Econometrica. 1994;62(2):467–75. [Google Scholar]
  24. Kaufman J, Kaufman S, Poole C. Causal Inference from Randomized Trials in Social Epidemiology. Social Science and Medicine. 2003;57(12):2397–409. doi: 10.1016/s0277-9536(03)00135-7. [DOI] [PubMed] [Google Scholar]
  25. Lumley T, Diehr P, Emerson S, Chen L. The Importance of the Normality Assumption in Large Public Health Data Sets. Annual Review of Public Health. 2002;23:151–69. doi: 10.1146/annurev.publhealth.23.100901.140546. [DOI] [PubMed] [Google Scholar]
  26. Manning W G. The Logged Dependent Variable, Heteroskedasticity, and the Retransformation Problem. Journal of Health Economics. 1998;17(3):283–95. doi: 10.1016/s0167-6296(98)00025-3. [DOI] [PubMed] [Google Scholar]
  27. Manning W G. Dealing with Skewed Data on Costs and Expenditures. In: Jones A M, editor. The Elgar Companion to Health Economics. Northampton: Edward Elgar Publishing Limited; 2006. pp. 439–46. [Google Scholar]
  28. Manning W G, Mullahy J. Estimating Log Models: To Transform or not to Transform. Journal of Health Economics. 2001;20(4):461–94. doi: 10.1016/s0167-6296(01)00086-8. [DOI] [PubMed] [Google Scholar]
  29. McClellan M, McNeil B, Newhouse J. Does More Intensive Treatment of Acute Myocardial Infarction in the Elderly Reduce Mortality? Analysis Using Instrumental Variables. Journal of the American Medical Association. 1994;272(11):859–66. [PubMed] [Google Scholar]
  30. McClellan M, Newhouse J P. The Marginal Cost-Effectiveness of Medical Technology: A Panel Instrumental-Variables Approach. Journal of Econometrics. 1997;77:39–64. [Google Scholar]
  31. Mealli F. Analyzing a Randomized Trial on Breast Self-Examination with Noncompliance and Missing Outcomes. Biostatistics. 1999;5(2):207–22. doi: 10.1093/biostatistics/5.2.207. [DOI] [PubMed] [Google Scholar]
  32. Medical Group Management Association. Physician Compensation and Production Survey: 2000 Report Based on 1999 Data. Englewood, CO: Medical Group Management Association; 2000. [Google Scholar]
  33. Mee C L, Carey K W. Nursing 2000 Salary Survey. Nursing. 2000;30(4):58–61. doi: 10.1097/00152193-200030040-00036. [DOI] [PubMed] [Google Scholar]
  34. PAOS Job Survey. PAOS Job Survey: Practice Survey of Physician Assistants Practicing in Orthopedics. 2001. [accessed on March 22, 2001]. Available at http://www.paos.org/PRACSERV-2.html.
  35. Rogers E M. Diffusion of Innovations. New York: The Free Press; 2003. [Google Scholar]
  36. Salary Wizard. 2002. [accessed on September 29, 2008]. Available at http://www.nurseweek.com/salary/
  37. Sheiner L, Rubin D. Intention-to-Treat Analysis and the Goals of Clinical Trials. Clinical Pharmacology and Therapeutics. 1995;57(1):6–15. doi: 10.1016/0009-9236(95)90260-0. [DOI] [PubMed] [Google Scholar]
  38. Titler M G. Summative Index: Acute Pain Management in the Elderly. Applied Nursing Research. doi: 10.1016/j.apnr.2008.03.002. In press. [DOI] [PubMed] [Google Scholar]
  39. Titler M G, Herr K, Brooks J M, Xie X J, Ardery G, Schilling M L, Marsh J L, Everett L Q, Clarke W. A Translating Research into Practice Intervention Improves Management of Acute Pain in Older Hip Fracture Patients. Health Services Research. 2008 doi: 10.1111/j.1475-6773.2008.00913.x. DOI 10.1111/j.1475-6773.2008.00913.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. White H. A Heteroskedasticity-Consistent Covariance Matrix Estimator and a Direct Test for Heteroskedasticity. Econometrica. 1980;48(4):817–38. [Google Scholar]
