As Dr. Rempher and Ms. Manna discussed this week, data from the NDNQI is used to improve nursing practices and support the strategic outcomes of an organization. This data is also used to create the Dashboard. The Dashboard, then, is used to create an action plan. Correctly interpreting information presented on the Dashboard provides nurses with a better understanding of the goals of the action plan.

To prepare

For this Assignment, use the Dashboard located in this week’s Resources to interpret the data and frame a nursing plan based on best practices.


· Review the NDNQI and use of dashboards

· Choose a Nurse-Sensitive Quality Indicator that needs improvement based on the data presented in the Dashboard. Reflect on how you would develop a nursing plan with suggestions on how to improve performance in the chosen area.

· Develop a nursing plan that outlines suggestions on how to improve performance in the chosen area.

· Provide at least five best practices from evidence-based literature to support your nursing plan.

Assignment

· Draft a 4- to 5-page paper analyzing areas where there is good performance and areas of opportunity from the sample Dashboard.

· Analyze the data provided in the Dashboard and select an area of performance that needs improvement. Include information on why this area was chosen.

· Develop a nursing plan that includes suggestions on how to improve performance on the selected indicator. Be sure to provide at least five best practices from the evidence-based literature to support your suggested nursing plan.

Guide

Provided a brief paragraph that included an overview of the assignment and its purpose.

Presented an analysis of the data from the Sample Dashboard.

Provided a thorough description of the selected area of performance that needs improvement and discussed the statistical findings and why the area was selected.

Presented a nursing plan with five or more best practices that could be implemented to improve performance on the selected indicator.

Included how each best practice could address the selected performance. Each best practice must be supported by the professional literature.

Provided a summary of the main points in the paper.

APA Formatting: cover page, title of paper on second page, level headings, Times New Roman 12-point font, 1″ margins, and page numbers. APA References: uses in-text citations appropriately and formats them correctly; paraphrases to avoid plagiarizing the source.
Reference list is in alphabetical order, with hanging indents, and double spaced. Each entry contains all information required by APA format, including author, year of publication, title of publication, pages, DOI, or website, as appropriate.

Accountability and Nursing Practice

Accountability and Nursing Practice Program Transcript

[MUSIC PLAYING]

NARRATOR: Personal accountability.

KENNETH REMPHER: I think there’s a direct relationship between holding nurses accountable and their own perceptions of what they contribute to the overall health care process in an organization. And I think one of my greatest epiphanies as an administrator was when one of the nurses looked at me, and she said, if you expect more from us, you’re going to get it. And I was like, what do you mean. And she said, if you expect us to produce a higher level, you give us the tools that we need to do it, we’re not going to disappoint you.

NARRATOR: This week, Miss Nitza Santiago, Miss Diane Johnson, Dr. Kenneth Rempher, and Miss Maria Manna share strategies for improving nursing practice and promoting patient’s rights.

NITZA SANTIAGO: Involving patients in their care has a very direct impact on safety and quality and partly because having them fully knowledgeable of what is expected during that patient’s stay, what is that treatment plan, really allows them to look at what’s happening around them and to confirm that if the doctor said you’re going to have some blood work done later today, that if it doesn’t happen, that they’re questioning the health care team.

FEMALE SPEAKER: I just want to talk to you about what they did today. What Dr. Herval found.

NITZA SANTIAGO: The more involved the patient is in their care, it’s going to lead to a more positive outcome at the point of discharge. And yes, you know, the patient shouldn’t have to be worried about everything that’s going on, are we doing the right thing. But as we all know, unfortunately, errors do occur. So it’s important that everybody is involved in the care and attentive to what needs to happen.

And I also believe that if they know their treatment plan, if they have a good understanding of their diagnosis, they can take an active role in knowing how to prevent signs and symptoms or to catch signs and symptoms early on and call the nurse, and say, I’m feeling really lightheaded, dizzy. All of that helps us to intervene earlier.

DIANE JOHNSON: The changes that occurred in our organization, when I think of previous times to what’s happening, is being less focused on what’s convenient for hospital staff. OK? And looking, as part of our patient-centered care, looking at what is best for the patient and family and engaging them in a partnership.

© 2016 Laureate Education, Inc.

And so that right now was a big focus area for me is to realize that, again, we’re here for the patient, but we need to make sure that while we may have the expert knowledge in terms of the science, et cetera, the patient knows themselves best. OK? And it is our responsibility to make sure we’re collaborating with them. That we’re not simply telling them this is what’s going to happen.

We ask them about their preferences. We plan the time, the schedule. Some things we can’t do that, but there are lots of opportunities where we could say, would you like to eat at this time versus this time, or you have these three tests today, we might be able to change the order of the tests. Is there a preference that you would have for one versus the other. They’re in a foreign environment, away from a lot of things that are familiar to them. And anything that we can do to ease that, I think it’s our duty and our responsibility.

KENNETH REMPHER: The NDNQI, which is the National Database of Nursing Quality Indicators, is a database that’s been around probably since the mid ’90s. 1994, I think, was the original version that was developed in collaboration with the American Nurses Association. What the NDNQI allowed was for nurses to take those components of clinical practice that were as nurse-sensitive as possible, pull those out, find a way to assess those, to measure those, and to present those in some form that would be meaningful to nurses that would require– and I honestly do believe that the NDNQI is probably the soul or one of the strongest contributors to this movement that holds nurses accountable for their practice. Because for the first time, nurses have been given data that say, we know that you’re there 12 hours a day, every day, and then you’ve got a counterpart that comes on to replace you, but we’ve never had a way to really quantify what it is that you do to tell you how good you are at what you’re doing as a discipline or as a profession.

NDNQI does that. In our organization, at Sinai Hospital Baltimore, there are a series of local councils called outcomes and practice councils. The data for each service line are presented to the local councils, the outcomes and practice councils, and they’re presented on a consistent quarterly basis, where we have been using this dashboard process.

And the dashboard process is a mechanism whereby we record not only our own scores, but we record benchmarks that are provided to us through NDNQI, because they provide you with various percentile rankings for hospitals like your hospital in cities like your city that provide services like your unit. So that’s the other important thing about NDNQI, is that it presents unit-based data. Now, the bottom line is that once it comes to these local practice councils, we now have the mechanism in place to hold nurses accountable for their practice.


MARIA MANNA: The NDNQI set benchmarks for indicators that are key to a specific practice area in most instances. So if you have no benchmarks or no measurements to look at, sometimes your interventions or what actions you are employing don’t hold a lot of significance for you. Making nurses aware of the key indicators in their practice areas and where they are in reference to a national benchmark gives them an idea of where they need to go and how they need to move. And they develop action plans in order to meet those benchmarks, and the fact that the action plans are developed by them gives them buy-in.

It makes them feel supported that they actually have the authority to create the action plans. It’s not something that’s handed down to them and saying, OK. Now you need to do this. They’re actually telling leadership. As direct care nurses, we do this every day, and this is what we believe needs to be done in order to achieve this result.

In my practice area, we look at seclusion and restraint. That’s one of the key indicators. And we look at ways to perhaps reduce the number of seclusion episodes. So nurses were involved in an action plan where we would request a peer review and have a consultation with the physician involved in the case and have an interdisciplinary meeting to resolve issues that we believe would potentially reduce the amount of time any particular patient might spend in a seclusion situation.

We would look to see if that has an impact on reducing our seclusion episodes. And if we’re effective, we would continue that. If not, the nurses have an opportunity to modify their action plan. We have an outcomes and practice committee where nurses attend, and that is where that work is done, and then it’s presented to the entire nursing staff on a particular unit. So those benchmarks that NDNQI drives actually focus the nurses on key indicators that they need to work towards for the betterment of the patient. It’s improved quality.

KENNETH REMPHER: So the NDNQI, overall, has gone from a database that just collects data to something that we can enforce at the unit level that really holds nurses accountable for providing the optimum patient experience. We’re very proud of one particular unit. In our organization there’s an intermediate care unit that has a relatively high acuity in terms of patients. Where the acuity increases, there tends to be an alignment with increase in nosocomial or even community-acquired pressure ulcers.

This particular unit had a pretty significant rate. Up to 40% of its patients during their stay on this unit were acquiring pressure ulcers, which is an abysmal rate. It’s something that needed to be addressed immediately. Through the use of the NDNQI, we were able to track that data. The nurse manager and the clinical specialist for that service line met with the direct care staff to say, this is not how we practice. This is not who we are. We are much better than this. What can we do to turn this around?


Through a lot of creative thinking and through multidisciplinary collaboration, they developed a magnificent program that has resulted in no nosocomial pressure ulcers for 26 months. There was a gradual trend downward. But the important point here is that using this data allowed them to see how poorly they were doing.

It caused them to become creative and to develop, through an interdisciplinary process, a plan. Implement that plan. Reevaluate the data. Submit it. And now, for 26 months, or actually for almost eight quarters, we’ve had absolutely no acquired nosocomial ulcers.

We’ve had the opportunity to influence practice literally around the world. And by doing that, we have been able to, as I said, we’ve consulted with other hospitals, we have presented at international, national, local, and regional poster competitions. We have articles in line for publication at this point. So there’s been a true opportunity, and this has been important for direct care nurses to see that what they thought was mundane and routine can really be construed as something powerful and allows them to see the presence of nursing at the bedside by qualified people can make a big difference.

MARIA MANNA: Traditionally in psychiatry, we don’t see a lot of pressure ulcers. The majority of our patients are up walking around. They’re generally well hydrated. And so we identified that we really needed to make sure that our patients were receiving the appropriate skin assessments, that we were able to recognize wounds that need to be staged, et cetera.

So the nurse felt empowered enough to attend an eight-hour workshop. She initiated that attendance. And then what she did was she created a poster board. A three-part poster board. It’s absolutely beautiful, with pictures of wounds, directions about how to stage, and what’s important to look at as far as assessment.

And she presented that to her peers so that she could share her knowledge with them. And that in the end, it would go to improving the quality of care that the psychiatric patients would receive. And this again goes to dementia patients who are more debilitated, who do need that additional treatment, perhaps, and may need to have a wound assessed. Now, because she felt empowered to take advantage of education provided and empowered to pass that education along to her peers, the patients can benefit from that. And the other nurses feel that they have the tools that they need to do their job effectively.

In psychiatry, we have not only the patient’s bill of rights that every other patient has, we have a specific patient bill of rights for psychiatric patients. And nurses are responsible for making patients aware of those rights when they’re admitted. Every patient admitted to a psychiatric unit is apprised of those individual psychiatric patient’s bill of rights.


This is a very vulnerable population. They feel that because of their illness, they, of course, can’t trust people. They have generally struggled with many psychosocial issues. They may not have a lot of family support. So oftentimes, nursing is the identified party that’s going to advocate for that patient when it comes to their right. And a right to refuse medication belongs to that patient.

That’s one of the areas of advocacy. We had a patient admitted to our unit who was certified. That means they were committed against their will because they are considered to be incompetent at the time of their admission. In psychiatry, once treatment commences, people can improve pretty quickly. And so her cognitive state changed.

When this woman was admitted, she had breast [INAUDIBLE] with metastasis and was refusing treatment. She was refusing any diagnostic testing as well. So in order to help her, the treatment team determined that guardianship was probably necessary so that she could take advantage of testing and treatment. And that’s exactly what happened.

Guardianship was pursued. We accomplished that. But in the meantime, the patient’s mental state improved. She was no longer incapacitated. Nursing staff really felt an ethical dilemma existed there because now she had the capacity to make decisions. She had the capacity to understand due to the improvement of her symptoms, yet she no longer had the ability to make her own decisions.

So nurses advocated for her. They called an ethics consult. The ethics committee person met with our physicians and then met with nursing staff to try to resolve the ethical conflicts that they felt on behalf of the patient. So they were advocating again for her right to refuse the treatment. And the result of that, I think, was very gratifying for the nursing staff, and I think the patient felt very supported.

[MUSIC PLAYING]


Journal for Healthcare Quality

Quartile Dashboards: Translating Large Data Sets into Performance Improvement Priorities

Diane Storer Brown, Carolyn E. Aydin, Nancy Donaldson

Abstract: Quality professionals are the first to understand challenges of transforming data into meaningful information for frontline staff, operational managers, and governing bodies. To understand an individual facility, service, or patient care unit’s comparative performance from within large data sets, prioritization and focused data presentation are needed. This article presents a methodology for translating data from large data sets into dashboards for setting performance improvement priorities, in a simple way that takes advantage of tools readily available and easily used by support staff. This methodology is illustrated with examples from a large nursing quality data set, the California Nursing Outcomes Coalition.

Key Words: benchmarking, dashboard, prioritization, radar diagrams

Dashboards have transformed the way that healthcare professionals and senior leaders monitor organizational performance and prioritize the design of improvement interventions (Donaldson, Brown, Aydin, Bolton, & Rutledge, 2005; Rosow, Adam, Coulombe, Race, & Anderson, 2003). Dashboards provide data on structure, process, and outcome variables; report cards provide final reports on outcomes and are often intended for external audiences (Gregg, 2002). Recent public reporting initiatives and the pay-for-performance demonstration project funded by the Centers for Medicare and Medicaid Services represent the report card strategy in which hospital performance is judged by external constituents incorporating incentives for performance improvement (Lindenauer, Remus, & Roman, 2007). In order to improve performance on public report cards, hospitals construct internal dashboards to review performance and identify areas in need of change. Benchmarking with similar hospitals in a confidential context is an important element in this process (Brown, Donaldson, Aydin, & Carlson, 2001; Gregg, 2002).

Understanding Performance Data

Traditionally, large quality data sets have been summarized using descriptive statistics such as frequencies, averages, and standard deviations placed in tables, bar graphs, or line graphs to track key metrics over time. Those operationally accountable to improve patient care quality and safety depend on quality professionals to translate data into usable information, which is then used to determine performance thresholds for drill-down analyses or benchmarks and performance goals to understand relative comparative performance. This article uses common definitions for performance metrics as follows from Merriam-Webster Online Dictionary (2007): Goal is the end toward which effort is directed (where you want your performance to be) and is synonymous with target, a goal to be achieved; threshold is a level, point, or value above which something will take place, and below which it will not (the point where performance has declined and you need to drill down further to understand why); a benchmark is something that serves as a standard by which others may be measured or judged (a best practice that you strive to meet or exceed).

Vol. 30 No. 6 November/December 2008

Those new to the field of healthcare quality must learn how to translate data for benchmarking endeavors based on the data set under review. Raw data reported out as frequencies (the count or number of occurrences) have little use in performance monitoring, with the exception of monitoring rare events. When monitoring patient safety indicators that occur rarely, monitoring days between occurrences may be an important metric for frontline staff watching zero-tolerance indicators such as falls with major injury. The mean or average, calculated as the sum of all occurrences divided by the number of occurrences, is a statistic likely reported in all numerical data sets. However, the mean is known to be sensitive to extreme values or outliers, especially when sample sizes are small (Dawson & Trapp, 2004). This means that one patient with an extreme value can pull the mean for the data set and leave the wrong impression about performance for all patients, which could lead to unnecessary improvement efforts. The median or middle value may be a better reference point for data sets when there are extreme values. The median reflects the middle point of all observations: half the observations are larger than the median, and half are smaller. The median is also more appropriate to use for ordinal data, meaning data where there is an inherent order to the values, but the values themselves may not have meaning. An example of ordinal data consists of the numeric response choices on a satisfaction survey where 1 may represent dissatisfaction and 5 may represent complete satisfaction. The average of these data (response choices of 1-5) may be distorted or skewed by survey respondents selecting complete satisfaction (5), and those interpreting the results may not clearly see the distribution of the patient or staff responses.
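The mean’s sensitivity to outliers, and the median’s robustness to them, can be sketched in a few lines of Python (the wait-time numbers here are hypothetical, not drawn from any data set in the article):

```python
from statistics import mean, median

# Hypothetical minutes-of-waiting data; one extreme stay (180 min)
# pulls the mean well away from the typical patient experience.
waits = [12, 15, 14, 16, 13, 180]

print(round(mean(waits), 1))  # 41.7 -> distorted by the single outlier
print(median(waits))          # 14.5 -> the middle of the observations
```

A manager reading only the mean would conclude patients wait about 42 minutes, when five of the six waited 16 minutes or less.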

Understanding how the data actually spread out is important for determining performance goals and benchmarks from data sets. Traditionally, the average may have been used as a goal. However, in today’s competitive healthcare industry, striving to be average may not be the benchmark that senior leaders wish to target. Quality professionals have the task of interpreting the spread of the data to help establish useful benchmarks from the data set so that leaders can establish realistic targets. Healthcare quality data are often skewed data: data that are not symmetrically distributed (bell-shaped or normally distributed) in such a way that half the data are above the mean and half are below. In symmetrical data, the mean and the median are numerically equal. This is important information to confirm when using a mean for a target; when the mean is pulled by extreme values, it may not be representative. The range may be included in reports to show where the mean sits in the data set. The range describes the data spread from the highest to the lowest numbers and is calculated by subtracting the minimum value from the maximum (Dawson & Trapp, 2004). The same information is available if data sets provide the minimum and maximum values.

Most data sets report standard deviations when means are reported. The standard deviation mathematically describes how the data spread out around the mean by representing the average distance of observations from the mean (Dawson & Trapp, 2004). You might recall from statistics classes that if the observations are symmetrical or normally distributed (in a bell-shaped curve), then about 68% fall within 1 standard deviation of the mean; 95%, within 2 standard deviations; and 99.7%, within 3 standard deviations. By taking the mean and adding or subtracting the 1, 2, and 3 standard deviation values from it, you will see the distribution of the data and will better understand the usefulness of the mean to set performance metrics.
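A minimal sketch of computing the mean plus or minus 1, 2, and 3 standard deviations to see how a data set spreads (the scores are hypothetical):

```python
from statistics import mean, stdev

# Hypothetical, roughly symmetrical quality scores.
scores = [88, 92, 95, 90, 93, 91, 89, 94]
m, s = mean(scores), stdev(scores)

# For roughly normal data, about 68% of observations lie within 1 SD
# of the mean, 95% within 2 SD, and 99.7% within 3 SD.
for k in (1, 2, 3):
    print(f"mean ± {k} SD: {m - k * s:.1f} to {m + k * s:.1f}")
```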

An example of setting performance metrics with service times (minutes of waiting) follows. When measuring minutes of waiting, negative values would not be possible (minutes below zero), and if the mean minus 1 standard deviation produces negative numbers, consider whether there were patients with extremely long wait times that pulled the group average up (resulting in a large standard deviation). The average for this data set may not be useful for performance metrics. Consider pulling the outliers out of the data set after reviewing the individual data points. A scattergram is an easy way to see the outliers. By looking at the actual data and pulling out extreme values (e.g., more than 3 standard deviations), the average for these data would be lower and would better reflect actual patient experiences.
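The outlier-trimming step described above might look like this in Python; the 3-standard-deviation cutoff follows the example in the text, but the wait times themselves are invented:

```python
from statistics import mean, stdev

# Hypothetical wait times (minutes); one extreme 120-minute stay.
waits = [11, 12, 10, 13, 11, 12, 14, 9, 10, 12,
         11, 13, 12, 10, 11, 14, 12, 13, 11, 120]

m, s = mean(waits), stdev(waits)

# Drop observations more than 3 standard deviations from the mean,
# then recompute the average on the remaining data.
trimmed = [w for w in waits if abs(w - m) <= 3 * s]
print(round(mean(trimmed), 1))  # better reflects the typical patient
```

Note that a single outlier also inflates the standard deviation itself, so with very small samples the outlier may not exceed the 3-SD cutoff; reviewing the individual data points (e.g., in a scattergram) remains the safer first step.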

As benchmarking data sets become more sophisticated, reporting percentiles is emerging as another way to understand the spread of the data and to provide more specificity for establishing performance metrics. Percentiles are easier to explain to those who operationally use the data, and it is easier to set benchmarks or targets with percentiles. A percentile is the percentage of a distribution (responses or values) that are equal to or below that number (Dawson & Trapp, 2004). Percentiles are commonly reported in healthcare with growth charts for children and in academia with test scores. For example, in a growth chart, if 60 pounds is the 90th percentile, that number tells us that 90% of the children at that age weigh 60 pounds or less, and 10% of the children weigh more. It is easy to understand that this child is heavier than 89% of the other children the same age.
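A percentile rank of this kind is easy to compute directly; this small sketch uses hypothetical weights chosen so that 60 pounds falls at the 90th percentile, as in the growth-chart example:

```python
def percentile_rank(data, x):
    """Percentage of observations that are equal to or below x."""
    return 100 * sum(v <= x for v in data) / len(data)

# Hypothetical weights (pounds) of ten children at the same age.
weights = [38, 41, 44, 46, 48, 50, 52, 55, 60, 72]

print(percentile_rank(weights, 60))  # 90.0 -> 60 lb is the 90th percentile
```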

When percentiles are available, quartiles and interquartile ranges describe how the data spread out and thus are extremely valuable for establishing performance metrics. Quartiles divide the data set into four quarters, with the 25th percentile as the first or lower quartile; the 50th percentile as the median or middle, which separates the second and third quartiles; and the 75th percentile as the upper quartile (Figure 1). The interquartile range is the spread of data between the 25th and 75th percentiles: the middle values that represent 50% of the data set. Quartiles demonstrate performance relative to others in the data set and are used to set meaningful metrics. For example, if service satisfaction scores are being compared, and your unit or hospital falls in the lower quartile, this means that 75% of those compared have higher satisfaction. A meaningful goal might be to reach the 50th percentile for performance. Setting the 75th percentile or upper quartile as the goal may be a stretch goal and difficult to achieve, creating frustration for those accountable to implement improvements. The 50th percentile, or median, could be a short-term goal; and the 75th percentile, a long-term goal. Another hospital might already be in the upper quartile at the 85th percentile; quality professionals at that hospital may wish to set the 75th percentile as the threshold indicating that their performance has declined (or indicating that the competition has gotten better).

Figure 1. Data Distribution with Percentiles and Quartiles. [Figure showing a distribution divided into four quartiles of 25% of the data each, with the interquartile range spanning the lower to the upper quartile around the 50th percentile (median); the horizontal axis is labeled Percentiles from 0% to 100%; legend: Lower Quartile, Below Median, Above Median, Upper Quartile.]
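The quartile-based goal setting described above can be sketched with Python’s statistics module (the hospital scores are hypothetical, and `method="inclusive"` is just one of several conventions for computing quartiles):

```python
from statistics import quantiles

# Hypothetical satisfaction scores for eight comparison hospitals.
scores = [62, 68, 71, 74, 77, 80, 84, 90]

# Quartile cut points: 25th percentile, median, 75th percentile.
q1, median, q3 = quantiles(scores, n=4, method="inclusive")
iqr = q3 - q1  # the middle 50% of the distribution

your_score = 66
if your_score < q1:
    # In the lower quartile: 75% of comparators score higher.
    # Per the article: median as short-term goal, upper quartile long-term.
    short_term_goal, long_term_goal = median, q3
```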

Use of percentiles and quartiles for benchmarking expands the toolbox for quality professionals for data display beyond traditional pie charts, bar graphs, and trend or line graphs. Today, quality professionals can use the following guidelines in deciding which measure of data spread may be most appropriate for a given data set (Dawson & Trapp, 2004):

1. Standard deviations are appropriate when the mean is used and the data are symmetrical numerical data.

2. Percentiles and the interquartile range are appropriate when the median is used for ordinal data or the numerical data are skewed.

3. Interquartile ranges can be used to describe the middle 50% of the data distribution regardless of its shape.

4. Ranges are used with numerical data when the purpose is to understand extreme values.
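These guidelines could be applied mechanically as a rough helper function; note that the skew check used here (comparing the mean-median gap to the standard deviation) and its 0.25 cutoff are an invented rule of thumb for illustration, not part of the article:

```python
from statistics import mean, median, stdev, quantiles

def spread_summary(data, skew_tol=0.25):
    """Rough sketch of the guidelines above: report mean/SD for roughly
    symmetrical numerical data, median/IQR when the data look skewed.
    The skew test and its cutoff are an arbitrary rule of thumb."""
    m, md, s = mean(data), median(data), stdev(data)
    if s > 0 and abs(m - md) / s > skew_tol:  # mean pulled from the median
        q1, _, q3 = quantiles(data, n=4, method="inclusive")
        return ("median/IQR", md, q3 - q1)
    return ("mean/SD", m, s)
```

For a symmetrical set such as `[1, 2, 3, 4, 5]` this reports mean/SD; for a skewed set with an extreme value it falls back to median/IQR.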

Where does the quality professional begin to translate data sets into dashboards and set performance targets, thresholds, and benchmarks? Armed with a basic understanding of the statistics described earlier, quartiles may provide a more sophisticated methodology to establish evidence-based performance metrics. Quartiles or percentiles can be selected as goals for performance, as thresholds for drill-down analyses if performance is already at the desired level, or as benchmarks for best practices from high performers.


Figure 2. Summary Statistics for Staffing and Falls by Unit Type and Hospital Average Daily Census. [Table reporting, for indicators such as percent RN hours of care, percent LVN hours of care, percent other hours of care, percent contract hours of care, total hours per patient day, RN hours per patient day, patients per licensed staff, falls per 1,000 patient days, and injury falls per 1,000 patient days: the number of hospitals, number of records, mean, standard deviation, median, and lower and upper quartile values. ADC = average daily census; each record is data from one hospital for one month. The individual cell values are not recoverable in this copy.]

Understanding Data Set Reports

Databases provide information to users in a variety of formats. Selecting which format to use may be overwhelming for new quality professionals. Keeping the purpose of the data review in mind will help make the selection easier. Typical reports include summaries of multiple indicators at a point in time, comparison of performance against outside benchmarks, comparison of performance on an individual or multiple indicators with a picture, and monitoring performance on individual indicators over time. To illustrate reports that are commonly available, examples from the California Nursing Outcomes Coalition (CalNOC) data set are described, with discussion on how to use the reports to meet the reviewer’s intended purpose.

CalNOC, a regional nursing quality measurement database, is a collaborative effort of the American Nursing Association-California (ANA/C) and the Association of California Nurse Leaders to advance improvements in patient care by sustaining a valid and reliable statewide outcomes database. Voluntary membership is available to all acute care hospitals in the state of California, as well as selected hospital groups in other states in the western region of the United States. In 2007, more than 180 of California’s 460 acute care hospitals participated in CalNOC, with additional hospitals from Nevada, Arizona, Oregon, and Hawaii. Nurse-sensitive quality indicators are collected at the patient care unit level and clustered into categories of variables related to nurse staffing (hours of care, skill mix, use of contract staff, staff turnover, and bed turnover); registered nurse (RN) education level, certification, and years of experience; patient falls; pressure ulcer (PU) prevalence; restraint prevalence; central line-associated bloodstream infections; and medication administration accuracy. Hospitals access Web-based customized reports generated directly from the data set to compare their own performance with that of like hospitals. CalNOC hospitals develop their own facility dashboards, combining reports from the Web site with those from other data sources to display indicators on a single document (Donaldson et al., 2005). The CalNOC project has been described in detail elsewhere (Aydin et al., 2004; Brown et al., 2001).

Journal for Healthcare Quality

Summary statistics reports provide a quick reference for aggregated data at a given point in time (e.g., the current quarter) to populate dashboards or view indicators tracked over time. These reports often provide columns of aggregated numeric data without graphs, and they usually include averages and measures of data spread such as standard deviations or minimum and maximum values, and may provide quartiles. CalNOC summary statistics reports provide member hospitals with aggregated statistics for all CalNOC hospitals on all variables. Figure 2 shows an example of summary statistics for staffing and falls by unit type and hospital average daily census.
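As an illustration of what feeds such a report, the following Python sketch computes the typical summary statistics for one indicator. This is not part of the article (CalNOC reports are generated from its Web site), and the fall rates below are invented example values:

```python
# Illustrative sketch: summary statistics for one indicator, as a report
# like the one described above would show them. The data are made up.
import statistics

fall_rates = [2.1, 2.8, 3.0, 3.4, 3.9, 4.2, 4.8, 5.5]  # falls/1,000 pt days

mean = statistics.mean(fall_rates)
spread = statistics.stdev(fall_rates)            # sample standard deviation
lo, hi = min(fall_rates), max(fall_rates)        # data spread (min/max)
q1, median, q3 = statistics.quantiles(fall_rates, n=4)  # quartile cut points

print(f"mean={mean:.2f} sd={spread:.2f} min={lo} max={hi}")
print(f"quartiles: 25th={q1:.2f} 50th={median:.2f} 75th={q3:.2f}")
```

The same mean, spread, and quartile columns appear for every indicator in the aggregated report.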

Graph reports provide a visual comparison of performance on select indicators at a point in time (e.g., the current quarter). Graphs provide a visual representation of comparative hospital performance, which may quickly provide performance information. Graphs should not be used to summarize all data, only those prioritized for performance monitoring. When reports include pages and pages of graphs, the key messages and analyses from the data set are lost on those reviewing the reports. Figure 3 shows a sample comparison graph for falls per 1,000 patient days for all medical/surgical units in hospitals with an average daily census under 100 patients. This graph gives hospitals a visual representation of the variation among hospitals, followed by a report that lists the actual performance for each hospital (not included).

Trend reports provide the ability to monitor prioritized indicators over time. These reports often include graphs as well as a data table for monitoring. Using trend charts can help hospitals understand their ongoing performance over time by watching the slope of the line or bars to understand whether performance is improving, declining, or stable compared to the same hospital (your hospital) each month or quarter. Figure 4 provides an example of a hospital trend report for falls per 1,000 patient days for one hospital. Both the facility average and CalNOC average for the selected time period are shown by lines across the graph. The report includes the graph shown, followed by a table listing the actual numeric fall rates for each month (not included).

Be careful when monitoring only trend reports. Even if performance remains stable (i.e., flat slope), comparison to others is still important to see whether the bar rises. As the group prioritizes improvement over time, the group average may raise the bar or benchmark. Even if individual performance is stable, relative performance may decline, for example, from the 90th percentile to the 80th percentile, simply because the rest of the group in the data set improved. It would be a mistake to monitor only individual performance over time.
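The effect described above can be made concrete with a small sketch. This is illustrative only (not from the article); the rates are invented, and the point is simply that an unchanged fall rate can still lose relative standing when the comparison group improves:

```python
# Illustrative sketch: a facility's rate stays flat while the group
# improves, so its relative rank slips. All numbers are made up.

def percentile_rank(value, group, lower_is_better=True):
    """Percent of the group this value performs at least as well as."""
    if lower_is_better:
        better_or_equal = sum(1 for g in group if value <= g)
    else:
        better_or_equal = sum(1 for g in group if value >= g)
    return 100 * better_or_equal / len(group)

facility_rate = 3.0  # falls per 1,000 patient days, stable across periods

last_year = [3.2, 3.5, 3.8, 4.0, 4.1, 4.5, 5.0, 5.2, 5.5]  # group improves
this_year = [2.0, 2.2, 2.5, 2.8, 3.3, 3.6, 4.0, 4.2, 4.4]  # ...over the year

print(percentile_rank(facility_rate, last_year))  # high relative standing
print(percentile_rank(facility_rate, this_year))  # same rate, lower standing
```

A flat trend line on the facility's own chart would hide this slide entirely, which is why the article recommends pairing trend reports with external comparison.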

Monitoring trends over time for prioritized indicators is very important in determining whether gains are held. When data are being viewed over time, it is usually better to use line graphs to better visualize trends. Figure 5 provides an example of the same data using vertical bar graphs and line graphs. Although both graphs clearly demonstrate the spike in restraint use in 2005, the trend of decrease over time is much clearer in the line graph.

Benchmarking reports provide a succinct summary of performance, together with the performance of like groups. These reports may be helpful to senior leaders such as the chief officers or the board of directors when data are at the facility level, and they may be helpful to individual unit managers when data are at the unit level of analysis. These reports are usually numeric data in columns and provide comparisons of the individual performance with other groups such as state or national averages, or averages of other like facilities based on criteria from the given database. Data may be similar to summary statistics with averages and data spread information and may include percentile or quartile information. CalNOC's facility-level benchmarking reports show summary data for the total facility and by unit type (i.e., critical care, step-down, and medical/surgical units). Figure 6 shows a facility-level benchmarking report for prevalence studies. Unit-level data allow managers to compare their performance within the facility as well as externally. Unit managers can examine unit performance in detail, including both PU prevention process variables and patient outcomes. These statistics track the actual number of patients with ulcers in addition to the percent. Actual numbers may be meaningful to frontline unit staff when tracking rare events by days between occurrences. Also included are statistics useful for performance metrics such as the facility mean by unit type, like hospital mean by unit type, and CalNOC mean by unit type. Taken together, the statistics on this unit-level report provide a valuable drill-down into both patient outcomes and the PU prevention process.

Translating Data into Quartile Dashboards

A six-step process has been developed to guide quality professionals through the translation process.

Vol. 30 No. 6 November/December 2008

Figure 3. Graph Report: Comparison Graphs by Hospital Size for Care Hours and Falls, by Unit Type; Falls per 1,000 Patient Days (all CalNOC hospitals with census under 100). [Figure content not reproduced.]

Figure 4. Trend Report by Total Facility: Falls per 1,000 Patient Days, Monthly, through March 2007. [Figure content not reproduced.]

Figure 5. Comparison of Data Using Bar and Line Graphs: Percent of Patients with Restraints, Med/Surg; CalNOC and Facility X Restraint Prevalence, 1999-2005 and 1Q06-4Q06. [Figure content not reproduced.]

Continuing with the CalNOC example, and using the definition for dashboards presented earlier, prioritized indicators representing structure, process, and outcomes were selected to demonstrate a simple method to translate quartile information from summary reports using readily available tools in software products such as Microsoft Excel or PowerPoint.

Figure 6. Benchmarking Report: CalNOC Benchmarking Report, Prevalence Studies by Unit; Most Recent Prevalence Studies for Year 2006 to 2007 (medical/surgical units). Rows include percent of patients with any ulcers; with Stage II+ ulcers; with hospital-acquired pressure ulcers (all, Stage II+, and Stage III+); with ulcer risk assessment documented within 24 hours of admission; assessed patients identified at risk for ulcers at admission; at-risk patients with ulcer prevention protocol in place at survey; patients with any restraints; and patients in restraint (limb and/or vest only). Columns compare unit values with the facility mean by unit type, like hospital mean by unit type, and CalNOC mean by unit type. [Numeric content not reproduced.]

Step 1: Prioritization

After reviewing all the reports available to quality professionals in databases, the next challenge is one of synthesizing the information to narrow the focus to indicators that are important to monitor compared to benchmarks. Prioritization should come from the key stakeholders who manage operations associated with the data set. Indicators should be limited to the "vital few" and should represent structure, process, and outcomes. The prioritized indicator list will need to be placed into a spreadsheet to create the dashboard.

Step 2: Translating Performance into Quartiles

Performance on the prioritized indicators will next need to be translated into quartiles. Gather the reports that provide benchmark quartile values with facility performance. For each indicator, identify the numeric value that defines the range of values for each quartile in the data set. Next, identify the facility's individual performance and where that value falls within the identified quartile range (this can be done concurrently or as individual steps). Transfer this information into the spreadsheet. This abstraction from summary reports can be completed by support staff after training on the specific reports that will be used and the fundamentals of quartile metrics. Figure 7 shows a very simple worksheet for capturing performance by indicating which quartile the hospital fell into for each indicator. Percentile numbers (25, 50, 75) were assigned in the last column of the worksheet, which will be used to generate dashboard graphs.

As a practice example for translating quartile information, refer back to Figure 2, Summary Statistics, as a reference. Total hours per patient day in medical/surgical units has the following quartiles: the lower quartile is 7.44 (1st to 25th percentiles), the median value is 8.56 (50th percentile), and the upper quartile begins at 9.75 (75th to 100th percentiles). Next, identify the individual hospital's performance on the same indicator. If the value is 7.44 or less, it is in the lower quartile; if it is 7.45 to 8.56 (the median value), it is below the median but above the lower quartile; if it is 8.57 to 9.74, it is above the median but below the upper quartile; and if it is 9.75 or more, it is in the upper quartile.
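The bucketing above can be sketched in code. This is a hypothetical illustration only (the article's workflow uses a spreadsheet); the function name and the default cut points are taken from the example values in the text:

```python
# Illustrative sketch: assign a percentile label (25, 50, 75, 100) to a
# facility value using the article's example quartiles for total hours per
# patient day on medical/surgical units (7.44, 8.56, 9.75).

def quartile_bucket(value, lower=7.44, median=8.56, upper=9.75):
    """Return the worksheet percentile label for a facility value."""
    if value <= lower:
        return 25        # in the lower quartile
    elif value <= median:
        return 50        # above the lower quartile, at or below the median
    elif value < upper:
        return 75        # above the median, below the upper quartile
    else:
        return 100       # in the upper quartile

if __name__ == "__main__":
    for v in (7.0, 8.0, 9.0, 10.5):
        print(v, "->", quartile_bucket(v))
```

The returned label corresponds to the number recorded in the last column of the Figure 7 worksheet.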


Figure 7. Worksheet for Capturing Performance by Indicator

Worksheet 1: CalNOC Indicator Performance from Summary Statistics, Quarter 1 2008. The number recorded for each indicator is the percentile column the hospital fell into: 25 (lower quartile), 50 (below median), 75 (above median), or 100 (upper quartile).

Structure (Staffing): % RN Hours of Care: 50; % LVN Hours of Care: 75; % Other Hours of Care: 25; % Contract Hours of Care: 100; Total Hours Per Patient Day: 75; # Patients Per RN: 100; Licensed Hours PPD: 100; Sitter Hours: 25; Bed Turnover: 100; RN Voluntary Turnover: 100; LVN Voluntary Turnover: 25; Total Voluntary Turnover: 100.

Process: % PU Risk Assess in 24 hours: 50; % At Risk for PU: 25; % At Risk PU Prevention: 25; % Restrained: 100; % Restrained Vest or Limb: 100.

Outcomes: Falls: 100; Falls with Injury: 25; % Hosp Acquired Ulcer: 100; % Stage II+ HAPU: 100; % Stage III+ HAPU: 75.

Note. LVN = licensed vocational nurse; RN = registered nurse; PPD = per patient day; PU = pressure ulcer; HAPU = hospital-acquired pressure ulcer.

Step 3: Creating the Dashboard

The next step in the translation process is to use the quartile data to create a picture that will show performance priorities, using the data in the last column of the worksheet and a readily available software application such as Microsoft Excel or PowerPoint. Again, support staff will be able to accomplish this translation once the indicators have been selected and the worksheet has been set up.

Figure 8 shows a traditional way to look at these data using horizontal bar graphs. The quartiles are demarcated numerically by the percentiles that define them. A more powerful picture may be available for quartiles using radar or spider diagrams. Figure 9 provides the same information, but the picture is more powerful visually. Similar to the bar graph, the quartiles are demarcated numerically by the percentiles that define them. The center of the diagram represents the lower quartile, with each quartile moving away from the center progressively, so that the upper quartile is the outer ring of the diagram, which resembles a spider web.

Performance is identified by coloring of the diagram, with more color indicating performance reaching out from the center and lower quartile.
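The article's own translation is done in Excel or PowerPoint. Purely to illustrate the scaling idea behind the bar graph, here is a hypothetical, stdlib-only Python sketch that renders a few of the worksheet's staffing percentiles as text bars; the function name and bar width are assumptions:

```python
# Illustrative sketch: a plain-text stand-in for a horizontal bar graph,
# one bar per indicator, scaled by the percentile number from the
# worksheet's last column (values taken from the Figure 7 staffing rows).

STAFFING = [
    ("% RN Hours of Care", 50),
    ("% LVN Hours of Care", 75),
    ("% Other Hours of Care", 25),
    ("% Contract Hours of Care", 100),
    ("Total Hours Per Patient Day", 75),
]

def text_bar_dashboard(rows, width=20):
    """Render each (indicator, percentile) pair as a scaled text bar."""
    lines = []
    for name, pct in rows:
        bar = "#" * (pct * width // 100)   # scale percentile to bar length
        lines.append(f"{name:<30} {bar} {pct}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(text_bar_dashboard(STAFFING))
```

In a spreadsheet, the same last-column values drive the bar lengths directly.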

Step 4: Consolidation to a One-Page Dashboard

Cluster the graphs on a one-page document so that all information is readily available at a glance. Two examples are provided in Figure 10 and Figure 11, showing the horizontal bar graphs and the radar diagrams, respectively, using structure, process, and outcome indicators from the worksheet. Because all the data are on one page, the end user can quickly visualize comparative performance on prioritized indicators.

Step 5: Supporting Documentation

Creation of an appendix or supporting document for the dashboard is based on the end user's need for additional information. A table of indicator definitions may be included, which also could provide data sources and time frames for the data set. When quartiles are used as benchmarks, it is also helpful to identify the desired direction for performance. For example, using the indicator data in these dashboards for PUs, process data related to assessment for PU risk or prevention intervention performance in the upper quartiles would be desirable, and outcome performance related to acquiring PUs in the lower quartiles would be desirable. Arrows indicating the desired direction can be placed on the dashboard as one helpful tool, as shown in Figure 10. Another option, one requiring further explanation to the users, is to rescale the dashboard so that low performance is always in the lower quartile and desired performance is always in the upper quartile. For the information on PUs, this would require transposing actual quartile performance data for acquiring ulcers (in this case, being in the lower quartile is good) and representing that as the upper quartile on the dashboard. The dashboard must be clearly labeled with footnotes so it is clear to those using the dashboard that good performance is always high, even though intuitively you wish to achieve low prevalence.
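The transposition described above amounts to mirroring the percentile labels for any indicator where a low raw rate is the good outcome. As a hypothetical sketch (not from the article; the mapping and function name are assumptions):

```python
# Illustrative sketch: rescale "lower is better" indicators so desirable
# performance always lands in the upper quartile on the dashboard.
# The percentile labels are simply mirrored: 25 <-> 100 and 50 <-> 75.

RESCALE = {25: 100, 50: 75, 75: 50, 100: 25}

def rescale_if_lower_is_better(percentile, lower_is_better):
    """Mirror the percentile label when a low raw rate is the good outcome."""
    return RESCALE[percentile] if lower_is_better else percentile

if __name__ == "__main__":
    # A facility in the lower quartile for hospital-acquired ulcers (good)
    # is displayed in the upper quartile on the rescaled dashboard.
    print(rescale_if_lower_is_better(25, lower_is_better=True))
```

As the article cautions, a rescaled dashboard needs footnotes so readers know that "high" always means "good" on the display.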

Step 6: Interpretation

The final step in the translation process involves analysis or interpretation of comparative performance relative to others in the data set. The key operational stakeholders who prioritized the indicator set must be involved in this process.

Figure 8. Quartile Performance Using Horizontal Bar Graphs: Staffing Performance in Quartiles, one bar per staffing indicator from the Figure 7 worksheet. [Figure content not reproduced.]

Figure 9. Quartile Performance Using Radar Diagrams: Staffing Quartile Performance, with the same staffing indicators arranged around the rings of the diagram. [Figure content not reproduced.]

Key conclusions must be summarized for senior leadership.

Continuing with the CalNOC example, the following interpretation might be drawn by those with operational accountability. (Note that this dashboard was not rescaled for desired performance placement in the upper quartile.) Looking at the structure data, one sees that this hospital has more licensed vocational nurse (LVN) hours than the median and has little LVN turnover of the staff (lower quartile). Unlicensed support staff use is low (lower quartile); although RN hours of care are at the median, the number of patients for each RN is high (upper quartile). The number of patients in a bed (bed turnover) on a given day is high (indicating many admissions, discharges, or transfers), which would require a lot of RN time. RN turnover in the workforce is also high (perhaps the unit is too busy), and staffing is accomplished with contract or registry staff (upper quartile). This unit likely would examine its staffing patterns because the situation appears to be a difficult one for the RN workforce.

Next, looking at the process and outcome data within the context of these structure data, one might make the following interpretation.

Restraint use is high (upper quartile), although use of sitters to prevent restraint or falls is in the lower quartile. Patients at risk for PUs are not getting prevention interventions (lower quartile), and the risk assessments for PU development are only at the median. Risk assessments and determination of appropriate interventions may not be getting accomplished, given the RN patterns just identified. Although the percent of patients at risk for hospital-acquired PUs (HAPUs) is low (lower quartile), this hospital is in the upper quartile for HAPU development. This hospital will want to investigate these outcomes further by drilling down into the data to better understand performance. This hospital may be doing well with fall prevention, however: falls with injury are in the lower quartile. Note that "all falls" are high (upper quartile), which could be interpreted as good reporting or as a high rate that needs further investigation. If this hospital has been working on a culture of safety and responsible reporting, a high fall rate may indicate success in this area (good reporting).

Based on this dashboard, quality professionals at this hospital would likely prioritize performance improvement around PU development and use of restraints.

Figure 10. Bar Graph Dashboard: Staffing Quartile Performance; Nursing Process Quartile Performance Analysis; Falls and Pressure Ulcer Quartile Performance Analysis. [Figure content not reproduced.]

Figure 11. Radar Diagram Dashboard (with desired performance direction indicated): Staffing Quartile Performance; Outcomes Quartile Performance Analysis. [Figure content not reproduced.]

They may wish to set performance targets of being below the 75th percentile as a short-term goal, and below the 50th percentile or median as a long-term goal. Given that they are doing well with injury falls, they may wish to set the median as a threshold for further analyses should the hospital's performance decline to that level. They would also likely investigate further the staffing patterns needed to support the high volume of patients admitted, discharged, or transferred into this unit daily. Given the high RN staff turnover, they may also wish to conduct a survey or focus group to better understand the staff's perspective on the work environment. They may wish to set a performance target to be below the median for total voluntary staff turnover.

Summary

This article provides tools for the quality professional to translate data sets into dashboards and to set performance targets, thresholds, and benchmarks. Armed with a basic understanding of the statistics described, quartiles may provide a more sophisticated methodology for benchmarking. Depending on how data are reported, quartiles or percentiles can be selected as goals for performance, as thresholds for drill-down analyses if performance is already at the desired level, or as the benchmarks for best practices from high performers. Graphs can be used to create powerful visual tools to quickly inform frontline staff, operational leaders, and governing bodies on prioritized metrics.

References

Aydin, C. E., Bolton, L. B., Donaldson, N., Brown, D. S., Buffum, M., & Sandhu, M. (2004). Creating and analyzing a statewide nursing quality measurement database. Journal of Nursing Scholarship, 36(4), 371-378.

Brown, D. S., Donaldson, N., Aydin, C. E., & Carlson, N. (2001). Hospital nursing benchmarks: The California Nursing Outcome Coalition project experience. Journal for Healthcare Quality, 23(4), 22-27.

Dawson, B., & Trapp, R. G. (2004). Basic and clinical biostatistics. Lange Medical Books.

Donaldson, N., Brown, D. S., Aydin, C. E., Bolton, M. L., & Rutledge, D. N. (2005). Leveraging nurse-related dashboard benchmarks to expedite performance improvement and document excellence. Journal of Nursing Administration, 35(4), 163-172.

[Author name illegible], A. C. (2002). Performance management data systems for nursing service organizations. Journal of Nursing Administration, 32(2), 71-78.

Lindenauer, P. K., Remus, D., & Roman, S. (2007). Public reporting and pay for performance in hospital quality improvement. New England Journal of Medicine, 356(5), 486-496.

Merriam-Webster online dictionary. (2007). Retrieved October 15, 2007, from www.merriam-webster.com/dictionary.

Rosow, E., Adam, J., Coulombe, K., Race, K., & Anderson, R. (2003). Virtual instrumentation and real-time executive dashboards: Solutions for health care systems. Nursing Administration Quarterly, 27(1), 58-76.

Authors' Biographies

Diane Storer Brown, PhD, RN, FNAHQ, is the California Nursing Outcomes Coalition (CalNOC) coprincipal investigator and has been part of the CalNOC research team for more than 10 years. She is currently the clinical practice leader for hospital accreditation programs at Kaiser Permanente Northern California Region in Oakland, CA.

Carolyn E. Aydin, PhD, is a California Nursing Outcomes Coalition (CalNOC) coinvestigator and has been the CalNOC data manager for the past 10 years. She is currently a research scientist at Cedars-Sinai Health System, Burns and Allen Research Institute in Los Angeles, CA.

Nancy Donaldson, DNSc, RN, FAAN, is the California Nursing Outcomes Coalition (CalNOC) coprincipal investigator and has also been part of the CalNOC research team for more than 10 years. She is the American Nurses Association-California (ANA/C) CalNOC project director and coprincipal investigator, as well as the director of the Center for Research and Innovation in Patient Care through the University of California-San Francisco School of Nursing.

For more information on this article, contact Diane Storer Brown at Diane.Brown@kp.org.

Journal for Healthcare Quality is pleased to offer the opportunity to earn continuing education (CE) credit to those who read this article and take the online posttest at www.nahq.org/Journal/ce. This continuing education offering, JHQ 209, will provide 1 contact hour to those who complete it appropriately.

Core CPHQ Examination Content Area III. Performance Improvement

 