Addendum

A database is used to help calculate maintenance requirements for all medical equipment in hospitals that are now part of the Mt. Sinai Hospital system in NYC.

This method includes several variables unique to equipment maintenance, but the central idea, relating the consequence or degree of harm (or loss) to the likelihood of occurrence and other criteria, is the same as in community risk management.

The following is a journal article I wrote, published in 2004 by AAMI, the Association for the Advancement of Medical Instrumentation. Among other things, it briefly describes the associated method of risk assessment.

The Three Critical Issues I’ve Learned in 23 Years in Clinical Engineering

My career in Clinical Engineering began in 1979, at Bellevue Hospital, a municipal facility of New York City. I chose Bellevue for the broad experience I was sure I would receive, the most impressive part of which was witnessing technology's march into medicine. I went on to other institutions and, over time, concluded that equipment maintenance was the most problematic area of responsibility I faced. And of maintenance, my least favorite tasks were risk assessment and compliance calculation.

There were good reasons for this unpopularity: I found risk assessment to be subjective, even arbitrary, and achieving compliance was at best difficult and at worst a game of hide and seek. Added to these problems, I often found little administrative support or appreciation for issues that were growing more complex every year. Today's medical technology oversight includes standards, but few clear paths to follow when it comes to implementing those standards. The following are the three issues I have found to be most critical, and my suggestions for addressing them.

I. Risk Assessment

Ignorance may be bliss, but effective assessments cannot be made if the assessment methodology is a contrivance designed for ease of use. Methodologies must be standardized, factoring in the various subjective decisions that must be made, so we can clearly see the effect those decisions have on the conclusions we draw.

One formula I have found in use, E + C + (M + F + U)/3, relies on the following weighted equipment variables: function (E), application (C), maintenance requirement (M), likelihood of failure (F), and environment (U). Plug in the numbers, and out comes the conclusion. But each of these variables calls for an educated guess, and three of them (M, F, and U) are weighted equally in the formula. Formula-based methodologies, and applications relying on pattern recognition or forms of artificial intelligence (neural nets, fuzzy logic, etc.), have been used to address the variables inherent in the maintenance of medical equipment. While a formula may be intended only as a guide, it may not be used that way. And though it may take significant issues into account, it represents an opinion that you may not feel free to alter.
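To make the objection concrete, here is a minimal sketch of that formula (the 1-to-5 scale and the sample values are my assumptions, not part of any published method). Because M, F, and U share one averaged weight, devices with very different failure profiles can land on the same score:

```python
# Minimal sketch of the formula E + C + (M + F + U)/3.
# The 1-5 scale and the sample values are assumptions for illustration;
# each input is still an educated guess.

def risk_score(E, C, M, F, U):
    """E: function, C: application, M: maintenance requirement,
    F: likelihood of failure, U: environment.
    E and C count fully; M, F, and U share a single averaged weight."""
    return E + C + (M + F + U) / 3

# Two devices with very different failure likelihoods get the same score:
print(risk_score(E=5, C=4, M=1, F=5, U=3))  # 12.0
print(risk_score(E=5, C=4, M=3, F=3, U=3))  # 12.0
```

The identical results illustrate the point: the averaging hides which of M, F, or U drove the number, and the reviewer never sees the trade-off.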

You cannot, with absolute certainty, determine a device’s “critical nature,” what the “consequence of failure” will be, or what effect location will have on a patient’s treatment, without including subjective analysis. Analysis that includes subjective observation is often necessary, but offering that analysis as absolute is misleading, and can distract the reviewer from thinking about the significant factors comprising the assessment. Subjective assessments demand a visible process.

Today, we may remove certain devices from a maintenance program based not on risk assessment per se, but on other rationales. Advanced technologies and manufacturing capabilities have improved to the extent that we may not regularly inspect every pressure transducer, catheter, temperature probe, or other disposable, attachment, and ancillary device. These pieces are often essential to the clinical integrity of a procedure, yet in some cases we decide not to manage them the way we do other pieces of equipment. Risk assessment can take into account many variables, including reliability, production-run problems, past history, manufacturer's recommendations, calibration, lubrication, part replacement, consequence of failure, how apparent a failure is, built-in self-test, equipment age, physical environment, and user technique and experience. We can establish the criteria. The issue is how to use them without overwhelming the user, and without hiding the facts.

The following four variable groups are presented here as critical in directing risk assessment.

  1. 100% self-testing. Check with manufacturers, but for a fully electronic device, a self-test performed on startup is often sufficient to assure operational integrity.
  2. Failure apparent to the user. A problem that is not apparent to the user can stay hidden until a maintenance inspection is performed. Sometimes this characteristic is obvious (compare the verification procedure for insufflator inflation pressure to that for oto/ophthalmoscope quality of light), and sometimes it is not, but this critical issue demands resolution.
  3. Calibration, lubrication, and parts replacement. If it’s required, it’s scheduled.
  4. Consequence of failure to the patient. In some cases this is clear, as with an insufflator compared with an oto/ophthalmoscope (great and slight, respectively); in others it is less clear, as with a transport cardiac monitor or an EKG machine.

Table 1 demonstrates maintenance requirements ascertained for four sample devices. Working out the various permutations will result in a finite number of outcomes, which can be used as a reference either on paper or in a database expression. Remember, this format is not a formula. It is, in a sense, a triage guide, weeding out those devices whose maintenance characteristics are most readily defined. Its structure provides definition and flexibility, and it forces the user to become involved.
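As one hypothetical way to express such a triage guide in a database or script (the decision values and the fall-through to review are my assumptions, not the published Table 1), consider:

```python
# Hypothetical triage guide over the four variable groups. Each permutation
# maps to an explicit decision, and anything not readily defined falls
# through to manual review, keeping the subjective step visible.

def triage(self_test_100pct, failure_apparent, scheduled_parts, consequence):
    """consequence of failure to the patient: 'SLIGHT', 'MODERATE', or 'GREAT'."""
    if scheduled_parts:
        # Calibration, lubrication, or parts replacement is required.
        return "scheduled maintenance (interval per manufacturer)"
    if self_test_100pct and failure_apparent and consequence == "SLIGHT":
        return "no periodic inspection required"
    if failure_apparent and consequence == "MODERATE":
        # Per Table 1, footnote ***: miscellaneous variables must be weighed.
        return "review required (repair history, age, environment, use)"
    return "periodic inspection required"

# Oto/ophthalmoscope-like device versus insufflator-like device:
print(triage(True, True, False, "SLIGHT"))   # no periodic inspection required
print(triage(False, False, False, "GREAT"))  # periodic inspection required
```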

II. Maintenance Compliance Calculation

Utilizing a formula methodology for risk assessment, but not for inventory management, is like locking the door after the horse has left the barn. The term “inventory” must be defined in absolute terms to effectively track compliance, or to make quality comparisons between institutions.

Theoretically, “compliance” is a ratio: the number of devices inspected on time, divided by the total number of devices in the maintenance program. This is easy math. I say theoretically, though, because we have not defined what “total number” stands for; this is the tricky part. In the real world, the denominator in compliance calculations is defined by each user. The “total number” could stand for all devices in the maintenance program, or only for devices that have been located within a certain time period. In other words, nothing prevents a department from discounting all devices not located during a one-month inspection sweep. Here is the issue: percent compliance can be increased without increasing the number of maintenance inspections, simply by reporting “compliance” as a percent of devices in your maintenance program with inspections completed “as scheduled.”
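A minimal sketch, with invented inventory numbers, shows how the same month's work can be reported very differently depending on the denominator:

```python
# The numerator is fixed: inspections completed on time this month.
# The reported percentage depends entirely on what the denominator counts.
# All numbers are invented for illustration.

inspected_on_time = 95
all_devices_in_program = 120      # includes devices never located
devices_located_this_sweep = 100  # devices found during the monthly sweep

print(f"against full program:  {inspected_on_time / all_devices_in_program:.1%}")
print(f"against located only:  {inspected_on_time / devices_located_this_sweep:.1%}")
# against full program:  79.2%
# against located only:  95.0%
```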

Table 1. Maintenance requirements ascertained for four sample devices.

* Interval depends on maintenance required by the manufacturer.
** Establishing exact intervals is not the issue here.
*** The fact that a failure would be apparent to the user, and that regular maintenance is not required, does not permit an accurate determination, because Consequence of Failure is MODERATE. Miscellaneous variables such as repair history, equipment age, physical environment, and diagnostic use must be considered.

Let's say it's November 2003, and you print out a list of 100 devices with inspections due that month. Like a good manager, you assign these to your technicians, and like good technicians, they perform their work and enter their work orders. Now it is December 1; you run your compliance report and find that, of the 100 devices handed out, 95 were inspected. You report 95% maintenance compliance for that month and go on to the next month. So far, so good. For this example, let us assume that all of the devices in your program have a one-year inspection interval. One year later, in November 2004, you ask for everything due in the current month. This time only 95 of the original 100 devices are handed out to the technicians.

The five not completed from a year ago (lost, stolen, returned to a vendor, hidden, or simply never located for lack of staff) do not come up, this month or in any other month, because you only ask for devices with due dates in the current month and year. In other words, every month, devices “not located” are effectively removed from the maintenance program and from compliance calculations. Relying on this method continually adjusts the work you have to do to the work you are able to do. Performing calculations in this manner will increase compliance, not by virtue of having inspected more devices, but by reducing, on paper, the devices in use.
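A short simulation (the loss rate is assumed, consistent with the example above) shows the effect over several years: reported compliance holds near 95% while the tracked fleet quietly shrinks:

```python
# Assumed loss rate: each cycle, about 5% of the devices handed out are
# never located. Querying only "due this month, this year" drops them
# from every future cycle, so the denominator shrinks with the work.

devices = 100
for year in range(2003, 2008):
    not_located = devices // 20           # ~5% never found this cycle
    inspected = devices - not_located
    print(f"{year}: {inspected}/{devices} = {inspected / devices:.1%} compliant")
    devices = inspected                   # next year's query excludes the rest

# Reported compliance stays near 95% every year, while the fleet tracked
# on paper shrinks from 100 devices to 79.
```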

There are no established standards for how you must treat the denominator in compliance calculations. You could include all devices (generating an unrealistically low number), or exclude all those not located each month, as in the example above (not a true reflection of maintenance in your facility). These are the extremes. There is a middle ground.

Draw a line representing due dates from the oldest to the most future, and designate a period (18 months, for example), removing from the calculation (not from the inventory) devices that have not been located within that period (Table 2). The time period should be long enough to reasonably allow equipment to be found. Note: “# of Devices,” the area below the curve, represents devices included in the periodic maintenance program.

The critical issues are: define the cut-off point, report devices removed from the compliance calculation (as a percent of inventory), and report that number to the Environment of Care Committee as you would compliance. The report would look something like: 95% compliant, 4.5% of devices not located. The point is, if the numbers reported were 95% compliant and 35% of inventory not located, a red flag should go up somewhere.
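The middle-ground calculation might look something like the following sketch; the 18-month window, field layout, and sample counts are assumptions for illustration:

```python
from datetime import date, timedelta

# Middle ground: devices unseen for longer than the window leave the
# compliance denominator (not the inventory) and are reported separately.
WINDOW = timedelta(days=548)  # ~18 months; the cut-off is a policy choice

def compliance_report(devices, today):
    """devices: list of (last_located_date, inspected_on_time) tuples."""
    in_window = [d for d in devices if today - d[0] <= WINDOW]
    not_located = len(devices) - len(in_window)
    on_time = sum(1 for d in in_window if d[1])
    return on_time / len(in_window), not_located / len(devices)

# Invented fleet: 95 on time, 5 late, 5 unseen for more than 18 months.
fleet = ([(date(2004, 9, 1), True)] * 95
         + [(date(2004, 9, 1), False)] * 5
         + [(date(2002, 1, 1), None)] * 5)
compliant, missing = compliance_report(fleet, date(2004, 11, 30))
print(f"{compliant:.1%} compliant, {missing:.1%} of inventory not located")
# -> 95.0% compliant, 4.8% of inventory not located
```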

Equipment not accounted for may be lost, stolen, returned to a vendor, hidden, recorded in error in the inventory, or simply unfound for lack of resources. Some of this is “forgivable,” and some of it is not. Failing to standardize this procedure skews data: there is no distinction between devices that truly do not exist and those simply not found. Without making this distinction, you cannot effectively track performance, the Joint Commission (JCAHO) cannot accurately compare one institution to another, and patients may be placed at increased risk.

III. Administrative Oversight

“Good people are where you find them,” but don't leave the selection of technology oversight to good looks and politics. If your key technology person reports to the same administrator who oversees laundry, security, and housekeeping, then, with all due respect, that administrator may be the wrong person. Thirty years ago, medical technology was less prevalent and less complex. If a power plug failed, the maintenance person who fixed the lights and kept the boiler going repaired it. Other failures were referred to the manufacturer. A remnant of this starting point carries forward into today's high-tech environment. In some institutions, administrators overseeing food service, security, laundry, patient transport, housekeeping, and the maintenance department also oversee the environment of medical technology. These administrators may have neither the expertise nor the interest to look beyond the quarterly summary reports they receive. The result of this inattention to detail can be increased risk to patients and increased costs to hospitals.

The excuse often used to explain this kind of failure within a bureaucratic system is: “It's not the individual's fault; it's the system.” The failure of an institution may be not so much incompetence as a flawed table of organization. The Clinical Engineering department head (or vendor) should report to someone who understands patient care and the clinical environment, whether a physician, a PhD, or a nursing director.

Summary

Department directors are often caught between cost and service. Help them, not by increasing expenditures, but by building into the institution's fabric definitions they can use. Make risk assessment something everyone can see and understand. Make compliance calculation show the whole picture, including the integrity of your inventory. And place administrative oversight in the hands of those best able to relate to it.

Table 2. Due dates plotted from the oldest to the most future, with a designated cut-off period.