
MED-AUDIT Overview

Conceptual Overview

The MED-AUDIT was designed to help improve accessibility and reduce healthcare disparities for populations of medical device users with disabilities. Five specific design objectives were put forward by the RERC-AMI R3 Project (2003-2008):

  1. The MED-AUDIT needed to assess medical devices and provide informative reports for designers and members of the public who might not know about the special needs of people with disabilities.
  2. The MED-AUDIT needed to be efficient: while many hundreds of questions could be asked about the accessibility of a product, the questioning process needed to avoid burdening raters with irrelevant items.
  3. MED-AUDIT scores needed to inform product designers, who needed to see how universally designed a product was for all potential users, yet be specific enough for an individual with a disability to see how accessible a product would be for their unique needs.
  4. The assessment output needed to be quantitative so that device designs could be compared.
  5. MED-AUDIT scores needed to be reliable and valid from a psychometric standpoint.

Overall, the integrated MED-AUDIT scores draw on two major data sources. The first is elicited from designers or other product assessors as they tally which tasks a device requires and which features it includes. The second is imported from a knowledge base of two matrices previously populated by experts; these matrices predict relationships between (a) product features and user impairments and (b) product features and tasks. An algorithm integrates these data, effectively weighting the assessor's responses to produce MED-AUDIT scores that indicate the accessibility of the evaluated medical device on a scale of 0-100%.

See also: Measuring Accessible Medical Instrumentation: Annotated Bibliography (resource document).

Question Domains

The MED-AUDIT was conceptualized with two major question sections: (I) Procedures-Task Analysis and (II) Device Features. The MED-AUDIT team postulated that, in order to determine the accessibility of medical devices, it was necessary to know both the tasks a device user must perform to use the device and the accessibility features present in the device design. Relevant tasks matter for measuring the accessibility of a device because, for example, if a device does not require users to position themselves on it (as with an auditory alarm), there is no concern about users transferring onto the device. Thus, certain tasks become irrelevant for scoring accessibility while other tasks are critical. The second domain of questions focuses on the accessibility features of the device being rated; which accessible features a product design integrates clearly affects the accessibility scores generated. The two domains are related, as some tasks a device requires for use will in turn require accessible features. In the example of an auditory alarm, an essential task would be to recognize the sound; for this device to rate highest on accessibility, it would also need to include visual and tactile alarm outputs. Table 1 shows excerpts from the two core MED-AUDIT scoring domains (see also "Black Box System (BBS) MED-AUDIT Taxonomy"). The current question taxonomy draft includes 1,158 distinct questions: 177 task requirements and 981 device features. The questions are arranged in a hierarchical outline with 33 major categories: 10 for task requirements and 23 for device features. A minimal sketch of this hierarchy as nested question nodes follows Table 1.

Table 1. Procedures-Task Analysis (I) and Device Features (II) Questions for the MED-AUDIT
  1. PROCEDURES-TASK ANALYSIS
    1. Prepare for device use
      1. Select appropriate device
        1. Familiarize self with device
        2. Familiarize self with person
        3. Match device to situation
      2. Understand device use
        1. Understand general procedure
        2. Understand component procedure
        3. Understand controls
        4. Understand display info
        5. Receive training to use the device
    2. Position device-prep for use
      1. Locate device
      2. Detect orientation of the device
      3. Approach- move to device
  2. DEVICE FEATURES
    1. Overall Device Features
      1. Parts that req. assembly & disassembly
        1. Easy assembly
        2. Infrequent assembly
        3. Few steps required
        4. Easy disassembly
        5. Infrequent disassembly
      2. Displays
        1. Monitors/screen displays
          1. Enhanced contrast
          2. Screen Size
          3. Brightness contrast
          4. Contrast adjustment
          5. Brightness adjustment
          6. Brightness coding
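
For illustration, the hierarchical outline can be represented as nested question nodes. Here is a minimal sketch in Python, assuming a plain nested-dict node shape (this is not the MED-AUDIT software's internal format):

```python
# Excerpt of domain I (Procedures-Task Analysis) from Table 1 as nested
# question nodes; the full draft holds 1,158 questions in 33 major categories.
taxonomy = {
    "text": "Prepare for device use",
    "children": [
        {"text": "Select appropriate device", "children": [
            {"text": "Familiarize self with device", "children": []},
            {"text": "Familiarize self with person", "children": []},
            {"text": "Match device to situation", "children": []},
        ]},
        {"text": "Understand device use", "children": [
            {"text": "Understand general procedure", "children": []},
            {"text": "Understand component procedure", "children": []},
            {"text": "Understand controls", "children": []},
            {"text": "Understand display info", "children": []},
            {"text": "Receive training to use the device", "children": []},
        ]},
    ],
}

def count_questions(node):
    """Count every question at or below `node` in the outline."""
    return 1 + sum(count_questions(c) for c in node["children"])
```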


Software and Question Branching

The initial MED-AUDIT software was built on OT FACT, a previously developed application running on the "Spinnaker Plus" platform. It has since been converted to a newer platform, "Runtime Revolution". Figure 1 below shows a representative screen shot of the prototype software.

Figure 1. Screen shot of the MED-AUDIT software running in the OT FACT 2.0 interface

The question domains (procedures-task analysis and device features) are imported into the software, and the MED-AUDIT question taxonomy uses the trichotomous tailored sub-branching scoring (TTSS) structure as an efficient question-branching method in which irrelevant questions are eliminated. Fundamentally, TTSS uses a trichotomous response for each question, where 2 corresponds to no problem, 1 to partial problem, and 0 to total problem. When a rater responds to a MED-AUDIT question with a 2 or a 0, the TTSS software moves to the next major category of questions, skipping the detailed sub-level questions in between. When a rater responds with a 1, the TTSS breaks the category down into more detailed subcategories to request more information from the rater. Thus, the trichotomous scoring is (1) cognitively simpler, which increases reliability and response scoring speed; (2) more efficient, because detailed questions are asked only when needed, so more detailed questions can be included while irrelevant ones are omitted; and (3) more flexible, because the verbal anchors that accompany the response sets can be adjusted as necessary and can intentionally vary by construct (e.g., requires task, somewhat requires task, does not require task; includes feature, somewhat includes feature, does not include feature) (Smith 1993, 1994, 1995, 2002). A minimal sketch of this branching logic appears below.
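
The following sketch illustrates the TTSS branching rule, assuming the nested-dict question nodes sketched after Table 1; the `administer` and `ask` names are ours, not the MED-AUDIT software's:

```python
def administer(question, ask):
    """Walk one taxonomy node under the TTSS branching rule.

    `ask` returns the rater's trichotomous response for a question:
    2 = no problem, 1 = partial problem, 0 = total problem.
    A decisive 2 or 0 skips the detailed sub-questions entirely;
    only a partial 1 opens the subcategories for more information.
    """
    responses = {question["text"]: ask(question["text"])}
    if responses[question["text"]] == 1:
        for child in question["children"]:
            responses.update(administer(child, ask))
    return responses

# Example: a partial answer at the category level opens both sub-questions,
# whereas answering 2 or 0 would ask only the one category-level question.
node = {"text": "Understand device use", "children": [
    {"text": "Understand general procedure", "children": []},
    {"text": "Understand controls", "children": []},
]}
print(administer(node, ask=lambda text: 1))
```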

Impairment Categories

A comprehensive survey of the literature (see: MED-AUDIT Impairment Categories: Working Towards Mapping AMI Usability) was conducted to identify optimal impairment-related categorization schemes for consideration as the basis for generating MED-AUDIT scores (Barbotte, Guillemin, Chau, & the Lorhandicap Group, 2001; Center for Rehabilitation Technology, 2001; Pizur-Barnekow, Lemke, Smith, Winter, & Mendonca, 2005; United States Census Bureau, 2004; United States Department of Health and Human Services, 2004; Vanderheiden & Vanderheiden, 1991; World Health Organization, 2002). From this comprehensive review, a set of thirteen mutually exclusive and exhaustive impairment domains and definitions was developed for the MED-AUDIT. The impairment categories are used to generate scores for device accessibility. The 13 impairments are: (1) hard of hearing, (2) deaf, (3) vision limitation, (4) blind, (5) expressive communication, (6) comprehension disorders, (7) other cognitive disorders, (8) mental and behavioral impairment, (9) sensitivity impairment, (10) lower limb impairment, (11) upper limb impairment, (12) head, neck, and trunk impairment, and (13) systemic body impairment.
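
For illustration only, the thirteen categories can be carried as a simple enumeration so that one accessibility score can be produced per impairment; the identifier names below are shortened forms of the labels above, not identifiers from the MED-AUDIT software:

```python
from enum import Enum, auto

class Impairment(Enum):
    """The 13 mutually exclusive, exhaustive MED-AUDIT impairment domains."""
    HARD_OF_HEARING = auto()
    DEAF = auto()
    VISION_LIMITATION = auto()
    BLIND = auto()
    EXPRESSIVE_COMMUNICATION = auto()
    COMPREHENSION_DISORDERS = auto()
    OTHER_COGNITIVE_DISORDERS = auto()
    MENTAL_AND_BEHAVIORAL_IMPAIRMENT = auto()
    SENSITIVITY_IMPAIRMENT = auto()
    LOWER_LIMB_IMPAIRMENT = auto()
    UPPER_LIMB_IMPAIRMENT = auto()
    HEAD_NECK_AND_TRUNK_IMPAIRMENT = auto()
    SYSTEMIC_BODY_IMPAIRMENT = auto()
```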

Accessibility Expert Knowledge Matrices

The expert-mapped matrices for the MED-AUDIT work in the background of the software algorithm to provide prior likelihoods for the simple Bayes model (Birnbaum, 1999; Gustafson, Cats-Baril, & Alemi, 1992; Malakoff, 1999). These provide relative weightings for the question categories in order to generate overall accessibility scores for medical devices. Two distinct matrices relate (1) the tasks involved in using a medical device to the accessibility features needed to complete those tasks, and (2) medical device features to the specific user impairment groups for whom they make devices more accessible. The correlation between device features and user impairments provides the critical connection for generating overall device accessibility scores. The data contained in the expert knowledge matrices, combined with the data entered by the rater for a specific device, enable the MED-AUDIT to generate accessibility scores for different user impairment types using the algorithm described in the next section. Tables 2 and 3 below show excerpts of the two matrices; a sketch of how these matrices can be carried in code follows Table 3.

Table 2. Excerpt of the Expert Knowledge Impairment-Feature Matrix of MED-AUDIT

| DEVICE FEATURE | Deaf | Low Vision | Blind | Comprehension Disorders | Behavioral Impairment | Sensitivity Impairment | Lower Limb Impairment | Upper Limb Impairment |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Adequate approach/turning space | 1 | 2 | 2 | 1 | 1 | 2 | 2 | 1 |
| Adequate table/counter height | 1 | 1 | 2 | 1 | 1 | 2 | 2 | 1 |
| Adequate lower limb clearance | 1 | 1 | 2 | 1 | 1 | 2 | 2 | 1 |
| Adequate privacy | 2 | 1 | 2 | 1 | 2 | 1 | 1 | 1 |
| Clear approach/transfer path | 1 | 2 | 2 | 1 | 1 | 2 | 2 | 1 |
| Adequate overhead clearance | 1 | 2 | 2 | 1 | 1 | 2 | 1 | 1 |

Table 3. Excerpt of the Expert Knowledge Task-Feature Matrix of MED-AUDIT

| DEVICE FEATURE | Access in-program help | Use printed manual | Use tutorial | Use personal help | Approach the device | Move away from the device | Transfer on to the device | Transfer off the device |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Adequate approach/turning space | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
| Adequate table/counter height | 2 | 2 | 2 | 2 | 1 | 1 | 1 | 1 |
| Adequate lower limb clearance | 2 | 2 | 2 | 2 | 1 | 1 | 1 | 1 |
| Adequate privacy | 0 | 0 | 0 | 2 | 0 | 0 | 0 | 0 |
| Clear approach/transfer path | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
| Adequate overhead clearance | 2 | 2 | 2 | 2 | 1 | 1 | 1 | 1 |
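
One way to carry the expert knowledge matrices in code is as nested mappings keyed by device feature. The entries below are copied from the excerpts in Tables 2 and 3; the variable names and mapping layout are ours, not the MED-AUDIT software's:

```python
# Impairment-feature matrix: how much a feature matters to users in a
# given impairment group (0 = not needed ... 2 = required).
impairment_feature = {
    "Adequate approach/turning space": {
        "Deaf": 1, "Low Vision": 2, "Blind": 2, "Lower Limb Impairment": 2,
    },
    "Adequate privacy": {
        "Deaf": 2, "Low Vision": 1, "Blind": 2, "Lower Limb Impairment": 1,
    },
}

# Task-feature matrix: how much a feature is required to complete a
# given task (same 0-2 scale).
task_feature = {
    "Adequate approach/turning space": {
        "Approach the device": 2, "Use printed manual": 2,
    },
    "Adequate privacy": {
        "Approach the device": 0, "Use personal help": 2,
    },
}
```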

MED-AUDIT Accessibility Scoring Algorithm

Initial development of the MED-AUDIT scoring was completed through conceptualization of the logic and scoring requirements for the algorithm, as shown in Figure 2 below.

Figure 2. Examples of MED-AUDIT Logic and Summative Requirements to Generate Overall Accessibility Scores for Different User Impairments

The scoring algorithm uses a sum of product relationships with four product terms: (1) the expert-scored device feature requirement for a task, (2) the expert-scored device feature requirement for a user impairment, (3) the rater-scored device feature presence on the device, and (4) the rater-scored task requirement for device use. The rater scores device feature presence and task requirements within the TTSS scoring taxonomy and software, and these rater scores are multiplied with the expert-mapped matrices that contain the accessibility and universal design knowledge related to (1) device feature requirements for tasks and (2) device feature requirements for user impairments.

The magnitudes of the scoring elements are determined by the scores of the expert matrices (1 and 2), the rater-scored feature presence (3), and the rater-scored task requirements (4). Rater-scored device feature presence determines whether a given scoring element is positive, negative, or neutral (zero): when a rater scores feature presence as 2 the scoring element is positive, when a rater scores it as 1 the element is ignored as a zero score, and when a rater scores it as 0 the element is negative. A rater score of 1 (may be present) maps to zero so that a feature whose presence is uncertain does not affect the overall score of the device. A compact statement of this sign rule appears below.
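
This tiny helper states the sign rule as we read it from the description above; the function name is illustrative, not taken from the MED-AUDIT code:

```python
def presence_sign(rater_presence: int) -> int:
    """Map the rater's feature-presence score to the sign of a scoring element.

    2 (feature present)     -> +1 : element counts positively
    1 (may be present)      ->  0 : element is ignored
    0 (feature not present) -> -1 : element counts negatively
    """
    return {2: +1, 1: 0, 0: -1}[rater_presence]
```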

The domains of the tool were established (including the need for device feature and task requirement sections), the expert matrices were conceptualized (including features required for tasks and features required for different impairments), and the basic operations of the scoring modality were established (including the need to increase or decrease the overall score depending on the type of information provided for each particular device). Figure 2 shows the different scoring cases, including a maximum mathematical relationship of +8 and a minimum of -8, middle cases of +4, +2, -2, and -4, as well as a case of 0. Equation 1 below was used for the pilot scoring algorithm during early implementation of the MED-AUDIT (initially developed in Fortran). To generate overall accessibility and usability scores, sums of products with four product terms are used: (1) expert-scored device feature requirement for a task [x_e-dT], (2) expert-scored device feature requirement for user impairment [x_e-iD], (3) rater-scored device feature presence on the device [x_r-d], and (4) rater-scored task requirement for device use [x_r-T].

Equation 1.

\[
A_i \;=\; 0.50 \;+\; c \sum_{j} \sum_{k} w_{jk}\, x_{e\text{-}dT_{jk}}\, x_{e\text{-}iD_{ij}}\, x_{r\text{-}T_{k}}
\]

where

\[
w_{jk} \;=\;
\begin{cases}
+1 & \text{if } x_{r\text{-}d_{j}} = 2 \text{ (feature present)} \\
0 & \text{if } x_{r\text{-}d_{j}} = 1 \text{ (feature may be present)} \\
-1 & \text{if } x_{r\text{-}d_{j}} = 0 \text{ (feature not present)}
\end{cases}
\]

with \(A_i\) the accessibility score for impairment \(i\), feature index \(j\), task index \(k\), and \(c\) a normalizing constant that keeps scores on the 0-100% scale.

To generate MED-AUDIT accessibility and usability scores for medical technologies, raters score a device for elements of (a) device feature presence [x_r-d] and (b) task requirements related to using a particular device [x_r-T]. All rater scores (0, 1, or 2) are generated within the TTSS scoring taxonomy using the MED-AUDIT software. Rater scores are then multiplied with the expert-mapped matrices [x_e-dT and x_e-iD] that contain the accessibility and universal design knowledge related to (1) device feature requirements for tasks and (2) device feature requirements for user impairments. Any score generated with Equation 1 begins at a normalized 0.50, or 50% accessibility, and positive and negative aspects of the device design then adjust the overall score upward or downward. The magnitudes of the scoring elements are determined by the scores of the expert matrices (1 and 2), the rater-scored feature presence (3), and the rater-scored task requirements (4), similar to the scoring cases shown in Figure 2 above.

When raters score device feature presence as 2, the scoring element is positive (if the feature is needed, the score is affected positively because the needed feature is present); when raters score device feature presence as 1, the scoring element is ignored as a zero score (the score is not affected positively or negatively because the needed feature may or may not be present); and when raters score device feature presence as 0, the scoring element is negative (if the feature is needed, the score is affected negatively because the needed feature is not present). Pilot testing of this approach was conducted with an improved MED-AUDIT interface that used case-specific logic to generate accessibility and usability scores for different medical technologies (subsequently developed in the C++ coding language).
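
Below is a minimal end-to-end sketch of Equation 1 under the reading reconstructed above. The function name, argument layout, and the simple rescaling into [0, 1] are our assumptions, not the Fortran or C++ implementations:

```python
def accessibility_score(impairment, features, tasks,
                        impairment_feature, task_feature,
                        rater_presence, rater_task):
    """Sum-of-products scoring per Equation 1 (illustrative reading).

    Each scoring element multiplies three 0-2 magnitudes (expert
    task-feature, expert impairment-feature, rater task requirement)
    by a sign w_jk derived from the rater's feature-presence score,
    so each element lies in [-8, +8] as in Figure 2.
    """
    sign = {2: +1, 1: 0, 0: -1}  # rater feature presence -> element sign
    total = 0
    for j in features:
        for k in tasks:
            total += (sign[rater_presence[j]]
                      * task_feature[j][k]
                      * impairment_feature[j][impairment]
                      * rater_task[k])
    # Rescale so a device with no information scores 0.50 (50% accessible).
    max_total = 8 * len(features) * len(tasks)
    return 0.50 + 0.50 * total / max_total

# One fully positive element: 0.50 + 0.50 * 8/8 = 1.0 (100% accessible).
score = accessibility_score(
    "Lower Limb Impairment",
    features=["Adequate approach/turning space"],
    tasks=["Approach the device"],
    impairment_feature={"Adequate approach/turning space":
                        {"Lower Limb Impairment": 2}},
    task_feature={"Adequate approach/turning space":
                  {"Approach the device": 2}},
    rater_presence={"Adequate approach/turning space": 2},  # feature present
    rater_task={"Approach the device": 2},                  # task required
)
print(score)
```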

References

Barbotte, E., Guillemin, F., Chau, N., & the Lorhandicap Group (2001). Prevalence of impairments, disabilities, handicaps and quality of life in the general population: A review of recent literature. Bulletin of the World Health Organization, 79, 1047-1055. Retrieved January 1, 2005 from http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=11428067&dopt=Abstract

Birnbaum, M. (1999). Bayesian Calculator. Retrieved September 5, 2009 from http://psych.fullerton.edu/mbirnbaum/bayes/BayesCalc.htm

Center for Rehabilitation Technology (2001). Barrier Free Education Concepts – Disability Definitions. Retrieved November 4, 2004 from https://web.archive.org/web/20041028161154/http://barrier-free.arch.gatech.edu/Research/concepts.html (archived version)

Gustafson, D. H., Cats-Baril, W., & Alemi, F. (1992). Forecasting without real data: Bayesian probability models. In Systems to Support Health Policy Analysis (pp. 176-201). Ann Arbor, MI: Health Administration Press.

Malakoff, D. (1999). Bayes offers a new way to make sense of numbers. Science, 286, 1460-1464.

Pizur-Barnekow, K., Lemke, M., Smith, R. O., Winter, M., & Mendonca, R. (2005). MED-AUDIT impairment categories: Working towards mapping AMI usability. Retrieved January 5, 2009 from http://www.r2d2.uwm.edu/rerc-ami/archive/impairments.pdf

Smith, R. O. (1993). Sensitivity analysis of traditional and trichotomous tailored sub-branching scoring (TTSS) scales. Madison, WI: University of Wisconsin-Madison.

Smith, R. O. (1994, 1995). OT FACT, Version 2.0 [Computer software]. Rockville, MD: American Occupational Therapy Association.

Smith, R. O. (2002). OTFACT: Multi-level performance-oriented software with an assistive technology outcomes assessment protocol. Technology and Disability, 14, 133-139.

United States Census Bureau (2004). Disability status: 2000 – Census 2000 Brief. Retrieved November 6, 2004 from https://web.archive.org/web/20041106151812/http://www.census.gov/hhes/www/disable/disabstat2k/table1.html (archived version)

United States Department of Health and Human Services (2004, July). Vital and Health Statistics, Summary Health Statistics for U.S. Adults: National Health Interview Survey, 2002, Series 10, Number 222, Table 18. Retrieved October 6, 2004 from http://www.cdc.gov/nchs/data/series/sr_10/sr10_222.pdf

Vanderheiden, G., & Vanderheiden, K. (1991). A brief introduction to disabilities. Trace Center. Retrieved December 20, 2004 from http://trace.wisc.edu/docs/population/populat.htm

World Health Organization (2002). Body function – ICF categories. Retrieved November 4, 2004 from http://www3.who.int/icf/beginners/bg.pdf