
ATOMS Project Technical Report - Multiattribute Utility Theory
Summarizing a Methodology and an Evolving Instrument for AT Outcomes

Bobbi Blaser Johnson, Eli Gratz, Kathy Longenecker Rust & Roger O. Smith

Updated: March 28, 2007

Summary

Many measurement and research methodologies that are not typically used for assistive technology (AT) outcomes need to be understood for their potential contribution to an outcomes system.  Examples include goal-attainment scaling (GAS), dynamic norming, subjective elicitation of data, multiattribute utility techniques (MAUT or MAU), and Bayesian methods.

Many of these are based on data elicitation principles from the decision sciences.  Decision analysis data collection methods and models such as MAU and Bayesian models are heavily used in engineering, economics, mathematics, military strategy, and medical practice.  These may provide new strategies for measuring key components of AT outcomes.  This report summarizes an investigation and review of the use of MAU models in decision-making and discusses their relevance to AT outcomes and instrumentation.

Background

Frequently, decisions regarding rehabilitation services, including AT, must be made with atypical and complex sets of variables.  Individual circumstances of people who have disabilities can require unique decision processes.  Diagnostic populations may be small and function can be very idiosyncratic.   Thus, data collection and questioning strategies may need to be flexible, customized to the individual, and created on the spot with the client.  This is especially true for people who use AT.  In addition to their diverse circumstances, these individuals may use more than one device for more than one task in more than one environment.

Researchers and AT practitioners lack reliable and valid tools for outcomes data collection and decision-making, and consequently they must adapt existing tools to fit their measurement needs.  Earlier ATOMS Project work identified that, while dozens of AT measurement instruments exist, few have been devised with outcomes in mind.  Most have been created as part of the process of identifying and selecting devices to match a need to an individual AT consumer (Rust & Smith, 2004).

Additionally, health and rehabilitation functional performance and related outcomes measures rarely include AT as a covariate.  Many treat AT as an impairment that lowers performance scores, and even fewer instruments isolate the impact of AT in the outcomes score (Rust & Smith, 2005).  Rust and Smith argue that the failure to understand the role of technology in the outcomes of people who have a spectrum of types and intensities of disabilities neglects a significant opportunity to better understand, scientifically, the interaction between technology and human disablement.  They present a need for a next-generation outcomes measurement system that utilizes measurement theory not typical of the field of AT outcomes research to measure the impact of assistive technology devices (ATDs).

Description of scope

This effort reviewed the literature on MAU theory to identify the scope in which it is used and to recommend potential applications relevant to AT outcomes.  A team of engineering and health students started this project in 2004 and completed the work in 2006.  They searched engineering, business, and health databases using keywords built primarily around multiattribute utility theory (MAU) and its derivatives.  The collected articles were coded into three categories: 1) general literature, 2) engineering, and 3) health.  Appendix A displays a bibliography of these articles.

This report describes the fundamental measurement characteristics and background of MAU models and how they have been used in health care.  It also discusses the implications drawn from the MAU literature and describes the methodology used in decision analysis.  It then examines how MAU models can strengthen the reliability and validity of qualitative client-centered methods such as GAS and instruments like the Canadian Occupational Performance Measure (COPM).  Lastly, the report discusses the implications of using MAU in AT measurement and data collection.

1. MAU background

MAU is a method to effectively integrate subjective and objective data onto a common scale or index that can be used for decision-making (Garre, 1992).  The general literature describing MAU reveals that it is a method for decision-making and not traditionally an evaluation tool.  This technique uses gathered data with a specific and sensitive weighting system to assess a given decision regarding various attributes (variables or outcomes), in order to find the optimal decision given a specific set of criteria (Barron & Barrett, 1996; Herrmann & Code, 1996).  Five key steps central to all MAU procedures were described by von Winterfeldt and Edwards (1986); the additive model that underlies them appears after the list:

  1. Define alternative and value-relevant attributes;
  2. Evaluate each alternative separately on each attribute;
  3. Assign relative weights to the attributes;
  4. Aggregate the weights of attributes…to obtain an overall evaluation of alternatives;
  5. Perform sensitivity analyses and make recommendations.
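
In the common additive form (a standard formulation in the decision analysis literature rather than a quotation from the sources above), these steps reduce to a weighted sum of single-attribute utilities:

    U(a) = \sum_{i=1}^{n} w_i \, u_i(a), \qquad \sum_{i=1}^{n} w_i = 1, \quad 0 \le u_i(a) \le 1

where u_i(a) is the utility of alternative a on attribute i, w_i is the relative weight of attribute i, and the alternative with the largest U(a) is preferred.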

More simply, Chatburn & Primiano (2001) summarize the MAU method as a model that:

  1. Incorporates input from the various stakeholders in the decision;
  2. Identifies the factors that are important in the decision and the alternative decision options;
  3. Weights the factors;
  4. Ranks the alternative decisions on how well they serve the factors; and finally,
  5. Provides an overall score that identifies the best options.

Garre (1992) provides an easy-to-follow method that expands the five steps proposed by many MAU proponents into 10 steps.  Garre reported on a case study in which health care managers used this 10-step MAUT method to decide whether to keep a program paying a 40-hour wage for a 32-hour work week or to eliminate it (Table 1).

Table 1 - Example of MAU in health care staffing (each general step is followed by the corresponding case example)

1. Determine the appropriate viewpoint for the decision.  Form a committee of stakeholders who are vested in the outcomes of the decision (optional; this may be a unitary decision).

1. A committee was formed, including the chief executive officer, vice president of finance, vice president of nursing services, vice president of human resources and vice president of marketing.  

2. Identify decision alternatives.  What options are being compared?  What are the variables?

2. The variables in the decision were whether to retain, modify, or delete a program in which nurses who worked 32 hours a week on evening or night shifts were paid for 40 hours.

3. Identify attributes for evaluation.  What are the attributes that characterize the variables affecting the decision?

3. Three attributes characterized the variables affecting the decision: cost, attrition, and morale.

4. Identify factors for evaluating the attributes (optional; used if the attributes can be broken down further).

4. The factors for evaluating these attributes include a) cost: the dollar amount to replace nurses who decide to leave based on the final decision, b) attrition: number of nurses who decide to stay, and c) morale: nursing staffs’ attitude change from severe decrease to no change.  

5. Establish a utility scale.  Committee members rate each factor on a scale (e.g., 0-10).  Each member assigns a relative contribution value for each factor.

5. The committee members rated each factor to assign a relative contribution value for each factor, with 0 as “worst” and 10 as “best.”

6. Transform (or aggregate) each factor value to a utility scale.

6. The scores for each attribute were averaged.

7. Determine the relative weights of each attribute or factor.

7. The committee then assessed the relative importance of cost (42%), attrition (33%), and morale (25%).  This committee obviously valued the dollar in this case; if nursing staff members had been on the committee, it is likely that morale and attrition would have been weighted higher.

8. Calculate the total utility for each of the decision alternatives.

8. The total utility was calculated by multiplying the utility score of each factor by the ratio weight of its attribute.

9. Determine which alternative has the greatest total utility score.  Make a decision.

9. The decision to retain the “32 for 40” program had the highest utility score.

10. Perform a sensitivity analysis to determine the strength of the analysis.

10. The committee performed a sensitivity analysis to evaluate the strength of the decision, asking whether a change in weights or in differential scaling would alter the outcome.  For example, if the ratio weights were changed to .40, .36, and .24 for cost, attrition, and morale, would the decision remain the same?  These changes did not alter the ranking of the decision.
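
For readers who want to see the arithmetic behind steps 5 through 10, the following Python sketch reproduces the weighted aggregation and the sensitivity check from Table 1.  The attribute weights (.42/.33/.25) and the perturbed weights (.40/.36/.24) come from the case description; the 0-10 utility ratings for each alternative are hypothetical placeholders, since the raw committee scores are not reported here.

# Minimal sketch of the MAU aggregation in Table 1 (steps 5-10).
# Utility ratings below are hypothetical; only the weights come from the case.
ATTRIBUTES = ["cost", "attrition", "morale"]

# Averaged 0-10 utility ratings for each decision alternative (hypothetical).
utilities = {
    "retain": {"cost": 4.0, "attrition": 9.0, "morale": 9.0},
    "modify": {"cost": 6.0, "attrition": 6.0, "morale": 5.0},
    "delete": {"cost": 9.0, "attrition": 2.0, "morale": 1.0},
}

def total_utility(weights):
    # Step 8: multiply each factor's utility by its attribute weight and sum.
    return {option: sum(weights[a] * scores[a] for a in ATTRIBUTES)
            for option, scores in utilities.items()}

# Step 7: relative weights assigned by the committee.
committee_weights = {"cost": 0.42, "attrition": 0.33, "morale": 0.25}

# Step 9: the alternative with the greatest total utility is selected.
totals = total_utility(committee_weights)
print(totals, "->", max(totals, key=totals.get))

# Step 10: sensitivity analysis - do altered weights change the ranking?
perturbed = total_utility({"cost": 0.40, "attrition": 0.36, "morale": 0.24})
print(perturbed, "->", max(perturbed, key=perturbed.get))

With the ratings above, "retain" has the highest total utility under both weight sets, mirroring the outcome reported in the case.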

Multiattribute theory was formulated and refined in the late 1960s and early 1970s (Gustafson & Holloway, 1975; Keeney, 1970), during which time a few groundbreaking articles were published.  The literature revealed that MAU analysis has been employed in a wide variety of fields.

Management sciences:  As a means of structuring decision-making (Bier & Connell, 1994; Carroll & Johnson, 1990; Christenson, 1993; Doyle, 1995; Dyer, Fishburn, Steuer, Walleins & Zionts, 1992; Hanson, Kidwell & Ray, 1991; Huber, 1974; Keeney & Raiffa, 1976; Pandey & Kengpol, 1995; Poole & DeSanctis, 1990; Samuelson, 1993).

Assessing programs:  These methods have been proposed for evaluating program alternatives (Dicker & Dicker, 1991; Edwards & Newman, 1982), and are often applied in the fields of public health (Alemi, Stephens, Llorens & Orris, 1995; Camasso & Dick, 1993; Kaplan, Atkins & Wilson, 1988; Salazar & de Moor, 1995), in social services (Hidalgo-Hardeman, 1993; Kemp & Willetts, 1995; Lewis, Johnson & Mangen, 1998), related to consumer choice (Kahn & Baron, 1995; Kahn & Meyer, 1991), in environmental studies (Brown, 1991; McDaniels, 1996; Tzeng, Teng & Hu, 1991), transportation studies (Levine, 1996), in education (Levin, 1983; Lewis, 1989; Lewis & Kallsen, 1995), and in the criminal justice system (Edwards, 1980).

In the field of disability studies and rehabilitation, such a technique has been recently proposed for use in making decisions about program goals and alternatives (Lewis, Johnson, Erickson & Bruininks, 1994; Lewis et al., 1998; Lewis & Johnson, 2000).

In health-related fields, an early MAU study was completed in 1969 and published by Gustafson and Holloway (1975).  This study used the example of burn victims to test a model created to examine severity of illness and eventually expanded to the more general case of health status.  By specifically determining the severity of burns, cost-benefit analysis could establish the effectiveness of treatment, both financially and in terms of patient health preferences.  A group of specialists assigned the weights used in the model based on their expert opinion.

Lewis, Johnson, and Scholl (2003) used MAU analysis as a methodology for evaluating the goals and services of a state vocational rehabilitation (VR) agency that was undergoing a comprehensive strategic planning process and had adopted the MAU model to support aspects of its planning.  In the course of the planning exercise, the agency was interested in (a) identifying and reaffirming the agency goals and services; (b) obtaining feedback and establishing consensus with stakeholders (i.e., consumers with disabilities, VR agency staff, and others) on the most important measurable attributes of these goals; (c) establishing benchmark estimates for each of these attributes for use in program evaluation; and (d) using MAU evaluation results in program improvement planning and future evaluation comparisons.

2. Implications of the MAU literature

One of the issues addressed by the decision sciences is how to robustly measure subjective variables such as preferences, expert estimates, and intuitive judgments, and yet combine that information with empirical data.  These concerns seem consistent with assessment needs in occupational therapy (OT), where variables of interest for people with disabilities include subjective and soft information (e.g., pain reduction, quality of life, or aesthetic preferences) along with hard data such as range of motion, learning rates, or functional performance.  There are several specific techniques used by decision analysts regarding health-related outcomes, including MAU models.  MAU is traditionally used for decision modeling, but it is also a mechanism for innovative data collection and application.  In OT, we use aspects of MAU theory often, but implicitly.

MAU also addresses an issue that has been viewed as extremely important in rehabilitation measurement: acquiring an interval-level scale.  Equal intervals are required if scores are to be added; otherwise, 2 + 3 may not equal 5.  Thus, equal intervals are essential for comparing scores between situations or individuals.  Merbitz, Morris, and Grip (1989), in an article on “Ordinal Scales and Foundations of Misinference,” depicted the problems with using ordinal-level data in rehabilitation.
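
A small illustration of why this matters follows; the numbers are hypothetical and are not drawn from Merbitz et al.  When ordinal category labels are added as though they were interval scores, the totals can reverse the ordering implied by the underlying quantities.

# Hypothetical example: two clients rated on two items with an ordinal 1-4
# scale whose categories hide unequal functional distances.
true_value = {1: 0, 2: 10, 3: 20, 4: 60}  # assumed underlying interval quantities

client_a = [4, 1]  # ordinal ratings on items 1 and 2
client_b = [3, 3]

print(sum(client_a), sum(client_b))          # 5 vs 6: client B looks better
print(sum(true_value[r] for r in client_a),
      sum(true_value[r] for r in client_b))  # 60 vs 40: client A actually has more function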

We also know that one of the assumptions for using inferential parametric statistical analyses is interval- or better-level data.  Fortunately, some methods and strategies are available to address ordinal data once collected.  Some have even suggested that the concern may be overstated.  Hamilton and Granger (1989) offered an important measurement validity argument along the lines of “the proof is in the pudding”: if predictive validity is evident, then perhaps the need for a perfect interval scale is moot.  This makes some sense.  Furthermore, many parametric inferential statistical methods (e.g., Student’s t-test) are robust.  Perhaps interval-level data are not as important as statistical purity has classically required.

Some scholars have attempted to describe how occupational therapists participate in clinical decision-making through strategies of clinical reasoning.  More recently, attention to client-centered practice has begun to suggest strategies that might be used to improve decision-making in practice.  In addition, some suggestions related to measurement have prompted thinking about new decision and measurement methods; the COPM and GAS are examples.  However, as a profession we have not yet seriously considered the theory and mechanisms behind decision analysis and how they might substantiate OT decisions in practice.  For example, might the selection of the best AT device, or determining the best treatment intervention for a client, benefit from some decision science?

In general, though, decision analysis techniques have not been formally adopted in health care.  As an example, twenty years ago Alemi (1986) wrote an article arguing for training health care administrators in decision analysis.  Despite widespread acceptance of the idea at the time, as demonstrated by published commentaries, decision analysis has not caught on with health care administrators as much as it has in other industries.

3. MAU and client-centered instruments

Three valid reasons for performing evaluations include making decisions regarding monitoring, fine-tuning, and programmatic choice (Edwards & Newman, 1982).  Two common characteristics make MAU applicable to all of these reasons.  First, they require comparison of one thing to another.  Second, nearly all decisions have multiple objectives; consequently, evaluations should assess as many as are important.  The literature justifies the approximations and assumptions involved in applying MAU as an evaluation method for arriving at a decision (Edwards & Newman, 1982).

Assessment instruments that individualize questions for clients have been attractive in OT practice for several decades (Kiresuk, Smith, & Cardillo, 1994; Ottenbacher & Cusick, 1990).  Qualitative approaches such as GAS and the COPM reflect this interest.  These assessment methods may be cutting-edge, but without good arguments to substantiate their use they are susceptible to question by others.  Qualitative data are difficult to compare across individuals and settings.  The flexibility of these instruments forfeits the confidence afforded by traditional test and measurement theory, which is based on expected distributions of data and static question sets.  Nevertheless, OT practitioners have become very interested in these approaches because matching the individual client perspective to intervention has high perceived validity.

What these practitioners may not be aware of is that GAS and the COPM have substantial quantitative theoretical support as well.  These assessments borrow certain characteristics typically found in multiattribute utility theory (Edwards & Newman, 1982).  MAU models can be devised so they represent interval scales, which increases the robustness of the data collected.  Thus, the qualitative data gathered by the COPM or GAS can be measured quantitatively.  Additionally, sensitivity analysis provides an internal check on reliability.  Applying key aspects of the MAU model and its supporting decision analysis theory and techniques gives practitioners better tools to articulate and justify their use of these assessments.

Using Chatburn & Primiano’s (2001) summary of MAUT, clinicians or researchers can assess how the COPM and GAS qualify as approximations of the MAUT process.  Table 2 compares Chatburn & Primiano’s summary of the MAUT method, the Carswell, McColl, Baptiste, Law, Polatajko, and Pollock (2004) description of the COPM process, and the Ottenbacher and Cusick (1993) synopsis of GAS.

Table 2 - Comparisons of COPM, GAS and MAUT
Columns: Procedure; MAU (Chatburn & Primiano, 2001); COPM (Carswell et al., 2004; Law et al., 1998); GAS (Ottenbacher & Cusick, 1993)

Procedure: Determine the specific decision to be made.
MAU: Incorporate input from the various stakeholders in the decision.
COPM: The client identifies issues in areas of self-care, productivity, and leisure.
GAS: The therapist, client, and family (or other team members) decide on the expected level of outcome for a particular goal.

Procedure: Identify the variables in the decision.
MAU: Identify the factors that are important in the decision and the alternative decision options.
COPM: The client rates the importance of each activity on a scale from 1 to 10.
GAS: Outcomes that are more or less favorable than the expected outcome are also determined for each goal.  Each level of performance is associated with a numeric value ranging from +2 to -2, with 0 representing the expected level of outcome.

Procedure: Weigh the importance of each variable.
MAU: Weight the factors.
COPM: The client chooses the top five problems to focus on during therapy.
GAS: Relative weights are assigned to each of three or more goals identified for the client.  For example, if four goals are identified, each is assigned a number between 1 and 4.

Procedure: Rank the items.
MAU: Rank the alternative decisions on how well they serve the factors.
COPM: For each of these five problems, the client rates performance and satisfaction with performance on a scale from 1 to 10.
GAS: The weights are ranked; +4 for the most important goal and +1 for the least important.

Procedure: Assess the scores to make a decision.
MAU: Provide an overall score that identifies the best options.
COPM: The scores are summed and averaged over the number of problems to produce scores out of 10.
GAS: After completion of treatment, progress toward achieving the outcome is derived for each goal using the +2 to -2 scale (the level at which the goal was or was not achieved).  This information is used to calculate a goal-attainment score.
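
To make the parallel concrete, the following sketch computes the COPM and GAS aggregations from Table 2 as MAU-style averaged or weighted scores.  The client ratings are hypothetical, and the published GAS procedure applies an additional standardizing transformation (the goal-attainment T-score) that is not shown here.

# Hypothetical COPM ratings: five client-chosen problems, each rated 1-10 for
# performance and satisfaction; scores are summed and averaged (Table 2).
performance = [3, 5, 2, 6, 4]
satisfaction = [4, 6, 2, 5, 3]
copm_performance = sum(performance) / len(performance)     # score out of 10
copm_satisfaction = sum(satisfaction) / len(satisfaction)  # score out of 10

# Hypothetical GAS data: four goals with importance weights 1-4 and attainment
# levels on the -2..+2 scale (0 = expected outcome).  The simplest MAU-style
# aggregate is the weighted sum of attainment levels.
weights = [4, 3, 2, 1]
attainment = [+1, 0, -1, +2]
gas_weighted_sum = sum(w * x for w, x in zip(weights, attainment))

print(copm_performance, copm_satisfaction, gas_weighted_sum)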

4. MAUT and AT outcomes – Development of the Isolating the Impact of Interventions (I3) Instrument

Several MAU-like instruments were developed and piloted during the ATOMS Project for the purpose of measuring AT outcomes.  An iterative process, across three piloted instruments, has led to what we now call the Isolating the Impact of Interventions (I3) Instrument.

The original instruments were the Relative Advantage of Assistive Technology and Services (RAATS), the Student Performance Profile (SPP), and the ATOMS-Division of Vocational Rehabilitation Consumer Survey (A/D-CS).  All three instruments were based on the Integrated Multi-Intervention Paradigm for Assessment and Application of Concurrent Treatments (IMPACT2) Model (Smith, 2002).  The model describes the theoretical relationship of key intervention approaches used to optimize the function of people with disabilities and delineates the variables that must be measured to understand outcomes of AT interventions as they are practiced in the natural environment.

Although these instruments are not applied to decision-making, MAU concepts are embedded in their data-collection strategies.  All three provide subjective measures of the amount each type of intervention contributes to the AT user’s outcomes.  MAU modeling provides a method to estimate the amount of contribution each approach has on a given outcome.  In brief, the MAU process generates a set of scores on the various attributes within the six rehabilitation approaches.  These scores are then normalized into percentages that can be used to weight the various interventions.
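
A minimal sketch of that normalization step is shown below.  The six intervention labels and the seven-point ratings are illustrative placeholders, not the IMPACT2 Model’s exact terminology or data from the instruments described in this section.

# Hypothetical ratings (1-7 scale) of how much each intervention approach
# contributed to one AT user's outcome; the labels are illustrative only.
contributions = {
    "assistive technology": 6,
    "personal assistance": 3,
    "therapy": 4,
    "environmental modification": 2,
    "medical/surgical": 1,
    "education/training": 4,
}

total = sum(contributions.values())
percent = {k: 100 * v / total for k, v in contributions.items()}
print(percent)  # AT accounts for 6/20 = 30% of the reported contribution

# As described for the SPP analysis below, the AT percentage can then be
# multiplied by an ability rating on a relevant goal to estimate the
# AT-attributable portion of progress (hypothetical goal rating of 5).
at_share_of_goal = (percent["assistive technology"] / 100) * 5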

In response to the lack of an AT instrument suitable to their research needs, the ATOMS Project staff and some of their partners developed the RAATS to measure the impact of AT relative to other interventions working concurrently or in conjunction with AT.  It uses the IMPACT2 Model as its theoretical foundation.  The items on the RAATS use a seven-point scale to probe the user about the overall impact of the six interventions.  The original RAATS questions were designed for qualitative research and initially used during an unpublished, open-ended interview process.  Lenker (manuscript in development) used the instrument in a study of 62 individuals representing three disability groups (physical, vision, and learning) and the impact of computer-based assistive technology.  The Assistive Technology Infusion Project (ATIP) of the Ohio Department of Education collaborated with the ATOMS Project team in the development of the Student Performance Profile (SPP), used to measure the impact of their project (Fennema-Jansen, Smith, & Edyburn, 2005).  One challenge identified in measuring AT in the schools is that students receive concurrent interventions specific to their classrooms.  Therefore, they revised and updated 10 questions based on the original RAATS work as a portion of their instrument.

Outcomes data had been reported on 1,760 students at the time of the cutoff date for inclusion in the first study by Fennema-Jansen, Smith, Edyburn & Binion (2005).  We now have a database with more than 2,000 records and are creating dynamic Web-based displays.  Students with a variety of disabilities from across the state of Ohio are represented in this group.

The percent of the total for all interventions that was attributed to assistive technology devices was multiplied by the ability rating on the relevant IEP goals.  The results of the analysis of variance indicated a significant difference between the amount that AT devices contributed to progress on the goals prior to the assistive technology intervention and after using the technology for eight months.  The authors concluded that the results of this study provide confirmation that the method used to determine the relative contribution of a given intervention has potential, and should be researched more extensively.

Johnson (2006) extended the method of investigation that began with the RAATS in her study of AT (or RT, rehabilitation technology, as the term is used in vocational rehabilitation) use among clients served by the State of Wisconsin Division of Vocational Rehabilitation.  As part of a multifaceted exploration of RT across stakeholders in the system, she developed and implemented the ATOMS-Division of Vocational Rehabilitation Consumer Survey (A/D-CS), which included questions about experiences with RT services and devices, including mechanisms to make comparisons with other interventions.  During survey development the initial six intervention questions grew to 19, for reasons specific to the vocational rehabilitation model.

Among the findings, Johnson reports that the A/D-CS is effective in isolating the contribution of concurrent interventions and in measuring the impact of AT on employment goals.  It gathered data on interventions the participants received regardless of whether the VR counselor or another rehabilitation professional provided the services.  It is therefore useful in determining service outcomes through a wider lens.  Data collected in this study included intervention approaches provided by the vocational, medical, education, and independent living models.  During the write-up of the VR studies, “I3” was adopted as the nomenclature representing this line of outcomes inquiry.

Conclusion

This report reviewed the use of MAU models in decision-making, which has been identified as a vital component of performance evaluation.  Because most decisions have multiple objectives, evaluations should assess as many objectives as are deemed important in the specific circumstance being analyzed.

Occupational therapists have been attracted to assessment instruments that individualize questions for clients for several decades.  A recent trend toward client-centered practice suggests strategies that might be used to improve decision-making in practice, thus putting emphasis on decision and measurement tools.

When the methodologies of MAU, the COPM, and GAS are compared side by side, similar characteristics emerge that can be used to improve decision-making in practice.  GAS and the COPM have substantial quantitative theoretical support.  Review of the MAU literature suggests that MAU may also offer significant relevance to AT outcomes systems.

References

Alemi, F. (1986). Decision analysis in health administration programs: An experiment. Journal of Health Administration Education 4(1), 45-61.

Alemi, F., Stephens, R.C., Llorens, S., & Orris, B. (1995). A review of factors affecting treatment outcomes: Expected treatment outcome scale. American Journal of Drug & Alcohol Abuse, 21(4), 83–509.

Barron, F.H., & Barrett, B.E. (1996). Decision quality using ranked attribute weights. Management Science, 42(11), 1515-1522.

Bier, V.M., & Connell, B.L. (1994).  Ambiguity seeking in multi-attribute decisions: Effects of optimism and message framing. Journal of Behavioral Decision Making, 7(3), 169–182.

Brown, N.J. (1991). A multiattribute evaluation model for environmental compliance of existing metal hydroxide precipitation systems in the electroplating industry. Unpublished master’s thesis, Virginia Polytechnic Institute and State University, Blacksburg.

Camasso, M.J., & Dick, J. (1993). Using multiattribute utility theory as a priority-setting tool in human services planning. Evaluation and Program Planning, 16(4), 295–304.

Carroll, J.S., & Johnson, E.J. (1990). Decision research: A field guide. London: Sage.

Carswell, A., McColl, M.A., Baptiste, S., Law, M., Polatajko, H., & Pollock, N. (2004). The Canadian Occupational Performance Measure: A research and clinical literature review. Canadian Journal of Occupational Therapy, 71(4), 210-22.

Chatburn, R.L., & Primiano, F.P. (2001). Decision analysis for large capital purchases: How to buy a ventilator. Respiratory Care, 46(10), 1038-1054.

Christenson, S.L. (1993). Computer-aided decision analysis: theory and application. Westport, CT: Quorum Books.

Dicker, P.F., & Dicker, M.P. (1991). Involved in systems evaluation? Use a multiattribute analysis approach to get the answers. Industrial Engineering, 23, 43–46.

Doyle, J.R. (1995). Multiattribute choice for the lazy decision maker: Let the alternatives decide. Organizational Behavior & Human Decision Processes, 62(1), 87–100.

Dyer, J.S., Fishburn, P.C., Steuer, R.E., Walleins, J., & Zionts, S. (1992). Multiple criteria decision making, multiattribute utility theory: The next ten years. Management Science, 38(5), 645–654.

Edwards, W.  (1980).  Multiattribute utility for evaluation: structures, uses, and problems. In M.  Klein & K. Teilmann (Eds.), Handbook of criminal justice evaluation.  Beverly Hills, CA: Sage.

Edwards, W., & Newman, J.R. (1982). Multiattribute evaluation. In J.L. Sullivan & R.G. Niemi (Series Eds.), Quantitative Applications in the Social Sciences (No. 26). Sage.

Fennema-Jansen, S.A. (2004a). Technical report-The Assistive Technology Infusion Project (ATIP) database (1.0). University of Wisconsin-Milwaukee. Retrieved from the World Wide Web: http://www.uwm.edu/CHS/atoms/activities/.

Fennema-Jansen, S.A., Smith, R.O., Edyburn, D.L., & Binion, M. (2005). Isolating the contribution of assistive technology to school progress. Proceedings of the RESNA 28th International Conference on Technology and Disability: Research, Design, Practice, and Policy, Atlanta, GA.

Garre, P.P. (1992). Multiattribute utility theory in decision-making. Nursing Management, 23(5), 33-35.

Gustafson, D.H., & Holloway, D.C. (1975). A decision theory approach to measuring severity of illness. Health Services Research, 10(1), 97-106.

Hamilton, B.B., & Granger, C.V. (1989). Totaled functional score can be valid [Letter to the editor]. Archives of Physical Medicine and Rehabilitation, 70(12), 861-863.

Hanson, M.E., Kidwell, S., & Ray, D. (1991). Electric utility least-cost planning: Making it work within a multiattribute decision-making framework. Journal of the American Planning Association, 57(1), 34–43.

Herrmann, M. & Code, C. (1996). Weightings of items on the Code-Muller Protocols: The effects of clinical experience of aphasia therapy. Disability and Rehabilitation, 18(10), 509-514.

Hidalgo-Hardeman, O.M. (1993). Evaluating social service delivery configurations. Evaluation Review, 17(6), 603–620.

Huber, G.P. (1974). Multiattribute utility models: a review of field and fieldlike studies. Management Science, 20(10), 1393–1402.

Johnson, R.J. (2006). The impact of assistive technology devices and services on DVR goal achievement. Unpublished master's thesis, University of Wisconsin-Milwaukee.

Kahn, B.E., & Baron, J. (1995). An exploratory study of choice rules favored for high-stakes decisions. Journal of Consumer Psychology, 4(4), 305–328.

Kahn, B.E., & Meyer, R.J. (1991). Consumer multiattribute judgments under attribute-weight uncertainty. Journal of Consumer Research, 17(4), 508–522.

Kaplan, R.M., Atkins, C.J., & Wilson, D.K. (1987). The cost-utility of diet and exercise interventions in non-insulin-dependent diabetes mellitus. Health Promotion, 2(4), 331–340.

Keeney, R.L. (1970). Assessment of multiattribute preferences. Science 168(3938): 1491-1492.

Keeney, R., & Raiffa, H. (1976). Decisions with multiple objectives: Preferences and value tradeoffs. New York: Wiley.

Kemp, S. & Willetts, K. (1995). Rating the value of government-funded services: Comparison of methods. Journal of Economic Psychology, 16(1), 1–21.

Kiresuk, T.J., Smith, A.A., & Cardillo, J.E. (Eds.). (1994). Goal attainment scaling.  Hillsdale, NJ: Lawrence Erlbaum Associates.

Law M., Baptiste S., Carswell A., McColl M.A., Polatajko H., & Pollock N. (1998). The Canadian Occupational Performance Measure (3rd ed). Toronto, Ontario, Canada: CAOT Publications. 

Levin, H.M. (1983). Cost-effectiveness: A primer. Beverly Hills, CA: Sage.

Levine, J. (1996). A multiattribute analysis of goals for intelligent transportation system planning. Transportation Research: Emerging Technologies, 4(2).

Lewis, D.R. (1989). Use of cost-utility decision models in business education. Journal of Education in Business, 64(March), 275–278.

Lewis, D.R., & Johnson, D.R. (2000). Evaluation of participatory decision-making strategies for special education and rehabilitation programs. Washington, DC: American Association on Mental Retardation.

Lewis, D.R., Johnson, D.R., Erickson, R.N., & Bruininks, R.H. (1994). Multiattribute evaluation of program alternatives in special education. Journal of Disability Policy Studies, 5(1), 77–112.

Lewis, D.R., Johnson, D.R., & Mangen, T.  (1998).  Evaluating the multiattribute nature of supported employment. Journal of Applied Research in Intellectual Disability, 11(2), 95–115.

Lewis, D.R., & Kallsen, L.  (1995). Multiattribute evaluations: an aid in reallocation decisions in higher education. Review of Higher Education, 18(4), 437–465.

Lewis, D.R., Johnson, D.R., & Scholl, S.R.  (2003). Assessing state vocational rehabilitation performance in serving individuals with disability. Journal of Intellectual & Developmental Disability, 28 (1), pp.  24–39.

McDaniels, T.L. (1996). A multiattribute index for evaluating environmental impacts of electric utilities. Journal of Environmental Management, 46(10), 57–66.

Merbitz, C., Morris, J., & Grip, J.C. (1989). Ordinal scales and foundation of misinference. Archives of Physical Medicine and Rehabilitation, 70(4), 308-312.

Ottenbacher, K.J.  & Cusick, A. (1990). Goal attainment scaling as a method of clinical service evaluation. American Journal of Occupational Therapy, 44(6), 519-525.

Ottenbacher, K.J. & Cusick, A.  (1993). Discriminative versus evaluative assessment: Some observations on goal attainment scaling. American Journal of Occupational Therapy, 47(4), 349-354.

Pandey, P.C., & Kengpol, A. (1995). Selection of an automated inspection system using multiattribute decision analysis. International Journal of Production Economics, 39(3), 289–298.

Poole, M.S., & DeSanctis, G. (1990). Understanding the use of group decision support systems.  In C. Steinfield & J. Fulk (Eds.), Theoretical approaches to information technologies. 

Rust, K.L., & Smith, R.O. (2005). Assistive technology in the measurement of rehabilitation and health outcomes: A review and analysis of instruments. American Journal of Physical Medicine and Rehabilitation, 84(10), 780-793.

Rust, K.L., Blaser, R.J., Fonner, K., Smith, R. O., Brayton, A., & Januik, M. (2005). Technical report - Assistive technology instrument update and review (Version 1.0). Retrieved from the World Wide Web: http://www.uwm.edu/CHS/r2d2/atoms/archive/technicalreports/fieldscans/tr-fs-ati.html.

Salazar, M.K., & de Moor, C. (1995). An evaluation of mammography beliefs using a decision model. Health Education Quarterly, 22(1), 110–126.

Samuelson, C. (1993). A multiattribute evaluation approach to structural change in resource dilemmas. Organizational Behavior & Human Decision Processes, 55(2), 298–324.

Smith, R.O. (2002). Assistive technology outcome assessment prototypes: Measuring "Ingo" variables of "outcomes". Proceedings of the RESNA 2002 Annual Conference, RESNA Press, 22, 115-125.

Smith, R.O. (2005). Integrated Multi-Intervention Paradigm for Assessment and Application of Concurrent Treatments (IMPACT2) Model. Retrieved from the World Wide Web: http://www.r2d2.uwm.edu/archive/impact2model.html

Tzeng, G., Teng, J., & Hu, C. (1991). Urban environmental evaluation and improvement: application of multiattribute utility and compromise programming. Behaviormetrika, 29, 83–98.

von Winterfeldt, D., & Edwards, W. (1986). Decision analysis and behavioral research. Cambridge University Press.

Appendix A

MAU literature – general

Barner, J. C., & Thomas III, J. (1998). Tools, information sources, and methods used in deciding on drug availability in HMOs. Am J Health-Syst Pharm, 55, 50-56.

Barron, F. H., & Barrett, B. E. (1996). Decision quality using ranked attribute weights. Management Science, 42(11), 1515-1522.

Boyle, M. H., & Torrance, G. W. (1984). Developing multi attribute health indexes. Medical Care, 22(11), 1045-1057.

Carmone, F. J., Kara, A., & Zanakis, S. H. (1997). A Monte Carlo investigation of incomplete pairwise comparison matrices. European Journal of Operational Research, 102, 538-553.

Carnahan, J. V., Thurston, D. L., & Liu, T. (1994). Fuzzy ratings for multi attribute design decision-making. Journal of Mechanical Design, 116, 511-521.

Carroll, J. S., & Johnson, E. J. (1990). Decision Research: A Field Guide (First ed. Vol. 22): Sage.

Corner, J. L., & Kirkwood, C. W. (1996). The magnitude of errors in proximal multi attribute decision analysis with probabilistically dependent attributes. Management Science, 42(7), 1033-1041.

Deber, R. B., & Goel, V. (1990). Using explicit decision rules to manage issues of justice, risk, and ethics in decision analysis. Medical Decision Making, 10(3), 181-194.

Diederich, A. (1997). Dynamic stochastic models for decision making under time constraints. Journal of Mathematical Psychology, 41, 260-274.

Edland, A. (1994). Time pressure and the application of decision rules: Choices and judgments among multiattribute alternatives. Scandinavian Journal of Psychology, 35, 281-291.

Edwards, W., & Newman, J. R. (1982). Multiattribute Evaluation: Sage Publications Inc.

Fischer, G. W., Luce, M. F., & Jia, J. (2000). Attribute conflict and preference uncertainty: Effects on judgment time and error. Management Science, 46(1), 88-103.

Garre, P. P. (1992). Multiattribute utility theory in decision-making. Nursing Management, 33-35.

Greco, S., Matarazzo, B., & Slowinski, R. (2000). Extension of the rough set approach to multicriteria decision support. INFOR, 38(3), 161-191.

Greco, S., Matarazzo, B., & Slowinski, R. (2000). A new rough set approach to multicriteria and multiattribute classification. INFOR Journal, 38(3), 161-195.

Gustafson, D. H., Cats-Baril, W. J., & Alemi, F. (1992). Systems to Support Health Policy Analysis: Theory, Models and Uses. Ann Arbor, Michigan: Health Administration Press.

Jasper, J. D., & Levin, I. P. (2001). Validating a new process tracing method for decision making. Behavior Research Methods, Instruments & Computers, 33(4), 496-512.

Keeney, R. L. (1970). Assessment of multiattribute preferences. Science, 168, 1491-1492.

Keeney, R. L., & Raiffa, H. (1976). Decisions with multiple objectives: Preferences and value tradeoffs. USA: John Wiley & Sons.

Klein, D. A., & Shortliffe, E. H. (1994). A framework for explaining decision-theoretic advice. Artificial Intelligence, 67, 201-243.

Krischer, J. P. (1980). An annotated bibliography of decision analytic applications to health care. Operations Research, 28(1), 97-113.

Locascio, A., & Thurston, D. L. (1992). Multiattribute optimal design of structural dynamic systems. Design Theory and Methodology, 42, 229-236.

Mechitov, A. I., Moshkovich, H. M., & Olson, D. L. (1994). Problems of decision rule elicitation in a classification task. Decision Support Systems, 12, 115-126.

Miyamoto, J. M., & Wakker, P. (1996). Multiattribute utility theory without expected utility foundations. 44, 313-326.

Miyamoto, J. M., Wakker, P., Bleichrodt, H., & Peters, H. J. M. (1998). The zero-condition: A simplifying assumption in QALY measurement and multiattribute utility. Management Science, 44(6), 839-849.

Moshkovich, H. M., Mechitov, A. I., & Olson, D. L. (2002). Ordinal judgments in multiattribute decision analysis. European Journal of Operational Research, 137, 635-641.

Nygren, T. E. (1997). Framing of task performance strategies: effects on performance in a multiattribute dynamic decision making environment. Human Factors, 39(3), 425-437.

Pearman, A. D. (1993). Establishing dominance in multiattribute decision making using an ordered metric method. Operational Research Society, 44(5), 461-469.

Poyhonen, M., & Hamalainen, R. P. (2001). On the convergence of multiattribute weighting methods. European Journal of Operational Research, 129, 569-585.

Preez, J. P. D. (1994). On statistical testing for intransitivity in multiattribute consumer preference surveys. Operations Research, 42(3), 550-555.

Raiffa, H. (1968). Decision analysis: Introductory lectures on choice under uncertainty: Random House.

Ranyard, R., & Abdel-Nabi, D. (1993). Mental accounting and the process of multiattribute choice. Acta Psychologica, 84, 161-177.

Ringuest, J. L. (1997). Lp-metric sensitivity analysis for single and multi-attribute decision analysis. European Journal of Operational Research, 98, 563-570.

Salazar, M. K. (1991). Comparison of four behavioral theories. AAOHN Journal, 39(3), 128-134.

Soofi, E. S., & Retzer, J. J. (1992). Adjustment of importance weights in multiattribute value models by minimum discrimination information. European Journal of Operational Research, 60, 99-108.

von Winterfeldt, D., & Edwards, W. (1986). Decision Analysis and Behavioral Research. United States of America: Cambridge University Press.

Yang, J.-B. (2001). Rule and utility based evidential reasoning approach for multiattribute decision analysis under uncertainties. European Journal of Operational Research, 131, 31-61.

Yoon, K. P., & Hwang, C.-L. (1995). Multiattribute decision making: An introduction (Vol. 07-104): Sage.

MAU literature – engineering

AbouRizk, S. M., & Chehayeb, N. N. (1995). A hypertext decision support model for contractor prequalification. Microcomputers in Civil Engineering, 10, 111-121.

Almeida, A. T. d., & Bohoris, G. A. (1996). Decision theory in maintenance strategy of standby system with gamma-distribution repair-time. IEEE Transactions on Reliability, 45(2), 216-219.

Baker, R. C., & Talluri, S. (1997). A closer look at the use of data envelopment analysis for technology selection. Computers ind. Engng, 32(1), 101-108.

Ben-David, A., & Jagerman, D. L. (1997). Evaluation of the number of consistent multiattribute classification rules. Engng Applic. Intell., 10(2), 205-211.

Boucher, T. O., Gogus, O., & Wicks, E. M. (1997). A comparison between two multiattribute decision methodologies used in capital investment decision analysis. The Engineering Economist, 42(3), 179-200.

Connors, S. R. (1996). Informing decision makers and identifying niche opportunities for windpower. Energy Policy, 24(2), 165-176.

Corner, J. L., & Kirkwood, C. W. (1996). The magnitude of errors in proximal multiattribute decision analysis with probabilistically dependent attributes. Management Science, 42(7), 1033-1042.

Dong-il, & Hiroshi. (1994). Optimal product-planning for new multiattribute products based on conjoint analysis. Computers ind. Engng, 27(1-4), 11-14.

Dong-il, & Hiroshi. (1995). Optimal pricing and product-planning for new multiattribute products based on conjoint analysis. International Journal of Production Economics, 38, 245-253.

Duarte, B. P. M. (2001). The expected utility theory applied to an industrial decision problem- what technological alternative to implement to treat industrial solid residuals. Computers & Operations Research, 28, 357-380.

Dyer, J. S., Edmunds, T., Butler, J. C., & Jia, J. (1998). A multiattribute utility analysis of alternatives for the disposition of surplus weapons-grade plutonium. OR Practice, 46(6), 749-762.

Jungthrapanich, C., & Benjamin, C. O. (1995). A knowledge-based decision support system for locating a manufacturing facility. IIE Transactions, 27, 789-799.

Kabir, A., & Tabucanon, M. T. (1995). Batch-model assembly line balancing: A multiattribute decision making approach. International Journal of Production Economics, 41, 193-201.

Karmarkar, U. S., & Pitbladdo, R. C. (1997). Quality, class, and competition. Management Science, 43(1), 27-39.

Keeney, R. L., McDaniels, T. L., & Ridge-Cooney, V. L. (1996). Using values in planning wastewater facilities for metropolitan Seattle. Water Resources Bulletin, 32(2), 293-303.

Kim, J. K., & Choi, S. H. (2001). A utility range-based interactive group supposed system for multiattribute decision making. Computers & Operations Research, 28, 485-503.

Kim, K. M., & Krishnamurty, S. (2000). A dominance-based design metric in multiattribute robust design. Research in Engineering Design, 12, 235-248.

Lippiatt, B. C. (1999). Selecting cost-effective green building products: Bees approach. Journal of Construction Engineering and Management, 125(6), 448-455.

Liu, S.-Y., & Chen, J.-G. (1995). Development of a machine troubleshooting expert system via Fuzzy multiattribute decision-making approach. Expert Systems With Applications, 8(1), 187-201.

Locascio, A., & Thurston, D. L. (1994). Quantifying the house of quality for optimal product design. Design Theory and Methodology, 68, 43-54.

Locascio, A., & Thurston, D. L. (1998). Transforming the house of quality to a multiobjective optimization formulation. Structural Optimization, 16, 136-146.

Merkhofer, M. W., & Keeney, R. L. (1987). A multiattribute utility analysis of alternative sites for the disposal of nuclear waste. Risk Analysis, 7(2), 173-194.

Mitri, M. (1995). Combining semantic networks with multi-attribute utility models: An evaluative data base indexing method. Expert Systems With Applications, 9(3), 283-294.

Morgan, K. M., DeKay, M. L., Fischbeck, P. S., Granger, M. M., Fischoff, B., & Florig, H. K. (2001). A deliberative method for ranking risk (II): Evaluation of validity and agreement among risk managers. Risk Analysis, 21(5), 923-937.

Moshkovich, H. M., Schellenberger, R. E., & Olson, D. L. (1998). Data influences the result more than preferences: Some lessons from implementation of multiattribute techniques in a real decision task. Decision Support Systems, 22, 73-84.

Moynihan, G. P., & Jethi, R. J. (1995). A decision support system for multiattribute evaluation of automation alternatives. Computers ind. Engng, 29(1-4), 417-179.

Ntuen, C. A., & Chestnut, J. A. (1995). An expert system for selecting manufacturing workers for training. Expert Systems With Applications, 9(3), 309-332.

Pan, J., & Rahman, S. (1998). Multiattribute utility analysis with imprecise information: An enhanced decision support technique for the evaluation of electric generation expansion strategies. Electric Power Systems Research, 46, 101-109.

Pandey, P. C., & Kengpol, A. (1995). Selection of an automated inspection system using multiattribute decision analysis. International Journal of Production Economics, 39, 289-298.

Parnell, G. S., Conley, H. W., Jackson, J. A., Lehmkuhl, L. J., & Andrew, J. M. (1998). Foundations 2025: A value model for evaluating future air and space forces. Management Science, 44(10), 1336-1349.

Pena, D. (1999). Methodology for building service quality indices. Annual Quality Congress Transactions, 550-558.

Prinzel, L. J., Freeman, F. G., Scerbo, M. W., Mikulka, P. J., & Pope, A. T. (2000). A closed-loop system for examining psycho physiological measures for adaptive task allocation. The International Journal of Aviation Psychology, 10(4), 393-410.

Ray, T., & Sha, O. P. (1994). Multicriteria optimization model for a containership design. Marine Technology, 31(4), 258-268.

Rennels, G. D., Shortliffe, E. H., & Miller, P. L. (1987). A multiattribute model of artificial intelligence approaches. Choice and Explanation In Medical Management, 7(1), 22-31.

Rudowski, R., East, T. D., & Gardner, R. M. (1996). Current status of mechanical ventilation decision support systems: A review. International Journal of Clinical Monitoring and Computing, 13, 157-166.

Shipp, G. W., & Thakor, N. V. (1984). Multiattribute decision analysis of clinical errors: A case study of computerized arrhythmia detectors. Computers and Biomedical Research, 17, 116-128.

Stading, G., Flores, B., & Olson, D. (2001). Understanding managerial preferences in selecting equipment. Journal of Operations Management, 19, 23-37.

Stanney, K. M., Pet-Edwards, J., Swart, W., Safford, R., & Barth, T. (1994). The design of a systematic methods improvement planning methodology: Part II- A multiattribute utility theory (MAUT) approach. International Journal of Industrial Engineering, 1(4), 275-284.

Tang, X., & Krishnamurty, S. (2000). On decision model development in engineering design. Engineering Validation and Cost Analysis, 3, 131-149.

Thurston, D. L. (1990). Multiattribute utility analysis in design management. IEEE Transactions on Engineering Management, 37(4), 296-301.

Thurston, D. L. (1993). Concurrent engineering in an expert system. IEEE Transactions on Engineering Management, 40(2), 124-134.

Thurston, D. L., & Carnahan, J. V. (1992). Fuzzy ratings and utility analysis in preliminary design evaluation of multiple attributes. Transactions of the ASME, 114, 648-658.

Wangermann, J. P., & Stengel, R. F. (1998). Principled negotiation between intelligent agents: A model for air traffic management. Artificial Intelligence in Engineering, 12, 177-187.

Whitcomb, C. A. (1998). Naval ship design philosophy implementation. Naval Engineers Journal, 49-63.

Yihua, X., Lin, G., Su, P., Tiefu, L., Honghui, Yongxing, Z., et al. (1998). A decision-support system for off-site nuclear emergencies. Health Physics, 74(3), 387-392.

Zhang, X.-D., & Jia, L.-M. (1994). Distributed intelligent railway traffic control based on fuzzy decision making. Fuzzy Sets and Systems, 62, 255-265.

Zopoundidis, C., Despotis, D. K., & Stavropoulou, E. (1995). Multiattribute evaluation of Greek banking performance. Applied Stochastic Models and Data Analysis, 11, 97-107.

MAU literature – health

Baker, J., & Fitzpatrick, K. E. (1985). An integer linear programming model of staff retention and termination based on multiattribute utility theory. Socio-Econ Plan Sci, 19(1), 27-34.

Barr, R. D., Chalmers, D., Pauw, S. D., Furlong, W., Weizman, S., & Feeny, D. (2000). Health-related quality of life in survivors of Wilms' tumor and advanced neuroblastoma: A cross-sectional study. Journal of Clinical Oncology, 18(18).

Bartman, B. A., Rosen, M. J., Bradham, D. D., Weissman, J., Hochberg, M., & Revicki, D. A. (1998). Relationship between health status and utility measures in older claudicants. Quality of Life Research, 7, 63-73.

Bellamy, C. A., Brickley, M. R., & McAndrew, R. (1996). Measurement of patient-derived utility values for periodontal health using a multi-attribute scale. J Clin Periodontal, 23, 805-809.

Bock, G. H. d., Reijneveld, S. A., Houwelingen, J. C. & Knottnerus, J. A. (1999). Multiattribute utility scores for predicting family physicians' decisions regarding sinusitis. Multiattribute Utility Scores, 19(1), 58-65.

Carter, W. B. (1992). Psychology and decision making: Modeling health behavior with multi attribute utility theory. Journal of Dental Education, 56(12), 800-807.

Chapman, G. B., Elstein, A. S., Kuzel, T. M., Nadler, R. B., Sharifi, R., & Bennett, C. L. (1999). A multi-attribute model of prostate cancer patients' preferences for health states. Quality of Life Research, 8, 171-180.

Chatburn, R. L., & Primiano, F. P. (2001). Decision analysis for large capital purchases: How to buy a ventilator. Respiratory Care, 46(10), 1038-1054.

Chinburapa, V., & Larson, L. N. (1992). The importance of side effects and outcomes in differentiating between prescription drug products. Journal of Clinical Pharmacy and Therapeutics, 17, 333-342.

Connelly, D. P., Glaser, J. P., & Chou, D. (1984). A structured approach to evaluating and selecting clinical laboratory information systems. Pathologist, 714-720.

Costet, N., Gales, C. L., Buron, C., Kinkor, F., Mesbah, M., & Chwalow, J. (1998). French cross-cultural adaptation of the Health Utilities Index Mark 2 (HUI2) and 3 (HUI3) classification systems. Quality of Life Research, 7, 245-256.

Eriksen, S., & Keller, L. R. (1993). A Multiattribute-utility-function approach to weighing the risks and benefits of pharmaceutical agents. Medical Decision Making, 13(2), 118-125.

Erikson, P. (1998). Evaluation of a population-based measure of quality of life: the Health and Activity Limitation Index (HALex). Quality of Life Research, 7, 101-114.

Feeny, D., Furlong, W., & Barr, R. D. (1998). Multiattribute approach to the assessment of health-related quality of life: Health Utilities Index. Medical and Pediatric Oncology Supplement, 1, 54-59.

Feeny, D., Furlong, W., Torrance, G. W., Goldsmith, C. H., Zhu, Z., DePauw, S., et al. (2002). Multiattribute and single-attribute utility functions for the Health Utilities Index Mark 3 System. Medical Care, 40(2), 113-128.

Folland, S. T. (1983). Predicting hospital market shares. Inquiry, 20, 34-44.

Fos, P. J., & Zuniga, M. A. (1999). Assessment of primary health care access status: An analytic technique for decision making. Health Care Management Science, 2, 229-238.

Frank, J., Rupprechet, B., & Schmelmer, V. (1997). Knowledge-based assistance for the development of drugs. IEEE Expert, 40-48.

French, M. T., Mauskopf, J. A., Teague, J. L., & Roland, E. J. (1996). Estimating the dollar value of health outcomes from drug-abuse interventions. Medical Care, 34(9), 890-910.

Furlong, W., Feeny, D., Torrance, G. W., & Barr, R. D. (2001). The Health Utilities Index (HUI) system for assessing health-related quality of life in clinical studies. Ann Med, 33, 375-384.

Goldstein, M. K., Clarke, A. E., Michelson, D., Garber, A. M., Bergen, M. R., & Lenert, L. A. (1994). Developing and testing a multimedia presentation of a health-state description. Multimedia Presentation of a Health State, 14(4), 336-344.

Gustafson, D. H., & Holloway, D. C. (1975). A decision theory approach to measuring severity in illness. Health Services Research, 97-106.

Harvey, R. P., Comer, C., Sanders, B., Westley, R., Marsh, W., Shapiro, H., et al. (1996). Model for outcomes assessment of antihistamine use for seasonal allergic rhinitis. J Allergy Clin Immunol, 97(6), 1233-1241.

Hawthorne, G., Richardson, J., & Day, N. A. (2001). A comparison of the Assessment of Quality of Life (AQoL) with four other generic utility instruments. Ann Med, 33, 358-370.

Herrmann, M., & Code, C. (1996). Weightings of items on the Code- Muller Protocols: The effects of clinical experience of aphasia therapy. Disability and Rehabilitation, 18(10), 509-514.

Hodder, S., Edwards, M., Brickley, M., & Shepherd, J. (1997). Multiattribute utility assessment of outcomes of treatment for head and neck cancer. British Journal of Cancer, 75(6), 898-902.

Hunt, M. E., & Ross, L. E. (1990). Naturally occurring retirement communities: A multiattribute examination of desirability factors. The Gerontological Society of America, 667-674.

Jain, N. L., & Kahn, M. G. (1992). Ranking radiotherapy treatment plans using decision-analytic and heuristic techniques. Computers and Biomedical Research, 25, 374-383.

Jain, N. L., & Kahn, M. G. (1994). Objective evaluation of radiation treatment plans. Proc Annu Symp Comput, 134-138.

Johnson, J. A., Ohinmaa, A., Murti, B., Sintonen, H., & Coons, S. J. (2000). Comparison of Finnish and U.S.-based visual analog scale valuations of the EQ-5D measure. Medical Decision Making, 20(3), 281-289.

Krabbe, P. F. M., Stouthard, M. E. A., Essink-Bot, M.-L., & Bonsel, G. J. (1999). The effect of adding a cognitive dimension to the EuroQol Multiattribute Health-Classification System. J Clin Epidemiol, 52(4), 293-301.

Krahn, M., Ritvo, P., Irvine, J., Tomlinson, G., Bezjak, A., Trachtenberg, J., et al. (2000). Construction of the Patient-Oriented Prostate Utility Scale (PORPUS): A multiattribute health state classification system for prostate cancer. Journal of Clinical Epidemiology, 53, 920-930.

Laufer, F. N. (1990). Managerial decision-making in the laboratory. Clinical Laboratory Management Review, 4(6), 425-431.

Linde, L., Edland, A., & Bergstrom, M. (1999). Auditory attention and multiattribute decision-making during a 33 hr. sleep-deprivation period: Mean performance and between-subject dispersions. Ergonomics, 42(5), 696-713.

Lipowski, E. E. (1993). How consumers choose a pharmacy. American Pharmacy, NS33(12), S14-S17.

MacPherson, D. W., & McQueen, R. (1993). Cryptosporidiosis: Multiattribute evaluation of six diagnostic methods. Journal of Clinical Microbiology, 31(2), 198-202.

Marta, E. (1997). Parent-adolescent interactions and psychosocial risk in adolescents: An analysis of communication, support and gender. Journal of Adolescence, 20, 473-487.

Mathias, S. D., Bates, M. M., Pasta, D. J., Cisternas, M. G., Feeny, D., & Patrick, D. L. (1997). Use of the Health Utilities Index with stroke patients and their caregivers. Stroke, 28(10), 1888-1894.

McCoy, S., Blayney-Chandramouli, J., & Mutnick, A. H. (1998). Using multiple pharmacoeconomic methods to conduct a cost-effectiveness analysis of histamine H2-receptor antagonists. Am J Health-Syst Pharm, 55, s8-s12.

Miyamoto, J. M., & Eraker, S. A. (1988). A multiplicative model of the utility of survival duration and health quality. Journal of Experimental Psychology, 117(1), 3-20.

Neumann, P. J., Sandburg, E. A., Araki, S. S., Kuntz, K. M., Feeny, D., & Weinstein, M. C. (2000). A comparison of HUI2 and HUI3 utility scores in Alzheimer's disease. Medical Decision Making, 20(4), 413-422.

Nord, E. (2001). Health state values from multiattribute utility instruments need correction. Ann Med, 33, 371-374.

Puelz, R. (1991). A selection model for employees confronted with health insurance alternatives. Benefits Quarterly (second Quarter), 18-22.

Revicki, D. A., Leidy, N. K., Brennan-Diemer, F., Sorensen, S., & Togias, A. (1998). Integrating patient preferences into health outcomes assessment. CHEST, 114(4), 998-1007.

Revicki, D. A., Leidy, N. K., Brennan-Diemer, F., Thompson, C., & Togias, A. (1998). Development and preliminary validation of the multiattribute Rhinitis Symptom Utility Index. Quality of Life Research, 7, 693-702.

Revicki, D. A., Simpson, K. N., Wu, A. W., & LaVallee, R. L. (1995). Evaluating the quality of life associated with rifabutin prophylaxis for Mycobacterium avium complex in persons with AIDS: Combining Q-TWIST and multiattribute utility techniques. Quality of Life Research, 4, 309-318.

Ross, L. E., & Mundt, J. C. (1988). Multiattribute modeling analysis of the effects of a low blood alcohol level on pilot performance. Human Factors, 30(3), 293-304.

Salazar, M. K. (1992). A study of breast self examination beliefs. AAOHN Journal, 40(9), 429-437.

Salazar, M. K., & Moor, C. d. (1995). An evaluation of mammography beliefs using a decision model. Health Education Quarterly, 22(1), 110-126.

Schumacher, G. E. (1991). Multiattribute evaluation in formulary decision making as applied to calcium-channel blockers. AJHP, 48, 301-308.

Sculpher, M. J., & O'Brien, B. J. (2000). Income effects of reduced health and health effects of reduced income: Implications for health-state valuation. Medical Decision Making, 20(2), 207-215.

Shipp, G. W., & Thakor, N. V. (1984). Multiattribute decision analysis of clinical errors: A case study of computerized arrhythmia detectors. Computers and Biomedical Research, 17, 116-128.

Sintonen, H. (2001). The 15D instrument of health-related quality of life: Properties and applications. Ann Med, 33, 328-336.

Smith, T. A., Dillon, D. M. B., Kotula, R. J., & Mutnick, A. H. (2001). Evaluation of antimicrobial surgical prophylaxis with multiattribute utility theory. Am J Health-Syst Pharm, 58, 251-255.

Stavem, K. (1999). Reliability, validity and responsiveness of two multiattribute utility measures in patients with chronic obstructive pulmonary disease. Quality of Life Research, 8, 45-54.

Stavem, K., Bjornaes, H., & Lossius, M. I. (2001). Properties of the 15D and EQ-5D utility measures in a community sample of people with epilepsy. Epilepsy Research, 44, 179-189.

Thompson, D. C., Rivara, F. P., & Thompson, R. S. (1996). Effectiveness of bicycle safety helmets in preventing head injuries. JAMA, 276(24), 1968-1973.

Torrance, G. W., Feeny, D., Furlong, W., Barr, R. D., Zhang, Y., & Wang, Q. (1996). Multiattribute utility function for a comprehensive health status classification system. Medical Care, 34(7), 702-722.

Trudel, J. G., Rivard, M., Dobkin, P. L., Leclerc, J.-M., & Robaey, P. (1998). Psychometric properties of the Health Utilities Index Mark 2 System in pediatric oncology patients. Quality of Life Research, 7, 421-4322.

Weber, M., & Borcherding, K. (1993). Behavioral influences on weight judgments in multiattribute decision making. European Journal of Operational Research, 67, 1-12.