Project OATS Final Report
II. Chronology of Project
A. Year 1 (1997-1998) Activities
1. National Survey of AT Outcome Instruments
The assessment instrumentation component of the research project included identification and piloting of existing instruments. After a thorough literature search and consultation with experts in the field, the Project OATS team compiled a list of 27 assessment instruments with potential for measuring assistive technology outcomes in the schools. The Project OATS Team then reduced the 27 instruments to 9, based on current use in educational settings around the country and similarity to the special education and therapy assessments normally used to evaluate children with special needs in the public school system. The nine were the Lifespace Access Profile, AT Screener, ASNAT, Assessment for AT, PEDI, SFA, QUEST, MPT, and SchoolFact; these were field tested to identify whether any one of them, or a combination, might be useful for AT outcomes measurement in the schools. Special education professionals (9 speech and language pathologists, 7 educators, and 6 occupational therapists) each administered one of the nine assessments to students within their regular caseloads. Data were collected to identify the assessments most sensitive to assistive technology process and outcome assessment, most generic (exportable to other schools), most feasible to administer, and closest to the construct of predicting and measuring educational outcome. After administering an assessment, each participant completed a questionnaire giving opinions about the instrument. The participants then discussed the instruments in small groups and presented their opinions to the others. The major finding of this study was the identification of the School Function Assessment (SFA) and SchoolFact as the instruments that best fit the identified criteria for further investigation as possible AT outcomes instruments in the schools.
B. Year 2 (1998-1999) Activities:
1. Development of AT Supplement to the SFA
The SFA was chosen for further investigation over SchoolFact because it was already being used by therapists and educators in the Wisconsin schools. Information from the field testing in year one was combined with scrutiny by the Project OATS Team to determine if in fact the SFA could be used to measure AT outcomes in the schools. A number of potential problems were identified with the SFA. They included:
a) Medical model nature of SFA
b) Difficulty in separating AT from other interventions provided
c) Not enough specific information about AT to determine the outcomes of specific devices.
These problems were discussed and possible solutions identified. The solutions included modifications to the SFA and creation of a new supplement to the SFA specific to AT.
2. Assessing the validity of the AT Supplement to the SFA
Traditional test and measurement theory requires study of the reliability and validity of any new instrument. Even though the AT Supplement to the SFA is a modification, it is still a new measure and requires scrutiny. Because of its size, the following study of the AT Supplement to the SFA was restricted to one important aspect of validity known as consequential validity. The concept of consequential validity grew out of construct validity, which asks whether an instrument measures what it purports to measure; consequential validity extends this to the validity of the meaning or interpretation of the scores. Literature supporting this definition has emerged in educational psychology over the past decade. The Project OATS study described here examined this aspect of the technical adequacy of the AT Supplement to the SFA. It also investigated whether the AT Supplement to the SFA provides more information than the SFA in identifying the effectiveness of assistive technology intervention. Participants targeted for inclusion in the study were school practitioners with varying backgrounds in assistive technology, drawn from a variety of locations. As the literature suggested, the study examined the interpretations made by therapists who reviewed either a completed AT Supplement to the SFA or a completed SFA. Interpretive data and questionnaire responses were analyzed by comparing the number of specific assistive-technology-related interpretations made by the participants with the interpretations made by assistive technology specialists. Results did not support the hypothesis that practitioners' perceptions of the overall quality of assistive technology interpretations differ when using the AT Supplement to the SFA.
Results did support the hypothesis that the number of matching interpretations about the outcome of AT implementation between participants and AT experts differs between participants using the AT Supplement to the SFA and those using the SFA.
There are four important implications for practice related to the results of this study.
1) Therapists in both groups may have been overconfident in their ability to identify assistive technology outcome interpretations.
2) Inequality in experience with the instrument may have contributed to the non-significant results found in the first t-test.
3) The results from the second analysis suggest that therapists using the AT Supplement to the SFA will formulate better interpretations of AT outcomes than those using the SFA.
4) The second analysis upholds an important aspect of construct validity of the AT Supplement to the SFA, lending support that the AT Supplement to the SFA may accurately measure the construct of AT outcomes in schools.
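The group comparison described above, counting each participant's interpretations that matched those of AT experts and testing whether the counts differ between the AT Supplement group and the SFA group, can be sketched as an independent-samples t-test. All counts below are invented for illustration; the report does not publish the raw data.

```python
# Hypothetical sketch of the group comparison described above.
# Counts of expert-matching interpretations per participant are
# invented placeholders, not the study's actual data.
from math import sqrt
from statistics import mean, stdev

def independent_t(a, b):
    """Two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

supplement_group = [5, 6, 4, 7, 6, 5]  # used the AT Supplement to the SFA
sfa_group = [2, 3, 2, 4, 3, 2]         # used the SFA alone

t = independent_t(supplement_group, sfa_group)
print(round(t, 2))  # large positive t would favor the Supplement group
```

With these placeholder counts the Supplement group's mean is higher, illustrating the kind of difference the second analysis found significant.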
3. Focus Group for Database Field Identification
During year 2 a focus group was held to begin identifying fields for the database. A group from the CESA #1 AT taskforce was invited to participate. The question "What are the important factors of successful assistive technology programs for children ages 3-21?" was posed. Participants were given 15 minutes to respond to the question on paper. Next, the group facilitator conducted a round robin to obtain answers from all participants. Finally, the participants grouped the list of answers into major categories: equipment, training, staff competence and staffing issues, staff attitude, IEP planning/assessment and follow up, team process, financial issues, student centered issues, support from administration, and program evaluation.
C. Year 3 (1999-2000) Activities:
1. Further Investigation of the AT Supplement to the SFA
A. Can a part of the SFA be used to measure AT outcomes?
The next Project OATS study specifically examined the Task Supports Adaptations Scale of Part II of the SFA. The Adaptations Scale includes assistive technology but does not distinguish it from other adaptations. In addition, the SFA Checklist contains adaptation interventions including assistive technology devices, environmental adaptations, and adaptations to the task. Examining the relationship between the Adaptations Scale and the assistive-technology-specific adaptations of the Checklist helped determine whether the Adaptations Score reflects only assistive technology adaptations or also includes other constructs. This study therefore examined the relationship between the assistive technology component of the Checklist and the Adaptations Scale of the Task Supports section of the SFA, to assess the construct validity of the Adaptations Scale as a measure of assistive technology outcome.
Results of this study were mixed. The first analysis demonstrated a significant relationship between the weighted number of assistive technology devices and the Physical Tasks Adaptations Total Raw Score. Interestingly, the second analysis did not show a significant relationship between the weighted number of assistive technology devices and the Adaptations Total Raw Score of the Cognitive/Behavioral Tasks. Three conclusions can be drawn regarding the use of the Task Supports section of the SFA as an outcome instrument for assistive technology interventions.
1) Assistive technology is a significant factor in the Physical Tasks Adaptations Scale, but it is not the only factor; the Adaptations Scale also consists of other components. The use of this scale as an assistive technology outcome instrument is therefore questionable.
2) The Adaptations Scale is not applicable to all scoring situations. When a student is not using adaptations and is not participating in activities as same-aged peers do, inconsistencies in scoring may occur.
3) The wording in the rating scale and the directions is confusing, often interchanging the words "uses" and "requires."
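The relationship examined in these analyses, between a weighted count of assistive technology devices from the Checklist and an Adaptations Total Raw Score, can be sketched as a Pearson correlation. The scores below are invented placeholders, not the study's data.

```python
# Hypothetical sketch of the correlation analysis described above.
# Weighted device counts and raw scores are invented placeholders.
from math import sqrt
from statistics import mean

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x)
                      * sum((b - my) ** 2 for b in y))

weighted_devices = [1, 3, 2, 5, 4, 6]          # weighted AT device count per student
physical_raw_scores = [10, 15, 11, 19, 18, 21]  # Physical Tasks Adaptations raw score

r = pearson_r(weighted_devices, physical_raw_scores)
print(round(r, 2))
```

A strong r here would mirror the significant relationship found for the Physical Tasks scale; running the same computation against Cognitive/Behavioral scores would mirror the second, non-significant analysis.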
2. Is the AT Supplement to the SFA sensitive enough to discriminate change based on AT intervention?
A study is underway to determine whether the scales on the AT Supplement to the SFA are sensitive enough to detect change in function based on AT intervention. The study solicited participants from Wisconsin to administer the AT Supplement to the SFA at two points in time with students who received some type of AT intervention. The data collected so far include 26 completed AT Supplement to the SFA forms. While the study is still in progress, preliminary results indicate that the AT Supplement to the SFA is in fact sensitive enough to detect change due to AT intervention.
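A sensitivity-to-change analysis of this kind is typically run as a paired comparison of the two administrations. The sketch below assumes one pre-intervention and one post-intervention score per student; the numbers are invented for illustration, not the study's 26 forms.

```python
# Hypothetical sketch of a paired (pre/post) sensitivity analysis.
# Scores are invented placeholders, not the study's actual data.
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired-samples t statistic on post-minus-pre differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

pre_scores = [40, 35, 50, 42, 38, 45]   # first administration
post_scores = [46, 41, 55, 49, 42, 52]  # after AT intervention

t = paired_t(pre_scores, post_scores)
print(round(t, 2))  # large t indicates the scale detected the change
```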
3. Database planning
During year three, Project OATS staff used the focus group information as a springboard to begin planning the database, which is intended to combine data from the outcomes instrument with information about AT services provided in the school setting. A database questionnaire was also developed during year three for administration locally to AT teams and nationally to special education directors. The database was designed so that it could be administered in questionnaire form over the phone if needed.
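A record in the planned database, combining outcome-instrument scores with information about AT services, might be organized around the focus group's major categories. All field names and values below are illustrative guesses; the project's actual schema is not published in this report.

```python
# Illustrative sketch of one student record in the planned database.
# Field names are guesses based on the focus group's major categories;
# they are not the project's actual schema.
student_record = {
    "student_id": "S001",
    "sfa_at_supplement_scores": {          # from the outcomes instrument
        "physical_tasks": 46,
        "cognitive_behavioral_tasks": 38,
    },
    "equipment": ["communication device"],  # equipment category
    "training": {"staff_hours": 6, "student_hours": 4},
    "staffing": {"at_specialist_on_team": True},
    "iep": {"at_in_iep": True, "follow_up_scheduled": True},
    "financial": {"funding_source": "district"},
    "administration_support": True,
    "program_evaluation": {"outcomes_reviewed_annually": True},
}

print(sorted(student_record))
```

Keeping service information alongside the outcome scores in one record is what would let the database relate AT services provided to measured student outcomes.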