Validity evidence of a computer simulation examination to measure decision-making skills in athletic training.

dc.contributor.author Bennett, Jason en_US
dc.contributor.department HPERS en_US
dc.date.accessioned 2014-06-20T15:57:35Z
dc.date.available 2014-06-20T15:57:35Z
dc.date.issued 2002 en_US
dc.description Adviser: William Whitehill. en_US
dc.description.abstract The purpose of the study was to collect validity evidence on a computer simulation program designed to measure decision-making skills in athletic training. Content-related, construct-related, and criterion-related validity data were collected. en_US
dc.description.abstract Twelve program directors of CAAHEP-accredited athletic training education programs served as subject-matter experts (SMEs). SMEs completed a content validity form and score forms for three simulations. Results showed that every simulation had a mean validity score of at least 6.33 out of a maximum of 8. Based on these results and the methods used to accumulate content validity data, the Computerized Simulation Test in Athletic Training (CSTAT) demonstrated a high level of content validity for measuring decision-making skills in athletic training. en_US
dc.description.abstract Proficiency, efficiency, and omission scores were collected from the CSTAT examination. The proficiency score is the sum of the points assigned to all selected options divided by the maximum score possible. The efficiency score is the proportion of an examinee's selections that are indicated options. The omission score is the proportion of contraindicated options that were selected. en_US
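As a rough illustration only, the three scores described above could be computed along the following lines in Python; the option list, point values, and examinee selections are hypothetical placeholders and are not drawn from the CSTAT itself.

# Minimal sketch of the three CSTAT-style scores. The option set, point
# values, and the examinee's selections below are hypothetical.
options = {
    # option id: (points if selected, "indicated" or "contraindicated")
    "check_airway":  (2, "indicated"),
    "apply_ice":     (1, "indicated"),
    "call_ems":      (2, "indicated"),
    "remove_helmet": (-2, "contraindicated"),
}
selected = {"check_airway", "apply_ice", "remove_helmet"}  # examinee's choices

max_possible = sum(pts for pts, _ in options.values() if pts > 0)
earned = sum(options[o][0] for o in selected)
contraindicated = [o for o, (_, s) in options.items() if s == "contraindicated"]

proficiency = earned / max_possible                  # points earned / maximum possible
efficiency = sum(options[o][1] == "indicated" for o in selected) / len(selected)
omission = sum(o in selected for o in contraindicated) / len(contraindicated)

print(round(proficiency, 3), round(efficiency, 3), round(omission, 3))  # -> 0.2 0.667 1.0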
dc.description.abstract The three groups in this study were certified athletic trainers (ATCs, n = 23), senior-level undergraduate students (n = 31), and junior- or sophomore-level undergraduate students (n = 16). ANOVA results established significant group differences (p < .05) for overall proficiency, efficiency, and omission scores. For overall proficiency, post-hoc testing revealed significant differences (p < .05) only between the ATC group and the junior-level group and between the senior-level group and the junior-level group; there was no significant difference between ATCs and senior-level students. For overall efficiency and omission, only the junior-level and ATC groups differed significantly. ANOVA results for the individual simulations revealed significant group differences on all three CSTAT scores for every simulation except simulations #1 and #5. en_US
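For readers unfamiliar with the analysis just described, a generic sketch of a one-way ANOVA followed by pairwise post-hoc comparisons across three groups is shown below. The score arrays are random placeholders, not the study data, and Tukey HSD is used only as an example post-hoc procedure; the abstract does not name the one actually used.

# Generic one-way ANOVA plus Tukey HSD post-hoc comparisons across three groups.
# The arrays below are random placeholders standing in for proficiency scores.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
atc = rng.normal(0.80, 0.05, 23)      # certified athletic trainers (n = 23)
senior = rng.normal(0.78, 0.05, 31)   # senior-level students (n = 31)
junior = rng.normal(0.70, 0.05, 16)   # junior/sophomore-level students (n = 16)

# Omnibus test for any difference among the group means
f_stat, p_value = f_oneway(atc, senior, junior)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post-hoc pairwise comparisons to see which groups differ
scores = np.concatenate([atc, senior, junior])
groups = ["ATC"] * 23 + ["senior"] * 31 + ["junior"] * 16
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))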
dc.description.abstract Ten subjects submitted their NATABOC certification examination results. No significant correlations were found between the CSTAT scores and the NATABOC written simulation score; however, when one outlier score was removed, two significant correlations (p < .01) emerged: between efficiency and the written simulation score, and between omission and the written simulation score. It was concluded that the CSTAT examination contained sufficient levels of content-related, construct-related, and criterion-related validity evidence to measure decision-making skills in athletic training. en_US
dc.description.degree D.A. en_US
dc.identifier.uri http://jewlscholar.mtsu.edu/handle/mtsu/3744
dc.publisher Middle Tennessee State University en_US
dc.subject.lcsh Decision making--Computer simulation en_US
dc.subject.lcsh Physical education and training--Computer simulation en_US
dc.subject.lcsh Health Sciences, Recreation en_US
dc.subject.lcsh Education, Physical en_US
dc.subject.lcsh Education, Technology of en_US
dc.subject.lcsh Education, Tests and Measurements en_US
dc.thesis.degreegrantor Middle Tennessee State University en_US
dc.thesis.degreelevel Doctoral en_US
dc.title Validity evidence of a computer simulation examination to measure decision-making skills in athletic training. en_US
dc.type Dissertation en_US
Files
Original bundle
Name: 3057597.pdf
Size: 3.76 MB
Format: Adobe Portable Document Format