BEYOND SCREENING AND PROGRESS MONITORING: AN EXAMINATION OF THE RELIABILITY AND CONCURRENT VALIDITY OF MAZE COMPREHENSION ASSESSMENTS FOR FOURTH-GRADE STUDENTS

dc.contributor.advisor Elleman, Amy
dc.contributor.author Brasher, Casey Faye
dc.contributor.committeemember Kim, Jwa
dc.contributor.committeemember Holt, Aimee
dc.contributor.department Education en_US
dc.date.accessioned 2017-05-26T17:31:00Z
dc.date.available 2017-05-26T17:31:00Z
dc.date.issued 2017-04-06
dc.description.abstract Reading comprehension assessments often lack instructional utility because they do not accurately pinpoint why a student has difficulty. The varying formats, directions, and response requirements of comprehension assessments lead to differential measurement of underlying skills and contribute to the substantial unshared variance noted among tests. Maze is an assessment tool used to screen and monitor reading comprehension performance. In this type of assessment, words are deleted throughout a passage and replaced with three options: the correct word and two distractors. Students are required to select the correct option as they read. Maze originally emerged as an assessment of reading comprehension intended to guide teachers in selecting an independent reading level for students. Over time, however, the purpose of maze shifted to screening and monitoring reading performance rather than instructional planning. Yet there is a pressing need for an assessment, or system of assessments, that can inform instruction for students with reading comprehension weaknesses. The present study examined the validity and reliability of different types of maze assessments (fixed-word deletion, word-feature deletion, and sentence deletion) and a multiple-choice assessment. All passages were created from informational news stories. All four assessment conditions demonstrated acceptable to excellent levels of internal consistency. Correlations between the conditions analyzed in the study and validated measures of reading comprehension varied significantly. The sentence deletion version of maze demonstrated significant correlations with two of the three comprehension tests and with a composite score for reading comprehension. Correlations with reader skills varied across types of maze. The conditions created for this study seemed to tap into a dimension of reading comprehension not measured by validated, standardized comprehension measures. Passage length and genre were suggested as possible reasons for the differences between the assessment conditions analyzed in this study and the validated comprehension tests. Further, a maze task involving sentence deletion emerged as a potential alternative to the way maze assessments are typically constructed. Implications for policy and practice are discussed in terms of analyzing student performance across measures when assessing reading comprehension.
dc.description.degree Ph.D.
dc.identifier.uri http://jewlscholar.mtsu.edu/xmlui/handle/mtsu/5291
dc.publisher Middle Tennessee State University
dc.subject Educational tests and measurements
dc.subject Reading assessment
dc.subject.umi Reading instruction
dc.thesis.degreegrantor Middle Tennessee State University
dc.thesis.degreelevel Doctoral
dc.title BEYOND SCREENING AND PROGRESS MONITORING: AN EXAMINATION OF THE RELIABILITY AND CONCURRENT VALIDITY OF MAZE COMPREHENSION ASSESSMENTS FOR FOURTH-GRADE STUDENTS
dc.type Dissertation
Files
Original bundle
Name: Brasher_mtsu_0170E_10806.pdf
Size: 845.47 KB
Format: Adobe Portable Document Format