
Interpreting Candidate Exam Results

This page provides information about how exam scores are derived and reported to Educator Preparation Programs (EPPs).

Each test taker's performance on an exam is evaluated against an established level of performance represented by a passing score.

  • Committees of Texas educators participate in standard-setting studies to recommend a passing score for each exam. The committees are composed of Texas educators from public and charter schools, university and EPP faculty, education service center staff, content experts, and representatives from professional educator organizations.
  • TEA presents the recommendation to the Commissioner for consideration. The Commissioner makes the final determination regarding the passing score for each exam. This score reflects the appropriate level of knowledge and skills required for effective performance by a beginning teacher in Texas public schools.

The Texas Examinations of Educator Standards (TExES) and Texas Examinations for Master Teachers (TExMaT) series use total scaled scores to summarize and report test taker performance. A total scaled score is a conversion of the numerical raw score achieved on the exam (e.g., 50 of 80 exam questions answered correctly) to a score in the predetermined range used for the exams. Total scaled scores provide a common reference scale for reporting exam results from different forms of an exam. The scoring scale currently in use for these exams is a 100 to 300 scale, with the passing score set at 240. Note that the TExMaT Master Reading Teacher exam has an additional passing requirement: a minimum score of 3 on the case study assignment is required to achieve a passing score.
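
The exact raw-to-scaled conversion is form-specific and is not given here. As a minimal sketch, assuming a simple linear mapping onto the 100 to 300 scale, the relationship between a raw score, a scaled score, and the 240 passing standard might look like this (Python):

    # Illustrative only: the real raw-to-scaled conversion is form-specific
    # and determined by the testing program. A linear mapping is assumed here
    # purely to show how raw scores, scaled scores, and the 240 passing
    # standard relate.

    PASSING_SCALED = 240
    SCALE_MIN, SCALE_MAX = 100, 300

    def scaled_score(raw_correct: int, total_questions: int) -> int:
        """Map a raw score onto the 100-300 reporting scale (assumed linear)."""
        fraction = raw_correct / total_questions
        return round(SCALE_MIN + fraction * (SCALE_MAX - SCALE_MIN))

    def passed(raw_correct: int, total_questions: int) -> bool:
        """Apply the 240 passing standard to the assumed scaled score."""
        return scaled_score(raw_correct, total_questions) >= PASSING_SCALED

    # The example from the text: 50 of 80 questions answered correctly.
    print(scaled_score(50, 80))  # 225 under this assumed linear mapping
    print(passed(50, 80))        # False: 225 falls below the 240 standard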

The Texas Assessment of Sign Communication (TASC) and Texas Assessment of Sign Communication–American Sign Language (TASC–ASL) are designed to elicit a representative sample of candidates' signed communication proficiency. Responses are scored on a five-point scale using a holistic scoring process, meaning that performance during the interview is evaluated on the basis of overall sign communication proficiency. Trained, calibrated scorers view each recorded interview and, working in collaboration, rate the candidate's proficiency. Performance at Level C or higher (i.e., Level C, B, or A) is required to pass the TASC and TASC–ASL.

One useful feature of the scoring process is that it yields domain-level and competency-level* raw scores as well as total exam scaled scores. While total exam scaled scores determine a test taker's pass/fail status and indicate in general how well the test taker did on the overall exam, domain-level and competency-level raw scores, when interpreted with care, may permit a more detailed analysis of the test taker's strengths and needs in relation to exam content.

Domains are composed of groups of competencies that address generally similar content. Domain-level and competency-level raw scores permit test takers to consider their performance on groups of selected-response questions that appeared on the exam form. However, test takers (and their faculty advisors) should exercise caution in evaluating performance based on these raw scores. Among the issues with these raw scores for individual domains or competencies are:

  • These scores are based on fewer exam questions than total exam scores and are inherently less stable. Domain-level raw scores (which may be based on as few as four exam questions) are more stable than competency-level raw scores (which may be based on as few as one or two scored exam questions), but both are less stable than total exam scores, and neither can support strong inferences or conclusions (see the illustration after this list).
  • Both competencies and domains vary in difficulty. A high score on a competency or domain with relatively difficult exam questions is not equivalent to a similarly high score on a competency or domain with relatively easy exam questions.
  • These scores cannot easily be converted to pass/fail scores or scaled scores; for this reason, test takers cannot conclude that they know more about the content covered in one domain than another merely because the raw percent of questions they answered correctly in one domain is higher than that in another domain.
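
One way to see the stability point is through the standard error of a percent-correct score, which shrinks as the number of questions grows. The sketch below (Python) is purely illustrative: the proficiency value and item counts are assumptions, and operational exams use more sophisticated measurement models than a simple binomial.

    # Illustrative sketch of score instability: the standard error of a
    # percent-correct score, sqrt(p * (1 - p) / n), grows as the number of
    # scored questions n shrinks. The proficiency p and item counts below
    # are assumptions chosen for illustration.

    from math import sqrt

    TRUE_PROFICIENCY = 0.7  # hypothetical chance of answering any one item correctly

    for label, n_items in [("competency (2 items)", 2),
                           ("domain (4 items)", 4),
                           ("total exam (80 items)", 80)]:
        se = sqrt(TRUE_PROFICIENCY * (1 - TRUE_PROFICIENCY) / n_items)
        print(f"{label}: standard error of percent correct is about {se:.0%}")

    # Output: about 32% for the competency, 23% for the domain, and 5% for
    # the total exam, which is why few-item raw scores swing so widely.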

Still, these raw scores do provide some information, and candidates looking for specific areas of content to study further should consider their domain-level and competency-level performance. Such a review may suggest content areas to revisit, additional course work to take, or independent reading to pursue before retaking an exam.

*Competency-level raw scores are not provided in results for the following examinations: Bilingual Target Language Proficiency Test (BTLPT) Spanish (190), Languages Other Than English (LOTE) examinations (610–613), TASC, and TASC–ASL. Domain-level raw scores and constructed-response specific raw scores are provided for the BTLPT Spanish and LOTE exams. TASC and TASC–ASL test takers receive only a holistic score.

Authorized users will have access to candidates' exam results through the Pearson score reporting website edReports. Once logged in to the website, users will use ResultsAnalyzer® to view reports and generate custom reports that include up-to-date results.

Authorized users will receive an email inviting them to establish an account on edReports. Once an account is established, users will be able to launch the ResultsAnalyzer® software tool.

With ResultsAnalyzer®, users will have the capability to:

  • view, analyze, reorganize, download, and print reports based on exam results data;
  • instantly access candidate, exam, and program data;
  • generate custom examination performance reports, domain and competency summaries, pass rate analyses, and retake analyses based on demographic filters;
  • customize data queries to align with program goals and areas of interest;
  • aggregate performance data across testing program years; and
  • export data to Microsoft Excel® or other report software and print graphics (a hypothetical analysis of such an export is sketched after this list).
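
Because ResultsAnalyzer® is a web-based tool, the sketch below (Python) does not script the tool itself; it only illustrates how a program might post-process an exported spreadsheet. The file name and column names (Exam, ScaledScore, AttemptNumber) are hypothetical assumptions, not the tool's actual export format.

    # Hypothetical post-processing of a spreadsheet exported from
    # ResultsAnalyzer®. All file and column names are assumed for
    # illustration; the real export layout may differ.

    import pandas as pd

    df = pd.read_excel("results_export.xlsx")  # hypothetical export file

    # Pass-rate analysis, applying the 240 passing standard for scaled scores.
    df["Passed"] = df["ScaledScore"] >= 240
    print(df.groupby("Exam")["Passed"].mean())

    # Simple retake analysis: pass rate by attempt number.
    print(df.groupby("AttemptNumber")["Passed"].mean())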

Data in ResultsAnalyzer® will be updated on each score report release date, provided that any of the program's candidates tested during the corresponding testing period.

How to Create Customized Reports

Refer to the Quick Reference Guide to ResultsAnalyzer® to learn how to generate common candidate-level and exam-level reports.

Support for ResultsAnalyzer®

  • An online user guide, ResultsAnalyzer®: Getting Started, is available within the ResultsAnalyzer® application on the score reporting website. The guide provides instructions on using the application, viewing reports, and generating custom reports.
  • Online training sessions will be provided to support EPPs in using the tool, interpreting data, and locating available resources.
  • Assistance for ResultsAnalyzer® is available by emailing es-raproductsupport@pearson.com.

Candidates who access their score reports in their testing accounts are directed to the Scores section of the program website, which provides information about interpreting scores on the various exams required for Texas educator certification.