Graduate Record Examination

From Wikipedia, the free encyclopedia

The Graduate Record Examination, or GRE, is a commercially run standardized test that is an admissions requirement for many graduate schools, principally in the United States but also in other English-speaking countries. Created and administered by the Educational Testing Service (ETS) in 1949,[1] the exam primarily tests abstract thinking skills in the areas of mathematics, vocabulary, and analytical writing. The GRE is typically a computer-based exam administered at qualified testing centers; however, paper-based exams are offered in areas of the world that lack the necessary technology.

In the graduate school admissions process, the level of emphasis that is placed upon GRE scores varies widely between schools and even departments within schools. The importance of a GRE score can range from being an important selection factor to being a mere admission formality.

Critics of the GRE have argued that the exam format is so rigid that it effectively tests only how well a student can conform to a standardized test-taking procedure.[2] ETS responded by announcing plans in 2006 to radically redesign the test structure starting in the fall of 2007; however, the company has since announced, "Plans for launching an entirely new test all at once were dropped, and ETS decided to introduce new question types and improvements gradually over time." The new questions have been introduced gradually since November 2007.[3]

In the United States, the cost of the general test is $140 as of July 1, 2008, although ETS will reduce the fee under certain circumstances. ETS erases all test records older than five years, and graduate program policies on accepting scores older than five years vary.

Structure

The exam consists of four sections. The first is a writing section, while the other three are multiple-choice sections. One multiple-choice section tests verbal skills, another tests quantitative skills, and a third is an experimental section that is not included in the reported score. Test takers do not know which of the three multiple-choice sections is the experimental one. The entire test procedure takes about 4 hours.[4]

Analytical writing section

The analytical writing section consists of two different essays, an "issue task" and an "argument task". The writing section is graded on a scale of 0–6, in half-point increments. The essays are written on a computer using a word processing program designed specifically by ETS. The program allows only basic functions and does not contain a spell-checker or other advanced features. Each essay is scored by at least two readers on a six-point holistic scale. If the two scores are within one point of each other, their average is taken; if they differ by more than a point, a third reader examines the response.
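The adjudication rule above can be summarized in a short sketch. The function name is invented for illustration, and because the text does not say how a third reading is combined with the first two, that part of the logic is only an assumption, not ETS's actual procedure.

```python
from typing import Optional

def combine_reader_scores(first: float, second: float,
                          third: Optional[float] = None) -> float:
    """Combine essay reader scores per the adjudication rule described above.

    Scores are on a 0-6 scale in half-point increments. If the first two
    readers agree within one point, their average is used. Otherwise a
    third reading is required; how that third score is combined is not
    stated in the text, so the resolution below is only an assumption.
    """
    if abs(first - second) <= 1:
        return (first + second) / 2
    if third is None:
        raise ValueError("scores differ by more than a point; a third reading is needed")
    # Assumed resolution: average the third score with whichever of the
    # first two scores it is closer to.
    closer = min((first, second), key=lambda s: abs(s - third))
    return (closer + third) / 2
```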

Issue task

The test taker is able to choose between two topics upon which to write an essay. The time allowed for this essay is 45 minutes.[5]

Argument task

The test taker is given an "argument" and asked to write an essay critiquing it. Typically, the task requires the test taker to identify and critique the logical fallacies of the argument. The time allotted for this essay is 30 minutes.[6]

Verbal section

One graded multiple-choice section is always a verbal section, consisting of analogies, antonyms, sentence completions, and reading comprehension passages. Multiple-choice sections are graded on a scale of 200–800, in 10-point increments. This section primarily tests vocabulary, and average scores in this section are substantially lower than those in the quantitative section.[7] A typical examination consists of 30 questions, with 30 minutes allotted to complete the section.[8]

Quantitative section

The quantitative section, the other graded multiple-choice section, consists of problem-solving and quantitative-comparison questions that test high school-level mathematics. It is graded on a scale of 200–800, in 10-point increments. A typical examination consists of 28 questions, with 45 minutes allotted to complete the section.[9]

Experimental section

The experimental section, which can be a verbal, quantitative, or analytical writing task, contains new questions that ETS is considering for future use. Although the experimental section does not count toward the test taker's score, it is unidentified and appears identical to the scored sections. Because test takers have no reliable way of knowing which section is experimental, they are effectively forced to complete it or risk seriously damaging their final scores.[10]

If the experimental section appears as an analytical writing (essay) task and an "issue"-type question is presented, a choice between two topics is not given. This, combined with the fact that the genuine analytical writing section is always administered first, can help the test taker deduce which section is experimental and give it correspondingly less weight.[citation needed]

Scoring

Computerized adaptive testing

The common (Verbal and Quantitative) multiple-choice portions of the exam currently use computer-adaptive testing (CAT) methods that automatically change the difficulty of questions as the test taker proceeds with the exam, depending on the number of correct or incorrect answers that are given. The test taker is not allowed to go back and change the answers to previous questions, and some type of answer must be given before the next question is presented.

The first question presented in a multiple-choice section is an "average level" question that about half of GRE test takers will answer correctly. If it is answered correctly, subsequent questions become more difficult; if it is answered incorrectly, subsequent questions become easier until a question is answered correctly.[11] This approach yields scores of accuracy similar to that of a fixed-form test while using approximately half as many items.[12] The effect is moderated on the GRE, however, because it has a fixed length; a true CAT is variable-length and stops once it has zeroed in on the candidate's ability level.
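As an illustration only, not ETS's actual item-selection algorithm (which is proprietary), the adaptive principle described above can be sketched as follows; the function name, fixed step size, and difficulty bounds are assumptions made for the example.

```python
def next_item_difficulty(current: float, answered_correctly: bool,
                         step: float = 0.5,
                         lowest: float = -3.0, highest: float = 3.0) -> float:
    """Toy illustration of adaptive difficulty selection.

    Difficulty is on an arbitrary scale (roughly an IRT difficulty, or "b",
    parameter). A correct answer makes the next item harder; an incorrect
    answer makes it easier. Real CAT engines pick the item that is most
    statistically informative about the examinee, not a fixed step.
    """
    change = step if answered_correctly else -step
    return max(lowest, min(highest, current + change))

# Example: start with an average-difficulty item and adapt to the responses.
difficulty = 0.0
for correct in (True, True, False, True, False):
    difficulty = next_item_difficulty(difficulty, correct)
print(difficulty)  # 0.5 after the response pattern above
```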

The actual scoring of the test is done with item response theory (IRT). Although CAT is associated with IRT, IRT is also used to score non-CAT exams; the GRE subject tests, which are administered in the traditional paper-and-pencil format, use the same IRT scoring algorithm. What CAT adds is that items are dynamically selected so that the test taker sees only items of appropriate difficulty. Besides the psychometric benefits, this avoids wasting the examinee's time on items that are far too hard or too easy, which occurs in fixed-form testing.
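A minimal sketch of what IRT scoring involves is given below, using the common three-parameter logistic (3PL) model. The item parameters, response pattern, and grid-search estimator are illustrative assumptions, not ETS's operational model or parameters.

```python
import math

def p_correct(theta: float, a: float, b: float, c: float = 0.0) -> float:
    """Three-parameter logistic (3PL) item response function.

    theta is the examinee's ability, a the item's discrimination, b its
    difficulty, and c its pseudo-guessing parameter; the return value is
    the modelled probability of answering the item correctly.
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def log_likelihood(theta: float, items, responses) -> float:
    """Log-likelihood of a response pattern under the 3PL model; the
    ability estimate is the theta that maximizes this quantity."""
    total = 0.0
    for (a, b, c), correct in zip(items, responses):
        p = p_correct(theta, a, b, c)
        total += math.log(p if correct else 1.0 - p)
    return total

# Hypothetical item parameters (a, b, c) and a response pattern; a coarse
# grid search stands in for the numerical optimization a real scorer uses.
items = [(1.2, -0.5, 0.2), (0.9, 0.0, 0.2), (1.5, 0.8, 0.2)]
responses = [True, True, False]
theta_hat = max((t / 10.0 for t in range(-40, 41)),
                key=lambda t: log_likelihood(t, items, responses))
```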

Scaled score percentiles

The percentiles of the current test are as follows:[13]

Scaled score    Verbal Reasoning %    Quantitative Reasoning %
800             99                    94
780             99                    90
760             99                    86
740             99                    82
720             98                    77
700             97                    72
680             96                    68
660             94                    63
640             91                    58
620             89                    53
600             85                    49
580             81                    44
560             76                    40
540             71                    35
520             65                    31
500             60                    28
480             55                    24
460             49                    21
440             43                    18
420             37                    15
400             31                    13
380             26                    11
360             21                    9
340             15                    7
320             10                    5
300             6                     4
280             3                     3
260             1                     2
240             1                     1
220             0                     1
200             0                     0
Mean score      465                   584
Analytical Writing score    Writing %
6                           96
5.5                         88
5                           73
4.5                         54
4                           33
3.5                         18
3                           7
2.5                         2
2                           1
1.5                         0
1                           0
0.5                         0
Mean score                  4.1
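For readers who want to work with the tables programmatically, a small lookup sketch follows. Only a handful of verbal rows from the table above are included, purely for illustration; the dictionary and function names are invented for the example.

```python
# Abbreviated verbal score-to-percentile mapping taken from the table above.
VERBAL_PERCENTILES = {800: 99, 700: 97, 600: 85, 500: 60, 400: 31, 300: 6, 200: 0}

def verbal_percentile(scaled_score: int) -> int:
    """Return the percentile for a scaled verbal score, falling back to the
    nearest lower score present in this abbreviated table."""
    eligible = [s for s in VERBAL_PERCENTILES if s <= scaled_score]
    if not eligible:
        raise ValueError("scaled scores below 200 are not on the GRE scale")
    return VERBAL_PERCENTILES[max(eligible)]

print(verbal_percentile(650))  # 85, from the 600 row of the abbreviated table
```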

Comparisons for "Intended Graduate Major" are "limited to those who earned their college degrees up to two years prior to the test date." ETS provides no score data for "non-traditional" students who have been out of school more than two years, although its own report "RR-99-16" indicated that 22% of all test takers in 1996 were over the age of 30.

Use in admissions

Many graduate schools in English-speaking countries (especially in the United States) require GRE results as part of the admissions process. The GRE is a standardized test intended to measure the abilities of all graduates in tasks of general academic nature, regardless of their fields of specialization. The GRE is supposed to measure the extent to which undergraduate education has developed an individual's verbal and quantitative skills in abstract thinking.

Unlike other standardized admissions tests (such as the SAT, LSAT, and MCAT), the use and weight of GRE scores vary considerably not only from school to school but also from department to department and from program to program. Programs in the liberal arts may consider only the applicant's verbal score, while mathematics and science programs may consider only quantitative ability; however, since most applicants to mathematics, science, or engineering graduate programs have high quantitative scores, the verbal score can become a deciding factor even in these programs. Some schools use the GRE in admissions decisions but not in funding decisions; others use it to select scholarship and fellowship candidates but not for admissions. In some cases, the GRE may be a general requirement for graduate admissions imposed by the university, while particular departments may not consider the scores at all. Graduate schools typically provide information about how the GRE is considered in admissions and funding decisions, along with the average scores of previously admitted students. The best way to find out how a particular school or program evaluates a GRE score in the admissions process is to contact the person in charge of graduate admissions for the specific program in question (not the graduate school in general).

Programs that involve significant expository writing require the submission of a prepared writing sample, which is considered more useful than the analytical writing section in determining writing ability. However, the writing scores of foreign students are sometimes given extra scrutiny and used as an indicator of overall comfort with and mastery of conversational English.

GRE Subject Tests

In addition to the General Test, there are also eight GRE Subject Tests testing knowledge in the specific areas of Biochemistry, Cell and Molecular Biology, Biology, Chemistry, Computer Science, Literature in English, Mathematics, Physics, and Psychology. In the past, subject tests were also offered in the areas of Economics, Revised Education, Engineering, Geology, History, Music, Political Science, and Sociology. In April 1998, the Revised Education and Political Science exams were discontinued. In April 2000, the History and Sociology exams were discontinued, and the other four were discontinued in April 2001.[2]

Preparation

A variety of resources are available for those wishing to prepare for the GRE. Upon registration, ETS provides preparation software called PowerPrep, which contains two practice tests of retired questions, as well as further practice questions and review material. Since the software replicates both the test format and the questions used, it can be useful for predicting actual GRE scores. ETS does not license its past questions to any other company, making it the only source of official retired material. ETS used to publish the "BIG BOOK", which contained a number of actual GRE questions, but this publication has been discontinued. Several companies provide courses, books, and other unofficial preparation materials.

ETS has claimed that the content of the GRE is "un-coachable"; however, many test preparation companies, such as Kaplan, Princeton Review, IMS Learning Resources, and VISU, claim that the test format is so rigid that familiarizing oneself with the test's organization, timing, specific foci, and the use of process of elimination is the best way to increase a GRE score.[14]

Testing locations

While the general and subject tests are held at many undergraduate institutions, the computer-based general test is only held at test centers with appropriate technological accommodations. Students in major cities in the United States, or those attending large U.S. universities, will usually find a nearby test center, while those in more isolated areas may have to travel a few hours to an urban or university location. Many industrialized countries also have test centers, but at times test-takers must cross country borders.

Criticism

Test takers complain about the strict test center rules. For instance, test takers may not use pens or bring their own scrap paper. Paper and pencils are provided at the testing center. Food and drink are prohibited in the test centers, as are chewing gum, jackets, and hats.

Bias

Critics have claimed that the computer-adaptive methodology may discourage some test takers, because question difficulty changes with performance.[citation needed] For example, a test taker presented with remarkably easy questions halfway into the exam may infer that they are not performing well, which can affect their performance as the exam continues, even though perceived difficulty is subjective. By contrast, standard testing methods may discourage students by giving them more difficult items earlier on.

Critics have also stated that the computer-adaptive method of placing more weight on the first several questions is biased against test takers who typically perform poorly at the beginning of a test due to stress or confusion before becoming more comfortable as the exam continues.[15] Of course, standard fixed-form tests could equally be said to be "biased" against students with less testing stamina, since they would need to be approximately twice the length of an equivalent computer-adaptive test to obtain a similar level of precision.[16]

The GRE has also been subjected to the same racial bias criticisms that have been lodged against other admissions tests. In 1998, the Journal of Blacks in Higher Education noted that the mean score for black test-takers in 1996 was 389 on the verbal section, 409 on the quantitative section, and 423 on the analytic section, while white test-takers averaged 496, 538, and 564, respectively.[17] Note that simple mean score differences do not constitute evidence of bias unless the populations are known to be equal in ability, and insisting that group score differences are direct evidence of a bad test is an extreme position.[18] A more effective, accepted, and empirical approach is the analysis of differential test functioning, which examines the differences in item response theory curves for subgroups; the best approach for this is the DFIT framework.[19]

There is also a bias toward students who have the financial resources to take private test-preparation classes. These classes do typically result in better scores;[citation needed] however, many such companies and tutors focus solely on how to use the test's format to one's advantage rather than on learning the material the exam covers.

Weak predictor of graduate school performance

The GRE is criticized for not being a true measure of whether a student will be successful in graduate school.

While the verbal section tests vocabulary and verbal reasoning, the vocabulary employed is not specifically relevant to any particular area of study, and (in the case of analogies and antonyms) is presented without context. The quantitative portion of the test covers topics that are far too elementary for any program in the fields of mathematics or science, as well as being irrelevant for the study of most liberal arts topics. The Analytic Writing section (derived from ETS' unpopular Writing Assessment Test) may be less useful in assessing writing ability than a prepared writing sample, or than a "personal statement" or "statement of purpose" relevant to the appropriate field (which is also required for admissions by many programs).

Robert Sternberg of Tufts University found that the GRE general test was weakly predictive of success in graduate studies in psychology. The weak predictive power may be related to the mathematics portion of the GRE general test, because a good foundation in mathematics is important for understanding advanced statistics. However, in some branches of psychology, statistics plays only a small role.

The mathematical portion of the GRE general test is the only area of the GRE general test that may have predictive ability in the natural sciences. The natural sciences require a strong foundation in mathematics for success in both core courses and in statistical analysis related to research. However, it is not clear whether the GRE accurately assesses mathematical skills required for success in graduate school.

ETS published a report ("What is the Value of the GRE?") that points to the predictive value of the GRE for a student's index of success at the graduate level.[20]

Validity

A meta-analysis of the GRE's validity in predicting graduate school success found a correlation of .3 to .4 between the GRE and both first-year and overall graduate GPA. The correlation between GRE score and graduate school completion rates ranged from .11 (for the now defunct analytical section) to .39 (for the GRE subject test).[21]
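For readers unfamiliar with the statistic, the coefficients quoted in this section are Pearson product-moment correlations. The sketch below shows how such a coefficient is computed; the GRE scores and GPAs in it are entirely made up for the example and are not data from any cited study.

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient between two samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical GRE scores and first-year GPAs, used only to show the computation.
gre_scores = [520, 580, 610, 650, 700, 730, 760]
first_year_gpas = [3.1, 3.0, 3.4, 3.2, 3.6, 3.3, 3.7]
r = pearson_r(gre_scores, first_year_gpas)
```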

However, a lack of correlation appears when the analysis is extended to earlier literature; such a review appeared in a 1985 issue of the journal Research in Higher Education. Over eighty pages in length, it is one of the most exhaustive literature reviews on the question of test validity. The author, Leonard Baird, focused on studies completed between 1966 and 1984 and reported in any of nineteen highly regarded scholarly journals. In study after study, many of the reported correlation coefficients were zero or near zero, and some studies even showed significant negative coefficients. Most strikingly, many of these negative correlations appear in studies concerning the relationship between test scores and the number of publications and citations for graduates of PhD programs. For instance: "Clark and Centra studied two samples of doctoral recipients… The resulting sample consisted of 239 chemists, 142 historians, and 221 psychologists, all of whom had at least one GRE score. In chemistry, the correlation of number of articles and book chapters with GRE-verbal was -.02; with GRE-quantitative it was -.01; and with GRE-advanced it was .15… For all historians, these correlations were -.24, -.14, and .00. For all psychologists, the correlations were -.05, -.02, and .02. Clark and Centra also examined the distribution of number of publications by GRE scores. The distributions were essentially flat, with no particular trend. In fact, the largest number of publications was reported by the lowest scoring groups in all three fields."[22]

However, it should be noted that the GRE has been substantially revised since the publication of this 1985 study. Moreover, the GRE does not claim to predict lifetime professional success. It is designed to correlate with graduate school factors, as mentioned previously.

Plans for the revised GRE

In 2006, ETS announced plans to make significant changes to the format of the GRE. Planned changes for the revised GRE included a longer testing time, a departure from computer-adaptive testing, a new grading scale, and an enhanced focus on reasoning skills and critical thinking in both the quantitative and verbal sections.[23]

On April 2, 2007, ETS announced the decision to cancel plans for revising the GRE.[24] The announcement cited concerns over the ability to provide clear and equal access to the new test after the planned change as the reason for the cancellation. ETS did state, however, that it plans "to implement many of the planned test content improvements in the future", although exact details regarding those changes have not yet been announced.

Changes to the GRE took effect on November 1, 2007, as ETS started to include new types of questions in the exam. The changes mostly center on "fill in the blank" questions in both the mathematics and verbal sections that require the test taker to supply the answer directly rather than choose from a multiple-choice list. ETS currently plans to introduce two of these new question types in each quantitative or verbal section, while the majority of questions will be presented in the regular format.[25]

In January 2008, the reading comprehension questions within the verbal sections were reformatted: passages' "line numbers will be replaced with highlighting when necessary in order to focus the test taker on specific information in the passage" to "help students more easily find the pertinent information in reading passages."[26]

GRE prior to October 2002

Prior to October 2002, the GRE had a separate Analytical Ability section which tested candidates on logical and analytical reasoning abilities. This section has now been replaced by the Analytical Writing portion.

References

  1. ^ Alternative Admissions and Scholarship Selection Measures in Higher Education.
  2. ^ Princeton Review, Cracking the GRE, 2007 Edition (2006), p. 19. ISBN 0375765514, ISBN-13 978-0375765513.
  3. ^ GRE General Test to Include New Questions
  4. ^ GRE Test Content
  5. ^ GRE Test Content
  6. ^ GRE Test Content
  7. ^ PowerScore GRE Preparation. Retrieved February 4, 2007, from PowerScore GRE Preparation.
  8. ^ GRE Test Content
  9. ^ GRE Test Content
  10. ^ GRE Test Content
  11. ^ Princeton Review, Cracking the GRE, 2007 Edition (2006), p. 19. ISBN 0375765514, ISBN-13 978-0375765513.
  12. ^ Weiss, D.J., & Kingsbury, G.G. (1984). Application of computerized adaptive testing to educational problems. Journal of Educational Measurement, 21, 361-375.
  13. ^ GRE: Guide to the Use of Scores 2007-08. Retrieved October 25, 2007, from Guide to the Use of Scores 2007-08.
  14. ^ Princeton Review, Cracking the GRE, 2007 Edition (2006), p. 19. ISBN 0375765514, ISBN-13 978-0375765513.
  15. ^ "Testing service cancels February GRE"
  16. ^ Weiss, D.J., & Kingsbury, G.G. (1984). Application of computerized adaptive testing to educational problems. Journal of Educational Measurement, 21, 361-375.
  17. ^ "Estimating the Effect a Ban on Racial Preferences Would Have on African- American Admissions to the Nation's Leading Graduate Schools." The Journal of Blacks in Higher Education, No. 19. (Spring, 1998), pp. 80–82.
  18. ^ The Achievement Gap: Test Bias or School Structures? National Association of Test Directors 2004 Symposia [1]
  19. ^ Oshima, T. C., & Morris, S. B. (2008). Raju's Differential Functioning of Items and Tests (DFIT). Educational Measurement: Issues and Practice, 27(3), 43-50.
  20. ^ http://www.ets.org/Media/Tests/GRE/pdf/grevalue.pdf
  21. ^ http://web.uvic.ca/psyc/lindsay/teaching/499/readings/kuncel.pdf
  22. ^ Leonard L. Baird, "Do Grades and Tests Predict Adult Accomplishment?" Research in Higher Education 23, no. 1, 1985, page 25.
  23. ^ Comparison Chart of GRE Changes
  24. ^ Plans for the Revised GRE Cancelled
  25. ^ GRE General Test to Include New Question Types in November
  26. ^ Revisions to the Computer-based GRE General Test in 2008
