A test that attempts to measure innate, rather than acquired, intellectual ability.
It is now generally believed that a child's performance in an intelligence test can be affected by environment, cultural background, and teaching. There is scepticism about the accuracy of intelligence tests, but they are still widely used as a diagnostic tool when children display learning difficulties. ‘Sight and sound’ intelligence tests, developed by Christopher Brand in 1981, avoid cultural bias and the pitfall that scores improve with practice. Subjects are shown pairs of lines flashed on a screen at increasing speed and are asked to identify the shorter of each pair; when two notes are played over headphones, they are asked to identify which is the higher. These results correlate closely with scores on other intelligence tests.
Workers in this field have included Francis Galton, Alfred Binet (1857–1911), Cyril Burt, and Hans Eysenck. Binet devised the first intelligence test in 1905. The concept of intelligence quotient (IQ) was adopted by US psychologist Lewis Terman in 1915. The IQ is calculated according to the formula IQ = (MA ÷ CA) × 100, in which MA is ‘mental age’ (the age at which an average child is able to perform given tasks) and CA is ‘chronological age’; hence an average person has an IQ of 100. Intelligence tests were first used on a large scale in World War I, when 2 million drafted men were tested in the USA in 1917. They were widely used in UK education as part of the eleven-plus selection procedures, on the assumption that inborn intelligence was unalterable. Most psychologists now accept a much broader definition of intelligence, including spatial, creative, and problem-solving abilities, which are often highly sought after in adult life but not measured by conventional intelligence tests.
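The ratio formula above can be sketched as a small calculation (the function name is illustrative, not from any standard library):

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Ratio IQ: (mental age / chronological age) x 100."""
    return mental_age / chronological_age * 100

# A child of 10 performing at the level of an average 12-year-old:
print(ratio_iq(12, 10))  # 120.0

# Mental age equal to chronological age gives the average score of 100:
print(ratio_iq(8, 8))  # 100.0
```

As the formula shows, a child whose mental age matches their chronological age always scores 100, which is why 100 is defined as the average.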