Intelligence


Intelligence has been defined in many different ways, such as in terms of one’s capacity for logic, abstract thought, understanding, self-awareness, communication, learning, emotional knowledge, memory, planning, creativity and problem solving. It can also be described more generally as the ability to perceive or retain knowledge or information and to apply it to other knowledge or information, building referable models of understanding of any size or complexity, whether through conscious or subconscious intent.

Intelligence is most widely studied in humans, but has also been observed in non-human animals and in plants. Artificial intelligence is the simulation of intelligence in machines.

Within the discipline of psychology, various approaches to human intelligence have been adopted. The psychometric approach is especially familiar to the general public, as well as being the most researched and by far the most widely used in practical settings.[1]

History of the term

Main article: Nous

Intelligence derives from the Latin verb intelligere, to comprehend or perceive. A form of this verb, intellectus, became the medieval technical term for understanding, and a translation for the Greek philosophical term nous. This term was, however, strongly linked to the metaphysical and cosmological theories of teleological scholasticism, including theories of the immortality of the soul and the concept of the Active Intellect (also known as the Active Intelligence). This entire approach to the study of nature was strongly rejected by early modern philosophers such as Francis Bacon, Thomas Hobbes, John Locke, and David Hume, all of whom preferred the word “understanding” in their English philosophical works.[2][3] Hobbes, for example, in his Latin De Corpore, used “intellectus intelligit” (translated in the English version as “the understanding understandeth”) as a typical example of a logical absurdity.[4] The term “intelligence” has therefore become less common in English-language philosophy, but it was later taken up (with the scholastic theories which it now implies) in more contemporary psychology.

Definitions

The definition of intelligence is controversial. Some groups of psychologists have suggested the following definitions:

  1. From “Mainstream Science on Intelligence” (1994), an editorial statement by fifty-two researchers:

    A very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—“catching on,” “making sense” of things, or “figuring out” what to do.[5]

  2. From “Intelligence: Knowns and Unknowns” (1995), a report published by the Board of Scientific Affairs of the American Psychological Association:

    Individuals differ from one another in their ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, to overcome obstacles by taking thought. Although these individual differences can be substantial, they are never entirely consistent: a given person’s intellectual performance will vary on different occasions, in different domains, as judged by different criteria. Concepts of “intelligence” are attempts to clarify and organize this complex set of phenomena. Although considerable clarity has been achieved in some areas, no such conceptualization has yet answered all the important questions, and none commands universal assent. Indeed, when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen, somewhat different, definitions.[6][7]

Besides those definitions, psychology and learning researchers also have suggested definitions of intelligence such as:

Researcher Quotation
Alfred Binet Judgment, otherwise called “good sense,” “practical sense,” “initiative,” the faculty of adapting one’s self to circumstances … auto-critique.[8]
David Wechsler The aggregate or global capacity of the individual to act purposefully, to think rationally, and to deal effectively with his environment.[9]
Lloyd Humphreys “…the resultant of the process of acquiring, storing in memory, retrieving, combining, comparing, and using in new contexts information and conceptual skills.”[10]
Cyril Burt Innate general cognitive ability[11]
Howard Gardner To my mind, a human intellectual competence must entail a set of skills of problem solving — enabling the individual to resolve genuine problems or difficulties that he or she encounters and, when appropriate, to create an effective product — and must also entail the potential for finding or creating problems — and thereby laying the groundwork for the acquisition of new knowledge.[12]
Linda Gottfredson The ability to deal with cognitive complexity.[13]
Sternberg & Salter Goal-directed adaptive behavior.[14]
Reuven Feuerstein The theory of Structural Cognitive Modifiability describes intelligence as “the unique propensity of human beings to change or modify the structure of their cognitive functioning to adapt to the changing demands of a life situation.”[15]
Charles Spearman “…all branches of intellectual activity have in common one fundamental function, whereas the remaining or specific elements of the activity seem in every case to be wholly different from that in all the others.”[16]

What is considered intelligent varies with culture. For example, when asked to sort, the Kpelle people took a functional approach: a Kpelle participant stated that “the knife goes with the orange because it cuts it.” When asked how a fool would sort, they sorted linguistically, putting the knife with other implements and the orange with other foods, which is the style considered intelligent in other cultures.[17]

Human intelligence

Main article: Human intelligence

Human intelligence is the intellectual capacity of humans, which is characterized by perception, consciousness, self-awareness, and volition. Through their intelligence humans possess the cognitive abilities to learn, form concepts, understand, and reason, including the capacities to recognize patterns, comprehend ideas, plan, problem solve, and use language to communicate. Intelligence enables humans to experience and think.

Animal and plant intelligence

The common chimpanzee can use tools. This chimpanzee is using a stick to get food.

Although humans have been the primary focus of intelligence researchers, scientists have also attempted to investigate animal intelligence, or more broadly, animal cognition. These researchers are interested in studying both mental ability in a particular species, and comparing abilities between species. They study various measures of problem solving, as well as numerical and verbal reasoning abilities. Some challenges in this area are defining intelligence so that it has the same meaning across species (e.g. comparing intelligence between literate humans and illiterate animals), and also operationalizing a measure that accurately compares mental ability across different species and contexts.

Wolfgang Köhler’s research on the intelligence of apes is an example of research in this area. Stanley Coren’s The Intelligence of Dogs is a notable book on dog intelligence.[18] (See also: Dog intelligence.) Non-human animals particularly noted and studied for their intelligence include chimpanzees, bonobos (notably the language-using Kanzi) and other great apes, dolphins, elephants and to some extent parrots, rats and ravens.

Cephalopod intelligence also provides important comparative study. Cephalopods appear to exhibit characteristics of significant intelligence, yet their nervous systems differ radically from those of backboned animals. Vertebrates such as mammals, birds, reptiles and fish have shown a fairly high degree of intellect that varies according to each species. The same is true with arthropods.

It has been argued that plants should also be classified as being in some sense intelligent based on their ability to sense the environment and adjust their morphology, physiology and phenotype accordingly.[19][20]

Artificial intelligence

Artificial intelligence (or AI) is both the intelligence of machines and the branch of computer science which aims to create it, through “the study and design of intelligent agents”[21] or “rational agents”, where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success.[22] Achievements in artificial intelligence include constrained and well-defined problems such as games, crossword-solving and optical character recognition and a few more general problems such as autonomous cars.[23] General intelligence or strong AI has not yet been achieved and is a long-term goal of AI research.

Among the traits that researchers hope machines will exhibit are reasoning, knowledge, planning, learning, communication, perception, and the ability to move and manipulate objects.[21][22] In the field of artificial intelligence there is no consensus on how closely the brain should be simulated.


Impact of health on intelligence

Health can affect intelligence in various ways. Conversely, intelligence can affect health. Health effects on intelligence have been described as being among the most important factors in the origins of human group differences in IQ test scores and other measures of cognitive ability.[1] Several factors can lead to significant cognitive impairment, particularly if they occur during pregnancy and childhood when the brain is growing and the blood–brain barrier is less effective. Such impairment may sometimes be permanent, sometimes be partially or wholly compensated for by later growth.

Developed nations have implemented several health policies regarding nutrients and toxins known to influence cognitive function. These include laws requiring fortification of certain food products and laws establishing safe levels of pollutants (e.g. lead, mercury, and organochlorides). Comprehensive policy recommendations targeting reduction of cognitive impairment in children have been proposed.[2][3]

Improvements in nutrition (often involving specific micronutrients) due to public policy changes have been implicated in IQ increases in many nations (as part of the overall Flynn effect), such as efforts fighting iodine deficiency in the U.S.[4]

Nutrition

Malnutrition may occur during several periods of growth, such as pregnancy, during breastfeeding, infancy, or childhood. It may also happen due to deficiencies of different nutrients, such as micronutrients, protein or energy. This may cause different effects.

Timing

Some observers have argued that malnutrition during the first six months of life harms cognitive development much more than malnutrition later in life. However, a study from the Philippines argues that malnutrition in the second year of life may have a larger negative impact than malnutrition in the first year of life.[5]

Intrauterine growth retardation

Undernutrition during pregnancy, and other factors, may cause intrauterine growth retardation (IUGR), which is one cause of low birth weight. However, it has been suggested that in IUGR the brain may be selectively spared: brain growth is usually less affected than whole-body weight or length. Several studies from developed nations have found that, with the exception of extreme intrauterine growth retardation also affecting brain growth, and hypoxic injury, IUGR seems to have little or no measurable effect on mental performance and behavior in adolescence or adulthood. For example, acute undernutrition for a few months during the Dutch famine of 1944 caused a decrease in mean birthweight in certain areas. This was later associated with a change in performance on IQ tests for 18–19-year-old Dutch male draftees from these areas compared to control areas. The subjects were exposed to famine prenatally but not after birth. During the famine, births decreased more among those with lower socioeconomic status (SES), whereas after the famine, there was a compensatory increase in births among those with lower SES. Since SES correlates with IQ, this may have hidden an effect caused by the undernutrition.[6]

Breastfeeding

Studies often find higher IQ in children and adults who were breastfed.[3][7] It has also been proposed that the omega-3 fatty acids that are found in high doses in breast milk, and that are known to be essential constituents of brain tissues, could at least partially account for an increase in IQ.

Recently, however, the longstanding belief that breastfeeding causes an increase in the IQ of offspring was challenged in a 2006 paper published in the British Medical Journal. The results indicated that mother’s IQ, not breastfeeding, explained the differences in the IQ scores of offspring measured between ages 5 and 14. The results of this study argued that prior studies had not controlled for the mother’s IQ. Since mother’s IQ was predictive of whether a child was breastfed, the study concluded that “breast feeding [itself] has little or no effect on intelligence in children.” Instead, it was the mother’s IQ that had a significant correlation with the IQ of her offspring, whether the offspring was breastfed or was not breastfed.[8] Another study found that breastfeeding had a positive effect on cognitive development at 24 months of age even after controlling for parental IQ.[9]

A potential resolution to these different interpretations was proposed in a study showing that breastfeeding was linked to raised IQ (as much as 7 points when not controlling for maternal IQ) if the infants had an SNP coding for a “C” rather than a “G” base within the FADS2 gene. Those with the “G” version showed no IQ advantage, suggesting a biochemical interaction between the child’s genes and the effect of breastfeeding.[10][11] Other studies have failed to replicate any correlation between the FADS2 gene, breastfeeding, and IQ,[12] while others show a negative effect on IQ when bottle feeding is combined with the “G” version of FADS2.[13]

Infancy

Two studies in Chile on 18-year-old high-school graduates found that nutritional status during the first year of life affected IQ, scholastic achievement, and brain volume.[14][15]

Micronutrients and vitamin deficiencies

Micronutrient deficiencies (e.g. in iodine and iron) influence the development of intelligence and remain a problem in the developing world. For example, iodine deficiency causes a fall, on average, of 12 IQ points.[16]

Policy recommendations to increase the availability of micronutrient supplements have been made and justified in part by the potential to counteract intelligence-related developmental problems. For example, the Copenhagen Consensus states that lack of both iodine and iron has been implicated in impaired brain development, and this can affect enormous numbers of people: it is estimated that 2 billion people (one-third of the total global population) are affected by iodine deficiency, including 285 million 6- to 12-year-old children. In developing countries, an estimated 40% of children aged four and under suffer from anaemia because of insufficient iron in their diets.[17]

A joint statement on vitamin and mineral deficiencies says that the severity of such deficiencies “means the impairment of hundreds of millions of growing minds and the lowering of national IQs.”[18]

Overall, studies investigating whether cognitive function in already iron-deficient children can be improved with iron supplements have produced mixed results, possibly because deficiency in critical growth periods may cause irreversible damage. However, several studies with better design have shown substantial benefits. One way to prevent iron deficiency is to give specific supplementation to children, for example as tablets. However, this is costly, distribution mechanisms are often ineffective, and compliance is low. Fortification of staple foods (cereals, flour, sugar, salt) to deliver micronutrients to children on a large scale is probably the most sustainable and affordable option, even though commitment from governments and the food industry is needed.[19] Developed nations fortify several foods with various micronutrients.[20]

Additional vitamin-mineral supplementation may have an effect in the developed world as well. A study giving such supplementation to “working class,” primarily Hispanic, 6–12-year-old children in the United States for 3 months found an average increase of 2 to 3 IQ points. Most of this can be explained by a very large increase in a subgroup of the children, presumably because these children, unlike the majority, were not adequately nourished. The study suggests that parents of schoolchildren whose academic performance is substandard would be well advised to seek a nutritionally oriented physician for an assessment of their children’s nutritional status as a possible etiology.[21]

More speculatively, other nutrients may prove important in the future. Fish oil supplement to pregnant and lactating mothers has been linked to increased cognitive ability in one study.[22] Vitamin B12 and folate may be important for cognitive function in old age.[23]

Another study found that for pregnant women who consumed 340 grams per week of fish low in mercury and rich in fatty acids, the benefits outweighed the risks of mercury poisoning: they were less likely to have children with low verbal IQ, poor motor coordination, and behavioral problems. However, foods containing high amounts of mercury, such as shark, swordfish, king mackerel and tilefish, might cause mental retardation.[24][25][26][27][28][29]

Protein and energy malnutrition

One study from a developing country, Guatemala, found that poor growth during infancy, rather than low birth weight, was negatively related to adolescent performance on cognitive and achievement tests.[30] A related, very long-term study looked at the effect of giving 6–24-month-old children in Guatemala a high protein-energy drink as a dietary supplement. Significant and fairly substantial positive effects were found: the supplement increased the probability of attending school and of passing the first grade, the grade attained by age 13, and completed schooling attainment, and, for adults aged 25–40, it increased IQ test scores.[31]

Stunting

31% of children under the age of 5 in the developing world are moderately (height-for-age is below minus 2 standard deviations) or severely stunted (below minus 3 standard deviations).[32] The prevalence was even higher previously since the worldwide prevalence of stunting is declining by about half of a percentage point each year.[33] A study on stunted children aged 9–24 months in Jamaica found that when aged 17–18 years they had significantly poorer scores than a non-stunted group on cognitive and educational tests and psychosocial functioning. Giving a nutritional supplementation (1 kg milk based formula each week) to these already stunted children had no significant effect on later scores, but psychosocial stimulation (weekly play sessions with mother and child) had a positive effect.[34][35]
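The z-score cutoffs above can be expressed as a small classifier. This is an illustrative sketch, not official WHO reference code: computing the height-for-age z-score itself requires the growth-standard tables (reference median and SD by age and sex), which are assumed to be available.

```python
def classify_stunting(height_for_age_z: float) -> str:
    """Classify stunting from a height-for-age z-score (HAZ), using the
    cutoffs described above: below -2 SD is moderate, below -3 SD is severe."""
    if height_for_age_z < -3:
        return "severely stunted"
    elif height_for_age_z < -2:
        return "moderately stunted"
    return "not stunted"

print(classify_stunting(-3.4))  # severely stunted
print(classify_stunting(-2.5))  # moderately stunted
print(classify_stunting(-1.0))  # not stunted
```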

Toxins

Industrial chemicals

Certain toxins, such as lead, mercury, arsenic, toluene, and PCB are well-known causes of neuro-developmental disorders. Recognition of these risks has led to evidence-based programmes of prevention, such as elimination of lead additives in petrol. Although these prevention campaigns are highly successful, most were initiated only after substantial delays.[36]

Policies to manage lead differ between nations, particularly between the developed and developing world. Use of leaded gasoline has been reduced or eliminated in most developed nations, and lead levels in US children have been substantially reduced by policies relating to lead reduction.[37] Even slightly elevated lead levels around the age of 24 months are associated with intellectual and academic performance deficits at age 10 years.[38]

Certain organochlorides that were widely used, at least previously, such as dioxins, DDT, and PCB, have been associated with cognitive deficits.[39]

A Lancet review identified 201 chemicals with the ability to cause clinical neurotoxic effects in human adults, as described in the peer-reviewed scientific literature. Most of them are commonly used, and many additional chemicals have been shown to be neurotoxic in laboratory models. The article notes that children are more vulnerable and argues that new, precautionary approaches recognising the unique vulnerability of the developing brain are needed for the testing and control of chemicals, in order to avoid the substantial delays that preceded earlier restrictions on usage.[40] An appendix listed further industrial chemicals considered to be neurotoxic.[41]

Alcohol and drugs

Fetal alcohol exposure, causing Fetal alcohol syndrome, is one of the leading known causes of mental retardation in the Western world.[42]

Current cannabis use was found to be significantly correlated, in a dose-dependent manner, with a decline in IQ scores during the period of use. However, no such decline was seen in subjects who had formerly been heavy cannabis users and had stopped taking the drug, and the authors concluded that cannabis has no long-term effect on intelligence. This is contradicted by a long-term longitudinal study, carried out by Otago and Duke universities, which found that regular marijuana use in the teenage years lowered adult IQ, by 8 points, even when the use stopped; use begun in adulthood had no lasting effect on IQ.[43] Effects on fetal development are minimal when compared with the well-documented adverse effects of tobacco or alcohol use.[44]

Maternal tobacco smoking during pregnancy is associated with increased activity, decreased attention, and diminished intellectual abilities.[45] However, a recent study found that maternal tobacco smoking has no direct causal effect on the child’s IQ: adjusting for maternal cognitive ability, as measured by IQ and education, eliminated the association between lower IQ and tobacco smoking.[46] Another study, which instead examined the relationship between environmental tobacco smoke exposure, measured with a blood biomarker, and cognitive abilities among U.S. children and adolescents 6–16 years of age, found an inverse association between exposure and cognitive ability even at extremely low levels of exposure. The study controlled for sex, race, region, poverty, parent education and marital status, ferritin, and blood lead concentration.[47]

Healthcare during pregnancy and childbirth

Healthcare during pregnancy and childbirth, access to which is often governed by policy, also influences cognitive development. Preventable causes of low intelligence in children include infectious diseases such as meningitis, parasites, and cerebral malaria, prenatal drug and alcohol exposure, newborn asphyxia, low birth weight, head injuries, and endocrine disorders. A direct policy focus on determinants of childhood cognitive ability has been urged.[2]

Stress

A recent theory suggests that early childhood stress may affect the developing brain and cause negative effects.[48] Exposure to violence in childhood has been associated with lower school grades[49] and lower IQ in children of all races.[50] A group of largely African American urban first-grade children and their caregivers were evaluated using self-report, interview, and standardized tests, including IQ tests. The study reported that exposure to violence and trauma-related distress in young children was associated with substantial decrements in IQ and reading achievement: exposure to violence or trauma led to a 7.5-point (SD, 0.5) decrement in IQ and a 9.8-point (SD, 0.66) decrement in reading achievement.[49]

Violence may have a negative impact on IQ, or IQ may be protective against violence;[50] the causal mechanism and direction of causation are unknown.[49] Neighborhood risk has been related to lower school grades for African-American adolescents in another study, from 2006.[51] Violence may also be more prevalent in the homes of parents with lower IQs, and these parents could have genetically produced children with lower IQs.

Infectious diseases

A 2010 study by Eppig, Fincher and Thornhill found a close correlation between the infectious disease burden in a country and the average IQ of its population. The researchers found that when disease was controlled for, IQ showed no correlation with other variables such as educational and nutritional levels. Since brain development requires a very high proportion of the body’s total energy in newborns and children, the researchers argue that fighting infection reduces children’s IQ potential. The Eppig research may help to explain the Flynn effect, the rise in intelligence noted in rich countries.[52] They tested other hypotheses as well, including genetic explanations, concluding that infectious disease was “the best predictor”.[53] Christopher Hassall and Thomas Sherratt repeated the analysis and concluded “that infectious disease may be the only really important predictor of average national IQ”.[53]

To control for the effect of education on IQ, Eppig, Fincher &amp; Thornhill (2010) repeated their analysis across the United States, where standardized and compulsory education exists.[53] The correlation between infectious disease and average IQ was confirmed, and they concluded that the “evidence suggests that infectious disease is a primary cause of the global variation in human intelligence”.[53]
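The “controlling for” step in analyses like these can be illustrated with a partial correlation: regress both variables of interest on the covariate and correlate the residuals. The sketch below uses made-up national-level numbers purely for illustration; they are not the study’s data, and this is only the simplest form of the method.

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after removing the linear effect of z."""
    x, y, z = (np.asarray(v, dtype=float) for v in (x, y, z))
    A = np.column_stack([z, np.ones_like(z)])          # covariate + intercept
    rx = x - A @ np.linalg.lstsq(A, x, rcond=None)[0]  # residualize x on z
    ry = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]  # residualize y on z
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical illustrative data (not the study's actual numbers):
disease_burden = [10, 30, 50, 70, 90]   # e.g. infectious DALYs per capita
mean_iq        = [98, 93, 88, 84, 79]
education      = [12, 11,  9,  8,  6]   # mean years of schooling

# Disease burden still correlates negatively with IQ after removing
# the linear effect of education.
r = partial_corr(mean_iq, disease_burden, education)
print(r < 0)
```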

Tropical infectious diseases

Malaria affects 300–500 million persons each year, mostly children under age five in Africa, causing widespread anemia during a period of rapid brain development and also direct brain damage from cerebral malaria to which children are more vulnerable.[54] A 2006 systematic review found that Plasmodium falciparum infection causes cognitive deficits in both the short- and long-term.[55] Policies aimed at malaria reduction may have cognitive benefits. It has been suggested that the future economic and educational development of Africa critically depends on the eradication of malaria.

Roundworms infect hundreds of millions of people. There is evidence that high intensities of worms in the intestines can affect mental performance,[56] but a systematic review in 2000 and a 2009 update found that there was insufficient evidence to show that deworming treatments improve cognitive performance or school performance in children.[57][58]

HIV infection in children in sub-Saharan Africa affects their motor development, but there is insufficient evidence to show a slowing of language development.[59]

Effects of other diseases

There are numerous diseases affecting the central nervous system that can cause cognitive impairment, many of them associated with aging; common examples include Alzheimer’s disease and multi-infarct dementia. Some are neurological or psychiatric conditions that primarily affect the brain; others, such as HIV, Hashimoto’s thyroiditis (which causes hypothyroidism), or cancer, affect many other organs as well.

Major depression, affecting about 16% of the population on at least one occasion in their lives and the leading cause of disability in North America, may give symptoms similar to dementia. Patients treated for depression score higher on IQ tests than before treatment.[60][61]

Myopia and hyperopia

A 2008 literature review writes that studies in several nations have found a relationship between myopia and higher IQ and between myopia and school achievement. Several, but not all, studies have found hyperopia to be associated with lower IQ and school achievement. A common explanation for myopia is near-work. Regarding the relationship to IQ, several explanations have been proposed. One is that the myopic child is better adapted to reading, and reads and studies more, which increases intelligence. The reverse explanation is that the intelligent and studious child reads more, which causes myopia. Another is that the myopic child has an advantage at IQ testing, which is near-work, because of less eye strain. Still another explanation is that pleiotropic genes affect the size of both brain and eyes simultaneously.[62] A study of Chinese schoolchildren found that after controlling for age, gender, school, parental myopia, father’s education, and books read per week, myopia was still associated with high nonverbal IQ. Nonverbal IQ was a more important explanation than books read per week.[63]

Other associations

Long working hours (55 vs. 40 per week) were associated with increased scores on cognitive tests in a 5-year study of midlife British civil servants.[64]


g factor (psychometrics)

The g factor (short for “general factor”) is a construct developed in psychometric investigations of cognitive abilities. It is a variable that summarizes positive correlations among different cognitive tasks, reflecting the fact that an individual’s performance at one type of cognitive task tends to be comparable to his or her performance at other kinds of cognitive tasks. The g factor typically accounts for 40 to 50 percent of the between-individual variance in IQ test performance, and IQ scores are frequently regarded as estimates of individuals’ standing on the g factor.[1] The terms IQ, general intelligence, general cognitive ability, general mental ability, or simply intelligence are often used interchangeably to refer to the common core shared by cognitive tests.[2]

The existence of the g factor was originally proposed by the English psychologist Charles Spearman in the early years of the 20th century. He observed that children’s performance ratings across seemingly unrelated school subjects were positively correlated, and reasoned that these correlations reflected the influence of an underlying general mental ability that entered into performance on all kinds of mental tests. Spearman suggested that all mental performance could be conceptualized in terms of a single general ability factor, which he labeled g, and a large number of narrow task-specific ability factors. Today’s factor models of intelligence typically represent cognitive abilities as a three-level hierarchy, where there are a large number of narrow factors at the bottom of the hierarchy, a handful of broad, more general factors at the intermediate level, and at the apex a single factor, referred to as the g factor, which represents the variance common to all cognitive tasks.

Traditionally, research on g has concentrated on psychometric investigations of test data, with a special emphasis on factor analytic approaches. However, empirical research on the nature of g has also drawn upon experimental cognitive psychology and mental chronometry, brain anatomy and physiology, quantitative and molecular genetics, and primate evolution.[3] While the existence of g as a statistical regularity is well-established and uncontroversial, there is no consensus as to what causes the positive correlations between tests.

Behavioral genetic research has established that the construct of g is highly heritable. It has a number of other biological correlates, including brain size. It is also a significant predictor of individual differences in many social outcomes, particularly in education and employment. The most widely accepted contemporary theories of intelligence incorporate the g factor.[4] However, critics of g have contended that an emphasis on g is misplaced and entails a devaluation of other important abilities.

Mental testing

Spearman’s correlation matrix for six measures of school performance. All the correlations are positive, a phenomenon referred as the positive manifold. The bottom row shows the g loadings of each performance measure.[5]
                      Classics  French  English   Math  Pitch  Music
Classics
French                    .83
English                   .78      .67
Math                      .70      .67      .64
Pitch discrimination      .66      .65      .54    .45
Music                     .63      .57      .51    .51    .40
g                        .958     .882     .803   .750   .673   .646
Subtest intercorrelations in a sample of Scottish subjects who completed the WAIS-R battery. The subtests are Vocabulary, Similarities, Information, Comprehension, Picture arrangement, Block design, Arithmetic, Picture completion, Digit span, Object assembly, and Digit symbol. The bottom row shows the g loadings of each subtest.[6]
        V    S    I    C   PA   BD    A   PC  DSp   OA   DS
V
S     .67
I     .72  .59
C     .70  .58  .59
PA    .51  .53  .50  .42
BD    .45  .46  .45  .39  .43
A     .48  .43  .55  .45  .41  .44
PC    .49  .52  .52  .46  .48  .45  .30
DSp   .46  .40  .36  .36  .31  .32  .47  .23
OA    .32  .40  .32  .29  .36  .58  .33  .41  .14
DS    .32  .33  .26  .30  .28  .36  .28  .26  .27  .25
g     .83  .80  .80  .75  .70  .70  .68  .68  .56  .56  .48

Mental tests may be designed to measure different aspects of cognition. Specific domains assessed by tests include mathematical skill, verbal fluency, spatial visualization, and memory, among others. However, individuals who excel at one type of test tend to excel at other kinds of tests, too, while those who do poorly on one test tend to do so on all tests, regardless of the tests’ contents.[7] The English psychologist Charles Spearman was the first to describe this phenomenon.[8] In a famous research paper published in 1904,[9] he observed that children’s performance measures across seemingly unrelated school subjects were positively correlated. This finding has since been replicated numerous times. The consistent finding of universally positive correlation matrices of mental test results (or the “positive manifold”), despite large differences in tests’ contents, has been described as “arguably the most replicated result in all psychology.”[10] Zero or negative correlations between tests suggest the presence of sampling error or restriction of the range of ability in the sample studied.[11]

Using factor analysis or related statistical methods, it is possible to compute a single common factor that can be regarded as a summary variable characterizing the correlations between all the different tests in a test battery. Spearman referred to this common factor as the general factor, or simply g. (By convention, g is always printed as a lower case italic.) Mathematically, the g factor is a source of variance among individuals, which entails that one cannot meaningfully speak of any one individual’s mental abilities consisting of g or other factors to any specified degrees. One can only speak of an individual’s standing on g (or other factors) compared to other individuals in a relevant population.[11][12][13]
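As an illustrative sketch only (not Spearman’s original method), the first principal component of a correlation matrix gives a quick approximation to such a common factor. The snippet below, which assumes NumPy is available, applies this to the six-variable Spearman matrix reproduced in the table above; by the Perron–Frobenius theorem, an all-positive correlation matrix guarantees that the leading component’s loadings all share one sign.

```python
import numpy as np

# Spearman's correlation matrix for six school-performance measures
# (Classics, French, English, Math, Pitch discrimination, Music).
R = np.array([
    [1.00, .83, .78, .70, .66, .63],
    [ .83, 1.00, .67, .67, .65, .57],
    [ .78, .67, 1.00, .64, .54, .51],
    [ .70, .67, .64, 1.00, .45, .51],
    [ .66, .65, .54, .45, 1.00, .40],
    [ .63, .57, .51, .51, .40, 1.00],
])

# First principal component: the eigenvector of the largest eigenvalue,
# scaled by the square root of that eigenvalue, gives the loadings.
eigvals, eigvecs = np.linalg.eigh(R)   # eigenvalues in ascending order
v = eigvecs[:, -1]
v = v if v.sum() > 0 else -v           # fix the arbitrary sign of the eigenvector
loadings = v * np.sqrt(eigvals[-1])
share = eigvals[-1] / R.shape[0]       # proportion of total variance explained

print(np.round(loadings, 3))           # all positive: the positive manifold
print(round(share, 2))
```

Note that, as discussed later in the article, principal components analysis tends to somewhat inflate the apparent influence of the common factor relative to common factor analysis; this sketch is meant only to show the mechanics.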

Different tests in a test battery may correlate with (or “load onto”) the g factor of the battery to different degrees. These correlations are known as g loadings. An individual test taker’s g factor score, representing his or her relative standing on the g factor in the total group of individuals, can be estimated using the g loadings. Full-scale IQ scores from a test battery will usually be highly correlated with g factor scores, and they are often regarded as estimates of g. For example, the correlations between g factor scores and full-scale IQ scores from Wechsler’s tests have been found to be greater than .95.[1][11][14] The terms IQ, general intelligence, general cognitive ability, general mental ability, or simply intelligence are frequently used interchangeably to refer to the common core shared by cognitive tests.[2]

The g loadings of mental tests are always positive and usually range between .10 and .90, with a mean of about .60 and a standard deviation of about .15. Raven’s Progressive Matrices is among the tests with the highest g loadings, around .80. Tests of vocabulary and general information are also typically found to have high g loadings.[15][16] However, the g loading of the same test may vary somewhat depending on the composition of the test battery.[17]

The complexity of tests and the demands they place on mental manipulation are related to the tests’ g loadings. For example, in the forward digit span test the subject is asked to repeat a sequence of digits in the order of their presentation after hearing them once at a rate of one digit per second. The backward digit span test is otherwise the same except that the subject is asked to repeat the digits in the reverse order to that in which they were presented. The backward digit span test is more complex than the forward digit span test, and it has a significantly higher g loading. Similarly, the g loadings of arithmetic computation, spelling, and word reading tests are lower than those of arithmetic problem solving, text composition, and reading comprehension tests, respectively.[11][18]

Test difficulty and g loadings are distinct concepts that may or may not be empirically related in any specific situation. Tests that have the same difficulty level, as indexed by the proportion of test items that are failed by test takers, may exhibit a wide range of g loadings. For example, tests of rote memory have been shown to have the same level of difficulty but considerably lower g loadings than many tests that involve reasoning.[18][19]

Theories

While the existence of g as a statistical regularity is well-established and uncontroversial among experts, there is no consensus as to what causes the positive intercorrelations. Several explanations have been proposed.[20]

Mental energy or efficiency

Charles Spearman reasoned that correlations between tests reflected the influence of a common causal factor, a general mental ability that enters into performance on all kinds of mental tasks. However, he thought that the best indicators of g were those tests that reflected what he called the eduction of relations and correlates, which included abilities such as deduction, induction, problem solving, grasping relationships, inferring rules, and spotting differences and similarities. Spearman hypothesized that g was equivalent to “mental energy”. However, this was more of a metaphorical explanation, and he remained agnostic about the physical basis of this energy, expecting that future research would uncover the exact physiological nature of g.[21]

Following Spearman, Arthur Jensen maintained that all mental tasks tap into g to some degree. According to Jensen, the g factor represents a “distillate” of scores on different tests rather than a summation or an average of such scores, with factor analysis acting as the distillation procedure.[16] He argued that g cannot be described in terms of the item characteristics or information content of tests, pointing out that very dissimilar mental tasks may have nearly equal g loadings. David Wechsler similarly contended that g is not an ability at all but rather some general property of the brain. Jensen hypothesized that g corresponds to individual differences in the speed or efficiency of the neural processes associated with mental abilities.[22] He also suggested that given the associations between g and elementary cognitive tasks, it should be possible to construct a ratio scale test of g that uses time as the unit of measurement.[23]

Sampling theory

The so-called sampling theory of g, originally developed by E.L. Thorndike and Godfrey Thomson, proposes that the existence of the positive manifold can be explained without reference to a unitary underlying capacity. According to this theory, there are a number of uncorrelated mental processes, and all tests draw upon different samples of these processes. The intercorrelations between tests are caused by an overlap between processes tapped by the tests.[24][25] Thus, the positive manifold arises due to a measurement problem, an inability to measure more fine-grained, presumably uncorrelated mental processes.[13]

It has been shown that it is not possible to distinguish statistically between Spearman’s model of g and the sampling model; both are equally able to account for intercorrelations among tests.[26] The sampling theory is also consistent with the observation that more complex mental tasks have higher g loadings, because more complex tasks are expected to involve a larger sampling of neural elements and therefore have more of them in common with other tasks.[27]
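A toy simulation (with hypothetical numbers throughout) illustrates how the sampling model generates a positive manifold without any common factor: each simulated test sums a random subset of independent elementary processes, and the chance overlap between subsets alone produces positive intercorrelations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_processes, n_tests, sample_size = 5000, 100, 6, 40

# Independent elementary processes: no common factor is built in.
processes = rng.standard_normal((n_people, n_processes))

# Each test sums a random sample of 40 of the 100 processes; different
# tests overlap by chance, and that overlap alone creates correlations.
scores = np.column_stack([
    processes[:, rng.choice(n_processes, sample_size, replace=False)].sum(axis=1)
    for _ in range(n_tests)
])

R = np.corrcoef(scores, rowvar=False)
off_diag = R[np.triu_indices(n_tests, k=1)]
print(round(off_diag.mean(), 2))  # a positive manifold, with no underlying g
```

By construction the expected correlation between two tests equals their expected overlap divided by the sample size, about .4 here, which is why factor analysis of such data would nevertheless recover a "g-like" factor.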

Some researchers have argued that the sampling model invalidates g as a psychological concept, because the model suggests that g factors derived from different test batteries simply reflect the shared elements of the particular tests contained in each battery rather than a g that is common to all tests. Similarly, high correlations between different batteries could be due to them measuring the same set of abilities rather than the same ability.[28]

Critics have argued that the sampling theory is incongruent with certain empirical findings. Based on the sampling theory, one might expect that related cognitive tests share many elements and thus be highly correlated. However, some closely related tests, such as forward and backward digit span, are only modestly correlated, while some seemingly completely dissimilar tests, such as vocabulary tests and Raven’s matrices, are consistently highly correlated. Another problematic finding is that brain damage frequently leads to specific cognitive impairments rather than a general impairment one might expect based on the sampling theory.[13][29]

Mutualism

The “mutualism” model of g proposes that cognitive processes are initially uncorrelated, but that the positive manifold arises during individual development due to mutual beneficial relations between cognitive processes. Thus there is no single process or capacity underlying the positive correlations between tests. During the course of development, the theory holds, any one particularly efficient process will benefit other processes, with the result that the processes will end up being correlated with one another. Thus similarly high IQs in different persons may stem from quite different initial advantages that they had.[13][30] Critics have argued that the observed correlations between the g loadings and the heritability coefficients of subtests are problematic for the mutualism theory.[31]

Factor structure of cognitive abilities

An illustration of Spearman’s two-factor intelligence theory. Each small oval is a hypothetical mental test. The blue areas correspond to test-specific variance (s), while the purple areas represent the variance attributed to g.

Factor analysis is a family of mathematical techniques that can be used to represent correlations between intelligence tests in terms of a smaller number of variables known as factors. The purpose is to simplify the correlation matrix by using hypothetical underlying factors to explain the patterns in it. When all correlations in a matrix are positive, as they are in the case of IQ, factor analysis will yield a general factor common to all tests. The general factor of IQ tests is referred to as the g factor, and it typically accounts for 40 to 50 percent of the variance in IQ test batteries.[32]

Charles Spearman developed factor analysis in order to study correlations between tests. Initially, he developed a model of intelligence in which variations in all intelligence test scores are explained by only two kinds of variables: first, factors that are specific to each test (denoted s); and second, a g factor that accounts for the positive correlations across tests. This is known as Spearman’s two-factor theory. Later research based on more diverse test batteries than those used by Spearman demonstrated that g alone could not account for all correlations between tests. Specifically, it was found that even after controlling for g, some tests were still correlated with each other. This led to the postulation of group factors that represent variance that groups of tests with similar task demands (e.g., verbal, spatial, or numerical) have in common in addition to the shared g variance.[33]

An illustration of John B. Carroll’s three stratum theory, an influential contemporary model of cognitive abilities. The broad abilities recognized by the model are fluid intelligence (Gf), crystallized intelligence (Gc), general memory and learning (Gy), broad visual perception (Gv), broad auditory perception (Gu), broad retrieval ability (Gr), broad cognitive speediness (Gs), and processing speed (Gt). Carroll regarded the broad abilities as different “flavors” of g.

Through factor rotation, it is, in principle, possible to produce an infinite number of different factor solutions that are mathematically equivalent in their ability to account for the intercorrelations among cognitive tests. These include solutions that do not contain a g factor. Thus factor analysis alone cannot establish what the underlying structure of intelligence is. In choosing between different factor solutions, researchers have to examine the results of factor analysis together with other information about the structure of cognitive abilities.[34]

There are many psychologically relevant reasons for preferring factor solutions that contain a g factor. These include the existence of the positive manifold, the fact that certain kinds of tests (generally the more complex ones) have consistently larger g loadings, the substantial invariance of g factors across different test batteries, the impossibility of constructing test batteries that do not yield a g factor, and the widespread practical validity of g as a predictor of individual outcomes. The g factor, together with group factors, best represents the empirically established fact that, on average, overall ability differences between individuals are greater than differences among abilities within individuals, while a factor solution with orthogonal factors without g obscures this fact. Moreover, g appears to be the most heritable component of intelligence.[35] Research utilizing the techniques of confirmatory factor analysis has also provided support for the existence of g.[34]

A g factor can be computed from a correlation matrix of test results using several different methods. These include exploratory factor analysis, principal components analysis (PCA), and confirmatory factor analysis. Different factor-extraction methods produce highly consistent results, although PCA has sometimes been found to produce inflated estimates of the influence of g on test scores.[17][36]

There is a broad contemporary consensus that cognitive variance between people can be conceptualized at three hierarchical levels, distinguished by their degree of generality. At the lowest, least general level there are a large number of narrow first-order factors; at a higher level, there are a relatively small number – somewhere between five and ten – of broad (i.e., more general) second-order factors (or group factors); and at the apex, there is a single third-order factor, g, the general factor common to all tests.[37][38][39] The g factor usually accounts for the majority of the total common factor variance of IQ test batteries.[40] Contemporary hierarchical models of intelligence include the three stratum theory and the Cattell–Horn–Carroll theory.[41]

“Indifference of the indicator”

Spearman proposed the principle of the indifference of the indicator, according to which the precise content of intelligence tests is unimportant for the purposes of identifying g, because g enters into performance on all kinds of tests. Any test can therefore be used as an indicator of g. Following Spearman, Arthur Jensen more recently argued that a g factor extracted from one test battery will always be the same, within the limits of measurement error, as that extracted from another battery, provided that the batteries are large and diverse.[42] According to this view, every mental test, no matter how distinctive, calls on g to some extent. Thus a composite score of a number of different tests will load onto g more strongly than any of the individual test scores, because the g components cumulate into the composite score, while the uncorrelated non-g components will cancel each other out. Theoretically, the composite score of an infinitely large, diverse test battery would, then, be a perfect measure of g.[43]
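The cumulation argument can be checked with a small simulation (all numbers hypothetical): each simulated test loads .6 on a latent g plus independent specific variance, and the composite score of a battery correlates with g more strongly than any single test, increasingly so as tests are added.

```python
import numpy as np

rng = np.random.default_rng(1)
n_people = 20_000
g = rng.standard_normal(n_people)

def battery(n_tests, loading=0.6):
    """Hypothetical tests: each loads 0.6 on g plus independent specific variance."""
    specific = rng.standard_normal((n_people, n_tests)) * np.sqrt(1 - loading**2)
    return loading * g[:, None] + specific

# The g components cumulate in the composite while the specific parts
# average out, so composites of larger batteries track g more closely.
results = {}
for k in (1, 5, 25):
    composite = battery(k).mean(axis=1)
    results[k] = np.corrcoef(composite, g)[0, 1]
    print(k, round(results[k], 2))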

In contrast, L.L. Thurstone argued that a g factor extracted from a test battery reflects the average of all the abilities called for by the particular battery, and that g therefore varies from one battery to another and “has no fundamental psychological significance.”[44] Along similar lines, John Horn argued that g factors are meaningless because they are not invariant across test batteries, maintaining that correlations between different ability measures arise because it is difficult to define a human action that depends on just one ability.[45][46]

To show that different batteries reflect the same g, one must administer several test batteries to the same individuals, extract g factors from each battery, and show that the factors are highly correlated. This can be done within a confirmatory factor analysis framework.[20] Wendy Johnson and colleagues have published two such studies.[47][48] The first found that the correlations between g factors extracted from three different batteries were .99, .99, and 1.00, supporting the hypothesis that g factors from different batteries are the same and that the identification of g is not dependent on the specific abilities assessed. The second study found that g factors derived from four of five test batteries correlated between .95 and 1.00, while the correlations ranged from .79 to .96 for the fifth battery, the Cattell Culture Fair Intelligence Test (the CFIT). They attributed the somewhat lower correlations with the CFIT battery to its lack of content diversity, as it contains only matrix-type items, and interpreted the findings as supporting the contention that g factors derived from different test batteries are the same provided that the batteries are diverse enough. The results suggest that the same g can be consistently identified from different test batteries.[37][49]

Population distribution

The form of the population distribution of g is unknown, because g cannot be measured on a ratio scale. (The distributions of scores on typical IQ tests are roughly normal, but this is achieved by construction, i.e., by normalizing the raw scores.) It has been argued that there are nevertheless good reasons for supposing that g is normally distributed in the general population, at least within a range of ±2 standard deviations from the mean. In particular, g can be thought of as a composite variable that reflects the additive effects of a large number of independent genetic and environmental influences, and such a variable should, according to the central limit theorem, follow a normal distribution.[50]
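The central-limit argument can be illustrated by simulation (effect sizes and distributions are hypothetical): summing many small, independent, even markedly non-normal influences yields a composite whose distribution is approximately normal.

```python
import numpy as np

rng = np.random.default_rng(2)
n_people, n_influences = 100_000, 500

# Each influence is small, independent, and deliberately skewed (non-normal).
effects = rng.exponential(scale=1.0, size=(n_people, n_influences))
composite = effects.sum(axis=1)

def skewness(x):
    """Standardized third moment; 0 for a perfectly normal distribution."""
    z = (x - x.mean()) / x.std()
    return (z**3).mean()

print(round(skewness(effects[:, 0]), 1))  # a single influence: strongly skewed (~2)
print(round(skewness(composite), 2))      # sum of 500 influences: close to 0
```

The exponential distribution has skewness 2, yet the skewness of the 500-influence sum shrinks toward zero roughly as 2/√500, consistent with the central limit theorem.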

Spearman’s law of diminishing returns

A number of researchers have suggested that the proportion of variation accounted for by g may not be uniform across all subgroups within a population. Spearman’s law of diminishing returns (SLDR), also termed the ability differentiation hypothesis, predicts that the positive correlations among different cognitive abilities are weaker among more intelligent subgroups of individuals. More specifically, SLDR predicts that the g factor will account for a smaller proportion of individual differences in cognitive test scores at higher scores on the g factor.

SLDR was originally proposed by Charles Spearman,[51] who reported that the average correlation between 12 cognitive ability tests was .466 in 78 normal children, and .782 in 22 “defective” children. Detterman and Daniel rediscovered this phenomenon in 1989.[52] They reported that for subtests of both the WAIS and the WISC, subtest intercorrelations decreased monotonically with ability group, with the average intercorrelation falling from approximately .7 among individuals with IQs less than 78 to .4 among individuals with IQs greater than 122.[53]

SLDR has been replicated in a variety of child and adult samples who have been measured using broad arrays of cognitive tests. The most common approach has been to divide individuals into multiple ability groups using an observable proxy for their general intellectual ability, and then to either compare the average interrelation among the subtests across the different groups, or to compare the proportion of variation accounted for by a single common factor, in the different groups.[54] However, as both Deary et al. (1996)[54] and Tucker-Drob (2009)[55] have pointed out, dividing the continuous distribution of intelligence into an arbitrary number of discrete ability groups is less than ideal for examining SLDR. Tucker-Drob (2009)[55] extensively reviewed the literature on SLDR and the various methods by which it had been previously tested, and proposed that SLDR could be most appropriately captured by fitting a common factor model that allows the relations between the factor and its indicators to be nonlinear in nature. He applied such a factor model to nationally representative data on children and adults in the United States and found consistent evidence for SLDR. For example, Tucker-Drob (2009) found that a general factor accounted for approximately 75% of the variation in seven different cognitive abilities among very low IQ adults, but only accounted for approximately 30% of the variation in the abilities among very high IQ adults.

Practical validity

The practical validity of g as a predictor of educational, economic, and social outcomes is more far-ranging and universal than that of any other known psychological variable. The validity of g increases with the complexity of the task.[56][57]

A test’s practical validity is measured by its correlation with performance on some criterion external to the test, such as college grade-point average, or a rating of job performance. The correlation between test scores and a measure of some criterion is called the validity coefficient. One way to interpret a validity coefficient is to square it to obtain the variance accounted for by the test. For example, a validity coefficient of .30 corresponds to 9 percent of variance explained. This approach has, however, been criticized as misleading and uninformative, and several alternatives have been proposed. One arguably more interpretable approach is to look at the percentage of test takers in each test score quintile who meet some agreed-upon standard of success. For example, if the correlation between test scores and performance is .30, the expectation is that 67 percent of those in the top quintile will be above-average performers, compared to 33 percent of those in the bottom quintile.[58][59]
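The quintile figures in the example above can be reproduced by simulation, assuming bivariate normal test and criterion scores with a true correlation of .30 and defining "above-average" as a performance score above the population mean.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400_000
r = 0.30

# Bivariate normal scores with a true test-performance correlation of r.
test = rng.standard_normal(n)
perf = r * test + np.sqrt(1 - r**2) * rng.standard_normal(n)

top = test >= np.quantile(test, 0.8)      # top quintile of test scores
bottom = test <= np.quantile(test, 0.2)   # bottom quintile of test scores

top_rate = (perf[top] > 0).mean()         # share of above-average performers
bottom_rate = (perf[bottom] > 0).mean()
print(round(top_rate, 2), round(bottom_rate, 2))  # roughly .67 and .33
```

Even a "small" validity coefficient of .30 thus translates into a two-to-one difference in success rates between the extreme quintiles, which is why the squared-correlation interpretation has been criticized as understating practical value.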

Academic achievement

The predictive validity of g is most conspicuous in the domain of scholastic performance. This is apparently because g is closely linked to the ability to learn novel material and understand concepts and meanings.[56]

In elementary school, the correlation between IQ and grades and achievement scores is between .60 and .70. At more advanced educational levels, more students from the lower end of the IQ distribution drop out, which restricts the range of IQs and results in lower validity coefficients. In high school, college, and graduate school the validity coefficients are .50–.60, .40–.50, and .30–.40, respectively. The g loadings of IQ scores are high, but it is possible that some of the validity of IQ in predicting scholastic achievement is attributable to factors measured by IQ independent of g. According to research by Robert L. Thorndike, 80 to 90 percent of the predictable variance in scholastic performance is due to g, with the rest attributed to non-g factors measured by IQ and other tests.[60]

Achievement test scores are more highly correlated with IQ than school grades. This may be because grades are more influenced by the teacher’s idiosyncratic perceptions of the student.[61] In a longitudinal English study, g scores measured at age 11 correlated with all the 25 subject tests of the national GCSE examination taken at age 16. The correlations ranged from .77 for the mathematics test to .42 for the art test. The correlation between g and a general educational factor computed from the GCSE tests was .81.[62]

Research suggests that the SAT, widely used in college admissions, is primarily a measure of g. A correlation of .82 has been found between g scores computed from an IQ test battery and SAT scores. In a study of 165,000 students at 41 U.S. colleges, SAT scores were found to be correlated at .47 with first-year college grade-point average after correcting for range restriction in SAT scores (when course difficulty is held constant, i.e., if all students attended the same set of classes, the correlation rises to .55).[58][63]
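Corrections for range restriction like the one mentioned above are commonly computed with Thorndike’s Case II formula; the sketch below uses hypothetical numbers, not the actual values from the SAT study.

```python
import math

def correct_range_restriction(r_restricted: float, sd_ratio: float) -> float:
    """Thorndike Case II: estimate the correlation in the unrestricted
    population from the correlation r observed in the selected sample and
    the ratio U of the unrestricted to restricted predictor standard
    deviations."""
    r, u = r_restricted, sd_ratio
    return r * u / math.sqrt(1 - r**2 + r**2 * u**2)

# Hypothetical: observed r = .35 in a selected sample whose test-score
# standard deviation is two-thirds that of the applicant pool (U = 1.5).
print(round(correct_range_restriction(0.35, 1.5), 2))  # → 0.49
```

When there is no restriction (U = 1) the formula leaves the correlation unchanged, and the correction grows with the severity of selection on the predictor.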

Job attainment and performance

There is a high correlation of .90 to .95 between the prestige rankings of occupations, as rated by the general population, and the average general intelligence scores of people employed in each occupation. At the level of individual employees, the association between job prestige and g is lower – one large U.S. study reported a correlation of .65 (.72 corrected for attenuation). Mean level of g thus increases with perceived job prestige. It has also been found that the dispersion of general intelligence scores is smaller in more prestigious occupations than in lower level occupations, suggesting that higher level occupations have minimum g requirements.[64][65]

Research indicates that tests of g are the best single predictors of job performance, with an average validity coefficient of .55 across several meta-analyses of studies based on supervisor ratings and job samples. The average meta-analytic validity coefficient for performance in job training is .63.[66] The validity of g in the highest complexity jobs (professional, scientific, and upper management jobs) has been found to be greater than in the lowest complexity jobs, but g has predictive validity even for the simplest jobs. Research also shows that specific aptitude tests tailored for each job provide little or no increase in predictive validity over tests of general intelligence. It is believed that g affects job performance mainly by facilitating the acquisition of job-related knowledge. The predictive validity of g is greater than that of work experience, and increased experience on the job does not decrease the validity of g.[56][64]

Income

The correlation between income and g, as measured by IQ scores, averages about .40 across studies. The correlation is higher at higher levels of education and it increases with age, stabilizing when people reach their highest career potential in middle age. Even when education, occupation and socioeconomic background are held constant, the correlation does not vanish.[67]

Other correlates

The g factor is reflected in many social outcomes. Many social behavior problems, such as dropping out of school, chronic welfare dependency, accident proneness, and crime, are negatively correlated with g independent of social class of origin.[68] Health and mortality outcomes are also linked to g, with higher childhood test scores predicting better health and mortality outcomes in adulthood (see Cognitive epidemiology).[69]

Genetic and environmental determinants

Main article: Heritability of IQ

Heritability is the proportion of phenotypic variance in a trait in a population that can be attributed to genetic factors. The heritability of g has been estimated to fall between 40 and 80 percent using twin, adoption, and other family study designs as well as molecular genetic methods. It has been found to increase linearly with age. For example, a large study involving more than 11,000 pairs of twins from four countries reported the heritability of g to be 41 percent at age nine, 55 percent at age twelve, and 66 percent at age seventeen. Other studies have estimated that the heritability is as high as 80 percent in adulthood, although it may decline in old age. Most of the research on the heritability of g has been conducted in the USA and Western Europe, but studies in Russia (Moscow), the former East Germany, Japan, and rural India have yielded heritability estimates similar to those of Western studies.[37][70][71][72]
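The classic twin-design logic behind such estimates can be sketched with Falconer’s formula, shown here with hypothetical twin correlations rather than figures from the studies cited: heritability is estimated as twice the difference between the identical-twin and fraternal-twin correlations.

```python
def falconer_h2(r_mz: float, r_dz: float) -> float:
    """Falconer's estimate: h^2 = 2 * (r_MZ - r_DZ), where r_MZ and r_DZ
    are the trait correlations within identical and fraternal twin pairs.
    Identical twins share all their genes, fraternal twins about half, so
    doubling the difference isolates the genetic contribution under the
    assumptions of the classical twin model."""
    return 2 * (r_mz - r_dz)

# Hypothetical twin correlations for an adult sample.
print(round(falconer_h2(0.85, 0.60), 2))  # → 0.5, i.e. about 50% heritability
```

Contemporary studies fit structural equation models rather than applying this formula directly, but the formula conveys the core comparison on which twin-based heritability estimates rest.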

Behavioral genetic research has also established that the shared (or between-family) environmental effects on g are strong in childhood, but decline thereafter and are negligible in adulthood. This indicates that the environmental effects that are important to the development of g are unique and not shared between members of the same family.[71]

The genetic correlation is a statistic that indicates the extent to which the same genetic effects influence two different traits. If the genetic correlation between two traits is zero, the genetic effects on them are independent, whereas a correlation of 1.0 means that the same set of genes explains the heritability of both traits (regardless of how high or low the heritability of each is). Genetic correlations between specific mental abilities (such as verbal ability and spatial ability) have been consistently found to be very high, close to 1.0. This indicates that genetic variation in cognitive abilities is almost entirely due to genetic variation in whatever g is. It also suggests that what is common among cognitive abilities is largely caused by genes, and that independence among abilities is largely due to environmental effects. Thus it has been argued that when genes for intelligence are identified, they will be “generalist genes”, each affecting many different cognitive abilities.[71][73][74]

The g loadings of mental tests have been found to correlate with their heritabilities, with correlations ranging from moderate to perfect in various studies. Thus the heritability of a mental test is usually higher the larger its g loading is.[31]

Much research points to g being a highly polygenic trait influenced by a large number of common genetic variants, each having only small effects. Another possibility is that heritable differences in g are due to individuals having different “loads” of rare, deleterious mutations, with genetic variation among individuals persisting due to mutation–selection balance.[74][75]

A number of candidate genes have been reported to be associated with intelligence differences, but the effect sizes have been small and almost none of the findings have been replicated. No individual genetic variants have been conclusively linked to intelligence in the normal range so far. Many researchers believe that very large samples will be needed to reliably detect individual genetic polymorphisms associated with g.[37][75] However, while genes influencing variation in g in the normal range have proven difficult to find, a large number of single-gene disorders with mental retardation among their symptoms have been discovered.[76]

Several studies suggest that tests with larger g loadings are more affected by inbreeding depression lowering test scores. There is also evidence that tests with larger g loadings are associated with larger positive heterotic effects on test scores. Inbreeding depression and heterosis suggest the presence of genetic dominance effects for g.[77]

Neuroscientific findings

g has a number of correlates in the brain. Studies using magnetic resonance imaging (MRI) have established that g and total brain volume are moderately correlated (r~.3–.4). External head size has a correlation of ~.2 with g. MRI research on brain regions indicates that the volumes of frontal, parietal and temporal cortices, and the hippocampus are also correlated with g, generally at .25 or more, while the correlations, averaged over many studies, with overall grey matter and overall white matter have been found to be .31 and .27, respectively. Some but not all studies have also found positive correlations between g and cortical thickness. However, the underlying reasons for these associations between the quantity of brain tissue and differences in cognitive abilities remain largely unknown.[2]

Most researchers believe that intelligence cannot be localized to a single brain region, such as the frontal lobe. It has been suggested that intelligence could be characterized as a small-world network. For example, high intelligence could depend on unobstructed transfer of information between the involved brain regions along white matter fibers. Brain lesion studies have found small but consistent associations indicating that people with more white matter lesions tend to have lower cognitive ability. Research utilizing diffusion tensor imaging has discovered somewhat inconsistent but generally positive correlations between intelligence and white matter integrity, supporting the notion that white matter is important for intelligence.[2]

Some research suggests that in addition to the integrity of white matter, its organizational efficiency is also related to intelligence. The hypothesis that brain efficiency plays a role in intelligence is supported by functional MRI research showing that more intelligent people generally process information more efficiently, i.e., they use fewer brain resources for the same task than less intelligent people.[2]

Other small but relatively consistent correlates of intelligence test scores include brain activity, as measured by EEG records or event-related potentials, and nerve conduction velocity.[78][79]

Other biological associations

Height is correlated with intelligence (r~.2), but this correlation has not generally been found within families (i.e., among siblings), suggesting that it results from cross-assortative mating for height and intelligence. Myopia is known to be associated with intelligence, with a correlation of around .2 to .25, and this association has been found within families, too.[80]

There is some evidence that a g factor underlies the abilities of nonhuman animals, too. Several studies suggest that a general factor accounts for a substantial percentage of covariance in cognitive tasks given to such animals as rats, mice, and rhesus monkeys.[79][81]

Group similarities and differences

Cross-cultural studies indicate that the g factor can be observed whenever a battery of diverse, complex cognitive tests is administered to a human sample. The factor structure of IQ tests has also been found to be consistent across sexes and ethnic groups in the U.S. and elsewhere.[79] The g factor has been found to be the most invariant of all factors in cross-cultural comparisons. For example, when the g factors computed from an American standardization sample of Wechsler’s IQ battery and from large samples who completed the Japanese translation of the same battery were compared, the congruence coefficient was .99, indicating virtual identity. Similarly, the congruence coefficient between the g factors obtained from white and black standardization samples of the WISC battery in the U.S. was .995, and the variance in test scores accounted for by g was highly similar for both groups.[82]
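The congruence coefficients cited above are conventionally computed as Tucker's coefficient, which compares the g loadings of the same subtests in two samples: the sum of the products of the loadings, divided by the square root of the product of their sums of squares. A minimal sketch (the loading values below are hypothetical, not taken from the Wechsler samples):

```python
import math

def congruence_coefficient(x, y):
    """Tucker's congruence coefficient between two factor-loading vectors.

    phi = sum(x_i * y_i) / sqrt(sum(x_i^2) * sum(y_i^2))
    Values near 1.0 indicate the two factors are virtually identical.
    """
    num = sum(a * b for a, b in zip(x, y))
    den = math.sqrt(sum(a * a for a in x) * sum(b * b for b in y))
    return num / den

# Hypothetical g loadings of the same five subtests in two samples:
sample_a = [0.80, 0.72, 0.65, 0.58, 0.70]
sample_b = [0.78, 0.74, 0.63, 0.60, 0.69]
phi = congruence_coefficient(sample_a, sample_b)
```

Because the coefficient depends only on the pattern of loadings, not their absolute scale, two samples with proportionally similar loadings yield values close to 1, which is the sense in which the Wechsler comparisons above indicate "virtual identity" of the g factors.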

Most studies suggest that there are negligible differences in the mean level of g between the sexes, and that sex differences in cognitive abilities are found in narrower domains. For example, males generally outperform females in spatial tasks, while females generally outperform males in verbal tasks. Another difference that has been found in many studies is that males show more variability in both general and specific abilities than females, with proportionately more males at both the low end and the high end of the test score distribution.[83]

Consistent differences between racial and ethnic groups in g have been found, particularly in the U.S. A 2001 meta-analysis of millions of subjects indicated that there is a 1.1 standard deviation gap in the mean level of g between white and black Americans, favoring the former. The mean score of Hispanic Americans was found to be .72 standard deviations below that of non-Hispanic whites.[84] In contrast, Americans of East Asian descent generally slightly outscore white Americans.[85] Several researchers have suggested that the magnitude of the black-white gap in cognitive ability tests is dependent on the magnitude of the test’s g loading, with tests showing higher g loadings producing larger gaps (see Spearman’s hypothesis).[86] It has also been claimed that racial and ethnic differences similar to those found in the U.S. can be observed globally.[87]

Relation to other psychological constructs

Elementary cognitive tasks

Main article: Mental chronometry

An illustration of the Jensen box, an apparatus for measuring choice reaction time.

Elementary cognitive tasks (ECTs) also correlate strongly with g. ECTs are, as the name suggests, simple tasks that apparently require very little intelligence, but still correlate strongly with more exhaustive intelligence tests. Determining whether a light is red or blue and determining whether there are four or five squares drawn on a computer screen are two examples of ECTs. The answers to such questions are usually provided by quickly pressing buttons. Often, in addition to the buttons for the two options provided, a third button is held down from the start of the test. When the stimulus is presented, the subject moves their hand from the starting button to the button of the correct answer. This allows the examiner to determine how much time was spent thinking about the answer (reaction time, usually measured in fractions of a second), and how much time was spent on the physical hand movement to the correct button (movement time). Reaction time correlates strongly with g, while movement time correlates less strongly.[88] ECT testing has allowed quantitative examination of hypotheses concerning test bias, subject motivation, and group differences. By virtue of their simplicity, ECTs provide a link between classical IQ testing and biological inquiries such as fMRI studies.
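The timing decomposition described above can be sketched as a small helper. All names and the trial timestamps are hypothetical; the point is only how the two components are separated by the release of the home button:

```python
def decompose_trial(stimulus_onset, home_release, target_press):
    """Split one choice reaction-time trial into its two components.

    Reaction time: stimulus onset -> release of the home button
    (time spent deciding on the answer).
    Movement time: release of the home button -> press of the answer button
    (time spent on the physical hand movement).
    Timestamps are in seconds; results are returned in milliseconds.
    """
    reaction_time = (home_release - stimulus_onset) * 1000.0
    movement_time = (target_press - home_release) * 1000.0
    return reaction_time, movement_time

# Hypothetical trial: stimulus at t=0, home button released 310 ms later,
# answer button pressed 210 ms after that.
rt, mt = decompose_trial(0.0, 0.310, 0.520)
```

The design choice of a held-down home button is what makes the decomposition possible: without it, only the total response time would be observable, and the strongly g-loaded component (reaction time) could not be separated from the weakly g-loaded one (movement time).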

Working memory

One theory holds that g is identical or nearly identical to working memory capacity. Among other evidence for this view, some studies have found factors representing g and working memory to be perfectly correlated. However, in a meta-analysis the correlation was found to be considerably lower.[89] One criticism that has been made of studies that identify g with working memory is that “we do not advance understanding by showing that one mysterious concept is linked to another.”[90]

Piagetian tasks

Psychometric theories of intelligence aim at quantifying intellectual growth and identifying ability differences between individuals and groups. In contrast, Jean Piaget’s theory of cognitive development seeks to understand qualitative changes in children’s intellectual development. Piaget designed a number of tasks to verify hypotheses arising from his theory. The tasks were not intended to measure individual differences, and they have no equivalent in psychometric intelligence tests.[91][92] For example, in one of the best-known Piagetian conservation tasks a child is asked if the amount of water in two identical glasses is the same. After the child agrees that the amount is the same, the investigator pours the water from one of the glasses into a glass of different shape so that the amount appears different although it remains the same. The child is then asked if the amount of water in the two glasses is the same or different.

Notwithstanding the different research traditions in which psychometric tests and Piagetian tasks were developed, the correlations between the two types of measures have been found to be consistently positive and generally moderate in magnitude. A common general factor underlies them. It has been shown that it is possible to construct a battery consisting of Piagetian tasks that is as good a measure of g as standard IQ tests.[91][93]

Personality

The traditional view in psychology is that there is no meaningful relationship between personality and intelligence, and that the two should be studied separately. Intelligence can be understood in terms of what an individual can do, or what his or her maximal performance is, while personality can be thought of in terms of what an individual will typically do, or what his or her general tendencies of behavior are. Research has indicated that correlations between measures of intelligence and personality are small, and it has thus been argued that g is a purely cognitive variable that is independent of personality traits. In a 2007 meta-analysis the correlations between g and the “Big Five” personality traits were found to be as follows:

  • conscientiousness -.04
  • agreeableness .00
  • extraversion .02
  • openness .22
  • emotional stability .09

The same meta-analysis found a correlation of .20 between self-efficacy and g.[94][95][96]

Some researchers have argued that the associations between intelligence and personality, albeit modest, are consistent. They have interpreted correlations between intelligence and personality measures in two main ways. The first perspective is that personality traits influence performance on intelligence tests. For example, a person may fail to perform at a maximal level on an IQ test due to his or her anxiety and stress-proneness. The second perspective considers intelligence and personality to be conceptually related, with personality traits determining how people apply and invest their cognitive abilities, leading to knowledge expansion and greater cognitive differentiation.[94][97]

Creativity

Some researchers believe that there is a threshold level of g below which socially significant creativity is rare, but that otherwise there is no relationship between the two. It has been suggested that this threshold is at least one standard deviation above the population mean. Above the threshold, personality differences are believed to be important determinants of individual variation in creativity.[98][99]

Others have challenged the threshold theory. While not disputing that opportunity and personal attributes other than intelligence, such as energy and commitment, are important for creativity, they argue that g is positively associated with creativity even at the high end of the ability distribution. The longitudinal Study of Mathematically Precocious Youth has provided evidence for this contention. It has shown that individuals identified by standardized tests as intellectually gifted in early adolescence accomplish creative achievements (for example, securing patents or publishing literary or scientific works) at several times the rate of the general population, and that even within the top 1 percent of cognitive ability, those with higher ability are more likely to make outstanding achievements. The study has also suggested that the level of g acts as a predictor of the level of achievement, while specific cognitive ability patterns predict the realm of achievement.[100][101]

Challenges

Gf-Gc theory

Raymond Cattell, a student of Charles Spearman’s, rejected the unitary g factor model and divided g into two broad, relatively independent domains: fluid intelligence (Gf) and crystallized intelligence (Gc). Gf is conceptualized as a capacity to figure out novel problems, and it is best assessed with tests with little cultural or scholastic content, such as Raven’s matrices. Gc can be thought of as consolidated knowledge, reflecting the skills and information that an individual acquires and retains throughout his or her life. Gc is dependent on education and other forms of acculturation, and it is best assessed with tests that emphasize scholastic and cultural knowledge.[2][41][102] Gf can be thought to primarily consist of current reasoning and problem solving capabilities, while Gc reflects the outcome of previously executed cognitive processes.[103]

The rationale for the separation of Gf and Gc was to explain individuals’ cognitive development over time. While Gf and Gc have been found to be highly correlated, they differ in the way they change over a lifetime. Gf tends to peak at around age 20, slowly declining thereafter. In contrast, Gc is stable or increases across adulthood. A single general factor has been criticized as obscuring this bifurcated pattern of development. Cattell argued that Gf reflected individual differences in the efficiency of the central nervous system. Gc was, in Cattell’s thinking, the result of a person “investing” his or her Gf in learning experiences throughout life.[2][28][41][104]

Cattell, together with John Horn, later expanded the Gf-Gc model to include a number of other broad abilities, such as Gq (quantitative reasoning) and Gv (visual-spatial reasoning). While all the broad ability factors in the extended Gf-Gc model are positively correlated and thus would enable the extraction of a higher order g factor, Cattell and Horn maintained that it would be erroneous to posit that a general factor underlies these broad abilities. They argued that g factors computed from different test batteries are not invariant and would give different values of g, and that the correlations among tests arise because it is difficult to test just one ability at a time.[2][45][105]

However, several researchers have suggested that the Gf-Gc model is compatible with a g-centered understanding of cognitive abilities. For example, John B. Carroll’s three-stratum model of intelligence includes both Gf and Gc together with a higher-order g factor. Based on factor analyses of many data sets, some researchers have also argued that Gf and g are one and the same factor and that g factors from different test batteries are substantially invariant provided that the batteries are large and diverse.[41][106][107]

Theories of uncorrelated abilities

Several theorists have proposed that there are intellectual abilities that are uncorrelated with each other. Among the earliest was L.L. Thurstone, who created a model of primary mental abilities representing supposedly independent domains of intelligence. However, Thurstone’s tests of these abilities were found to produce a strong general factor. He argued that the lack of independence among his tests reflected the difficulty of constructing “factorially pure” tests that measured just one ability. Similarly, J.P. Guilford proposed a model of intelligence that comprised up to 180 distinct, uncorrelated abilities, and claimed to be able to test all of them. Later analyses have shown that the factorial procedures Guilford presented as evidence for his theory did not provide support for it, and that the test data that he claimed provided evidence against g did in fact exhibit the usual pattern of intercorrelations after correction for statistical artifacts.[108][109]

More recently, Howard Gardner has developed the theory of multiple intelligences. He posits the existence of nine different and independent domains of intelligence, such as mathematical, linguistic, spatial, musical, bodily-kinesthetic, meta-cognitive, and existential intelligences, and contends that individuals who fail in some of them may excel in others. According to Gardner, tests and schools traditionally emphasize only linguistic and logical abilities while neglecting other forms of intelligence. While popular among educationalists, Gardner’s theory has been much criticized by psychologists and psychometricians. One criticism is that the theory does violence to both scientific and everyday usages of the word “intelligence.” Several researchers have argued that not all of Gardner’s intelligences fall within the cognitive sphere. For example, Gardner contends that a successful career in professional sports or popular music reflects bodily-kinesthetic intelligence and musical intelligence, respectively, even though one might usually talk of athletic and musical skills, talents, or abilities instead. Another criticism of Gardner’s theory is that many of his purportedly independent domains of intelligence are in fact correlated with each other. Responding to empirical analyses showing correlations between the domains, Gardner has argued that the correlations exist because of the common format of tests and because all tests require linguistic and logical skills. His critics have in turn pointed out that not all IQ tests are administered in the paper-and-pencil format, that aside from linguistic and logical abilities, IQ test batteries also contain measures of, for example, spatial abilities, and that elementary cognitive tasks (for example, inspection time and reaction time) that do not involve linguistic or logical reasoning correlate with conventional IQ batteries, too.[62][110][111][112]

Robert Sternberg, working with various colleagues, has also suggested that intelligence has dimensions independent of g. He argues that there are three classes of intelligence: analytic, practical, and creative. According to Sternberg, traditional psychometric tests measure only analytic intelligence, and should be augmented to test creative and practical intelligence as well. He has devised several tests to this effect. Sternberg equates analytic intelligence with academic intelligence, and contrasts it with practical intelligence, defined as an ability to deal with ill-defined real-life problems. Tacit knowledge is an important component of practical intelligence, consisting of knowledge that is not explicitly taught but is required in many real-life situations. Assessing creativity independent of intelligence tests has traditionally proved difficult, but Sternberg and colleagues have claimed to have created valid tests of creativity, too. The validation of Sternberg’s theory requires that the three abilities tested are substantially uncorrelated and have independent predictive validity. Sternberg has conducted many experiments which he claims confirm the validity of his theory, but several researchers have disputed this conclusion. For example, in his reanalysis of a validation study of Sternberg’s STAT test, Nathan Brody showed that the predictive validity of the STAT, a test of three allegedly independent abilities, was almost solely due to a single general factor underlying the tests, which Brody equated with the g factor.[113][114]

Flynn’s model

James Flynn has argued that intelligence should be conceptualized at three different levels: brain physiology, cognitive differences between individuals, and social trends in intelligence over time. According to this model, the g factor is a useful concept with respect to individual differences but its explanatory power is limited when the focus of investigation is either brain physiology, or, especially, the effect of social trends on intelligence. Flynn has criticized the notion that cognitive gains over time, or the Flynn effect, are “hollow” if they cannot be shown to be increases in g. He argues that the Flynn effect reflects shifting social priorities and individuals’ adaptation to them. To apply the individual differences concept of g to the Flynn effect is to confuse different levels of analysis. On the other hand, according to Flynn, it is also fallacious to deny, by referring to trends in intelligence over time, that some individuals have “better brains and minds” to cope with the cognitive demands of their particular time. At the level of brain physiology, Flynn has emphasized both that localized neural clusters can be affected differently by cognitive exercise, and that there are important factors that affect all neural clusters.[115]

Other criticisms

Perhaps the most famous critique of the construct of g is that of paleontologist and biologist Stephen Jay Gould, presented in his 1981 book The Mismeasure of Man. He argued that psychometricians have fallaciously reified the g factor as a physical thing in the brain, even though it is simply the product of statistical calculations (i.e., factor analysis). He further noted that it is possible to produce factor solutions of cognitive test data that do not contain a g factor yet explain the same amount of information as solutions that yield a g. According to Gould, there is no rationale for preferring one factor solution to another, and factor analysis therefore does not lend support to the existence of an entity like g. More generally, Gould criticized the g theory for abstracting intelligence as a single entity and for ranking people “in a single series of worthiness”, arguing that such rankings are used to justify the oppression of disadvantaged groups.[34][116]

Many researchers have criticized Gould’s arguments. For example, they have rejected the accusation of reification, maintaining that the use of extracted factors such as g as potential causal variables whose reality can be supported or rejected by further investigations constitutes a normal scientific practice that in no way distinguishes psychometrics from other sciences. Critics have also suggested that Gould did not understand the purpose of factor analysis, and that he was ignorant of relevant methodological advances in the field. While different factor solutions may be mathematically equivalent in their ability to account for intercorrelations among tests, solutions that yield a g factor are psychologically preferable for several reasons extrinsic to factor analysis, including the phenomenon of the positive manifold, the fact that the same g can emerge from quite different test batteries, the widespread practical validity of g, and the linkage of g to many biological variables.[34][35][117]
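The "positive manifold" mentioned above has a purely mechanical consequence that can be made concrete with a toy example. Assuming a hypothetical matrix of all-positive intercorrelations among four tests (the numbers below are illustrative, not real data), the first principal axis necessarily has loadings of uniform sign, which is the mathematical sense in which a general factor can always be extracted from such data; this sketch illustrates that fact, not the substantive dispute between Gould and his critics:

```python
# Hypothetical correlation matrix for four cognitive tests; all
# intercorrelations are positive (a "positive manifold").
corr = [
    [1.00, 0.50, 0.40, 0.45],
    [0.50, 1.00, 0.35, 0.40],
    [0.40, 0.35, 1.00, 0.30],
    [0.45, 0.40, 0.30, 1.00],
]

def first_principal_axis(m, iters=200):
    """Dominant (unit-norm) eigenvector of a symmetric matrix,
    found by simple power iteration."""
    v = [1.0] * len(m)
    for _ in range(iters):
        w = [sum(m[i][j] * v[j] for j in range(len(m))) for i in range(len(m))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

loadings = first_principal_axis(corr)
# With all intercorrelations positive, every test loads positively
# on the first axis -- the pattern interpreted as a general factor.
```

By the Perron-Frobenius theorem, a matrix with all-positive entries always has a dominant eigenvector with entries of one sign; the debate summarized above is about how this statistical regularity should be interpreted, not about whether it occurs.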

John Horn and John McArdle have argued that the modern g theory, as espoused by, for example, Arthur Jensen, is unfalsifiable, because the existence of a common factor like g follows tautologically from positive correlations among tests. They contrasted the modern hierarchical theory of g with Spearman’s original two-factor theory which was readily falsifiable (and indeed was falsified).[28]

Theory of multiple intelligences

The theory of multiple intelligences is a theory of intelligence that differentiates it into specific (primarily sensory) “modalities”, rather than seeing intelligence as dominated by a single general ability. This model was proposed by Howard Gardner in his 1983 book Frames of Mind: The Theory of Multiple Intelligences. Gardner articulated eight criteria for a behavior to be considered an intelligence.[1] These were that the intelligences showed: potential for brain isolation by brain damage, a place in evolutionary history, presence of core operations, susceptibility to encoding (symbolic expression), a distinct developmental progression, the existence of savants, prodigies and other exceptional people, support from experimental psychology, and support from psychometric findings.

Gardner chose eight abilities that he held to meet these criteria:[2] musical–rhythmic, visual–spatial, verbal–linguistic, logical–mathematical, bodily–kinesthetic, interpersonal, intrapersonal, and naturalistic. He later suggested that existential and moral intelligence may also be worthy of inclusion.[3] Although the distinction between intelligences has been set out in great detail, Gardner opposes the idea of labeling learners as having a specific intelligence. Each individual possesses a unique blend of all the intelligences. Gardner firmly maintains that his theory of multiple intelligences should “empower learners”, not restrict them to one modality of learning.[4]

Gardner argues that intelligence comprises three primary or overarching capacities, which the specific abilities give expression to. According to Gardner, intelligence is: 1) the ability to create an effective product or offer a service that is valued in a culture; 2) a set of skills that make it possible for a person to solve problems in life; and 3) the potential for finding or creating solutions to problems, which involves gathering new knowledge.[5]

According to a 2006 study, many of Gardner’s “intelligences” correlate with the g factor, supporting the idea of a single dominant type of intelligence. According to the study, each of the domains proposed by Gardner involved a blend of g, cognitive abilities other than g, and, in some cases, non-cognitive abilities or personality characteristics.[6] Empirical support for non-g intelligences is lacking or very poor. Despite this, the idea of multiple non-g intelligences remains attractive to many because it suggests that everyone can be smart in some way.[7] Cognitive neuroscience research does not support the theory of multiple intelligences.

Intelligence modalities

Musical–rhythmic and harmonic

Main article: Musicality

This area has to do with sensitivity to sounds, rhythms, tones, and music. People with a high musical intelligence normally have good pitch and may even have absolute pitch, and are able to sing, play musical instruments, and compose music. They have sensitivity to rhythm, pitch, meter, tone, melody or timbre.[8][9]

Visual–spatial

This area deals with spatial judgment and the ability to visualize with the mind’s eye. Spatial ability is one of the three factors beneath g in the hierarchical model of intelligence.[9]

Verbal–linguistic

People with high verbal-linguistic intelligence display a facility with words and languages. They are typically good at reading, writing, telling stories and memorizing words along with dates.[9] Verbal ability is one of the most g-loaded abilities.[10] This type of intelligence is measured with the Verbal IQ in WAIS-III.

Logical–mathematical

Further information: Reason

This area has to do with logic, abstractions, reasoning, numbers and critical thinking.[9] This also has to do with having the capacity to understand the underlying principles of some kind of causal system.[8] Logical reasoning is closely linked to fluid intelligence and to general intelligence (g factor).[11]

Bodily–kinesthetic

Further information: Gross motor skill and Fine motor skill

The core elements of the bodily-kinesthetic intelligence are control of one’s bodily motions and the capacity to handle objects skillfully.[9] Gardner elaborates that this also includes a sense of timing, a clear sense of the goal of a physical action, and the ability to train responses.

People who have high bodily-kinesthetic intelligence should be generally good at physical activities such as sports, dance, acting, and making things.

Gardner believes that careers that suit those with high bodily-kinesthetic intelligence include: athletes, dancers, musicians, actors, builders, police officers, and soldiers. Although these careers can be simulated virtually, Gardner holds that the simulation will not produce the actual physical learning that this intelligence requires.[12]

Interpersonal

Main article: Social skills

This area has to do with interaction with others.[9] In theory, individuals who have high interpersonal intelligence are characterized by their sensitivity to others’ moods, feelings, temperaments and motivations, and their ability to cooperate in order to work as part of a group. According to Gardner in How Are Kids Smart: Multiple Intelligences in the Classroom, “Inter- and Intra- personal intelligence is often misunderstood with being extroverted or liking other people…”[13] Those with high interpersonal intelligence communicate effectively and empathize easily with others, and may be either leaders or followers. They often enjoy discussion and debate.

Gardner believes that careers that suit those with high interpersonal intelligence include sales persons, politicians, managers, teachers, counselors and social workers.[14]

Intrapersonal

Further information: Introspection

This area has to do with introspective and self-reflective capacities. It refers to having a deep understanding of the self: knowing one’s strengths and weaknesses, understanding what makes one unique, and being able to predict one’s own reactions and emotions.

Naturalistic

This area has to do with nurturing and relating information to one’s natural surroundings.[9] Examples include classifying natural forms such as animal and plant species and rocks and mountain types. This ability was clearly of value in our evolutionary past as hunters, gatherers, and farmers; it continues to be central in such roles as botanist or chef.[8] This sort of ecological receptiveness is deeply rooted in a “sensitive, ethical, and holistic understanding” of the world and its complexities, including the role of humanity within the greater ecosphere.[15]

Existential

Some proponents of multiple intelligence theory proposed spiritual or religious intelligence as a possible additional type. Gardner did not want to commit to a spiritual intelligence, but suggested that an “existential” intelligence may be a useful construct.[16] The hypothesis of an existential intelligence has been further explored by educational researchers.[17]

Critical reception

Gardner argues that there is a wide range of cognitive abilities, but that there are only very weak correlations among them. For example, the theory postulates that a child who learns to multiply easily is not necessarily more intelligent than a child who has more difficulty on this task. The child who takes more time to master multiplication may best learn to multiply through a different approach, may excel in a field outside mathematics, or may be looking at and understanding the multiplication process at a fundamentally deeper level.

Intelligence tests and psychometrics have generally found high correlations between different aspects of intelligence, rather than the low correlations which Gardner’s theory predicts, supporting the prevailing theory of general intelligence rather than multiple intelligences (MI).[18] The theory has been widely criticized by mainstream psychology for its lack of empirical evidence, and its dependence on subjective judgement.[19]

Definition of intelligence

One major criticism of the theory is that it is ad hoc: that Gardner is not expanding the definition of the word “intelligence”, but rather denies the existence of intelligence as traditionally understood, and instead uses the word “intelligence” where other people have traditionally used words like “ability” and “aptitude”. This practice has been criticized by Robert J. Sternberg,[20][21] Eysenck,[22] and Scarr.[23] White (2006) points out that Gardner’s selection and application of criteria for his “intelligences” is subjective and arbitrary, and that a different researcher would likely have come up with different criteria.[24]

Defenders of MI theory argue that the traditional definition of intelligence is too narrow, and thus a broader definition more accurately reflects the differing ways in which humans think and learn.[25]

Some criticisms arise from the fact that Gardner has not provided a test of his multiple intelligences. He originally defined it as the ability to solve problems that have value in at least one culture, or as something that a student is interested in. He then added a disclaimer that he has no fixed definition, and his classification is more of an artistic judgment than fact:

Ultimately, it would certainly be desirable to have an algorithm for the selection of an intelligence, such that any trained researcher could determine whether a candidate’s intelligence met the appropriate criteria. At present, however, it must be admitted that the selection (or rejection) of a candidate’s intelligence is reminiscent more of an artistic judgment than of a scientific assessment.[26]

Generally, linguistic and logical-mathematical abilities are called intelligences, but artistic, musical, athletic, etc. abilities are not. Gardner argues this causes the former to be needlessly aggrandized. Certain critics balk at this widening of the definition, saying that it ignores “the connotation of intelligence … [which] has always connoted the kind of thinking skills that makes one successful in school.”[27]

Gardner writes “I balk at the unwarranted assumption that certain human abilities can be arbitrarily singled out as intelligence while others cannot.”[28] Critics hold that given this statement, any interest or ability can be redefined as “intelligence”. Thus, studying intelligence becomes difficult, because it diffuses into the broader concept of ability or talent. Gardner’s addition of the naturalistic intelligence and conceptions of the existential and moral intelligences are seen as fruits of this diffusion. Defenders of the MI theory would argue that this is simply a recognition of the broad scope of inherent mental abilities, and that such an exhaustive scope by nature defies a one-dimensional classification such as an IQ value.

The theory and definitions have been critiqued by Perry D. Klein as being so unclear as to be tautologous and thus unfalsifiable: having high musical ability means being good at music, while being good at music is in turn explained by having high musical ability.[29]

Neo-Piagetian criticism

Andreas Demetriou suggests that theories which overemphasize the autonomy of the domains are as simplistic as the theories that overemphasize the role of general intelligence and ignore the domains. He agrees with Gardner that there are indeed domains of intelligence that are relatively autonomous of each other.[30] Some of the domains, such as verbal, spatial, mathematical, and social intelligence are identified by most lines of research in psychology. In Demetriou’s theory, one of the neo-Piagetian theories of cognitive development, Gardner is criticized for underestimating the effects exerted on the various domains of intelligences by the various subprocesses that define overall processing efficiency, such as speed of processing, executive functions, working memory, and meta-cognitive processes underlying self-awareness and self-regulation. All of these processes are integral components of general intelligence that regulate the functioning and development of different domains of intelligence.[31]

The domains are to a large extent expressions of the condition of the general processes, and may vary because of their constitutional differences but also differences in individual preferences and inclinations. Their functioning both channels and influences the operation of the general processes.[32][33] Thus, one cannot satisfactorily specify the intelligence of an individual or design effective intervention programs unless both the general processes and the domains of interest are evaluated.[34][35]

IQ tests

Gardner argues that IQ tests only measure linguistic and logical-mathematical abilities, and he stresses the importance of assessing in an “intelligence-fair” manner. While traditional paper-and-pen examinations favour linguistic and logical skills, he argues there is a need for intelligence-fair measures that value the distinct modalities of thinking and learning that uniquely define each intelligence.[9]

Psychologist Alan S. Kaufman points out that IQ tests have measured spatial abilities for 70 years.[36] Modern IQ tests are greatly influenced by the Cattell-Horn-Carroll theory which incorporates a general intelligence but also many more narrow abilities. While IQ tests do give an overall IQ score, they now also give scores for many more narrow abilities.[36]

Lack of empirical evidence

According to a 2006 study, many of Gardner’s “intelligences” correlate with the g factor, supporting the idea of a single dominant type of intelligence. According to the study, each of the domains proposed by Gardner involved a blend of g, of cognitive abilities other than g, and, in some cases, of non-cognitive abilities or of personality characteristics.[6]

Linda Gottfredson (2006) has argued that thousands of studies support the importance of intelligence quotient (IQ) in predicting school and job performance, and numerous other life outcomes. In contrast, empirical support for non-g intelligences is lacking or very poor. She argued that, despite this, the ideas of multiple non-g intelligences are very attractive to many due to the suggestion that everyone can be smart in some way.[7]

A critical review of MI theory argues that there is little empirical evidence to support it:

To date there have been no published studies that offer evidence of the validity of the multiple intelligences. In 1994 Sternberg reported finding no empirical studies. In 2000 Allix reported finding no empirical validating studies, and at that time Gardner and Connell conceded that there was “little hard evidence for MI theory” (2000, p. 292). In 2004 Sternberg and Grigorenko stated that there were no validating studies for multiple intelligences, and in 2004 Gardner asserted that he would be “delighted were such evidence to accrue”,[37] and admitted that “MI theory has few enthusiasts among psychometricians or others of a traditional psychological background” because they require “psychometric or experimental evidence that allows one to prove the existence of the several intelligences.”[37][38]

The same review presents evidence to demonstrate that cognitive neuroscience research does not support the theory of multiple intelligences:

… the human brain is unlikely to function via Gardner’s multiple intelligences. Taken together the evidence for the intercorrelations of subskills of IQ measures, the evidence for a shared set of genes associated with mathematics, reading, and g, and the evidence for shared and overlapping “what is it?” and “where is it?” neural processing pathways, and shared neural pathways for language, music, motor skills, and emotions suggest that it is unlikely that each of Gardner’s intelligences could operate “via a different set of neural mechanisms” (1999, p. 99). Equally important, the evidence for the “what is it?” and “where is it?” processing pathways, for Kahneman’s two decision-making systems, and for adapted cognition modules suggests that these cognitive brain specializations have evolved to address very specific problems in our environment. Because Gardner claimed that the intelligences are innate potentialities related to a general content area, MI theory lacks a rationale for the phylogenetic emergence of the intelligences.[38]

The theory of multiple intelligences has often been conflated with learning styles. Gardner has denied that multiple intelligences are learning styles and agrees that the idea of learning styles is incoherent and lacking in empirical evidence.[39] The theory of multiple intelligences is often cited as an example of pseudoscience because it lacks empirical evidence or falsifiability.[40][41]

Use in education

Gardner defines an intelligence as “biopsychological potential to process information that can be activated in a cultural setting to solve problems or create products that are of value in a culture.”[42] According to Gardner, there are more ways to do this than just through logical and linguistic intelligence. Gardner believes that the purpose of schooling “should be to develop intelligences and to help people reach vocational and avocational goals that are appropriate to their particular spectrum of intelligences. People who are helped to do so, [he] believe[s], feel more engaged and competent and therefore more inclined to serve society in a constructive way.”[a]

Gardner contends that IQ tests focus mostly on logical and linguistic intelligence. Students who do well on these tests have a better chance of attending a prestigious college or university, which in turn creates contributing members of society.[43] While many students function well in this environment, there are those who do not. Gardner’s theory argues that students will be better served by a broader vision of education, wherein teachers use different methodologies, exercises and activities to reach all students, not just those who excel at linguistic and logical intelligence. It challenges educators to find “ways that will work for this student learning this topic”.[44]

James Traub’s article in The New Republic notes that Gardner’s system has not been accepted by most academics in intelligence or teaching.[45] Gardner states that “while Multiple Intelligences theory is consistent with much empirical evidence, it has not been subjected to strong experimental tests … Within the area of education, the applications of the theory are currently being examined in many projects. Our hunches will have to be revised many times in light of actual classroom experience.”[46]

George Miller, a prominent cognitive psychologist, wrote in The New York Times Book Review that Gardner’s argument consisted of “hunch and opinion”. Jerome Bruner called Gardner’s “intelligences” “at best useful fictions,” and Charles Murray and Richard J. Herrnstein in The Bell Curve (1994) called Gardner’s theory “uniquely devoid of psychometric or other quantitative evidence.”[47]

Thomas Armstrong argues that Waldorf education engages all of Gardner’s original seven intelligences.[b] In spite of its lack of general acceptance in the psychological community, Gardner’s theory has been adopted by many schools, where it is often used to underpin discussion about learning styles,[48] and hundreds of books have been written about its applications in education.[49] Gardner himself has said he is “uneasy” with the way his theory has been used in education.[50]

See also

Intelligence quotient

Intelligence quotient
An example of one kind of IQ test item, modeled after items in the Raven’s Progressive Matrices test

An intelligence quotient, or IQ, is a score derived from one of several standardized tests designed to assess human intelligence. The abbreviation “IQ” was coined by the psychologist William Stern for the German term Intelligenz-quotient, his term for a scoring method for intelligence tests he advocated in a 1912 book.[1] When current IQ tests are developed, the median raw score of the norming sample is defined as IQ 100, and each standard deviation (SD) above or below the median is defined as 15 IQ points higher or lower, although this was not always so historically.[2] By this definition, approximately 95 percent of the population scores an IQ between 70 and 130, which is within two standard deviations of the mean.

IQ scores have been shown to be associated with such factors as morbidity and mortality,[3][4] parental social status,[5] and, to a substantial degree, biological parental IQ. While the heritability of IQ has been investigated for nearly a century, there is still debate about the significance of heritability estimates[6][7] and the mechanisms of inheritance.[8]

IQ scores are used as predictors of educational achievement, special needs, job performance and income. They are also used to study IQ distributions in populations and the correlations between IQ and other variables. Raw scores on IQ tests for many populations have been rising at an average rate of around three IQ points per decade since the early 20th century, a phenomenon called the Flynn effect. Investigation of different patterns of increases in subtest scores can also inform current research on human intelligence.

History

Early history

French psychologist Alfred Binet was one of the key developers of what later became known as the Stanford–Binet test.

The English statistician Francis Galton made the first attempt at creating a standardized test for rating a person’s intelligence. A pioneer of psychometrics and the application of statistical methods to the study of human diversity and the study of inheritance of human traits, he believed that intelligence was largely a product of heredity (by which he did not mean genes, although he did develop several pre-Mendelian theories of particulate inheritance).[9][10][11] He hypothesized that there should exist a correlation between intelligence and other desirable traits like good reflexes, muscle grip, and head size.[12] He set up the first mental testing centre in the world in 1882 and he published “Inquiries into Human Faculty and Its Development” in 1883, in which he set out his theories. After gathering data on a variety of physical variables, he was unable to show any such correlation, and he eventually abandoned this research.[13][14]

French psychologist Alfred Binet, together with Victor Henri and Théodore Simon, had more success in 1905, when they published the Binet-Simon test, which focused on verbal abilities. It was intended to identify mental retardation in school children,[13] in specific contradistinction to claims made by psychiatrists that these children were “sick” (not “slow”) and should therefore be removed from school and cared for in asylums.[15] The score on the Binet-Simon scale would reveal the child’s mental age. For example, a six-year-old child who passed all the tasks usually passed by six-year-olds—but nothing beyond—would have a mental age that matched his chronological age, 6.0 (Fancher, 1985). Binet thought that intelligence was multifaceted, but came under the control of practical judgement.
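The mental-age scoring described above, combined with Stern’s quotient and Terman’s scaling by 100, can be sketched as follows (an illustrative calculation, not taken from any of the cited sources):

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Stern's ratio IQ: mental age divided by chronological age,
    multiplied by 100 (the scaling later adopted by Terman)."""
    return 100.0 * mental_age / chronological_age

# A six-year-old who passes exactly the tasks typical six-year-olds pass:
print(ratio_iq(6.0, 6.0))  # 100.0
# A six-year-old performing at the level of a typical nine-year-old:
print(ratio_iq(9.0, 6.0))  # 150.0
```

This ratio definition was later abandoned in favour of the deviation IQ used by modern tests.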

In Binet’s view, there were limitations with the scale and he stressed what he saw as the remarkable diversity of intelligence and the subsequent need to study it using qualitative, as opposed to quantitative, measures (White, 2000). American psychologist Henry H. Goddard published a translation of it in 1910. American psychologist Lewis Terman at Stanford University revised the Binet-Simon scale, which resulted in the Stanford-Binet Intelligence Scales (1916). It became the most popular test in the United States for decades.[13][16][17][18]

General factor (g)

An illustration of Spearman’s two-factor intelligence theory. Each small oval is a hypothetical mental test. The blue areas correspond to test-specific variance (s), while the purple areas represent the variance attributed to g.

Main article: g factor

The many different kinds of IQ tests use a wide variety of methods. Some tests are visual, some are verbal, some tests only use abstract-reasoning problems, and some tests concentrate on arithmetic, spatial imagery, reading, vocabulary, memory or general knowledge.

The British psychologist Charles Spearman in 1904 made the first formal factor analysis of correlations between the tests. He observed that children’s performance ratings across seemingly unrelated school subjects were positively correlated, and reasoned that these correlations reflected the influence of an underlying general mental ability that entered into performance on all kinds of mental tests. He suggested that all mental performance could be conceptualized in terms of a single general ability factor and a large number of narrow task-specific ability factors. Spearman named it g for “general factor” and labeled the specific factors or abilities for specific tasks s. In any collection of IQ tests, by definition the test that best measures g is the one that has the highest correlations with all the others. Most of these g-loaded tests typically involve some form of abstract reasoning. Therefore, Spearman and others have regarded g as the essence of intelligence.
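Spearman’s two-factor decomposition can be illustrated with a small simulation: each hypothetical test score mixes a shared factor g with test-specific noise s, and the resulting scores show the all-positive correlations (the “positive manifold”) that motivated the model. The loadings below are arbitrary illustrative values:

```python
import random

random.seed(1)
n = 50_000

# Latent general ability g, standardized.
g = [random.gauss(0, 1) for _ in range(n)]

# Each test = loading * g + specific factor s (variances sum to 1).
tests = []
for loading in (0.8, 0.6, 0.5):  # hypothetical g-loadings
    s_sd = (1 - loading ** 2) ** 0.5
    tests.append([loading * gi + random.gauss(0, s_sd) for gi in g])

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    sx = (sum((a - mx) ** 2 for a in x) / len(x)) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / len(y)) ** 0.5
    return cov / (sx * sy)

# Under the model, the expected correlation between two tests is the
# product of their g-loadings, e.g. 0.8 * 0.6 = 0.48 for the first pair.
print(round(corr(tests[0], tests[1]), 2))
```

The test with the highest loading correlates most strongly with all the others, which is what “best measures g” means operationally.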

This argument is still accepted in principle by many psychometricians. Today’s factor models of intelligence typically represent cognitive abilities as a three-level hierarchy, where there are a large number of narrow factors at the bottom of the hierarchy, a handful of broad, more general factors at the intermediate level, and at the apex a single factor, referred to as the g factor, which represents the variance common to all cognitive tasks.

However, this view is not universally accepted; other factor analyses of the data, with different results, are possible. Some psychometricians regard g as a statistical artifact. One of the most commonly used measures of g is Raven’s Progressive Matrices, which is a test of visual reasoning.[2][13]

The war years in the United States

During World War I, a way was needed to evaluate and assign recruits. This led to the rapid development of several mental tests. The testing generated controversy and much public debate in the United States. Nonverbal or “performance” tests were developed for those who could not speak English or were suspected of malingering.[13] After the war, positive publicity on army psychological testing helped to make psychology a respected field.[19] Subsequently, there was an increase in jobs and funding in psychology in the United States.[20] Group intelligence tests were developed and became widely used in schools and industry.[21]

L.L. Thurstone argued for a model of intelligence comprising seven unrelated factors (verbal comprehension, word fluency, number facility, spatial visualization, associative memory, perceptual speed, and inductive reasoning). While not widely used, it influenced later theories.[13]

David Wechsler produced the first version of his test in 1939. It gradually became more popular and overtook the Binet in the 1960s. It has been revised several times, as is common for IQ tests, to incorporate new research. One explanation is that psychologists and educators wanted more information than the single score from the Binet; Wechsler’s 10+ subtests provided this. Another is that the Binet focused on verbal abilities, while the Wechsler also included nonverbal abilities. The Binet has also been revised several times and is now similar to the Wechsler in several aspects, but the Wechsler continues to be the most popular test in the United States.[13]

Cattell–Horn–Carroll theory

Psychologist Raymond Cattell defined fluid and crystallized intelligence and authored the Cattell Culture Fair III IQ test.

Raymond Cattell (1941) proposed two types of cognitive abilities in a revision of Spearman’s concept of general intelligence. Fluid intelligence (Gf) was hypothesized as the ability to solve novel problems by using reasoning, and crystallized intelligence (Gc) was hypothesized as a knowledge-based ability that was very dependent on education and experience. In addition, fluid intelligence was hypothesized to decline with age, while crystallized intelligence was largely resistant to aging. The theory was almost forgotten, but was revived by his student John L. Horn (1966), who later argued Gf and Gc were only two among several factors, and he eventually identified 9 or 10 broad abilities. The theory continued to be called Gf-Gc theory.[13]

John B. Carroll (1993), after a comprehensive reanalysis of earlier data, proposed the three stratum theory, which is a hierarchical model with three levels. The bottom stratum consists of narrow abilities that are highly specialized (e.g., induction, spelling ability). The second stratum consists of broad abilities. Carroll identified eight second-stratum abilities. Carroll accepted Spearman’s concept of general intelligence, for the most part, as a representation of the uppermost, third stratum.[22][23]

In 1999, the merging of the Gf-Gc theory of Cattell and Horn with Carroll’s three-stratum theory led to the Cattell–Horn–Carroll theory. It has greatly influenced many of the current broad IQ tests.[13]

It is argued that this reflects much of what is known about intelligence from research. A hierarchy of factors is used; g is at the top. Under it are 10 broad abilities that in turn are subdivided into 70 narrow abilities. The broad abilities are:[13]

  • Fluid intelligence (Gf) includes the broad ability to reason, form concepts, and solve problems using unfamiliar information or novel procedures.
  • Crystallized intelligence (Gc) includes the breadth and depth of a person’s acquired knowledge, the ability to communicate one’s knowledge, and the ability to reason using previously learned experiences or procedures.
  • Quantitative reasoning (Gq) is the ability to comprehend quantitative concepts and relationships and to manipulate numerical symbols.
  • Reading and writing ability (Grw) includes basic reading and writing skills.
  • Short-term memory (Gsm) is the ability to apprehend and hold information in immediate awareness, and then use it within a few seconds.
  • Long-term storage and retrieval (Glr) is the ability to store information and fluently retrieve it later in the process of thinking.
  • Visual processing (Gv) is the ability to perceive, analyze, synthesize, and think with visual patterns, including the ability to store and recall visual representations.
  • Auditory processing (Ga) is the ability to analyze, synthesize, and discriminate auditory stimuli, including the ability to process and discriminate speech sounds that may be presented under distorted conditions.
  • Processing speed (Gs) is the ability to perform automatic cognitive tasks, particularly when measured under pressure to maintain focused attention.
  • Decision/reaction time/speed (Gt) reflects the immediacy with which an individual can react to stimuli or a task (typically measured in seconds or fractions of seconds; it is not to be confused with Gs, which typically is measured in intervals of 2–3 minutes). See Mental chronometry.

Modern tests do not necessarily measure all of these broad abilities. For example, Gq and Grw may be seen as measures of school achievement and not IQ.[13] Gt may be difficult to measure without special equipment.

g was earlier often subdivided into only Gf and Gc, which were thought to correspond to the nonverbal or performance subtests and verbal subtests in earlier versions of the popular Wechsler IQ test. More recent research has shown the situation to be more complex.[13]

Modern comprehensive IQ tests no longer give only a single score. Although they still give an overall score, they now also give scores for many of these more restricted abilities, identifying particular strengths and weaknesses of an individual.[13]

Other theories

J.P. Guilford’s Structure of Intellect (1967) model used three dimensions which, when combined, yielded a total of 120 types of intelligence. It was popular in the 1970s and early 1980s, but faded owing to both practical problems and theoretical criticisms.[13]

Alexander Luria’s earlier work on neuropsychological processes led to the PASS theory (1997). It argued that only looking at one general factor was inadequate for researchers and clinicians who worked with learning disabilities, attention disorders, intellectual disability, and interventions for such disabilities. The PASS model covers four kinds of processes (planning process, attention/arousal process, simultaneous processing, and successive processing). The planning processes involve decision making, problem solving, and performing activities, and require goal setting and self-monitoring. The attention/arousal process involves selectively attending to a particular stimulus, ignoring distractions, and maintaining vigilance. Simultaneous processing involves the integration of stimuli into a group and requires the observation of relationships. Successive processing involves the integration of stimuli into serial order. The planning and attention/arousal components come from structures located in the frontal lobe, and the simultaneous and successive processes come from structures located in the posterior region of the cortex.[24][25][26] It has influenced some recent IQ tests, and been seen as a complement to the Cattell-Horn-Carroll theory described above.[13]

Current tests

Normalized IQ distribution with mean 100 and standard deviation 15.

There are a variety of individually administered IQ tests in use in the English-speaking world.[27][28] The most commonly used individual IQ test series is the Wechsler Adult Intelligence Scale for adults and the Wechsler Intelligence Scale for Children for school-age test-takers. Other commonly used individual IQ tests (some of which do not label their standard scores as “IQ” scores) include the current versions of the Stanford-Binet, Woodcock-Johnson Tests of Cognitive Abilities, the Kaufman Assessment Battery for Children, the Cognitive Assessment System, and the Differential Ability Scales.

Approximately 95% of the population have scores within two standard deviations (SD) of the mean. If one SD is 15 points, as is common in almost all modern tests, then 95% of the population are within a range of 70 to 130, and 98% are below 131. Alternatively, two-thirds of the population have IQ scores within one SD of the mean; i.e. within the range 85–115.
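These percentages follow directly from the normal distribution with mean 100 and SD 15; a quick check (illustrative only):

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)  # modern deviation-IQ norming

within_2sd = iq.cdf(130) - iq.cdf(70)  # fraction between 70 and 130
within_1sd = iq.cdf(115) - iq.cdf(85)  # fraction between 85 and 115

print(round(within_2sd, 3))  # 0.954
print(round(within_1sd, 3))  # 0.683
```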

IQ scales are ordinally scaled.[29][30][31][32][33] While one standard deviation is 15 points, and two SDs are 30 points, and so on, this does not imply that mental ability is linearly related to IQ, such that IQ 50 means half the cognitive ability of IQ 100. In particular, IQ points are not percentage points.

The correlation between IQ test results and achievement test results is about 0.7.[13][34]

Reliability and validity

Psychometricians generally regard IQ tests as having high statistical reliability.[35][36] A high reliability implies that—although test-takers may have varying scores when taking the same test on differing occasions, and they may have varying scores when taking different IQ tests at the same age—the scores generally agree with one another and across time. Like all statistical quantities, any particular estimate of IQ has an associated standard error that measures uncertainty about the estimate. For modern tests, the standard error of measurement is about three points. Clinical psychologists generally regard IQ scores as having sufficient statistical validity for many clinical purposes.[13][37][38]
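A standard error of measurement of about three points implies a band of roughly twelve points around any single observed score at the usual 95% level; a sketch (the observed score of 112 is a hypothetical example):

```python
SEM = 3.0        # approximate standard error of measurement
observed = 112   # hypothetical observed score

# Assuming normally distributed measurement error, a 95% band is
# the observed score plus or minus 1.96 standard errors.
low = observed - 1.96 * SEM
high = observed + 1.96 * SEM
print(f"95% band: {low:.1f} to {high:.1f}")  # 106.1 to 117.9
```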

IQ scores can differ to some degree for the same individual on different IQ tests. (IQ score table data and pupil pseudonyms adapted from description of KABC-II norming study cited in Kaufman 2009.[13])
Pupil KABC-II WISC-III WJ-III
Asher 90 95 111
Brianna 125 110 105
Colin 100 93 101
Danica 116 127 118
Elpha 93 105 93
Fritz 106 105 105
Georgi 95 100 90
Hector 112 113 103
Imelda 104 96 97
Jose 101 99 86
Keoku 81 78 75
Leo 116 124 102

Flynn effect

Main article: Flynn effect

Since the early 20th century, raw scores on IQ tests have increased in most parts of the world.[39][40][41] When a new version of an IQ test is normed, the standard scoring is set so performance at the population median results in a score of IQ 100. The phenomenon of rising raw score performance means if test-takers are scored by a constant standard scoring rule, IQ test scores have been rising at an average rate of around three IQ points per decade. This phenomenon was named the Flynn effect in the book The Bell Curve after James R. Flynn, the author who did the most to bring this phenomenon to the attention of psychologists.[42][43]
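At roughly three points per decade, the effect compounds over generations; a linear extrapolation (purely illustrative, since gains vary by country and era):

```python
POINTS_PER_DECADE = 3  # approximate average Flynn-effect gain

def score_against_old_norms(current_score, years_since_norming):
    """Hypothetical score if a test-taker were graded against norms
    set `years_since_norming` years ago, assuming linear gains."""
    return current_score + POINTS_PER_DECADE * years_since_norming / 10

# Someone at the median today (100) measured against 30-year-old norms:
print(score_against_old_norms(100, 30))  # 109.0
```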

Researchers have been exploring the issue of whether the Flynn effect is equally strong on performance of all kinds of IQ test items, whether the effect may have ended in some developed nations, whether there are social subgroup differences in the effect, and what possible causes of the effect might be.[44] A 1998 textbook, IQ and Human Intelligence, by N. J. Mackintosh, noted that before Flynn published his major papers, many psychologists mistakenly believed that there were dysgenic trends gradually reducing the level of intelligence in the general population. They also believed that no environmental factor could possibly have a strong effect on IQ. Mackintosh noted that Flynn’s observations have prompted much new research in psychology and “demolish some long-cherished beliefs, and raise a number of other interesting issues along the way.”[40]

Age

IQ can change to some degree over the course of childhood.[45] However, in one longitudinal study, the mean IQ scores of tests at ages 17 and 18 were correlated at r=0.86 with the mean scores of tests at ages five, six, and seven and at r=0.96 with the mean scores of tests at ages 11, 12, and 13.[36]

For decades practitioners’ handbooks and textbooks on IQ testing have reported IQ declines with age after the beginning of adulthood. However, later researchers pointed out this phenomenon is related to the Flynn effect and is in part a cohort effect rather than a true aging effect.

A variety of studies of IQ and aging have been conducted since the norming of the first Wechsler Intelligence Scale drew attention to IQ differences in different age groups of adults. Current consensus is that fluid intelligence generally declines with age after early adulthood, while crystallized intelligence remains intact. Both cohort effects (the birth year of the test-takers) and practice effects (test-takers taking the same form of IQ test more than once) must be controlled to gain accurate data. It is unclear whether any lifestyle intervention can preserve fluid intelligence into older ages.[46]

The exact peak age of fluid or crystallized intelligence remains elusive. Cross-sectional studies usually show that fluid intelligence in particular peaks at a relatively young age (often in early adulthood), while longitudinal data mostly show that intelligence is stable until mid-adulthood or later. Subsequently, intelligence seems to decline slowly.[47]

Genetics and environment

Environmental and genetic factors play a role in determining IQ. Their relative importance has been the subject of much research and debate.

Heritability

Heritability is defined as the proportion of variance in a trait which is attributable to genotype within a defined population in a specific environment. A number of points must be considered when interpreting heritability.[48] Heritability measures the proportion of ‘variation’ in a trait that can be attributed to genes, and not the proportion of a trait caused by genes. The value of heritability can change if the impact of environment (or of genes) in the population is substantially altered. A high heritability of a trait does not mean environmental effects, such as learning, are not involved. Since heritability increases during childhood and adolescence, one should be cautious drawing conclusions regarding the role of genetics and environment from studies where the participants are not followed until they are adults.
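The variance-decomposition definition can be made concrete with a toy simulation in which a phenotype is the sum of independent genetic and environmental components (equal variances here, chosen arbitrarily; real heritability estimates come from twin and family designs, not from directly observable components):

```python
import random

random.seed(0)
n = 100_000

G = [random.gauss(0, 1) for _ in range(n)]  # genetic component
E = [random.gauss(0, 1) for _ in range(n)]  # environmental component
P = [g + e for g, e in zip(G, E)]           # phenotype = G + E

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Heritability: proportion of phenotypic variance attributable to G.
h2 = var(G) / var(P)
print(round(h2, 2))  # close to 0.5, since Var(G) = Var(E) here
```

Doubling the environmental spread in this toy model would drive h2 toward 0.2, illustrating that heritability depends on the population’s environment, not only on genes.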

Studies have found the heritability of IQ in adult twins to be 0.7 to 0.8, and about 0.45 in child twins, in the Western world.[36][49][50] It may seem reasonable to expect that genetic influences on traits like IQ should become less important as one gains experience with age. However, the opposite occurs. Heritability measures in infancy are as low as 0.2, around 0.4 in middle childhood, and as high as 0.8 in adulthood.[51] One proposed explanation is that people with different genes tend to reinforce the effects of those genes, for example by seeking out different environments.[36] Debate is ongoing about whether these heritability estimates are too high, owing to inadequate consideration of various factors—such as the environment being relatively more important in families with low socioeconomic status, or the effect of the maternal (fetal) environment.

Recent research suggests that molecular genetics of psychology and social science requires approaches that go beyond the examination of candidate genes.[52]

Shared family environment

Family members have aspects of environments in common (for example, characteristics of the home). This shared family environment accounts for 0.25–0.35 of the variation in IQ in childhood. By late adolescence, it is quite low (zero in some studies). The effect for several other psychological traits is similar. These studies have not looked at the effects of extreme environments, such as in abusive families.[36][53][54][55]

Non-shared family environment and environment outside the family

Although parents treat their children differently, such differential treatment explains only a small amount of nonshared environmental influence. One suggestion is that children react differently to the same environment because of different genes. More likely influences may be the impact of peers and other experiences outside the family.[36][54]

Individual genes

A very large proportion of the over 17,000 human genes are thought to have an effect on the development and functionality of the brain.[56] While a number of individual genes have been reported to be associated with IQ, none have a strong effect. Deary and colleagues (2009) reported that no finding of a strong gene effect on IQ has been replicated.[57] Most reported associations of genes with intelligence are false positive results.[58] Recent findings of gene associations with normally varying intelligence differences in adults continue to show weak effects for any one gene;[59] likewise in children.[60]

Gene-environment interaction

David Rowe reported an interaction of genetic effects with socioeconomic status, such that heritability was high in high-SES families but much lower in low-SES families.[61] This has been replicated in infants,[62] children,[63] and adolescents[64] in the US, though not outside the US; for instance, a reverse result was reported in the UK.[65]

Dickens and Flynn (2001) have argued that genes for high IQ initiate environment-shaping feedback, as genetic effects cause bright children to seek out more stimulating environments that further increase IQ. In their model, environment effects decay over time (the model could be adapted to include possible factors, like nutrition in early childhood, that may cause permanent effects). The Flynn effect can be explained by a generally more stimulating environment for all people. The authors suggest that programs aiming to increase IQ would be most likely to produce long-term IQ gains if they caused children to persist in seeking out cognitively demanding experiences.[66][67]
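The feedback idea in the Dickens and Flynn account can be illustrated with a toy simulation. This is a minimal sketch, not their actual published model: the decay and feedback constants and the update rule are assumptions chosen only to show the qualitative behavior they describe, namely that a purely environmental boost fades once removed, while a genetic edge keeps regenerating its own environmental reinforcement.

```python
# Toy sketch (NOT Dickens & Flynn's actual model) of gene-environment
# feedback: ability reflects genes plus environment; the environment
# partially decays each year but is partly re-created by the child's
# own ability (seeking out stimulation).
def simulate(genetic_edge, extra_env_boost, years=20, decay=0.5, feedback=0.3):
    env = extra_env_boost
    for _ in range(years):
        ability = genetic_edge + env
        env = decay * env + feedback * ability  # decay plus self-selection
    return genetic_edge + env

# A genetic edge sustains an enriched environment; a one-off
# environmental intervention with no genetic edge fades away.
print(simulate(genetic_edge=5, extra_env_boost=0))
print(simulate(genetic_edge=0, extra_env_boost=5))
```

With these assumed constants the genetic edge ends up amplified well beyond its starting value, while the purely environmental boost decays toward zero, matching the paper's qualitative claim that environmental effects on IQ are real but transient unless continually renewed.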

Interventions

In general, educational interventions, such as those described below, have shown short-term effects on IQ, but long-term follow-up is often missing. For example, in the US very large intervention programs, such as the Head Start Program, have not produced lasting gains in IQ scores. More intensive but much smaller projects, such as the Abecedarian Project, have reported lasting effects, often on socioeconomic status variables rather than on IQ.[36]

A placebo-controlled double-blind experiment found that vegetarians who took 5 grams of creatine per day for six weeks showed a significant improvement on two separate tests of fluid intelligence: Raven’s Progressive Matrices and the backward digit span test from the WAIS. The treatment group was able to repeat longer sequences of numbers from memory and had higher overall IQ scores than the control group. The researchers concluded that “supplementation with creatine significantly increased intelligence compared with placebo.”[68] A subsequent study found that creatine supplements improved cognitive ability in the elderly.[69] However, a study on young adults (0.03 g/kg/day for six weeks, e.g., 2 g/day for a 150-pound individual) failed to find any improvements.[70]
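The dosage arithmetic cited above can be checked directly; the conversion factor is the standard pound-to-kilogram ratio:

```python
# Checking the cited figure: 0.03 g of creatine per kg of body weight
# per day, for a 150-pound individual.
LB_TO_KG = 0.4536
weight_kg = 150 * LB_TO_KG          # about 68 kg
dose_g_per_day = 0.03 * weight_kg   # about 2 g/day
print(f"{dose_g_per_day:.1f} g/day")
```

The result is roughly 2 g/day, consistent with the study's example figure.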

Recent studies have shown that training one’s working memory may increase IQ. A study on young adults published in April 2008 by a team from the Universities of Michigan and Bern supports the possibility of transfer to fluid intelligence from specifically designed working memory training.[71] Further research will be needed to determine the nature, extent, and duration of the proposed transfer. Among other questions, it remains to be seen whether the results extend to other kinds of fluid intelligence tests than the matrix test used in the study, and if so, whether, after training, fluid intelligence measures retain their correlation with educational and occupational achievement, or whether the value of fluid intelligence for predicting performance on other tasks changes. It is also unclear whether the training is durable over extended periods of time.[72]

Music

Musical training in childhood has been found to correlate with higher-than-average IQ.[73][74] A 2004 study indicated that 6-year-old children who received musical training (voice or piano lessons) had an average increase in full-scale IQ of 7.0 points, while children who received alternative training (e.g. drama) or no training had an average increase of 4.3 points (which may be a consequence of the children entering grade school). Children were tested using the Wechsler Intelligence Scale for Children–Third Edition, the Kaufman Test of Educational Achievement, and the Parent Rating Scale of the Behavioral Assessment System for Children.[73]

Listening to classical music has been reported to increase IQ, specifically spatial ability. In 1994, Frances Rauscher and Gordon Shaw reported that college students who listened to 10 minutes of Mozart’s Sonata for Two Pianos showed an increase of 8 to 9 points on the spatial subtest of the Stanford-Binet Intelligence Scale.[75] The phenomenon was dubbed the Mozart effect.

Multiple attempted replications (e.g.[76]) have shown that this is at best a short-term effect (lasting no longer than 10 to 15 minutes) and does not reflect any increase in IQ.[77]

Music lessons

In 2004, Schellenberg devised an experiment to test his hypothesis that music lessons can enhance the IQ of children. He assigned 144 six-year-old children to one of four groups, keyboard lessons, vocal lessons, drama lessons, or no lessons, for 36 weeks. The children’s IQs were measured before and after the lessons using the Wechsler Intelligence Scale for Children–Third Edition, the Kaufman Test of Educational Achievement, and the Parent Rating Scale of the Behavioral Assessment System for Children. All four groups showed increases in IQ, most likely a result of entering grade school; the notable difference was that the two music groups showed a slightly larger increase than the two control groups. The children in the control groups gained an average of 4.3 IQ points, while the music groups gained 7.0 points. Though the increases were not dramatic, the results suggest that music lessons taken at a young age have a modest positive effect on children's IQ. One hypothesis is that the improvement occurs because music lessons provide a variety of experiences that foster progress in a wide range of abilities; testing this hypothesis, however, has proven difficult.[78]

Another study by Schellenberg tested the effects of musical training in adulthood, comparing a group of musically trained adults with an untrained group. He administered tests of intelligence quotient and emotional intelligence to both groups and found that the trained participants had an advantage in IQ over the untrained ones, even with gender, age, and environmental variables (e.g. income, parents’ education) held constant. The two groups, however, scored similarly on the emotional intelligence test. These results (like the previous ones) show a positive correlation between musical training and IQ, but provide no evidence that musical training has a positive effect on emotional intelligence.[79]

Brain anatomy

Several neurophysiological factors have been correlated with intelligence in humans, including the ratio of brain weight to body weight and the size, shape and activity level of different parts of the brain. Specific features that may affect IQ include the size and shape of the frontal lobes, the amount of blood and chemical activity in the frontal lobes, the total amount of gray matter in the brain, the overall thickness of the cortex and the glucose metabolic rate.

Health

Health is important in understanding differences in IQ test scores and other measures of cognitive ability. Several factors can lead to significant cognitive impairment, particularly if they occur during pregnancy and childhood, when the brain is growing and the blood–brain barrier is less effective. Such impairment may sometimes be permanent, or may be partially or wholly compensated for by later growth.[citation needed] A cohort study found a relationship between familial inbreeding and modest cognitive impairment among children, providing evidence of inbreeding depression on intellectual performance after accounting for environmental and socioeconomic variables.[80]

Since about 2010, researchers such as Eppig, Hassel and MacKenzie have found a very close and consistent link between IQ scores and infectious diseases, especially among infants and preschool children and their mothers.[81] They have postulated that fighting infectious diseases strains the child’s metabolism and prevents full brain development; Hassel postulated that this is by far the most important factor in determining population IQ. However, they also found that later factors, such as good nutrition and regular quality schooling, can offset early negative effects to some extent.

Developed nations have implemented several health policies regarding nutrients and toxins known to influence cognitive function. These include laws requiring fortification of certain food products and laws establishing safe levels of pollutants (e.g. lead, mercury, and organochlorides). Improvements in nutrition, and in public policy in general, have been implicated in worldwide IQ increases.[citation needed]

Cognitive epidemiology is a field of research that examines the associations between intelligence test scores and health. Researchers in the field argue that intelligence measured at an early age is an important predictor of later health and mortality differences.

Social correlations

School performance

The American Psychological Association’s report “Intelligence: Knowns and Unknowns” states that wherever it has been studied, children with high scores on tests of intelligence tend to learn more of what is taught in school than their lower-scoring peers. The correlation between IQ scores and grades is about .50. This means that the explained variance is 25%. Achieving good grades depends on many factors other than IQ, such as “persistence, interest in school, and willingness to study” (p. 81).[36]

IQ’s correlation with school performance has been found to depend on the IQ measure used. For undergraduate students, Verbal IQ as measured by the WAIS-R correlated significantly (0.53) with GPA over the last 60 hours of coursework; in the same study, the correlation of Performance IQ with the same GPA was only 0.22.[82]

Some measures of educational aptitude correlate highly with IQ tests; for instance, Frey and Detterman (2004) reported a correlation of 0.82 between g (the general intelligence factor) and SAT scores,[83] and another study found a correlation of 0.81 between g and GCSE scores, with the explained variance ranging “from 58.6% in Mathematics and 48% in English to 18.1% in Art and Design”.[84]
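The relationship between a correlation coefficient and "explained variance" used throughout this section is simply the square of the correlation. The figures below are the correlations cited above:

```python
# A correlation coefficient r corresponds to an explained variance of
# r squared (the coefficient of determination).
correlations = {
    "IQ and school grades": 0.50,
    "g and SAT (Frey & Detterman, 2004)": 0.82,
    "g and GCSE": 0.81,
}
for label, r in correlations.items():
    print(f"{label}: r = {r:.2f}, explained variance = {r * r:.1%}")
```

This reproduces the 25% figure quoted from the APA report for school grades, and shows that the 0.82 and 0.81 aptitude correlations correspond to roughly two-thirds of the variance.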

Job performance

According to Schmidt and Hunter, “for hiring employees without previous experience in the job the most valid predictor of future performance is general mental ability.”[85] The validity of IQ as a predictor of job performance is above zero for all work studied to date, but varies with the type of job and across different studies, ranging from 0.2 to 0.6.[86] The correlations were higher when the unreliability of measurement methods was controlled for.[36] While IQ is more strongly correlated with reasoning and less so with motor function,[87] IQ-test scores predict performance ratings in all occupations.[85] That said, for highly qualified activities (research, management), low IQ scores are more likely to be a barrier to adequate performance, whereas for minimally-skilled activities, athletic attributes (manual strength, speed, stamina, and coordination) are more likely to influence performance.[85] It is largely through the quicker acquisition of job-relevant knowledge that higher IQ mediates job performance.

In establishing a causal direction for the link between IQ and work performance, longitudinal studies by Watkins and others suggest that IQ exerts a causal influence on future academic achievement, whereas academic achievement does not substantially influence future IQ scores.[88] Treena Eileen Rohde and Lee Anne Thompson write that general cognitive ability, but not specific ability scores, predicts academic achievement, with the exception that processing speed and spatial ability predict performance on the SAT math section beyond the effect of general cognitive ability.[89]

The US military has minimum enlistment standards at about the IQ 85 level. There have been two experiments with lowering this to 80 but in both cases these men could not master soldiering well enough to justify their costs.[90]

In 2000 the New London, CT police department turned away a recruit for having an IQ above 125, under the argument that those with overly high IQs will become bored and exhibit high turnover in the job. This policy has been challenged as discriminatory, but was upheld by the 2nd U.S. Circuit Court of Appeals in New York.[91]

The American Psychological Association’s report “Intelligence: Knowns and Unknowns” states that since IQ explains only 29% of the variance in job performance, other individual characteristics, such as interpersonal skills and aspects of personality, are probably of equal or greater importance, but at this point there are no equally reliable instruments to measure them.[36]

Income

While it has been suggested that “in economic terms it appears that the IQ score measures something with decreasing marginal value. It is important to have enough of it, but having lots and lots does not buy you that much”,[92][93] large-scale longitudinal studies indicate that an increase in IQ translates into an increase in performance at all levels of IQ: that is, ability and job performance are monotonically linked at all IQ levels.[94] Charles Murray, coauthor of The Bell Curve, found that IQ has a substantial effect on income independently of family background.[95]

The link from IQ to wealth is much less strong than that from IQ to job performance. Some studies indicate that IQ is unrelated to net worth.[96][97]

The American Psychological Association’s 1995 report Intelligence: Knowns and Unknowns stated that IQ scores accounted for (explained variance) about a quarter of the social status variance and one-sixth of the income variance. Statistical controls for parental SES eliminate about a quarter of this predictive power. Psychometric intelligence appears as only one of a great many factors that influence social outcomes.[36]

Some studies claim that IQ accounts for (explains) only a sixth of the variation in income because many studies are based on young adults, many of whom have not yet reached their peak earning capacity, or even completed their education. On page 568 of The g Factor, Arthur Jensen claims that although the correlation between IQ and income averages a moderate 0.4 (a sixth, or 16%, of the variance), the relationship increases with age and peaks at middle age, when people have reached their maximum career potential. In the book A Question of Intelligence, Daniel Seligman cites an IQ-income correlation of 0.5 (25% of the variance).

A 2002 study[98] further examined the impact of non-IQ factors on income and concluded that an individual’s location, inherited wealth, race, and schooling are more important as factors in determining income than IQ.

Crime

The American Psychological Association’s 1995 report Intelligence: Knowns and Unknowns stated that the correlation between IQ and crime was −0.2. It was −0.19 between IQ scores and the number of juvenile offenses in a large Danish sample; with social class controlled, the correlation dropped to −0.17. A correlation of 0.20 means that the explained variance is 4%. The causal links between psychometric ability and social outcomes may be indirect: children with poor scholastic performance may feel alienated and consequently may be more likely to engage in delinquent behavior than children who do well.[36]

In his book The g Factor (1998), Arthur Jensen cited data which showed that, regardless of race, people with IQs between 70 and 90 have higher crime rates than people with IQs below or above this range, with the peak range being between 80 and 90.

The 2009 Handbook of Crime Correlates stated that reviews have found that around eight IQ points, or 0.5 SD, separate criminals from the general population, especially for persistent serious offenders. It has been suggested that this simply reflects that “only dumb ones get caught” but there is similarly a negative relation between IQ and self-reported offending. That children with conduct disorder have lower IQ than their peers “strongly argues” for the theory.[99]
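The two quantitative claims in this section can be sanity-checked numerically, assuming the conventional IQ scale with a standard deviation of 15:

```python
# Correlation of -0.2 between IQ and crime: explained variance is r squared.
r = -0.2
print(f"explained variance: {r ** 2:.0%}")

# The Handbook of Crime Correlates' figure: an 8-point IQ gap on a scale
# with SD = 15 is about half a standard deviation.
IQ_SD = 15
gap_sd = 8 / IQ_SD
print(f"gap in SD units: {gap_sd:.2f}")
```

This confirms the 4% figure in the APA report and shows that 8 IQ points is 0.53 SD, consistent with the "around eight IQ points, or 0.5 SD" phrasing above.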

A study of the relationship between US county-level IQ and US county-level crime rates found that higher average IQs were associated with lower levels of property crime, burglary, larceny rate, motor vehicle theft, violent crime, robbery, and aggravated assault. These results were not “confounded by a measure of concentrated disadvantage that captures the effects of race, poverty, and other social disadvantages of the county.”[100][101]

Other correlations

In addition, IQ and its correlation to health, violent crime, gross state product, and government effectiveness are the subject of a 2006 paper in the publication Intelligence. The paper breaks down IQ averages by U.S. states using the federal government’s National Assessment of Educational Progress math and reading test scores as a source.[102]

The American Psychological Association’s 1995 report Intelligence: Knowns and Unknowns stated that the correlations for most “negative outcome” variables are typically smaller than 0.20, which means that the explained variance is less than 4%.[36]

Tambs et al.[103][better source needed] found that occupational status, educational attainment, and IQ are individually heritable; and further found that “genetic variance influencing educational attainment … contributed approximately one-fourth of the genetic variance for occupational status and nearly half the genetic variance for IQ.” In a sample of U.S. siblings, Rowe et al.[104] report that the inequality in education and income was predominantly due to genes, with shared environmental factors playing a subordinate role.

A recent US study of political views and intelligence found that the mean adolescent intelligence of young adults who identify themselves as “very liberal” is 106.4, while that of those who identify themselves as “very conservative” is 94.8.[105] Two other studies conducted in the UK reached similar conclusions.[106][107]

There are also other correlations, such as those between religiosity and intelligence and between fertility and intelligence.

Real-life accomplishments

Average adult combined IQs associated with real-life accomplishments, by various tests:[108][109]

MDs, JDs, and PhDs: 125+ (WAIS-R, 1987)
College graduates: 112 (KAIT, 2000; K-BIT, 1992); 115 (WAIS-R)
1–3 years of college: 104 (KAIT, K-BIT); 105–110 (WAIS-R)
Clerical and sales workers: 100–105
High school graduates, skilled workers (e.g., electricians, cabinetmakers): 100 (KAIT, WAIS-R); 97 (K-BIT)
1–3 years of high school (completed 9–11 years of school): 94 (KAIT); 90 (K-BIT); 95 (WAIS-R)
Semi-skilled workers (e.g. truck drivers, factory workers): 90–95
Elementary school graduates (completed eighth grade): 90
Elementary school dropouts (completed 0–7 years of school): 80–85
Have 50/50 chance of reaching high school: 75

Average IQ of various occupational groups:[110]

Professional and technical: 112
Managers and administrators: 104
Clerical workers, sales workers, skilled workers, craftsmen, and foremen: 101
Semi-skilled workers (operatives, service workers, including private household): 92
Unskilled workers: 87

Type of work that can be accomplished:[108]

Adults can harvest vegetables, repair furniture: 60
Adults can do domestic work: 50

There is considerable variation within and overlap among these categories. People with high IQs are found at all levels of education and occupational categories. The biggest difference occurs for low IQs with only an occasional college graduate or professional scoring below 90.[13]

Group differences

Among the most controversial issues related to the study of intelligence is the observation that intelligence measures such as IQ scores vary between ethnic and racial groups and sexes. While there is little scholarly debate about the existence of some of these differences, their causes remain highly controversial both within academia and in the public sphere.

Sex

Main article: Sex and psychology

Most IQ tests are constructed so that there are no overall score differences between females and males.[111][112] Popular IQ batteries such as the WAIS and the WISC-R are also constructed to eliminate sex differences.[113] In a paper presented at the International Society for Intelligence Research in 2002, it was pointed out that because test constructors and the Educational Testing Service (which developed the SAT) often eliminate items showing marked sex differences in order to reduce the perception of bias, the “true” sex difference is masked. Items such as mental rotation (MRT) and reaction time (RT) tests, which show a male advantage, are often removed.[114]

Race and ethnicity

Main article: Race and intelligence

The 1996 Task Force investigation on Intelligence sponsored by the American Psychological Association concluded that there are significant variations in IQ across races.[36] The problem of determining the causes underlying this variation relates to the question of the contributions of “nature and nurture” to IQ. Psychologists such as Alan S. Kaufman[115] and Nathan Brody[116] and statisticians such as Bernie Devlin[117] argue that there are insufficient data to conclude that this is because of genetic influences. Perhaps the most notable researcher arguing for a strong genetic influence on these average score differences was Arthur Jensen, who restarted the debate in 1969 with his paper “How Much Can We Boost IQ and Scholastic Achievement?”, but many others, e.g. J. Philippe Rushton and Richard Lynn, are also hereditarians (those who think that genetics plays some role in racial differences in IQ/g). In contrast, other researchers such as Richard Nisbett argue that environmental factors can explain all of the average group differences.[118]

Public policy

In the United States, certain public policies and laws regarding military service,[119][120] education, public benefits,[121] capital punishment,[122] and employment incorporate an individual’s IQ into their decisions. However, in the case of Griggs v. Duke Power Co. in 1971, for the purpose of minimizing employment practices that disparately impacted racial minorities, the U.S. Supreme Court banned the use of IQ tests in employment, except when linked to job performance via a job analysis. Internationally, certain public policies, such as improving nutrition and prohibiting neurotoxins, have as one of their goals raising, or preventing a decline in, intelligence.

A diagnosis of intellectual disability is in part based on the results of IQ testing. Borderline intellectual functioning is a categorization where a person has below average cognitive ability (an IQ of 71–85), but the deficit is not as severe as intellectual disability (70 or below).

In the United Kingdom, the eleven-plus exam, which incorporated an intelligence test, has been used since 1945 to decide, at eleven years of age, which type of school a child should attend. It has been much less used since the widespread introduction of comprehensive schools.

Criticism and views

Relation with intelligence

See also: Intelligence

IQ is the most researched attempt at measuring intelligence, and by far the most widely used in practical settings. However, although IQ attempts to measure some notion of intelligence, it may fail to act as an accurate measure of “intelligence” in its broadest sense. IQ tests examine only particular areas encompassed by the broadest notion of “intelligence”, failing to account for certain areas that are also associated with “intelligence”, such as creativity or emotional intelligence.

There are critics such as Keith Stanovich who do not dispute the stability of IQ test scores or the fact that they predict certain forms of achievement rather effectively. They do argue, however, that to base a concept of intelligence on IQ test scores alone is to ignore many important aspects of mental ability.[5][123]

Criticism of g

Some scientists dispute IQ entirely. In The Mismeasure of Man (1996), paleontologist Stephen Jay Gould criticized IQ tests and argued that they were used for scientific racism. He argued that g was a mathematical artifact and criticized:

…the abstraction of intelligence as a single entity, its location within the brain, its quantification as one number for each individual, and the use of these numbers to rank people in a single series of worthiness, invariably to find that oppressed and disadvantaged groups—races, classes, or sexes—are innately inferior and deserve their status.(pp. 24–25)

Arthur Jensen responded:

…what Gould has mistaken for “reification” is neither more nor less than the common practice in every science of hypothesizing explanatory models to account for the observed relationships within a given domain. Well known examples include the heliocentric theory of planetary motion, the Bohr atom, the electromagnetic field, the kinetic theory of gases, gravitation, quarks, Mendelian genes, mass, velocity, etc. None of these constructs exists as a palpable entity occupying physical space.[124]

Psychologist Peter Schönemann was also a persistent critic of IQ, calling it “the IQ myth”. He argued that g is a flawed theory and that the high heritability estimates of IQ are based on false assumptions.[125][126]

Another significant critic of g as the main measure of human cognitive abilities is Robert Sternberg, who has argued that reducing the concept of intelligence to the measure of g does not fully account for the different skills and knowledge types that produce success in human society.[127]

Jensen has rejected the criticism by Gould and also argued that even if g were replaced by a model with several intelligences this would change the situation less than expected. He argues that all tests of cognitive ability would continue to be highly correlated with one another and there would still be a black-white gap on cognitive tests.[128]

Test bias

The American Psychological Association’s report Intelligence: Knowns and Unknowns stated that in the United States, IQ tests as predictors of social achievement are not biased against African Americans, since they predict future performance, such as school achievement, similarly to the way they predict it for Caucasians.[36] While agreeing that IQ tests predict performance equally well for all racial groups (except Asian Americans), Nicholas Mackintosh also points out that there may still be a bias inherent in IQ testing if the education system is itself systematically biased against African Americans, in which case educational performance may in fact also be an underestimation of African American children’s cognitive abilities.[129] Earl Hunt points out that while this may be the case, it would not be a bias of the test, but of society.[130]

However, IQ tests may well be biased when used in other situations. A 2005 study stated that “differential validity in prediction suggests that the WAIS-R test may contain cultural influences that reduce the validity of the WAIS-R as a measure of cognitive ability for Mexican American students,”[131] indicating a weaker positive correlation relative to sampled white students. Other recent studies have questioned the culture-fairness of IQ tests when used in South Africa.[132][133] Standard intelligence tests, such as the Stanford-Binet, are often inappropriate for autistic children; the alternatives, developmental or adaptive skills measures, are relatively poor measures of intelligence in autistic children and may have resulted in incorrect claims that a majority of autistic children are mentally retarded.[134]

Outdated methodology

See also: Psychometrics

According to a 2006 article hosted by the National Center for Biotechnology Information, contemporary psychological research often does not reflect substantial recent developments in psychometrics and “bears an uncanny resemblance to the psychometric state of the art as it existed in the 1950s.”[135]

“Intelligence: Knowns and Unknowns”

In response to the controversy surrounding The Bell Curve, the American Psychological Association’s Board of Scientific Affairs established a task force in 1995 to write a report on the state of intelligence research that could be used by all sides as a basis for discussion, “Intelligence: Knowns and Unknowns”. The full text of the report is available through several websites.[36][136]

In this paper the representatives of the association regret that IQ-related works are frequently written with a view to their political consequences: “research findings were often assessed not so much on their merits or their scientific standing as on their supposed political implications”.

The task force concluded that IQ scores do have high predictive validity for individual differences in school achievement. It confirmed the predictive validity of IQ for adult occupational status, even when variables such as education and family background have been statistically controlled, and stated that individual differences in intelligence are substantially influenced by both genetics and environment.

The report stated that a number of biological factors, including malnutrition, exposure to toxic substances, and various prenatal and perinatal stressors, result in lowered psychometric intelligence under at least some conditions. The task force agrees that large differences do exist between the average IQ scores of blacks and whites, saying:

The cause of that differential is not known; it is apparently not due to any simple form of bias in the content or administration of the tests themselves. The Flynn effect shows that environmental factors can produce differences of at least this magnitude, but that effect is mysterious in its own right. Several culturally based explanations of the Black/ White IQ differential have been proposed; some are plausible, but so far none has been conclusively supported. There is even less empirical support for a genetic interpretation. In short, no adequate explanation of the differential between the IQ means of Blacks and Whites is presently available.

The APA journal that published the statement, American Psychologist, subsequently published eleven critical responses in January 1997, several of them arguing that the report failed to examine adequately the evidence for partly genetic explanations.

Dynamic assessment

A notable and increasingly influential[137][138] alternative to the wide range of standard IQ tests originated in the writings of psychologist Lev Vygotsky (1896-1934) during his most mature and productive period, 1932-1934. The notion of the zone of proximal development, which he introduced in 1933, roughly a year before his death, served as the banner for his proposal to diagnose development as both the level of actual development, measured by the child’s independent problem solving, and the level of proximal, or potential, development, measured by the child’s problem solving under moderate assistance.[139] The maximum level of complexity and difficulty of problem that the child is capable of solving under some guidance indicates the level of potential development. The difference between this higher level of potential development and the lower level of actual development indicates the zone of proximal development. The combination of the two indices, the level of actual development and the zone of proximal development, according to Vygotsky, provides a significantly more informative indicator of psychological development than assessment of the level of actual development alone.[140][141]

The ideas on the zone of development were later developed in a number of psychological and educational theories and practices, most notably under the banner of dynamic assessment, which focuses on testing learning and developmental potential[142][143][144] (for instance, in the work of Reuven Feuerstein and his associates,[145] who has criticized standard IQ testing for its putative assumption or acceptance of “fixed and immutable” characteristics of intelligence or cognitive functioning). Grounded in the developmental theories of Vygotsky and Feuerstein, who maintained that human beings are not static entities but are always in states of transition and transactional relationships with the world, dynamic assessment has also received considerable support in recent revisions of cognitive developmental theory by Joseph Campione, Ann Brown, and John D. Bransford and in the theories of multiple intelligences of Howard Gardner and Robert Sternberg.[146]

Classification

Main article: IQ classification

IQ classification is the practice by IQ test publishers of designating IQ score ranges as various categories with labels such as “superior” or “average.”[147] IQ classification was preceded historically by attempts to classify human beings by general ability based on other forms of behavioral observation. Those other forms of behavioral observation are still important for validating classifications based on IQ tests.

High IQ societies

Main article: High IQ society

There are social organizations, some international, which limit membership to people who have scores as high as or higher than the 98th percentile on some IQ test or equivalent. Mensa International is perhaps the best known of these. There are other groups requiring a score above the 99th percentile.
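Under the conventional normal model for IQ scores (mean 100, standard deviation 15 — an assumption of this sketch, since individual tests are normed differently), the percentile cutoffs translate into concrete scores:

```python
from statistics import NormalDist

# IQ scores are conventionally normed to mean 100, standard deviation 15.
iq = NormalDist(mu=100, sigma=15)

# Score needed to reach the 98th percentile (a Mensa-style cutoff).
cutoff_98 = iq.inv_cdf(0.98)
# Score needed for societies requiring the 99th percentile.
cutoff_99 = iq.inv_cdf(0.99)

print(round(cutoff_98, 1))  # ≈ 130.8
print(round(cutoff_99, 1))  # ≈ 134.9
```

So on a test with SD 15, the 98th-percentile requirement corresponds to a score of roughly 131; tests normed with SD 16 (such as older Stanford-Binet versions) would place the same percentile at a slightly higher score.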

Lateral Thinking

Thinking Tools – The Art and Science of Thinking

“On the Internet there is much misleading and erroneous information about ‘lateral thinking’. Some of the sites make false claims about me and my work. Because this is my official website I want to take this opportunity of clarifying matters regarding lateral thinking” – Edward de Bono.

I invented the term ‘lateral thinking’ in 1967. It was first written up in a book published as “The Use of Lateral Thinking” (Jonathan Cape, London) and as “New Think” (Basic Books, New York); the two titles refer to the same book.

For many years now this has been acknowledged in the Oxford English Dictionary, which is the final arbiter of the English language.

There are several ways of defining lateral thinking, ranging from the technical to the illustrative.

1. “You cannot dig a hole in a different place by digging the same hole deeper”

This means that trying harder in the same direction may not be as useful as changing direction. Effort in the same direction (approach) will not necessarily succeed.

2. “Lateral Thinking is for changing concepts and perceptions”

With logic you start out with certain ingredients just as in playing chess you start out with given pieces. But what are those pieces? In most real life situations the pieces are not given, we just assume they are there. We assume certain perceptions, certain concepts and certain boundaries. Lateral thinking is concerned not with playing with the existing pieces but with seeking to change those very pieces. Lateral thinking is concerned with the perception part of thinking. This is where we organise the external world into the pieces we can then ‘process’.

3. “The brain as a self-organising information system forms asymmetric patterns. In such systems there is a mathematical need for moving across patterns. The tools and processes of lateral thinking are designed to achieve such ‘lateral’ movement. The tools are based on an understanding of self-organising information systems.”

This is a technical definition which depends on an understanding of self-organising information systems.

4. “In any self-organising system there is a need to escape from a local optimum in order to move towards a more global optimum. The techniques of lateral thinking, such as provocation, are designed to help that change.”

This is another technical definition. It is important because it also defines the mathematical need for creativity.

 

Lateral Thinking is a skill that can be learned.

This is an important point to understand. With the right training you can improve your ability to think laterally.

For details on my online Lateral Thinking Course entitled “de Bono’s Creativity” visit: www.debonotrainer.com

Lateral Thinking Workshop

Edward de Bono – The Father of Lateral Thinking and Creativity

WARNING! Lateral Thinking will change the way you think.

 

There is nothing more exciting … than thinking of a new idea

There is nothing more rewarding … than seeing a new idea work

There is nothing more useful … than a new idea that helps you meet a goal

 

Lateral Thinking is:

seeking to solve problems by apparently illogical means

a process and willingness to look at things in a different way

a relatively new type of thinking that complements analytical and critical thinking, and is not yet part of our mainstream education

a fast, effective tool used to help individuals, companies and teams solve tough problems and create new ideas, new products, new processes and new services.

a term that is used interchangeably with creativity

 

Joel Barker has told you to get out of your paradigms; Edward de Bono shows you how!

 

Lateral Thinking:

A way of thinking that seeks a solution to an intractable problem through unorthodox methods or elements that would normally be ignored by logical thinking. Edward de Bono divides thinking into two methods. He calls one ‘vertical thinking’ that is, using the processes of logic, the traditional-historical method. He calls the other ‘lateral thinking’, which involves disrupting an apparent sequence and arriving at the solution from another angle.

When you are faced with fast-changing trends, fierce competition, and the need to work miracles despite tight budgets, you need Lateral Thinking.

Developing breakthrough ideas does not have to be the result of luck or a shotgun effort. Edward de Bono’s proven methods provide a deliberate, systematic process that will result in innovative thinking. Creative thinking is not a talent; it is a skill that can be learnt. It empowers people by adding strength to their natural abilities, which improves teamwork, productivity and, where appropriate, profit.

Today, better quality and better service are essential, but they are not enough. Creativity and innovation are the only engines that will drive lasting, global success.

Our minds are trained to find typical and predictable solutions to problems. You can master the tools for innovative thinking. Lateral thinking will also help you with strategic planning and with thinking outside the box on everyday issues.

 

The Need for Creative Thinking

  • The limits of logical and critical thinking
  • Why Creative Thinking is a learnable set of skills

 

The Thinking Techniques:

ALTERNATIVES: How to use concepts as a breeding ground for new ideas. Sometimes we do not look beyond the obvious alternatives. Sometimes we do not look for alternatives at all. This session shows how to extract the concept behind a group of alternatives and then use it to generate further alternatives.

FOCUS: When and how to change the focus of your thinking. The discipline of defining your focus and sticking to it. The attitude of focusing on matters that are not problem areas. How to generate and use a Creative Hit List.

CHALLENGE: Breaking free from the limits of the accepted ways of operating. With Challenge, we believe that the present way of doing things is not necessarily the best. Challenge is not an attack or criticism. It is the willingness to explore the reasons why we do things the way we do and whether there are any alternatives.

RANDOM ENTRY: Using unconnected input to open up new lines of thinking.

PROVOCATION & MOVEMENT: Generating provocative statements and then using them to build new ideas. This session explores the nature of perception and how it limits our creativity. The provocation techniques are designed to challenge these limitations. Movement is a new mental operation that we can use as an alternative to judgement. It allows us to develop a provocative idea into one that is workable and realistic.

HARVESTING: At the end of a Creative Thinking session one takes note of specific ideas that seem practical and have obvious value. We need to make a deliberate harvesting effort to collect ideas and concepts that are less well developed.

TREATMENT OF IDEAS: How to develop ideas and shape them to fit an organisation or situation.

The aim is for you to leave the workshop with skills you have practised and can apply immediately on return to your home or workplace.

Lateral thinking

~~~

Lateral thinking is solving problems through an indirect and creative approach, using reasoning that is not immediately obvious and involving ideas that may not be obtainable by using only traditional step-by-step logic. The term was coined in 1967 by Edward de Bono.[1]

According to de Bono, lateral thinking deliberately distances itself from standard perceptions of creativity as either “vertical” logic (the classic method for problem solving: working out the solution step-by-step from the given data) or “horizontal” imagination (having a thousand ideas but being unconcerned with the detailed implementation of them).

Methods

Critical thinking is primarily concerned with judging the true value of statements and seeking errors. Lateral thinking is more concerned with the movement value of statements and ideas. A person uses lateral thinking to move from one known idea to creating new ideas. Edward de Bono defines four types of thinking tools:

  • Idea generating tools that are designed to break current thinking patterns—routine patterns, the status quo
  • Focus tools that are designed to broaden where to search for new ideas
  • Harvest tools that are designed to ensure more value is received from idea generating output
  • Treatment tools that are designed to consider real-world constraints, resources, and support[2]

Random Entry Idea Generating Tool: The thinker chooses an object at random, or a noun from a dictionary, and associates it with the area they are thinking about.
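The mechanical part of the random-entry step can be sketched in a few lines; the word list, function name, and focus string below are purely illustrative, not part of de Bono's method:

```python
import random

# A tiny stand-in word list; an actual session might draw a noun
# from a full dictionary file to guarantee an unconnected starting point.
RANDOM_NOUNS = ["cloud", "anchor", "ladder", "mirror", "seed", "bridge"]

def random_entry(focus, rng=random):
    """Pair the focus area with an unconnected noun to prompt new associations."""
    word = rng.choice(RANDOM_NOUNS)
    return f"Focus: {focus} | Random entry: {word}"

print(random_entry("improving customer service"))
```

The value of the technique lies in the human association step that follows; the code only supplies the deliberately unrelated stimulus.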

Provocation Idea Generating Tool: The use of any of the provocation techniques—wishful thinking, exaggeration, reversal, escape, distortion, or arising. The thinker creates a list of provocations and then uses the most outlandish ones to move their thinking forward to new ideas.

Movement Techniques: The thinker develops provocation operations[clarification needed] by the following methods: extract a principle, focus on the difference, moment to moment, positive aspects, special circumstances.

Challenge Idea Generating Tool: A tool which is designed to ask the question “Why?” in a non-threatening way: why something exists, why it is done the way it is. The result is a very clear understanding of “Why?” which naturally leads to fresh new ideas. The goal is to be able to challenge anything at all, not just items which are problems. For example, one could challenge the handles on coffee cups. The reason for the handle seems to be that the cup is often too hot to hold directly. Perhaps coffee cups could be made with insulated finger grips, or there could be separate coffee cup holders similar to beer holders.

Concept Fan Idea Generating Tool: Ideas carry out concepts. This tool systematically expands the range and number of concepts in order to end up with a very broad range of ideas to consider.

Disproving: Based on the idea that the majority is always wrong (as suggested by Henrik Ibsen and John Kenneth Galbraith), take anything that is obvious and generally accepted as “goes without saying”, question it, take an opposite view, and try to convincingly disprove it. This technique is similar to de Bono’s “Black Hat” of the Six Thinking Hats, which looks at the ways in which something will not work.

Lateral thinking and problem solving

Problem Solving: When something creates a problem, the performance or the status quo of the situation drops. Problem solving deals with finding out what caused the problem and then figuring out ways to fix the problem. The objective is to get the situation to where it should be. For example, a production line has an established run rate of 1000 items per hour. Suddenly, the run rate drops to 800 items per hour. Ideas as to why this happened and solutions to repair the production line must be thought of, such as giving the worker a pay raise.

Creative Problem Solving: Using creativity, one must solve a problem in an indirect and unconventional manner. For example, if a production line produced 1000 books per hour, creative problem solving could find ways to produce more books per hour, use the production line, or reduce the cost to run the production line.

Creative Problem Identification: Many of the greatest non-technological innovations are identified while realizing an improved process or design in everyday objects and tasks either by accidental chance or by studying and documenting real world experience.

Lateral Problem “Solving”: Lateral thinking will often produce solutions whereby the problem appears as “obvious” in hindsight. Lateral thinking will often lead to problems that you never knew you had, or it will solve simple problems that have a huge potential. For example, if a production line produced 1000 books per hour, lateral thinking may suggest that a drop in output to 800 would lead to higher quality and more motivated workers.

Lateral thinking puzzles: These are puzzles that are supposed to demonstrate what lateral thinking is about. However, any puzzle that has only one solution is “not” lateral. While lateral thinking may help you construct such puzzles, the lateral thinking tools will seldom help you solve them.

8 simple everyday things that geniuses struggle with


Since you like to hang out at Guyism, I can only assume that you’re already a genius. And so you already know that geniuses tend to struggle with some of the more mundane mortal tasks. Yes, these are the things that most of us take for granted, but which cause geniuses an endless amount of misery and torment. And just in case you’re not a genius and wonder just exactly how their weird brains really work, consider this a glimpse into the twisted inner life of the genius and marvel at these eight simple everyday things that geniuses struggle with.

School


Sure, lots of people struggle with school. After all, it’s hard to concentrate when your hormones are directing Russ Meyer movies in your head or if you’re just, you know, dumb. But school seems like it was made for geniuses, right? Well, oddly enough, many geniuses flat out suck when it comes to the scholastic life – Einstein famously flunked out (or so the legend goes) and we all know at least one super smart dude whose grades inexplicably resembled a Playmate’s cup size, and I’m talking post-enhancement. This may seem strange, but consider for a moment how boring school must be for these dudes and lady dudes. I mean, think how boring it was for you and then imagine that you knew more than all your teachers and had to sit there all day listening to them drone on about something you understood when you were four years old, like how to not eat Play-Doh or how to not piss yourself. That sounds kind of like a nightmare, right? Well, that’s kind of what school is like for geniuses.

Playing by Rules


We naturally live our lives by a commonly accepted set of rules. They’re not even things we think about. We just follow them because, well, that’s just the way it is and it’s easier that way. But if you’ve ever met a genius then you know that they think about everything and that means that they inevitably question a lot of the things most people take for granted. The result is that it’s hard for them to live within the same social constructs as the rest of us. It simply doesn’t make sense to them, and what’s more they often end up resenting that they’re supposed to. It’s not that they don’t have common sense, it’s that they’ve thought about it and decided that common sense is dumb. And that’s why your weird cousin with the 190 IQ wore shorts to your wedding.

Meeting People


Countless studies have shown that there is a fairly heavy correlation between genius and autism and many, many other social and behavioral disorders. Whether this is a pure accident of chemistry or whether it’s due to our own biases of what’s “normal,” the end result is that a lot of geniuses have trouble dealing with people, and especially with meeting them. Now I’m not talking some Big Bang Theory clichéd foot-in-mouth nonsense, where the genius just wants to talk in Klingon all day. What I’m talking about is a genuine inability to even initiate contact with another human being. My theory is that geniuses have an internal life – and internal monologue – that paralyzes them. In short, they think too goddamn much. That makes it really easy to psych yourself out, and when that happens, suddenly meeting people is as difficult for them as rocket science is for the rest of the world.

Small talk


Even if they do manage to meet people without much of a problem, it’s hard for many geniuses to get any further than that because they suck at making small talk. You probably don’t realize it, but a huge chunk of your daily interaction with people involves meaningless banter that you just participate in without really thinking about it. How’s the weather, nice day we’re having, that’s a nice ass you’ve got there… you know, the usual. It’s easy and it’s comfortable and you don’t have to think twice about it. But that’s just the thing – geniuses love to think twice about it and then they like to think about it a third time and then a fourth time and then… you get the point. And really, once you think about all those idiotic, meaningless small-talk topics and phrases that we all use to get by it’s hard not to see them as completely ridiculous. Geniuses suck at making small talk because they simply can’t see the point. Small talk is lubrication for social convention and as we’ve already seen, geniuses don’t care much for social convention.

Working


It’s kind of hard to land a decent job when you have no idea how to talk to people on their level, no matter how smart you are. A lot of geniuses can’t even make it through the interview process because they say weird shit and refuse to play along with the games we all accept as part of that whole deal. They’ll answer honestly and often brutally and hey, guess what? People don’t really like that. And then if they do get a decent job, there’s still the whole playing nice with others thing, which is kind of hard to do when you think everyone’s an idiot – I mean, a person on the low-end of the genius scale has a 140 IQ which is 40 points higher than average. Imagine going to work every day and trying to deal with people with 60 IQs. You’d go nuts, right? Well, that’s what it’s like for geniuses. To them, all their coworkers and their bosses are Forrest Gump.

Fitting in a 9 to 5 World


When you spend so much time questioning the rules of society, it tends to have a domino effect on the rest of your life. Suddenly, you can’t understand why you need to arbitrarily be at the office from nine to five, especially since you’re so smart that you can probably get your work done in about half an hour. Next, you wonder why you should go to sleep just because the sun goes down. Studies have shown that geniuses are statistically more likely to be night owls, and some have theorized that this is because, with the advent of electricity, the need to sleep when the sun goes down and wake when it comes up (inherent in humanity for millennia) is no longer applicable. The genius mind makes that intuitive leap where others don’t, and so geniuses naturally rebel against socially accepted sleeping patterns and biorhythms. Their bodies literally change to accommodate these intuitive leaps that the rest of us simply don’t make. Either that or they’re on drugs.

Not Trying to Kill Superman

Jesus, Lex, give it a rest already.

Not Being Completely Friggin’ Nuts


Take everything else on this list (the inability to play by rules, the high incidence of mental instability and behavioral disorders, the social alienation and inability to fit into the world’s standards for normality) and it becomes easy to see why so many things in everyday life are a surprising struggle for a lot of geniuses, and why the phrase “a fine line between genius and insanity” is so well known. It’s got to drive geniuses completely insane to have to live in a world designed by people who seem literally retarded to them. There must be times when it seems like hell. Again, imagine being 40, 50, 60 IQ points smarter than everybody else. Of course you’d end up going nuts. It’s a real problem for me… I mean, for them.

What is it like to be close friends with a genius?