Folklore and many anecdotal accounts relate how some individuals have claimed to be able to ``foretell'' the future, or to have experienced premonitions of events before they actually occurred. While much of this material is probably due to misinterpretation, misrepresentation or other flaws of human perception, memory and reasoning, there are experimental findings which suggest that precognition may occur (see Wiseman and Morris [22] for an overview of the ways in which we can be deceived, or can deceive ourselves, into interpreting a normal incident as paranormal).
Honorton and Ferrari [23] conducted a meta-analysis of 309 precognition studies carried out between 1935 and 1987. These studies all used a ``forced-choice'' methodology, in which the subject knows the set of possible targets and is asked to choose one of them as their answer (as opposed to ``free-response'' methodologies, such as ganzfeld studies). In all of these studies, the subject made their choice as to the target identity before the target was actually randomly generated; thus the subjects' responses were to targets which did not exist at the time of the response. These studies are thought by some to be methodologically superior to other ESP studies because there is little possibility of the subject ``cheating'', or receiving subtle cues about the target identity, since the target does not exist when the response is made.
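To make the chance baseline concrete (the number of alternatives varied from study to study, so a four-choice design is assumed here purely for illustration), the probability of a hit by chance alone in such a design is p = 1/4, and a subject scoring x hits over n trials can be assigned the normal-approximation score
\[
z = \frac{x - np}{\sqrt{np(1-p)}},
\]
so that any precognition effect appears as a hit rate persistently, if only slightly, above 25 percent.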
The studies included in this meta-analysis were conducted by
62 different senior investigators, and included nearly two million
individual trials contributed by over 50,000 subjects. While
the mean effect size per trial is small (ES = .02), it is sufficiently consistent for the combined effect across these studies to be highly significant. Across eight different measures of study quality, no systematic relationship was found between study outcome and
study quality. A ``fail-safe N'' estimate indicated that 14,268 unreported, null studies would be required to reduce the significance of the database to chance levels. Given the wide diversity of study methods and
procedures found in this database, it is not surprising that
the study outcomes were extremely heterogeneous. The authors
eliminated outliers by discarding those studies with
z scores falling within the top and bottom 10 percent
of the distribution, leaving 248 studies. It should be noted
that the elimination of outlier studies to obtain homogeneity
is a common practice, and in other, non-parapsychological reviews
``it is sometimes necessary to discard as many as 45% of the
studies to achieve a homogeneous effect size distribution'' (p.
1507) [24]. The resulting mean trial effect size was .012, and the combined z remained highly significant.
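Honorton and Ferrari's computational details are not reproduced in this summary, but on the standard assumptions made in this literature (a per-trial effect size of $z/\sqrt{n}$, study z scores combined by Stouffer's method, and Rosenthal's fail-safe N), the quantities reported above would be calculated as
\[
ES_i = \frac{z_i}{\sqrt{n_i}}, \qquad
Z_{\text{combined}} = \frac{\sum_{i=1}^{k} z_i}{\sqrt{k}}, \qquad
N_{\text{fail-safe}} = \left( \frac{\sum_{i=1}^{k} z_i}{1.645} \right)^{2} - k,
\]
where $z_i$ and $n_i$ are the z score and number of trials of the $i$th study, $k$ is the number of studies, and 1.645 is the one-tailed critical value of z at p = .05. Under the illustrative four-choice design sketched above, a mean trial effect size of .02 corresponds to a hit rate of roughly 25.9 percent, against the 25 percent expected by chance.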
While study quality was found to improve significantly over the 55-year period during which these studies were conducted (correlation coefficient r[246 degrees of freedom] = .282), study effect sizes did not significantly co-vary with the year of publication. Study effect sizes were homogeneous across the 57 investigators contributing to the trimmed database.
The remaining analyses were all performed upon this smaller database.
The authors identified four ``moderating'' variables that appeared
to relate systematically to study outcome. The first variable
involved the subject population. It was found that studies using
subjects who were selected on the basis of good ESP performance
in previous experimental sessions obtained significantly better
ESP effects than those studies using unselected subjects (a
t test with 246 degrees of freedom [df] giving
t = 3.16,
p = 0.001). Another variable which covaried with study
effect size was whether the subjects were tested individually
or in groups, with individual testing studies obtaining significantly
higher outcomes than those using group testing methods.
A further moderating variable involved the type of feedback subjects
received about the accuracy of their responses. There were four feedback categories: no feedback, delayed feedback
(usually via mail), feedback given after a sequence of responses
(often after 25 responses), and feedback given after each response.
Across the 104 studies which supplied the necessary information, there was a significant linear correlation between the precognition effect and feedback level (p = 0.009), with effect sizes increasing with level of
feedback. A related finding involves the time interval between
the subject's responses and the target selection. This finding
is confounded by feedback level, since the time between the response and target generation may co-vary with feedback
level (i.e., when feedback was given after every response, the
time interval between response and target selection would have
to be shorter than was necessarily the case when feedback was
given after a sequence of calls, or a month after the responses
had been made). There were seven different time interval categories,
ranging from milliseconds to months. A significant decline in precognition effect sizes was found as the time interval between response and target selection increased. This significant relationship between temporal interval and study effect size was due entirely to those studies which used
unselected subjects, with the studies that tested selected subjects
showing a small, non-significant increase in precognition scoring
as the time interval increased (the difference between these
groups was not significant).
It should be noted that there was no significant difference in
quality between studies using selected and unselected subjects.
Also, studies which tested subjects individually did show significantly higher study quality than those utilising group testing procedures. The correlation between feedback level and research quality was positive, but not significant.
In summarising the precognition findings, Honorton and Ferrari concluded that ``the forced-choice precognition experiments confirm the existence of a small but highly significant precognition effect'' (p. 300). Furthermore, they considered the most important outcome of the meta-analysis to be the identification of moderating variables, which not only provides guidelines for future research, but may also help expand our understanding of the phenomena.