Princeton Engineering Anomalies Research Lab
The Princeton Engineering Anomalies Research (PEAR) program was established at Princeton University in 1979 by Robert G. Jahn, then Dean of the School of Engineering and Applied Science, to pursue rigorous scientific study of the interaction of human consciousness with physical devices, systems, and processes common to contemporary engineering practice. Until the end of February 2007,[1] an interdisciplinary staff of engineers, physicists, psychologists, and humanists conducted a comprehensive agenda of experiments and attempted to develop complementary theoretical models to enable better understanding of the role of consciousness within physical reality. A number of academics have called the PEAR data into question, arguing that the laboratory's methodologies were flawed and disputing its interpretation of the collected data.[2]
Overview
For most of its history PEAR has pursued two parallel experimental programs, in "human-machine interaction" and in "remote perception." The program has also maintained an effort in theoretical modeling, aimed at understanding the experimental results. As was noted above, there is some disagreement regarding the soundness or reliability of PEAR's results. The following subsections summarize the conclusions of PEAR's own researchers without repeating this caveat on each point.
Human-Machine Interaction
The first example, and in some ways the archetype, of these experiments is the REG (Random Event Generator) experiment, which began collecting data shortly after PEAR's founding. The original REG was an analog noise source that could be sampled at various rates; samples were converted to binary pulses and counted in groups of varying size. In the mode used most often for experiments, the noise was sampled at 1 kHz and the resulting bits collected in groups of 200, so that data accumulated for 1/5 of a second before the final result was presented to the operator. The result of a single "trial" was the sum of "1" bits in the random binary stream: a number averaging 100, with a standard deviation between trials of slightly over 7. A typical experimental "series" at the outset involved collecting 5000 trials in each of three "intentions" (see next paragraph); later protocols used series of 2500, 3000, or 1000 trials per intention.
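The per-trial statistics follow from the binomial distribution: 200 bits, each a "1" with probability 1/2, give a mean count of 100 and a standard deviation of √(200 × 0.25) ≈ 7.07. A minimal Python sketch of such a trial is shown below; it is illustrative only, since PEAR's device used a sampled analog noise source rather than a software generator:

    import random

    def reg_trial(n_bits=200):
        """One simulated REG trial: count the '1' bits among n_bits random bits."""
        return sum(random.getrandbits(1) for _ in range(n_bits))

    # A 5000-trial "series" should average close to 100 with SD near 7.07.
    trials = [reg_trial() for _ in range(5000)]
    mean = sum(trials) / len(trials)
    sd = (sum((t - mean) ** 2 for t in trials) / (len(trials) - 1)) ** 0.5
    print(f"mean = {mean:.3f}, sd = {sd:.3f}")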
The experiment involved three distinct "intentions" or mental states of the human operator: trying to produce larger numbers (high), trying to produce smaller numbers (low), and observing the data generation without making any particular mental effort (baseline). Intention was defined purely operationally: a "high" intention was, by definition, anything an operator chose to do after declaring that the next run would be a high effort. (Intentions had to be recorded before any data were generated, for obvious reasons of data integrity.) This had the virtue of sidestepping a great deal of philosophical and psychological debate about the real meaning of "intention", and the drawback of lumping together widely disparate personal mental strategies under the label of a single intention.
The fundamental definition of a successful effect in the original REG experiment was a difference between the mean trial values in the high and low intentions. Since, at least in principle, the only thing that differed between these conditions was the mental state of the human operator, such a difference would, if taken at face value, indicate a response of the electronic noise source to that change in mental state. When the original REG experiment was closed out and the final database tabulated, the mean trial values over somewhat more than 800,000 trials per intention were 100.026 in the high intention and 99.984 in the low intention. This difference corresponds to a Z-score of slightly over 3.8: the difference is about 3.8 times larger than the expected margin of error in the measurement. Such a difference could arise by chance, but is extremely unlikely to do so (which is what the phrase "statistical significance" means). Alternatively, it could arise from very infrequent errors (around 1 in 10,000 runs) in recording results, or from lapses in the experimental protocol.
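The quoted Z-score can be reproduced with the standard formula for a difference of two means, as in the sketch below; the trial counts used here are an assumed round figure, since the summary above says only "somewhat more than 800,000" trials per intention:

    from math import sqrt

    sd_trial = sqrt(200 * 0.25)          # theoretical per-trial SD, about 7.07
    n_high = n_low = 840_000             # assumed counts (not the exact published tallies)
    mean_high, mean_low = 100.026, 99.984

    se_diff = sd_trial * sqrt(1 / n_high + 1 / n_low)   # standard error of the difference
    z = (mean_high - mean_low) / se_diff
    print(f"Z = {z:.2f}")                # about 3.8 with these assumed counts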
The original REG experiment was used as a paradigm or template for several other human-machine experiments, including
- A "remote" experiment in which the device was influenced from a distance.
- "Pseudo" experiments in which deterministic pseudorandom sequences replaced the analog noise source.
- A "random mechanical cascade" in which operators attempted to influence the trajectories of macroscopic polystyrene balls falling through an array of pegs.
- A "pendulum" experiment in which operators attempted to influence the swinging of a pendulum.
In addition to these basic changes in mode, several variants of the basic REG experiment explored different types of feedback to the operator, different modes of assigning and determining intention, and different electronic sources and rates of data collection.
PEAR's conclusions on human-machine interaction
A brief summary of the PEAR researchers' own evaluation of their human-machine experiments:
- Human minds can affect random physical processes, to a minor but statistically detectable degree.
- The effect seems to disappear when deterministic (pseudo-random) sources are substituted.
- The effect is idiosyncratic (different individuals produce different results).
- The effect is erratic, showing long-term fluctuations which can be partly (but only partly) explained by changes in the operator pool.
- The scaling in response to simple physical variables is not obvious: for example, speeding up sampling by a factor of 10 produced no detectable difference in the effect size per bit, but speeding up sampling by a factor of 10,000 inverted the sign of the effect and reduced the per-bit effect size by a factor of 30.
Remote Perception
This phenomenon has received considerable attention in recent years under the title of "Remote Viewing", but since it appears to involve all sensory modalities rather than vision alone, PEAR has always used "Remote Perception" as its label. In a typical PEAR experiment, one participant (the "agent") goes to a specified site and describes it in a written transcript, preferably including photographs and/or sketches. Another participant (the "percipient") attempts to perceive the agent's surroundings and provides another transcript, preferably including a sketch.
Both transcripts are then coded onto a "descriptor" checksheet which attempts to summarize important features of the scene. The checksheets are then scored against each other for their degree of similarity. Because there will be some degree of correspondence between any pair of descriptions, the statistical analysis of the remote perception experiment proceeds by scoring all agent checksheets against all percipient checksheets, regardless of whether they belong to the same trial. The matched scores corresponding to actual trials are then compared to the background of mismatched scores from different trials, which measures the degree of chance correspondence from random scenes, participant prejudices and styles, "Barnum" descriptions (that is, descriptions that apply to almost any scene), and, it is hoped, all other relevant confounding factors.
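A simplified sketch of that matched-versus-mismatched comparison follows, assuming each checksheet has already been reduced to a binary descriptor vector and that similarity is simply the number of descriptors on which two sheets agree (PEAR's actual scoring formulas were more elaborate):

    def similarity(a, b):
        """Number of descriptors on which two binary checksheets agree."""
        return sum(x == y for x, y in zip(a, b))

    def matched_vs_mismatched(agent_sheets, percipient_sheets):
        """Score every agent sheet against every percipient sheet.

        Sheets with the same index belong to the same trial; all other
        pairings form the chance background.
        """
        matched, mismatched = [], []
        for i, a in enumerate(agent_sheets):
            for j, p in enumerate(percipient_sheets):
                (matched if i == j else mismatched).append(similarity(a, p))
        return matched, mismatched

    # Toy data: three trials, five binary descriptors each (hypothetical values).
    agents      = [[1, 0, 1, 1, 0], [0, 0, 1, 0, 1], [1, 1, 0, 0, 0]]
    percipients = [[1, 0, 1, 0, 0], [0, 1, 1, 0, 1], [1, 1, 0, 1, 0]]
    m, mm = matched_vs_mismatched(agents, percipients)
    print(sum(m) / len(m), sum(mm) / len(mm))   # matched vs. background mean similarity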
Staff
- Robert G. Jahn, Program Director. Jahn is an emeritus Professor of Aerospace Sciences and Dean of the School of Engineering and Applied Science, who has taught and published extensively in advanced space propulsion systems, plasma dynamics, fluid mechanics, quantum mechanics, and engineering anomalies. He is a Fellow of the American Physical Society and of the American Institute of Aeronautics and Astronautics, and has been chairman of the AIAA Electric Propulsion Technical Committee, associate editor of the AIAA Journal, and a member of the NASA Space Science and Technology Advisory Committee. He is Vice President of the Society for Scientific Exploration and Chairman of the Board of the International Consciousness Research Laboratories consortium. He has been a long-term member of the Board of Directors of Hercules, Inc. and chairman of its Technology Committee, and a member and chairman of the Board of Trustees of Associated Universities, Inc. He has received the Curtis W. McGraw Research Award of the American Society for Engineering Education and an honorary Doctor of Science degree from Andhra University.
- Brenda J. Dunne, Laboratory Manager. Dunne is formally trained as a psychologist and serves as the Laboratory Manager of the PEAR lab.
- York H. Dobyns, Analytical Coordinator
- Lisa Langelier-Marks, Administrative Assistant
- Elissa Hoeger, General Factotum
Emeritus members
- G. Johnston Bradish, Technical Coordinator
- Arnold L. Lettieri Jr., Communications Director
- Roger D. Nelson, Operations Coordinator
Supporters
The internal and external programs of the PEAR laboratory have been supported by a number of persons and organizations, among them: Richard Adams, the Geraldine R. Dodge Foundation, the Fetzer Institute, the Institut für Grenzgebiete der Psychologie und Psychohygiene, the Lifebridge Foundation, the James S. McDonnell Foundation, the Ohrstrom Foundation, Mr. Laurance S. Rockefeller, and the late Mr. Donald Webster, along with various other philanthropic agencies and individuals.
References
- ^ Carey, Benedict (2007-02-06). "A Princeton Lab on ESP Plans to Close Its Doors". The New York Times. http://www.nytimes.com/2007/02/10/science/10princeton.html?pagewanted=1&ei=5090&en=2f8f7bdba3ac59f1&ex=1328763600. Retrieved 2007-08-03.
- ^ Princeton Engineering Anomalies Research at The Skeptic's Dictionary