Consult: Thomas Blass, PhD

 

A Social Science Consultation on

“Psychological Stress Experiments on Soldiers”

Thomas Blass, PhD

October 6, 2008

80-minute teleconference

Reported by Jean Maria Arrigo, amended by Martha Davis


Social psychologist Thomas Blass, PhD, is a professor at the University of Maryland, Baltimore County, an expert on obedience to authority, and the biographer of Stanley Milgram (The Man Who Shocked the World, 2004, Basic Books).  His background as a Holocaust survivor also informs his understanding of “Psychological Stress Experiments on Soldiers.”


Teleconference Participants:  Jean Maria Arrigo, Ray Bennett, Thomas Blass, Dan Christie, Martha Davis, Jancis Long, Susan Opotow


Note:  Quotations are condensed and approximate.


Selective Summary of the Tele-Consultation with Dr. Blass on

The Psychology and Military Intelligence Casebook on Interrogation Ethics


     Dr. Blass first differentiated among the three psychological stress studies led by Mitchell Berkun.  He praised Berkun’s experimental design in the technological skills experiment at the Presidio of Monterey.  He declared the Desert Rock (Nevada Test Site) study reprehensible because subjects were misled into believing they were in a safe environment when it was in fact dangerous to their long-term health due to high radiation exposure.  He deemed the technological skills study legitimate under the assumption that a military is necessary and national security a legitimate focus.  Berkun was reasonably trying to mimic actual crisis situations that might confront a soldier and to evoke the stress the soldier would experience.  Surely it was better to study the results in an experimental situation than in actual combat.  Because the subjects were not actually harmed physically, it was not much different from many psychological deception experiments.  He did not consider Berkun’s air bubble experiment, which did not inflict physical damage, to be as unethical as the Desert Rock experiment, which did.  The air bubble experiment is difficult to evaluate, though, because we do not know its purpose.

     He credited Berkun for debriefing his soldier-subjects, which was not a regular part of research at that time.  In back issues of the Journal of Abnormal Psychology, Dr. Blass has found no mention of debriefing in the 1950s and 1960s.  Around 1970, a survey with input from hundreds of psychologists first resulted in a code of psychological ethics.  [legitimate reference needed]


     Stanley Milgram regarded Berkun’s technological skills experiments at the Presidio of Monterey as less ethical than his own obedience experiments at Yale University.  In the 1970s, Milgram was still bothered by the lingering ethical criticism and wanted to show the contrast.  He applied to EVISTA (?) to make a film about experimental ethics, using three scenarios:


          1.  the Milgram obedience experiment

          2.  the Berkun ditching experiment

          3.  a 1964 Canadian conditioning experiment on alcoholics in treatment, involving injection of the drug Scoline, which arrests muscular movement so that the subjects could not breathe and believed they were dying


In Milgram’s plan for the film, an Institutional Review Board, with people such as Herb Kelman, would judge the acceptability of the experiments.

     Dr. Blass said that the Berkun psychological stress experiments have only a limited analogy to the Milgram obedience experiments.  In the military, insubordination could lead to very negative consequences; in the experimental situation created by Milgram, subjects and confederates alike always had a choice. There was no threat of force or career impediments to motivate participants, only the participants’ acceptance of the legitimacy of the authority.


     From a previous military consultation on the Berkun experiments, Casebook writers raised Dr. Rood’s objection that the experiments were spurious because the subjects were raw recruits.  Dr. Blass countered that the disaster situations were so unusual that a soldier was not likely to have encountered them in basic training anyway.  “Does basic training cover what you’re supposed to do when you think you’ve killed someone?”

     A military participant remarked that basic training already conditions people to function in a way they haven’t previously.   Maybe Berkun used raw recruits to develop a model of people who hadn’t been conditioned, as a baseline.  Or, as a psychologist suggested, to get experience on which to base future training. Another wondered whether it should be imperative for such studies to specify their intended uses because it is so difficult to evaluate the studies otherwise.

     A participant found Berkun’s rationale for the experiments to be similar to rationales for the Nazi doctors’ experiments — not similar in harms but similar in rationale.  Dr. Blass, though, considered the studies to be “drastically different.”  Nazi doctors, for instance, had sucked all the air out of a room to determine how long pilots could fly in rarefied air.  The result was always death.  As far as he knew, no US military experiments aimed for the death of the subjects.  We did not resolve the question of the relative moral weights of the rationales and the harms of an experiment.


     Returning to the military context of the Berkun experiments, Dr. Blass noted that “We are talking about a whole subsociety with its own norms.”  Behaviors that may be problematic in a civilian world are not problematic here.   In city streets people don’t take up arms against each other, but this is acceptable in the military.  The military context lowers the bar of acceptability, and deception and harms in experiments are easier to justify in a military setting.  Other participants did not accept his conclusion.  Dr. Blass continued:  in the military context, “there is already a level of general behavior that is counternormative.  To the extent that we need an armed forces, we are already implicitly condoning such behavior....”

     Another psychologist identified this as “a huge thesis!”   Even the American Psychological Association (APA), with its officially permissive view of psychologists’ participation in interrogations (from June 2005 to September 2008 [REF]), emphasized that the ethics code is the same for psychologists in civilian and military roles.  There are still arguments, though, about whether the standards should be different or whether the lower bar should remain an unspoken reality.  This Casebook grapples with whether and where the bar should be lower.

     Dr. Blass maintained that “However you look at it, if soldiers are shooting at others on the field of battle, the standards have been lowered.”


     Another psychologist characterized forensic psychology as the “transition milieu” between civilian and military psychology.  The 2005 APA task force that developed guidelines on the conduct of psychologists in interrogations had argued that assistance in interrogations was merely an extension of what forensic psychologists were doing domestically.  Forensic psychologists advise hostage negotiation teams, assess personnel fitness to serve, and testify on forensic issues in court.  Civilian law enforcement occupies a midpoint on the spectrum from civilian to military in that police departments have a paramilitary organization and command structure, but criminal investigations, detention, and prosecution are controlled by a civilian legal system different from the military JAG system.  Also, there is a huge difference between investigation of crimes and intelligence gathering.  Thus, forensic psychologists who work for a police department or judicial system have to balance adherence to the dictates of criminal law and the security requirements for dealing with criminal suspects against the psychologists’ code of conduct regarding confidentiality, doing no harm, limits to one’s area of expertise, and so on.  They are therefore working under very different rules and conditions than military psychologists involved in detainee interrogations.  The emerging research and consulting specialty called investigational psychology is also at this midpoint in that it has restraints and codes of conduct that differ from what research psychologists may do in classified research for the military.


     In the teleconference period of last comments and questions, Susan Opotow asked: if we acknowledge that soldiers do behave differently under fire, how does that affect ethics?  Forensic psychology lies somewhere along the continuum between military and civilian contexts, but that doesn’t settle the ethics.  She had just reviewed the APA Ethics Code and found no contextual factors.  Linking the ethics to normative behavior doesn’t settle the question of right or wrong.  Contextual demands do not necessarily defeat universal ethics.  Contexts indeed have demand characteristics, but how does this affect the ethics of experiments in military contexts?  We are using historical cases to understand systems and chronic issues.  But does that change what we ought to do?

     Tom Blass continued to draw implications from military behavioral norms to psychological ethics.  As long as we need a military to defend us, certain behaviors will be acceptable in the military that are not acceptable in civilian life.  And this inevitably lowers the bar for psychological experiments in military settings.

     Jancis Long agreed that the question of whether to accept a lower bar should be admitted.  But, even granting the lower bar, the crucial question is whether there are certain behaviors that are not acceptable for psychologists even in military contexts.

     Martha Davis asked whether civilian deception research with military applications also has a lower bar?  Tom Blass said: “Yes, of necessity. We’re talking about training skills for soldiers that could be a matter of life or death for them.  In that context, telling them up front what this is all about can then destroy the validity of the effort and result in greater harm to them.”  In which case, Martha Davis observed, there’s a different criterion in the risk-benefit analysis at the outset.

     Exploring further the counternormative processes in the military context, Tom Blass spoke of the dehumanizing training of a soldier he knew who had volunteered for special forces.  The soldier had described mindless, humiliating, vulgarity-filled training regimens.  Only 10% of trainees completed the course.  If somebody had a lifelong goal of becoming an airborne ranger, then he had to put up with this abhorrent behavior by trainers.  The training style seemed to Tom Blass counterproductive, if the goal is to build soldiers’ trust in their leaders.  Forcing a soldier to do 200 pushups because he didn’t look straight at his drill sergeant would seem to undermine respect for one another.

     Ray Bennett explained that the training regimen Tom Blass found nonsensical and humiliating had been developed to be cost effective.  So the soldier’s socks weren’t rolled in a certain way!  These are things all recruits have; the military doesn’t have to spend more money to create this stressful environment.

     Referring to well known psychodynamics of behavior, Susan Opotow noted that the training experience of being brutalized facilitated the soldiers’ later brutalization of others.

     Dan Christie returned to the question of ethics in relation to context: whether the bar for ethical conduct by psychologists varies from civilian to military contexts.  Certainly the historical context matters.  From the 1970s, federally funded research required approval by Institutional Review Boards.  He was doing developmental studies in the 1970s, when the decision had just been made that researchers needed parental permission to use child subjects.  Berkun’s psychological stress experiments all took place in a certain context.  What was the social science information gained from these studies?  Then, later, what was the emotional and physical cost of these studies?  For example, in the ditching experiment, in which the recruits were led to believe their airplane would crash, there are many other ways to get information about that.  People actually do ditch airplanes.  How do people react when they think they’re going to die right away?  There is naturalistic observation research that can be done on that.  What is it we learn from these ethically problematic studies that cannot be learned far better through alternative, clearly ethical methods?

     Tom Blass recharacterized the experiments to be not only about learning but also about applications to the military mission.  In the ditching and demolition experiments, the researchers learned about physical effects as well.  The science was not dominant; the military mission was dominant.  Martha Davis inquired whether the application had been primary for Milgram.  “Yes, it was.  One day you’re neighbors.  The next day you’re ready to kill them.  There were maybe a million killers and 60 million other Germans letting their neighbors be carried off.  How did this happen?”

     Dan Christie returned to the demolition study, noting that of the five life-threatening disasters contrived by Berkun it had induced the greatest psychological stress in the soldier-subjects.  “You’ve hurt your comrade.  Maybe it’s useful that we can scale what matters most.  But how can we weigh the cost of these studies until we know what we are learning from these studies?”


     Reporter’s Comment: For psychological studies that cause harm, characterization of the rationales and actual consequences—both epistemic and operational—emerged as one of the central ethical issues during this session.