
The Litigation Consulting Report

[New and Free E-Book] The Litigator's Guide to Combating Junk Science - 2nd Edition

Posted by Ken Lopez on Mon, Jun 8, 2015 @ 03:16 PM

 

by Ken Lopez
Founder/CEO
A2L Consulting

We have long participated in a joint publishing effort with Innovative Science Solutions (ISS), a company that provides strategic consulting services designed to ensure that you are prepared and knowledgeable about scientific and technical issues relevant to your case.

A2L has partnered with ISS for the benefit of many law firms and corporations. We have already had the pleasure of working together on everything from tobacco litigation to hydraulic fracturing to alleged health effects of cell phones. Along the way, we have learned, often by overcoming enormous challenges, how to make science your ally -- whether inside or outside the courtroom.

Today, A2L and ISS have just published the new and revised second edition of their e-book, The Litigator’s Guide to Combating Junk Science. The book is built on the following important concepts:

  1. Science plays a critical role in the courtroom. Access to scientific research and an understanding of scientific principles, together with the ability to convey this information effectively, enable the litigator to build a powerful case. That presentation must make complex technical concepts clear and show how they fit within the relevant law. But first and foremost, the litigator must sort sound science from junk science.
  2. Many legal actions rely heavily on scientific information and testimony: personal injury, consumer protection, medical malpractice, securities law and patent law. Junk science can be present in any of them.
  3. Frequently, the case will amount to a battle of the experts, who will engage in a debate about the validity of the scientific evidence presented. Even the experts often disagree when interpreting sound scientific data.
  4. Dubious or biased scientific information is all too present in the courtroom. Judges and juries tend to accept any scientific information placed before them, for better or worse, and can decide a case incorrectly. That is one of the problems with junk science.
  5. However, when a case relies on misinformation, unsubstantiated claims, and misleading data, opposing counsel can successfully counterattack by using and providing access to the right resources.

This comprehensive, 2nd Edition e-book identifies examples of junk science; after all, how can you combat junk science if you cannot identify it?

The e-book also provides a checklist for identifying credible scientific sources online and rejecting those that are not. It notes that peer review is one of the foundations of good science, but that the concept is also abused to push junk science. And because the fight against junk science in the courtroom has raged for many years, the book collects some terrific resources for continuing that fight, including government resources that allow you to counter misinformation with scientifically sound principles.

Among the topics covered in the book are: “What Is Junk Science?” “Limitations of the Peer-Review Process,” “Teaching Science to Jurors,” “Explaining Complex Science/Statistics Using Trial Graphics,” and “Anti-Junk Science Websites.”

We are confident that by reading this e-book, you will become familiar with the hallmarks of junk science and that you will be able to recognize it and successfully argue in court against the use and admissibility of junk science.


Tags: Statistics, Trial Consultants, Trial Presentation, Litigation Consulting, E-Book, Demonstrative Evidence, Juries, Jury Consultants, Science, Product Liability

10 Key Expert Witness Areas to Consider in Your Next Toxic Tort Case

Posted by Ken Lopez on Wed, Jul 17, 2013 @ 07:07 AM


by David H. Schwartz

Managing Director, Scientific Support to Counsel,
Innovative Science Solutions

The key to any toxic tort case involving complex scientific concepts is retaining the right experts. However, as any experienced litigator well knows, finding the right expert is not a simple or straightforward matter. Although getting the right lead on a specific individual can be challenging, half the battle is often identifying the right type of expert for your case.

Here are 10 broad expert areas that you should consider for your next toxic tort case. We subdivide each expert area into the relevant sub-disciplines that you should consider.

  1. Toxicology

    In many ways, the toxicologist is the core expert in any toxic tort case. Toxicology is the branch of biology, chemistry, and medicine concerned with the study of the adverse effects of chemicals on living organisms. Like a pharmacologist in a pharmaceutical case, a toxicologist specializes in evaluating adverse health risks posed by chemical exposures.

    There are many kinds of toxicologists that should be considered for any toxic tort case. A clinical or medical toxicologist is a physician with a board certification in toxicology. A reproductive toxicologist is an individual (Ph.D. or MD) who specializes in evaluating adverse health effects of chemical exposures on the fetus or offspring. Some toxicologists have particular expertise in evaluating human exposures, while others specialize in assessing animal exposures. Finally, risk assessment toxicologists focus on quantifying and assessing risks from chemical exposures. Retaining the right kind of toxicologist (or multiple toxicologists) for your toxic tort case is critical.
     
  2. Epidemiology and Statistics

    An epidemiologist specializes in studying exposure-disease relationships, a key factor in achieving a positive outcome in a case. It is rare to see a toxic tort case where there are no published data on the chemicals of interest. A skilled epidemiologist is critical to an effective analysis of those data since he or she can provide relevant testimony to address claims that the data support plaintiffs’ case. Epidemiologists relevant to toxic tort cases can be broadly divided into occupational and environmental specialties, and the appropriate choice is dictated by the type of exposure that is at issue in the case.

    In addition, because all scientific data (including epidemiological data) are interpreted using statistical techniques, you will probably also need a statistician. Whether you are confronting animal experiments, epidemiological studies, or in vitro mechanistic data, a statistician can help interpret the data and respond to your adversary's interpretation of the same data. Most often this will be a biostatistician, but depending on the nuances of the case, you may need a statistician who specializes in psychological data, or an expert in data analytics or informatics.

     
  3. Industrial Hygiene

    Industrial hygiene is the study of workplace factors that may result in harm or injury to employees or contract workers. You will need an industrial hygienist for any case involving workplace exposures in which you confront allegations that those exposures resulted in injury. Different industrial hygienists specialize in different kinds of assessments. Some individuals focus on airborne exposures, while others focus on assessment of physical agents, such as machinery. If radiation is a particular concern in a case, a certified health physicist may be a valuable expert to pursue.
     
  4. Environmental Science

    Environmental science is the study of environmental factors that could affect human health. Experts in this area include soil scientists and air- and water-modeling specialists, who excel at providing hazard assessments for soil exposures and dispersion modeling for airborne chemical releases and water exposures.
     
  5. Medicine

    By definition all toxic tort cases involve alleged injuries to human beings. You will therefore need credentialed physicians as experts in the specific medical areas related to the allegations in the case. These experts will most often testify as to the plaintiff’s specific medical condition, including whether or not the diagnosis is appropriate and whether there is general acceptance that the exposure is linked in some way to the disease state at issue. Quite often, toxic tort cases will require surgical or medical oncologists to testify about cancer issues, but all kinds of other medical specialties often come into play including dermatology, neurology, pulmonology, and cardiology.
     
  6. Clinical Psychology

    When human behavioral issues come into play, it is critical to enlist an expert in psychology. In our experience, the most relevant expert is a licensed neuropsychologist, who can address allegations of brain damage and neuropsychological deficits. A trained clinical psychologist, however, is often needed as well.
     
  7. Scientific Specialty

    Quite often a toxic tort case will involve issues that call for a scientist in a specific discipline. These specific scientific disciplines can include genetics, molecular biology, physiology, psychology, and neuroscience. These experts will often be called upon to provide general education to the judge or jury and can be an extremely important component of making the defense case.
     
  8. Regulatory

    The goal of an expert in this area is to provide testimony that your client complied with the appropriate regulations. Every lawyer who tries toxic tort cases knows that regulatory experts can be among the most difficult to find. Depending on the nature of the case, you may require an expert with specific experience dealing with the EPA, OSHA, or sometimes even the FDA. Most often, you will want someone who was actually employed at one of these regulatory agencies, but sometimes it is sufficient to have an expert who has experience complying with the regulations in some capacity.
     
  9. Physical Sciences

    Many toxic tort cases require the retention of experts in the physical sciences, including hydrogeologists, seismologists, petroleum engineers, materials scientists, and process engineers. Whether these experts are needed, and which kind is critical, usually depends closely on the specific facts and allegations of the individual case.
     
  10. General Causation

    A general causation expert is an individual who is going to wrap up your case and tell the causation story. This expert is often an epidemiologist or a clinical toxicologist, but in our view, it is helpful to think of him or her as a separate category. This expert should have special knowledge and training that allow him or her to synthesize the science in the case and come to an educated conclusion about causation.

    David H. Schwartz, Ph.D.
     of Innovative Science Solutions has extensive experience designing programs that critically review the scientific foundation for product development and major mass tort litigation. For 20 years, he has worked with the legal community evaluating product safety and defending products such as welding rods, cellular telephones, breast implants, wound care products, dietary supplements, general healthcare products, chemical exposures (e.g., hydraulic fracturing components, pesticides, and other chemical exposures), and a host of pharmaceutical agents (including antidepressants, dermatologics, anti-malarials, anxiolytics, antipsychotics, and diet drugs). Innovative Science Solutions and A2L Consulting are frequent partners in high-stakes litigation and advocacy.


    Tags: Statistics, Trial Graphics, Trial Consultants, Litigation Graphics, Demonstrative Evidence, Science, Expert Witness, Toxic Tort

    The Top 10 TED Talks for Lawyers, Litigators and Litigation Support

    Posted by Ken Lopez on Thu, Dec 13, 2012 @ 06:15 AM


    by Ken Lopez
    Founder & CEO
    A2L Consulting

     

    In the 1980s, a small conference was started in California focused on topics related to technology, entertainment and design. Now known by the acronym TED, what was once a small conference is now an international movement devoted to the dissemination of "Ideas Worth Spreading." 

    The format is simple. Compelling speakers with compelling messages are invited to speak for between five and 20 minutes to a live audience. The talks are video recorded and generally posted online. These online TED Talks have been viewed over one billion times worldwide.

    Some TED Talks are among the most popular educational materials on the Internet, and there is a lot that lawyers, litigators and litigation support professionals can learn from them. Whereas a PSY video may be the most watched video of all time on YouTube, TED Talks are the viral videos of the intellectually curious.

    While the TED conference itself is pricey to attend live, there are now TEDx events as well. These are locally organized TED-style events that are only loosely affiliated with the parent organization. On average, five occur every day somewhere in the world, across more than 1,200 cities, and they are inexpensive or free to attend.

    I regularly attend TEDx talks that are close to me. They are inspiring, they are motivating, they are moving, and sometimes you even find a major law firm litigation partner speaking at one. I recommend you find one near you to attend.

    Here are 10 TED videos that I believe are especially helpful to lawyers, litigators and litigation support professionals.

    1) Changing How You Are Perceived by Changing Your Body Language: Whether you are trying a case in front of a jury, negotiating a deal, or managing a litigation support team, how you are perceived will change how people react to your message. Oddly, it turns out that by purposefully changing your body language, you will not only change how you are perceived, you will measurably change your own body chemistry.

     

    2) Inspire and Persuade Others by Speaking in this Order: If you see me speaking somewhere or if I am advising on the development of an opening statement, you'll notice that I follow the teachings of Simon Sinek. I have recommended his golden circle talk before, and I still think it is among the best TED Talks, because it is just so easy to implement. 

     

    3) How Lawyers Can Tell a Great Story (R-Rated): The writer of Toy Story, WALL-E and others reminds us of something critical to any trial presentation, "Make me care!" Learning to tell better stories may be one of the best skills a litigator can learn. Making an emotional connection with your audience is how you get them on your side - not by overloading them with facts, details and backup.

     

    4) How to Structure a Great Talk: Nancy Duarte does a great job of explaining how to structure a good story and offers a format that can be applied easily to any brief, opening statement or closing argument.

     

    5) Persuading the Rational Decision-maker: The speaker reminds us that decisions are made on emotion and justified on fact. This is true in sales, and it is true in the jury deliberation room. To persuade, we must trigger people's encoded memories and their emotions. Even if your role is that of litigation support on a trial team, it is critical to remind trial counsel of the importance of these lessons. Remember, you can always forward this article.

     

    6) How Statistics Fool Juries: We've written before on topics related to statistics including the use of trial graphics to teach statistics for trial and statistical significance as it relates to litigation. For anyone making a Daubert challenge, this is an especially useful talk.

     

    7) Negotiating Effectively from the author of Getting to Yes: He shares his journey of walking in the steps of Abraham and how it may serve as a model for Middle East peace. In the process, he reminds us of how to negotiate effectively as lawyers, litigators and litigation support professionals by looking at the third side.

     

    8) Let's Simplify Legal Jargon: As a designer with a law degree and a passion for simplicity, my eyes open wide any time someone says they want to simplify legal things. Here, in less than five minutes, another designer who has spent some time in law school, Alan Siegel, shows how he simplified IRS notices and credit card statements.

     

    9) Battling Bad Science and How Evidence Can Be Distorted: An epidemiologist reminds us of how science can easily be interpreted incorrectly. Since we often consult on litigation where human health effects are alleged, sometimes on a mass scale, I find this talk helpful. It reminds me how often evidence is distorted to try to create liability.

     

    10) Harnessing the Power of Introverts: I saw former corporate lawyer Susan Cain speak at a conference recently, and I found her talk eye-opening. Not only did I re-discover some of my buried but natural introvert roots, but I learned better techniques for leading introverted members of my team. Whether you lead a trial team, a litigation support group or a law firm, this is an important talk to hear for leaders.

     

    I hope you've enjoyed the videos. If you've watched a number of them, you'll notice a similar presentation style, one you might compare to a Steve Jobs keynote or to the approach of Garr Reynolds or Cliff Atkinson. This is a style I want to see more litigators embrace during opening and closing arguments.

    Notice the lack of bullet points throughout the presentations. We wrote about avoiding the use of bullet points in July, and it has been one of our most popular articles ever.  And I don't think a TED Talk is all that dissimilar from an opening or closing statement.

    Like this 2012 article? Here's a great follow-up article from 2014: The Top 14 TED Talks for Lawyers and Litigators 2014


    Tags: Statistics, Trial Presentation, Courtroom Presentations, Litigation Consulting, Litigation Support, Psychology, Bullet Points, Opening, Closing Argument, Body Language, Negotiation

    Litigation Support: Making Sense of the Statistically Significant

    Posted by Ken Lopez on Thu, Sep 6, 2012 @ 08:45 AM


    A Q&A about using statistics in litigation with David Schwartz, Ph.D., and Nathan Schachtman, Esq., moderated by litigation support specialist Ken Lopez

    Recently, we posted an article discussing the effective use of trial graphics to help win your cases involving statistical principles. In that prior article, David Schwartz, Ph.D. of Innovative Science Solutions served as a coauthor, helping us to address some fundamental principles related to the use of statistics and hypothesis testing. We ended the article with a very important question: What can we conclude from studies failing to show statistical significance?

    In this article, we attempt to address that important concept in a Q&A with Dr. Schwartz and Nathan A. Schachtman, Esq., an attorney with a nationally recognized legal practice, and who also teaches statistics to law students at Columbia University School of Law. The session was moderated by Ken Lopez, Founder & CEO of A2L Consulting, a national litigation support services firm.

    *****************************

    Ken Lopez (moderator): Nathan, you reviewed the article that David and I posted about using trial graphics to address some very fundamental principles in statistics?

    Nathan Schachtman: Yes, I read your post with great interest; you wrote about issues that are at the heart of what I teach to law students at Columbia.

    Moderator: And what do you think about the value of using the right trial graphics provided by a litigation support services firm to teach these principles? 

    Schachtman: When it comes to teaching judges and jurors, I believe that only graphics will allow us to overcome fear and loathing of mathematics, symbols, and formulae. Trial graphics are extremely important to any attempt to educate non-scientists and scientists alike.

    Schwartz: So, Nathan, we want you to help us understand what we can and cannot conclude from data that are not statistically significant. Why is that such an important issue?

    Schachtman: Statisticians and careful scientists are well aware of the fallacy of embracing the so-called null hypothesis of no association from a study that has not found a statistically significant disparity. We see the fallacy in the law where it infects some defense counsels' thinking, and some judges' thinking, when they actually conclude no association from a statistically insignificant result. An inconclusive study is, well, inconclusive, and sometimes that is all we can say.  Still, even though the burden of proof is typically upon the party claiming the causal effect, we all are very interested to know under what conditions we can say there really is no effect.


    Schwartz: So, let’s assume we have a study where we cannot reject the null hypothesis. Let’s say our p-value is 0.2?

    Schachtman: That is where the problem starts to arise. Essentially, we can conclude little to nothing from a single study with a p-value in that range.  The size of the p-value tells us that a disparity at least as large as we saw between the expected and observed values could well have been the result of chance, assuming there was no difference.  We say we have failed to rule out random variability as creating the disparity.

    Moderator: Can you give us an example?

    Schachtman: Suppose we flip a coin 10 times, and we observe 6 heads and 4 tails.  Is this coin lopsided?  The answer is "we do not know." The heads/tails ratio observed was 1.5, and that might be the best estimate of the correct, long-term value, but our evidence is very flimsy because of random variation.
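    Schachtman's coin example is easy to check numerically. The short Python sketch below (an illustration added here, not part of the original exchange) computes the exact two-sided p-value for 6 heads in 10 flips of a fair coin:

```python
from math import comb

def two_sided_binomial_p(k, n, p=0.5):
    """Exact two-sided p-value: probability, under a coin with heads
    probability p, of an outcome at least as extreme as k heads in n
    flips, where 'as extreme' means at most as probable as k itself."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = probs[k]
    return sum(q for q in probs if q <= observed + 1e-12)

p_val = two_sided_binomial_p(6, 10)
print(round(p_val, 3))  # prints 0.754 -- far above 0.05, so fairness cannot be rejected
```

A p-value of about 0.75 is exactly Schachtman's point: 6 heads in 10 flips is entirely consistent with a fair coin, so the data settle nothing.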

    Schwartz: Why can't we just accept the null hypothesis? It’s the most likely scenario; right?

    Schachtman: No; no; no. The null hypothesis is set as an assumption, and you can't prove an assumption by simply assuming it to be true. The nature of much of statistics, not all, is based upon assuming a so-called null hypothesis, and a reasonable model of probabilistic distribution of events, and asking how likely is it to observe data at least as extreme as we have observed.  In many situations, when we obtain an answer that the likelihood of observing data at least as extreme as observed is greater than 5%, we say we cannot reject the starting assumption of no association.  Keep in mind that we are talking about the result of a single study, with the p-value greater than 5%.

    Schwartz: Many people – scientists and lawyers alike – have transformed this probability into the likelihood of the null hypothesis; haven’t they?

    Schachtman: True, they have done that.  Any number of courts, expert witnesses, lawyers, litigation support services firms and even published, peer-reviewed articles have stated that a high p-value provides us with the likelihood of the null hypothesis. The mistake is so common, it has a name: “the transpositional fallacy.” The critically important point is that the p-value tells us how likely the data (or the data more extreme) are, given the null hypothesis, and that the p-value does not provide us with a likelihood for the null hypothesis.
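    The transpositional fallacy can be demonstrated with a toy simulation (hypothetical numbers, added here for illustration). Half the simulated "worlds" have a fair coin, so the null hypothesis is true, and half have a coin biased to 70% heads. Among the runs that come out "not statistically significant," the fraction in which the null is actually true is driven by the prior mix of worlds and the test's power, not by the p-value itself:

```python
import random
from math import comb

random.seed(1)

def p_value(k, n):
    """Exact two-sided binomial p-value against a fair coin."""
    probs = [comb(n, i) / 2**n for i in range(n + 1)]
    return sum(q for q in probs if q <= probs[k] + 1e-12)

n, trials = 20, 20000
null_not_sig = alt_not_sig = 0
for _ in range(trials):
    fair = random.random() < 0.5           # prior: half the worlds have a fair coin
    bias = 0.5 if fair else 0.7
    heads = sum(random.random() < bias for _ in range(n))
    if p_value(heads, n) > 0.05:           # "not statistically significant"
        if fair:
            null_not_sig += 1
        else:
            alt_not_sig += 1

frac_null = null_not_sig / (null_not_sig + alt_not_sig)
# Roughly 0.6 here: not equal to any p-value, and it shifts if the prior shifts.
print(round(frac_null, 2))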

    Schwartz: But the null hypothesis is what the Defense is really interested in; isn’t it?

    Schachtman: The probability of the null hypothesis, or of the observed result, is what everyone in the courtroom is interested in; no question about it. But our desire for an answer of one type doesn’t change the fact that the p-value in traditional hypothesis testing does not allow us to talk about the likelihood of our hypotheses, but only about the likelihood of obtaining the data or data more extreme, given the null hypothesis.

    Moderator: Yet people get this wrong all the time, don’t they?

    Schachtman: Absolutely. That’s the trap that judges, lawyers, and even statisticians fall into. I’ve written extensively about this on my blog (see this post, for example, where I cite many legal cases where statistical conclusions have been misstated).

    Schwartz: Can you give some examples of the types of misstatements you have seen?


    Schachtman: In one litigation that I tried to verdict, the federal judge who presided over the pre-trial handling of claims said “P-values measure the probability that the reported association was due to chance… .”   See In re Phenylpropanolamine (PPA) Prods. Liab. Litig., 289 F.Supp. 2d 1230, 1236 n.1 (W.D. Wash. 2003).  The judge who wrote this incorrect statement was the director of the Federal Judicial Center, which directs the educational efforts of judges on scientific issues.  I assure you though that this was not an isolated example of this fallacy.

    Schwartz: So, at the end of the day, what can we say about null data? After all, when there are studies showing no difference, the Defense should be able to highlight those studies; shouldn’t they?

    Schachtman: Of course. First, let me note that you have now postulated that there are multiple studies showing no difference. Remember, the burden of proof is supposed to be on the plaintiff, so the defense typically need only show that the plaintiffs cannot prove what they claim. Of course, defendants would like, if they can, to go further and interpret the data as showing no association, and multiple null studies do form an important part of the defense case. The defense must be careful, however, not to overstate the conclusions from a single null study. As usual, the devil is in the details.

    Moderator: What do you mean?

    Schachtman: Well, you can actually have a number of different scenarios with respect to null outcomes. Let’s go with the benzene in the fish example that you outlined in your previous blog post. I can think of three interesting scenarios. 


    Schachtman: Scenario 1: A single study, with a good deal of random variability, which fails to reject the null with extremely low statistical significance (say, p = 0.4).

    Scenario 2: A study with a good deal of statistical precision, which fails to reject the null hypothesis, with marginal statistical significance, say p = 0.053.

    Scenario 3: A series of studies with good statistical precision, each of which fails to reject the null hypothesis.
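    The three scenarios can be sketched with a toy simulation. The sample sizes and effect sizes below are hypothetical, chosen only to mimic each scenario; they are not taken from any real fish data:

```python
import random
import statistics
from math import erf, sqrt

random.seed(7)

def two_sided_p_from_z(z):
    """Two-sided p-value for a z statistic, normal approximation."""
    phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))
    return 2 * (1 - phi(abs(z)))

def study(n, true_diff):
    """Hypothetical benzene study: n 'Refinery Fish' vs n 'Control Fish'
    with unit-variance measurements and true mean difference true_diff.
    Returns the two-sided p-value for the observed difference in means."""
    refinery = [random.gauss(true_diff, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    z = (statistics.mean(refinery) - statistics.mean(control)) / sqrt(2 / n)
    return two_sided_p_from_z(z)

print("Scenario 1, small imprecise study:", round(study(10, 0.0), 2))
print("Scenario 2, large borderline study:", round(study(400, 0.14), 3))
print("Scenario 3, several precise null studies:",
      [round(study(400, 0.0), 2) for _ in range(4)])
```

In Scenario 1 the p-value bounces around so much from run to run that a single result tells us almost nothing; in Scenario 3, repeated precise studies that each fail to reject begin to carry real weight.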

    Schwartz: Why don’t you go through your interpretation of each of the different scenarios you just outlined. Let’s start with Scenario 1, a single, statistically imprecise study, which fails to reject the null hypothesis.

    Schachtman: In this scenario, you essentially know very little more than you did before you did the study. You have failed to reject the null hypothesis, but because your study had little statistical precision, the defense cannot really conclude anything about the null hypothesis. To be fair, it would be entirely inappropriate for the plaintiffs to use this example to further their case either. The situation is almost as if the study did not exist. It is very much like my example of flipping a coin 10 times, and observing 6 heads, and 4 tails.  We cannot say whether the coin is fair or not fair.

    Schwartz: This gets us into the realm of “absence of evidence” vs. “evidence of absence?”

    Schachtman: That’s right. Technically, the defense has no burden of proof.  If the defense chooses to offer evidence, it may decide to show only that there is no evidence supporting the plaintiff’s case. However, the defense typically wants to go beyond its technical burden and to show that there is affirmative evidence exonerating the defendant; that is, the defense often would like to show the so-called “evidence of absence.”

    Moderator: Can you elaborate on that?

    Schachtman: If you are going to be statistically correct, you couldn't argue that this study demonstrated “evidence of absence” – i.e., that the Refinery Fish have the same benzene levels as the Control Fish. You flip a coin 10 times and get 6 heads and 4 tails; do you have a coin that is unfairly weighted, or a fair coin that will yield 50% heads over the long haul? The observation of 10 flips simply doesn't help us answer the question. In the example, we simply can't say whether the Refinery Fish have a higher level of benzene than other fish. We have inconclusive evidence. End of story.

    Schwartz: The next scenario (Scenario 2): a reasonably large, statistically precise study that fails to reject the null with marginal statistical significance, say p = 0.053?

    Schachtman: In this case, plaintiffs may be able to argue that although the study didn’t reach statistical significance by the 0.05 standard, it is reasonable to rely on a slightly relaxed standard and to therefore reject the null – i.e., conclude that the Refinery Fish may actually have higher benzene levels than the Control Fish. They would highlight that the 0.05 standard is just a convention and that we shouldn’t slavishly adhere to this standard. The difference between the attained significance probability, 0.053, and the convention, 0.05, is itself not compelling.

    Schwartz: And how would the Defense respond?

    Schachtman: The Defense would counter that this is the generally accepted standard and that we need some sort of bright line cut-off value. The law needs a "test." Certainly in this type of scenario, it is difficult for the Defense to argue for “evidence of absence”. The defense will want to argue for “absence of evidence” but that becomes difficult the closer the p value is to the conventional 0.05 cut off.  If the association is real, then the plaintiffs should not have difficulty obtaining a p-value under 5% by increasing their sample size. One question courts struggle with is whether it is reasonable to insist on a large study to resolve the statistical question.
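    The sample-size point can be made concrete with a standard power calculation. The sketch below (an illustration using the normal approximation, not anything from the interview) shows how the power of a two-sided two-sample test grows with n per group for a fixed, modest true effect:

```python
from math import erf, sqrt

def power_two_sample_z(effect_size, n, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test with n subjects
    per group, for a standardized mean difference effect_size (Cohen's d)."""
    z_alpha = 1.959964          # two-sided 5% critical value
    shift = effect_size * sqrt(n / 2)   # noncentrality of the z statistic
    phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))
    return 1 - phi(z_alpha - shift) + phi(-z_alpha - shift)

# For a real but modest effect (d = 0.4), power climbs toward 1 as n grows.
for n in (25, 50, 100, 200):
    print(n, round(power_two_sample_z(0.4, n), 2))
```

This is why a persistent p just above 0.05 cuts both ways: if the association were real, a larger study should readily push the p-value under the conventional cutoff.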

    Schwartz: And the last scenario: a series of reasonably statistically precise studies that each fail to reject the null hypothesis?

    Schachtman: Now we are in a scenario where it becomes much more reasonable to argue that we can accept the null hypothesis as a reasonable inference from our data. When a hypothesis has been repeatedly and severely tested, and the tests consistently fail to find an association, there comes a point at which we lose interest in the claim that there is an association, and we embrace a conclusion of no association. After looking under my bed many times, with bigger and bigger flashlights, lasers, and motion detectors, and failing to find any communists, I have come to believe that there are no communists under my bed. I sleep much better, and I stop taking my Xanax. Indeed, we have seen this phenomenon of repeated, severe testing leading to the acceptance of no association: a rigorous legal and medical review of the evidence on silicone breast implants and the risk of systemic autoimmune disease came to exactly this conclusion [see IOM report].

    Moderator: Why is it so complicated? We trial graphics and litigation support firms are in the business of simplifying!

    Schachtman: A bit too much to go into here. But there has been a lot of writing on this issue, going back at least to the great statistician Sir Ronald A. Fisher, who refined the notion of significance tests back in the 1920s.


    Schwartz: Sometimes the seminal papers are difficult to get through. Anything more modern?

    Schachtman: Actually, Sir Ronald wrote with wonderful clarity, and some of his papers are not burdened with a great deal of mathematical formulae. The statistician Sander Greenland has dealt with this subject in numerous publications (here is a good example). Of course, the statistics chapter by law professor David Kaye and the late David Freedman, a very accomplished statistician, in the latest edition of the Reference Manual on Scientific Evidence, is an excellent resource.

    Schwartz: Is there any way to address the ultimate question? Any way that we can tell a judge or a jury that general causation is so unlikely that it shouldn’t be taken seriously?

    Schachtman: Actually, the classic hypothesis testing we have been talking about is called the frequentist model, and it was advanced by Sir Ronald Fisher in the 1920s and 1930s. There is a whole other approach to statistical inference, called Bayesian statistics, which theoretically would allow us to offer a probability of belief in the existence of an association. Some disciples of Bayesian statistics complain that the selection of the p-value cut-off for statistical significance is arbitrary, but the Bayesian school has its fair share of conceptual problems as well. That is a story for another day. I think the important point is that the ultimate question -- how likely is it that there is an association -- requires a qualitative synthesis of evidence across studies, and an evaluation of validity within studies.

    Schwartz: And, finally, what about causation? You haven’t once mentioned causation.

    Schachtman: That is a good point. Because the fish study is based upon observational data -- as opposed to randomized or interventional studies -- we haven’t even begun to determine whether we have a reasonable case for causation or whether bias or confounding can better explain the data. Our statistical test addressed only random variability -- the role of chance.

    Moderator: What other factors are there?

    Schachtman: The two additional factors we must address are bias and confounding. Bias refers to systematic errors, other than random variation, that threaten the validity of the study. Confounding refers to the presence of a "lurking" variable that is independently associated with both the exposure and the outcome. Bias and confounding can mask a real relationship, and they can falsely create the appearance of an association. We haven’t even begun to address these. Indeed, bias and confounding can often be much greater threats to the validity of a scientific inference than the role of chance. Stated simply, in evaluating causation from our statistical analyses of random variation in observational studies, we haven’t even gotten off the dime.

    Moderator: And that would involve what?

    Schachtman: Some folks would argue that we would have to analyze the available studies under guidelines laid out by Sir Austin Bradford Hill in his famous 1965 address to the Royal Society of Medicine. These criteria have come to be known as the Bradford Hill criteria. Actually, I believe those guidelines were pretty good for almost 50 years ago, but today we know much more is involved. But as with the Bayesian discussion, that is a story for another day.


    Tags: Statistics, Trial Graphics, Litigation Graphics, Courtroom Presentations, Trial Consulting, Litigation Support

    Using Trial Graphics & Statistics to Win or Defend Your Case

    Posted by Ken Lopez on Mon, Jul 9, 2012 @ 09:00 AM


    This article is coauthored by A2L Consulting’s CEO, Kenneth J. Lopez, J.D., a trial graphics and trial consulting expert, and David H. Schwartz, Ph.D., of Innovative Science Solutions. Dr. Schwartz has extensive experience designing programs that critically review the scientific foundation for product development and major mass tort litigation. For 20 years, he has worked with the legal community evaluating product safety and defending products such as welding rods, cellular telephones, breast implants, wound care products, dietary supplements, general healthcare products, chemical exposures (e.g., hydraulic fracturing components), and a host of pharmaceutical agents (including antidepressants, dermatologics, anti-malarials, anxiolytics, antipsychotics, and diet drugs).

    [See also follow-up article discussing the null hypothesis]

    Many of us have been there in the course of a trial or hearing. An expert or opposing counsel starts spouting obscure statistical jargon: terms like "variance," "correlation," "statistical significance," "probability," or the "null hypothesis." For most listeners, especially jurors, such talk can cause a mental shutdown, as the information seems obscure and unfamiliar.

    It’s no surprise that talk of statistics causes confusion in a courtroom setting. Sometimes, a number can be much higher than another number and yet the finding will not be statistically significant. In other instances, a number can be nearly the same as its comparison value and this difference can be highly statistically significant.
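    That counterintuitive behavior can be sketched in Python (an illustrative z-test approximation with made-up numbers): statistical significance depends on the variability and the sample size, not merely the size of the difference.

```python
import math

def two_sided_p(diff, sd, n):
    """Approximate two-sample z-test p-value for an observed difference
    in sample means (equal group sizes n, common standard deviation sd)."""
    se = sd * math.sqrt(2.0 / n)
    z = abs(diff) / se
    phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
    return 2.0 * (1.0 - phi)

# A big difference, measured noisily in tiny samples: not significant.
p_big = two_sided_p(10.0, sd=40.0, n=4)       # ≈ 0.72

# A tiny difference, measured precisely in huge samples: highly significant.
p_tiny = two_sided_p(0.2, sd=1.0, n=2000)     # far below 0.05
```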

    Helping judge and jury develop a clear and accurate understanding of statistical principles is critical – and using the right type of trial graphics can be invaluable. 

    Let’s demonstrate this by way of example.

    Suppose we want to know whether a petroleum refinery increases the level of benzene in fish that inhabit the coastal waters near the refinery.


    [Trial graphic: statistics, litigation, and the null hypothesis]

    The hypothesis is that the benzene level in the coastal fish near the refinery (the Refinery Fish) is higher than the benzene level in off-shore fish that live in waters far from the refinery (the Control Fish).

    [Trial graphic: the research hypothesis]

    Because we can never collect every single fish and measure benzene levels in all of them, we will never know the precise answer to the hypothesis (not to mention the fact that if we did, the study would be irrelevant because there would be no more fish). But we can sample some of the fish near the refinery and then compare the benzene levels in these fish to a sample of fish collected from the middle of the sea. Statistical techniques are a clever tool that we use to answer the research question, even though we haven't measured all the fish in each location.

    Unless one is trained in statistics, the evaluation might appear easy and straightforward. Simply compare benzene levels in the Refinery Fish sample to the benzene levels in the Control Fish sample and see which is higher. But what if our sample only reveals a very small difference between the benzene levels in the Refinery Fish sample compared to the Control Fish sample? How do we know if the difference we observed in our samples is a real difference (i.e., potentially due to a causal relationship with the refinery) or whether it was simply due to our sampling techniques (i.e., due to chance)? Statistical techniques provide us with a way to properly interpret our findings.
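    A short simulation makes the point about chance differences concrete (Python standard library only; the population values are invented for illustration): two samples drawn from the very same population still produce different averages.

```python
import random
import statistics

random.seed(7)  # fixed seed so the draws are reproducible

# One population of fish: true mean benzene level 4.0, spread 1.0.
population_mean, population_sd = 4.0, 1.0

# Two independent samples drawn from the SAME population.
sample_a = [random.gauss(population_mean, population_sd) for _ in range(10)]
sample_b = [random.gauss(population_mean, population_sd) for _ in range(10)]

mean_a = statistics.mean(sample_a)
mean_b = statistics.mean(sample_b)

# The sample means differ even though the fish come from one population:
print(round(mean_a, 2), round(mean_b, 2), round(mean_a - mean_b, 2))
```

    Any gap between the two averages here is pure sampling noise, which is exactly what a statistical test must rule out before a difference can be called real.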

    An overview of well-established statistical techniques surrounding hypothesis testing is in the trial graphic below:

    [Trial graphic: the hypothesis-testing decision tree]

    While this graphic is somewhat oversimplified, it does provide the basic steps that are taken in the hypothesis testing decision tree.

    Although imperfect [pdf], a criminal case serves as a useful analogy to help understand how statistics work. In a criminal case, the defendant is assumed to be innocent unless proven guilty beyond a reasonable doubt. In statistical terms, the overall trial can be likened to statistical testing of a hypothesis (i.e., did he do it?), and the presumption of innocence can be likened to the "null hypothesis." Like the null hypothesis, the starting point in a criminal trial is that the defendant is not guilty -- in statistical terms, that the connection you've set out to establish is just not there. The trial graphics below provide an overview of this concept. Again, this is an imperfect metaphor and is subject to criticism from a pure statistical vantage point. Nevertheless, it provides some assistance to the novice in clarifying the fundamental tenets of hypothesis testing.

    [Trial graphic: the criminal-trial analogy for the null hypothesis]

    Returning to our refinery hypothetical, we form our null hypothesis.

    In this case, the null hypothesis is that the Refinery Fish are exactly the same as all the other fish in the ocean in terms of benzene levels — specifically, that they come from the same population. Succinctly, the null hypothesis is as follows:

    Null Hypothesis

    There is no difference in benzene levels between the Refinery Fish and the Control Fish.

    In our study, as in all scientific studies, we will be testing how likely it is that we would obtain data at least as extreme as our data if the null hypothesis were true. In other words, we will be evaluating the conditional probability, given the null hypothesis, of obtaining the data that we observe.

    In plain English, proper statistical testing means assuming your research hypothesis is wrong and then evaluating the likelihood that you would come up with the findings that you did. Statistical testing is not about proving things true. Rather, it is about showing that the null hypothesis is likely not true. Only then can we reject the null hypothesis and conclude that our research hypothesis is plausible.
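    One intuitive way to compute that conditional probability is a permutation test, sketched below in Python (the benzene readings are invented, and this is one illustration of the logic rather than the only valid test): if the null hypothesis were true, shuffling the pooled fish between the two groups should produce differences as large as the observed one fairly often.

```python
import random
import statistics

def permutation_p(sample_a, sample_b, n_shuffles=10_000, seed=0):
    """Estimate the probability of a mean difference at least as extreme
    as the observed one if the null hypothesis (one common population)
    were true, by repeatedly re-shuffling the pooled data."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(sample_a) - statistics.mean(sample_b))
    pooled = list(sample_a) + list(sample_b)
    n_a = len(sample_a)
    extreme = 0
    for _ in range(n_shuffles):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_shuffles

# Hypothetical benzene readings:
refinery_fish = [8.1, 7.6, 9.0, 8.4, 7.9, 8.8]
control_fish  = [3.2, 2.9, 3.5, 2.7, 3.1, 3.4]
print(permutation_p(refinery_fish, control_fish))
```

    Because these two invented samples barely overlap, only a tiny fraction of shuffles reproduces a difference that large, and the estimated p-value comes out well under 0.05.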

    Determining whether or not it is reasonable to reject the null hypothesis is done by collecting data in a scientific study. Here, we start by measuring benzene levels in two samples of fish: (1) a group of fish near the refinery (Refinery Fish); and . . . 

    [Trial graphic: sampling the Refinery Fish]


    (2) a group of fish in the middle of the ocean, nowhere near the refinery (Control Fish).

    [Trial graphic: sampling the Control Fish]
    We will then calculate an average benzene level in each group of fish, which will serve as a reasonable estimate of the benzene level in each population of fish (i.e., all fish living near the refinery and all fish not living near the refinery). Of course, how we take our samples is a critical component of the study design, but we will assume for this example that we have used appropriate sampling techniques.


    Let's examine 3 possible outcomes in the trial graphics below. The first possibility will deal with an obvious result.

    [Trial graphic: outcome 1, an obvious difference]

    In this example, let's assume that every fish in the Refinery Fish sample had a benzene level of 10, and every fish in the Control Fish sample had a benzene level of 1. Thus, the average Refinery Fish benzene level is 10 and the average Control Fish benzene level is 1. When we do our statistical test, we calculate the conditional probability – i.e., the probability that we would have obtained this dramatic difference (10 vs. 1) given that the null hypothesis is true. This probability is called a "p value."

    In this case, the p value is so low (let’s say p = 0.00000001) that we reject the null hypothesis. Stated another way: the probability of obtaining such extreme data if the null hypothesis were true is 0.00000001. Based on this analysis, it doesn’t make sense to believe that we would have obtained these results if the null hypothesis were true. So we reject the null hypothesis.

    Our study was a success. We reject the null hypothesis, and we draw a clear-cut conclusion -- i.e., the Refinery Fish come from a different population of fish with respect to benzene levels. So we conclude that the refinery, absent other factors, may have something to do with the benzene levels in these fish. Because this difference was so clear-cut (every single fish in the Refinery Fish sample had extremely high benzene levels and every single fish in the Control Fish sample had extremely low values), we didn’t even need statistics to get our answer.

    Now let's look at another, more realistic, possibility. This time the difference between the two samples is a little less clear cut.

    [Trial graphic: outcome 2, a less clear-cut difference]

    In this example, the average benzene level in the Refinery Fish sample is 8 and the average benzene level in the Control Fish sample is 3. When we do our statistical test, we learn that the p value is 0.02. Said another way, the probability that we would have obtained these findings, given that the null hypothesis is true, is about 2%.

    Thus, as with the extreme example above, the probability of obtaining these findings, given that the null hypothesis is true is very low (not quite as low as in the prior example, but still pretty low). This raises the question: how low a probability is low enough?

    [Trial graphic: the 5% cut-off standard]

    Traditionally, statisticians have used a "cut-off" probability level of 5%. If the probability of obtaining a certain set of results is less than 5% (given the null hypothesis), then scientists and statisticians have agreed that it is reasonable to reject the null hypothesis. In this case, we reject the null hypothesis and conclude that the Refinery Fish must come from a different population than the Control Fish. Again, as with the earlier example, we conclude that, absent other factors, the refinery must have something to do with the benzene levels.

    So far, so good. Now, let's do one more. This time let's assume that the difference between the Refinery Fish sample and the Control Fish sample has gotten much smaller.

    [Trial graphic: outcome 3, a small difference]

    In this example, the average benzene level in the Refinery Fish sample is 5 and the average benzene level in the Control fish sample is 4. The benzene levels, on average, are numerically higher in the Refinery Fish compared to the Control Fish. But are they statistically higher? In statistical terms, how likely would it be to obtain these findings if all the fish were the same with respect to their benzene levels? In other words, is it reasonable to conclude we would have obtained findings this extreme if the refinery had nothing to do with the benzene levels?

    When we do our statistical test, we learn that the p value is 0.25. Thus, the probability that we would have obtained findings this extreme, given that the null hypothesis is true, is about 25%. One in four times that we took such samples, we would get findings like this even if the null hypothesis were true.

    A twenty-five percent chance is not so unlikely. It certainly doesn't meet the 5% cut-off rule (i.e., less than 5%). Therefore, statistical best practices tell us that we cannot reject the null hypothesis.
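    The decision rule applied throughout these examples can be written in a couple of lines (a sketch of the convention, using the three example p-values from above):

```python
ALPHA = 0.05  # the conventional significance cut-off

def reject_null(p_value, alpha=ALPHA):
    """Apply the conventional decision rule: reject the null hypothesis
    only when the p value falls below the cut-off."""
    return p_value < alpha

# The three outcomes from the examples above:
print(reject_null(0.00000001))  # obvious difference  -> True
print(reject_null(0.02))        # modest difference   -> True
print(reject_null(0.25))        # small difference    -> False
```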

    But what does it mean when we cannot reject the null hypothesis? Can we conclude that the null is true? This is actually a critical question, and it represents an area where statistics often get misused in court, in trial graphics, in the media and elsewhere. And what about other intervening factors like bias and confounding?

    Our next posts on using trial graphics and statistics to win or defend your case will grapple with these important questions. Please do leave a comment below (your email address is not displayed or shared).

     


     


    Tags: Energy Litigation, Statistics, Trial Graphics, Litigation Graphics, Courtroom Presentations, Demonstrative Evidence, Science, Environmental Litigation, Advocacy Graphics


