Blood Spatter Evidence in Bryan Case

Recent testimony from an evidentiary hearing in the Joe Bryan case in Texas has centered on blood spatter interpretation. Bryan was a school principal who has been serving a 99-year prison sentence for the 1985 murder of his wife. His conviction was based in part on the testimony of an expert witness in blood spatter interpretation.

The blood spatter testimony indicated that the weapon used was a .357, like the one that Bryan owned. The prosecution stated that a flashlight found in Bryan’s car days after the murder contained small drops of the victim’s blood, although only presumptive tests were conducted. The blood spatter analyst testified that the killer held the flashlight while shooting from close range.

Bryan continues to assert his innocence, and at an evidentiary hearing held last month his attorney argued – backed by the testimony of a crime scene investigator – that the blood spatter testimony presented at trial was scientifically unsubstantiated. The hearing is currently in recess, awaiting the results of DNA testing on the flashlight, which are expected sometime in the fall.

In May, Pamela Colloff published a two-part article describing in depth the background of the case and the testimony of the blood spatter expert witness. The articles are available at: (Part I) and (Part II).

Amicus Brief in McPhaul

A group submitted an amicus brief to the North Carolina Supreme Court arguing for stronger examination of the reliable application of fingerprint and other expert evidence.  I authored the brief with the remarkable assistance of a group of twenty-six leading forensic analysts, statisticians, and researchers, who signed the brief.  Duke Law notes that the group included Professor Nita Farahany JD/MA ’04 PhD ’06, who directs the Duke Initiative on Science and Society, and Pate Skene ’13, an associate research professor of neurobiology at Duke.

The brief was filed in a case in which the testimony of a latent fingerprint examiner helped to convict the defendant. The N.C. Court of Appeals affirmed the conviction last year but found error in the admission of the fingerprint testimony.

“The goal of the brief was to emphasize to the court that only expert work that is reliably applied to the facts should be admitted at trial,” said Garrett, the L. Neil Williams, Jr. Professor of Law. “Wrongful convictions can and have resulted when forensic methods are poorly applied in criminal cases.”

North Carolina rules of evidence require that forensic evidence presented in court be “the product of reliable principles and methods” and that an expert testifying about it “has applied the principles and methods reliably to the facts of the case.” However, the fingerprint examiner who testified in the case, North Carolina v. McPhaul, made “unequivocal statements” that the defendant was the source of prints found on certain pieces of evidence. Despite repeated questioning by the defense attorney, the prosecutor, and also the trial judge, the fingerprint examiner still could not explain what was done during the comparison process or how this conclusion was reached.

Garrett said the appellate court was correct in ruling that the fingerprint testimony should have been excluded at trial. “The reliability of a method like fingerprinting depends on the skill, experience, and the work done by the examiner,” he said. “North Carolina has adopted Rule 702 in the rules of evidence, following the federal approach. This ‘reliability rule’ asks the judge to ensure that the application of methods to the facts is itself reliable.  Even if the method itself is accepted, the person’s work in a given case must be reliable.”

AAAS Responds to Justice Department Fingerprint Guidelines

The American Association for the Advancement of Science released a letter and an article inviting the Department of Justice to build on its recent approval of “Uniform Language for Testimony and Reports” to be used by its forensic examiners in statements about analyses of latent fingerprint evidence conducted in its labs.

AAAS CEO Rush Holt stated, “some of the new [DOJ] measures constitute much-needed and welcome changes relating to the testimony or statements examiners are permitted to offer in latent fingerprint analyses.”

However, there is still “no scientific basis for estimating the number of individuals who might have a particular pattern of features; therefore, there is no scientific basis on which an examiner might form an expectation of whether an arrangement comes from the same source,” said Holt. “The proposed language fails to acknowledge the uncertainty that exists regarding the rarity of particular fingerprint patterns. Any expectations that an examiner asserts necessarily rest on speculation, rather than scientific evidence.”

Holt proposes that the Justice Department’s guidelines should instead instruct examiners to avoid conclusory language and unsupportable claims in favor of language that reflects scientific uncertainty in matching outcomes and processes.

Such language would allow examiners to note when two fingerprints display “a great deal of detail with no differences,” Holt proposed. Yet, such an observation would have to be accompanied with the admission that, “there is no way to determine how many other people might have a finger with a corresponding set of ridge features, but it is my opinion that this set of features would be unusual.”

A Special Investigation into a Conviction by Toolmark Comparison

The Nation conducted a special investigation into the investigation and trial of Jimmy Genrich, a man whose “fate hung on the [analysis of] toolmarks, the only physical evidence that connected him to” a series of fatal bombings in Colorado. The investigation concludes that “Genrich’s case reveals a system that makes it nearly impossible to throw unproven forensic science out of courts and may be keeping thousands of innocent people behind bars.”

The piece details the progression of the investigation, from the commencement of law enforcement interest in Genrich owing to his “history of mental illness” and his attempt to purchase The Anarchist Cookbook during the time frame in which the bombings occurred. In an investigation of Genrich totaling “more than $1 million,” the only potentially incriminating evidence located by the police included pliers and wire-strippers believed to have been used in constructing the bombs. The police recruited forensic analyst John O’Neil to compare Genrich’s tools to marks found on recovered bomb fragments. O’Neil concluded, and later testified, that “Genrich’s tool must have cut the wire in the bomb, ‘to the exclusion of any other tool’ in the world.”

During Genrich’s trial, his legal team learned there were no scientific studies to back up toolmark comparisons. Furthermore, “there was no standardized protocol to be followed. There were no criteria for how many points of similarity constituted a unique match. It seemed to be just O’Neil’s subjective judgment.” Despite these realities, the jury deliberated for four days and delivered a guilty verdict.

Today, Genrich is represented by the Innocence Project and is arguing that “the scientific consensus around toolmark evidence has changed.” He cites leading scientists at the NAS and PCAST who “say toolmark matching has not yet proved to be a scientifically reliable method,” and is “barely science at all.” Therefore, the kind of testimony O’Neil gave is “scientifically indefensible.” The Innocence Project argues this indefensible testimony constitutes “newly discovered evidence” and that Genrich deserves a new trial.

The piece concludes with a discussion of the future of toolmark comparisons. Toolmark analysis as a science is not without support: a single study from 2009, which tested toolmark examiners’ abilities in a controlled setting, found that eight FBI toolmark examiners made no errors in analyzing marks left by screwdrivers. However, “one small study, in which the researchers have a vested interest in the outcome, on one type of tool is hardly a validation of the field.” As Judge Catherine Easterly wrote in a recent opinion of the D.C. Court of Appeals, until scientists conducting toolmark comparisons can establish regulations and a clear error rate, “a certainty statement regarding toolmark pattern matching has the same probative value as the vision of a psychic: it reflects nothing more than the individual’s foundationless faith in what he believes to be true.”

OSAC Lexicon – online dictionary for forensics

A new registry of terms used in forensics, with definitions, is available here. As an example, here are ten different definitions of “identification”:
Identification – In computer forensics, a process involving the search for, recognition, and documentation of potential digital evidence. (Digital Evidence, Facial Identification, Video/Imaging Technology & Analysis (Digital / Multimedia), 02/23/18)

Identification – In facial identification, a task in which a biometric system searches a database for a reference matching a submitted biometric sample and, if found, returns a corresponding identity. (Compare individualization.) (Digital Evidence, Facial Identification, Video/Imaging Technology & Analysis (Digital / Multimedia), 02/23/18)

Identification – An examination conclusion that results from the observance of agreement of all discernible class characteristics and sufficient agreement of a combination of individual characteristics, where the extent of agreement exceeds that which can occur in the comparison of toolmarks made by different tools and is consistent with the agreement demonstrated by toolmarks known to have been produced by the same tool. Such identifications are made to the practical, not absolute, exclusion of all other tools. See Range of Conclusions Possible When Comparing Toolmarks. (Firearms & Toolmarks (Physics and Pattern Interpretation), 02/23/18)

Identification – An opinion by an examiner that the particular known footwear or tire was the source of, and made, the impression. This is the highest degree of association expressed in footwear and tire impression examinations. (Footwear & Tire (Physics and Pattern Interpretation), 02/23/18)

Identification – A task where the biometric system searches a database for a biometric reference matching a submitted biometric sample and, if found, returns a corresponding identity and biometric references, which can result in a biometric verification/authentication (i.e., an access control system). (Facial Identification (Digital / Multimedia), 02/23/18)

Identification – 1. See individualization. 2. In some forensic disciplines, this term denotes the similarity of class characteristics. (Friction Ridge (Physics and Pattern Interpretation), 02/23/18)

Identification – A classification process intending to discriminate individual members of a set. (Digital Evidence, Facial Identification, Video/Imaging Technology & Analysis (Digital / Multimedia), 02/23/18)

Identification – The conclusion that the sources of two samples cannot be distinguished from each other. (Digital Evidence, Facial Identification, Video/Imaging Technology & Analysis (Digital / Multimedia), 02/23/18)

Identification – The practice of using comparative examination to deduce the taxonomic origin of an organism, its parts, or derivatives (e.g., taxonomic identification). (Wildlife Forensics (Biology / DNA), 02/23/18)

Identification – See Individualization.

Grisham on Flawed Forensics – Read the Transcript

John Grisham wrote a powerful op-ed, here, today in the L.A. Times, discussing causes of wrongful convictions, including flawed forensic evidence.  He notes, citing data that I’ve collected, that “Of the 330 people exonerated by DNA tests between 1989 and 2015, 71% were convicted based on forensic testimony, much of which was flawed, unreliable, exaggerated or sometimes outright fabricated.”

Grisham then discusses a fantastic new book by Radley Balko and Tucker Carrington, “The Cadaver King and the Country Dentist,” which describes how, over many years, two experts in Mississippi testified about forensics to convict people who were later exonerated.

You can read the testimony in one of those cases, later shown to be false, in the death penalty case of DNA exoneree Kennedy Brewer, here, on my resource website.  The analyst concluded that Brewer’s teeth in fact left the marks – “Within reasonable medical certainty, the teeth of Kenneth—un, Mr. Kennedy Brewer inflicted the patterns described on the body” of the victim – and explained that “reasonable medical certainty” means “yes, he did” leave the marks.

March 26 Forensics, Statistics and Law conference at UVA

Forensics, Statistics and the Law

Experts in forensics, statistics and the law will convene for a conference at the University of Virginia School of Law on March 26 to mark the 25th anniversary of the U.S. Supreme Court’s decision in Daubert v. Merrell Dow Pharmaceuticals Inc., which reshaped how judges evaluate scientific and expert evidence.

Judge Jed Rakoff of the U.S. District Court for the Southern District of New York will deliver the keynote address at noon. The conference begins at 8:30 a.m. in the Law School’s Caplin Pavilion.

The Daubert ruling coincided with a surge in scientific research relevant to criminal cases, including the development of modern DNA testing that both exonerated hundreds of individuals and provided more accurate evidence of guilt.

“Leading scientific commissions have pointed out real shortcomings in the use of forensic evidence in the courtroom,” said professor Brandon Garrett, a participant in the conference and a principal investigator for the Law School’s Center for Statistics and Applications in Forensics Evidence, or CSAFE, projects. “The CSAFE collaboration, extending across four universities, including UVA, has been working with generous support from the National Institute of Standards and Technology to research these questions.”

Panelists will discuss how to develop better forensic evidence, how to analyze it more accurately in the crime lab and how to present it more effectively in criminal cases. Several contributions will be published in a special symposium issue of the Virginia Journal of Criminal Law.

The conference is sponsored by the Virginia Journal of Criminal Law and the Center for Statistics and Applications in Forensic Evidence.

The talks are free and open to the public. Attendees may contact Garrett at (434) 924-4153 for more information.


Monday, March 26

Caplin Pavilion

8:30-9:15 a.m.

Continental Breakfast

9:15-9:30 a.m.


  • Brandon Garrett, White Burkett Miller Professor of Law and Public Affairs and Justice Thurgood Marshall Distinguished Professor of Law, University of Virginia School of Law
  • Karen Kafadar, Commonwealth Professor and Chair, Department of Statistics, University of Virginia

9:30-10:30 a.m.

Introductory Remarks: The Importance of Statistics and Forensics

Statistics and Forensics

  • Susan M. Ballou, Program Manager, National Institute of Standards and Technology; American Academy of Forensic Sciences Fellow

Statistics and the Courts

  • Peter Neufeld, Co-Director, The Innocence Project, Benjamin N. Cardozo School of Law

10:45 a.m.-Noon

Statistics, Research and Forensics

  • Moderator: M. Chris Fabricant, Director of Strategic Litigation, The Innocence Project
  • Alicia Carriquiry, Distinguished Professor, Department of Statistics, Iowa State University
  • Hari Iyer, Statistical Design, Analysis, and Modeling Group, National Institute of Standards and Technology, U.S. Department of Commerce
  • Karen Kafadar, Commonwealth Professor and Chair, Department of Statistics, University of Virginia

Noon-1:15 p.m.


Keynote Address: Judging Forensics

Jed S. Rakoff, Senior Judge, U.S. District Court for the Southern District of New York

1:30-2:45 p.m.

Statistics in the Crime Lab

  • Moderator: Brandon Garrett, White Burkett Miller Professor of Law and Public Affairs and Justice Thurgood Marshall Distinguished Professor of Law, University of Virginia School of Law
  • Linda C. Jackson, Director, Virginia Department of Forensic Science
  • Sharon Kelley, Assistant Professor, Department of Psychiatry and Neurobehavioral Sciences, University of Virginia
  • Peter Stout, President and CEO, Houston Forensic Science Center
  • Henry Swofford, Chief, Latent Print Branch, Defense Forensic Science Center

3-4:30 p.m.

Bringing Statistics into the Courtroom

  • Moderator: William C. Thompson, Professor of Criminology, Law, and Society; Psychology and Social Behavior; and Law, University of California, Irvine School of Social Ecology
  • David L. Faigman, Chancellor and Dean, John F. Digardi Distinguished Professor of Law, University of California Hastings College of Law
  • David H. Kaye, Distinguished Professor of Law, Weiss Family Scholar, Penn State Law
  • A.J. Kramer, Federal Public Defender’s Office, District of Columbia
  • Barbara A. Spellman, Professor of Law, Professor of Psychology, University of Virginia School of Law

The Myth of the Reliability Test

A new piece by Chris Fabricant and me is now posted on SSRN here.  Below is the abstract:

The U.S. Supreme Court’s ruling in Daubert v. Merrell Dow Pharmaceuticals, Inc., and subsequent revisions to Federal Rule of Evidence 702, were supposed to usher in a reliability revolution. This modern test for admissibility of expert evidence is sometimes described as a reliability test. Critics, however, have pointed out that judges continue to routinely admit unreliable evidence, particularly in criminal cases, including flawed forensic techniques that have contributed to convictions of innocent people later exonerated by DNA testing. This Article examines whether Rule 702 is in fact functioning as a reliability test, focusing on forensic evidence used in criminal cases and detailing the use of that test in states that have adopted the language of the 2000 revisions to Rule 702. Surveying hundreds of state court cases, we find that courts have largely neglected the critical language concerning reliability in the Rule. Rule 702 states that an expert may testify if that testimony is “the product of reliable principles and methods,” which are “reliably applied” to the facts of a case. Or as the Advisory Committee puts it simply, judges are charged to “exclude unreliable expert testimony.” Judges have not done so in state or federal courts, and in this study, we detail how that has occurred, focusing on criminal cases.

We assembled a collection of 229 state criminal cases that quote and in some minimal fashion discuss the reliability requirement. This archive will hopefully be of use to litigators and evidence scholars. We find, however, that in the unusual cases in which state courts discuss reliability under Rule 702 they invariably admit the evidence, largely by citing to precedent and qualifications of the expert or by acknowledging but not acting upon the reliability concern. In short, the supposed reliability test adopted in Rule 702 is rarely applied to assess reliability. We call on judges to do far more to ensure the reliability of expert evidence and recommend sharper Rule 702 requirements. We emphasize, though, that it is judicial inaction and not the language of Rule 702 that has made the reliability test a myth.

The Cadaver King and the Country Dentist

Read Tim Requarth’s piece in Slate here about the gripping and important new book by Radley Balko and Tucker Carrington.  Requarth quotes from the book:

The primary antagonists in this story are Steven Hayne, the state’s former de facto medical examiner, and Michael West, a prolific forensic dentist. A third is the state of Mississippi itself—not its people, but its institutions. In a larger sense, blame rests on courts—both state and federal—media, and professional organizations that not only failed to prevent this catastrophe but did little to nothing even after it was clear that something was terribly wrong. What you’re about to read didn’t happen by accident.

Requarth then says:

The Cadaver King and the Country Dentist is a densely reported book that highlights not only the cases of Brewer and Brooks but also a dizzying array of other wrongful convictions. The authors conducted more than 200 interviews and reviewed thousands of pages of court documents, letters, memos, case reports, and media accounts to trace the contours of a corrupted system. Hayne, they note, performed 80 percent of Mississippi’s state-ordered autopsies, or about 1,700 annually. This stands in contrast to guidelines from the National Association of Medical Examiners, which states that performing more than 325 annually is tantamount to malpractice. Hayne’s pace was likely a problem. In one autopsy report, Hayne described removing the uterus and ovaries—from a man. But quality, perhaps, wasn’t the point. With West as a sidekick, the duo could be counted on to deliver the “evidence” prosecutors needed for convictions. Hayne would discover “bite marks” on a victim’s body, and West would be called in to match them to the suspect’s teeth.

Testimonial Monitoring

I’ve long thought it extremely important that testimony in court by forensic analysts be routinely read and reviewed by supervisors, to ensure accuracy, consistency, and professionalism.  The DOJ importantly just announced in a memo, here, a program to do just that.  The introduction to the memo reads:

Testimony monitoring is a quality assurance measure by which Department of Justice forensic laboratories and digital analysis entities can ensure that results of forensic analyses are properly qualified and appropriately communicated in testimony. Its purpose is to provide examiners with ongoing assessments of their testimonial presentations and to highlight opportunities for continual improvement.
