AAAS Responds to Justice Department Fingerprint Guidelines

The American Association for the Advancement of Science released a letter and an article inviting the Department of Justice to build on its recent approval of “Uniform Language for Testimony and Reports” to be used by its forensic examiners in statements about analyses of latent fingerprint evidence conducted in its labs.

AAAS CEO Rush Holt stated, “some of the new [DOJ] measures constitute much-needed and welcome changes relating to the testimony or statements examiners are permitted to offer in latent fingerprint analyses.”

However, there is still “no scientific basis for estimating the number of individuals who might have a particular pattern of features; therefore, there is no scientific basis on which an examiner might form an expectation of whether an arrangement comes from the same source,” said Holt. “The proposed language fails to acknowledge the uncertainty that exists regarding the rarity of particular fingerprint patterns. Any expectations that an examiner asserts necessarily rest on speculation, rather than scientific evidence.”

Holt proposes that the Justice Department’s guidelines should instead instruct examiners to avoid conclusory language and unsupportable claims in favor of language that reflects scientific uncertainty in matching outcomes and processes.

Such language would allow examiners to note when two fingerprints display “a great deal of detail with no differences,” Holt proposed. Yet such an observation would have to be accompanied by the admission that, “there is no way to determine how many other people might have a finger with a corresponding set of ridge features, but it is my opinion that this set of features would be unusual.”

A Special Investigation into a Conviction by Toolmark Comparison

The Nation conducted a special investigation into the prosecution and trial of Jimmy Genrich, a man whose “fate hung on the [analysis of] toolmarks, the only physical evidence that connected him to” a series of fatal bombings in Colorado. The investigation concludes that “Genrich’s case reveals a system that makes it nearly impossible to throw unproven forensic science out of courts and may be keeping thousands of innocent people behind bars.”

The piece details the progression of the investigation, from the commencement of law enforcement interest in Genrich owing to his “history of mental illness” and attempt to purchase The Anarchist Cookbook during the time frame in which the bombings occurred. In an investigation of Genrich totaling “more than $1 million,” the only potentially incriminating evidence located by the police included pliers and wire-strippers believed to have been used in constructing the bombs. The police recruited forensic analyst John O’Neil to compare Genrich’s tools to marks found on recovered bomb fragments. O’Neil concluded, and later testified, that “Genrich’s tool must have cut the wire in the bomb, ‘to the exclusion of any other tool’ in the world.”

During Genrich’s trial, his legal team learned there were no scientific studies to back up toolmark comparisons. Furthermore, “there was no standardized protocol to be followed. There were no criteria for how many points of similarity constituted a unique match. It seemed to be just O’Neil’s subjective judgment.” Despite these realities, the jury deliberated for four days and delivered a guilty verdict.

Today, Genrich is represented by the Innocence Project and is arguing that “the scientific consensus around toolmark evidence has changed.” He cites leading scientists at the NAS and PCAST who “say toolmark matching has not yet proved to be a scientifically reliable method,” and is “barely science at all.” Therefore, the kind of testimony O’Neil gave is “scientifically indefensible.” The Innocence Project argues this indefensible testimony constitutes “newly discovered evidence” and that Genrich deserves a new trial.

The piece concludes with a discussion of the future of toolmark comparisons. Toolmark analysis is not without support: a single study from 2009, which tested toolmark examiners’ abilities in a controlled setting, found that eight FBI toolmark examiners made no errors in analyzing marks left by screwdrivers. However, “one small study, in which the researchers have a vested interest in the outcome, on one type of tool is hardly a validation of the field.” As Judge Catherine Easterly wrote in a recent opinion in the DC Court of Appeals, until scientists conducting toolmark comparisons can establish regulations and a clear error rate, “a certainty statement regarding toolmark pattern matching has the same probative value as the vision of a psychic: it reflects nothing more than the individual’s foundationless faith in what he believes to be true.”

OSAC Lexicon – online dictionary for forensics

A new registry of terms used in forensics, with definitions, is available here. As an example, here are ten different definitions of “identification”:
  • Identification – In computer forensics, a process involving the search for, recognition, and documentation of potential digital evidence. [Digital Evidence, Facial Identification, Video/Imaging Technology & Analysis (Digital / Multimedia), 02/23/18]

  • Identification – In facial identification, a task in which a biometric system searches a database for a reference matching a submitted biometric sample and, if found, returns a corresponding identity. (Compare individualization.) [Digital Evidence, Facial Identification, Video/Imaging Technology & Analysis (Digital / Multimedia), 02/23/18]

  • Identification – An examination conclusion that results from the observance of agreement of all discernible class characteristics and sufficient agreement of a combination of individual characteristics, where the extent of agreement exceeds that which can occur in the comparison of toolmarks made by different tools and is consistent with the agreement demonstrated by toolmarks known to have been produced by the same tool. Such identifications are made to the practical, not absolute, exclusion of all other tools. See Range of Conclusions Possible When Comparing Toolmarks. [Firearms & Toolmarks (Physics and Pattern Interpretation), 02/23/18]

  • Identification – An opinion by an examiner that the particular known footwear or tire was the source of, and made, the impression. This is the highest degree of association expressed in footwear and tire impression examinations. [Footwear & Tire (Physics and Pattern Interpretation), 02/23/18]

  • Identification – A task where the biometric system searches a database for a biometric reference matching a submitted biometric sample and, if found, returns a corresponding identity and biometric references, which can result in a biometric verification/authentication (i.e., an access control system). [Facial Identification (Digital / Multimedia), 02/23/18]

  • Identification – 1. See individualization. 2. In some forensic disciplines, this term denotes the similarity of class characteristics. [Friction Ridge (Physics and Pattern Interpretation), 02/23/18]

  • Identification – A classification process intending to discriminate individual members of a set. [Digital Evidence, Facial Identification, Video/Imaging Technology & Analysis (Digital / Multimedia), 02/23/18]

  • Identification – The conclusion that the sources of two samples cannot be distinguished from each other. [Digital Evidence, Facial Identification, Video/Imaging Technology & Analysis (Digital / Multimedia), 02/23/18]

  • Identification – The practice of using comparative examination to deduce the taxonomic origin of an organism, its parts, or derivatives (e.g., taxonomic identification). [Wildlife Forensics (Biology / DNA), 02/23/18]

  • Identification – See Individualization.

Grisham on Flawed Forensics – Read the Transcript

John Grisham wrote a powerful op-ed, here, today in the L.A. Times, discussing causes of wrongful convictions, including flawed forensic evidence. He notes, citing data that I’ve collected, that “Of the 330 people exonerated by DNA tests between 1989 and 2015, 71% were convicted based on forensic testimony, much of which was flawed, unreliable, exaggerated or sometimes outright fabricated.”

Grisham then discusses a fantastic new book by Radley Balko and Tucker Carrington, “The Cadaver King and the Country Dentist,” which describes how, over many years, two experts in Mississippi testified about forensics to convict people later exonerated.

You can read the testimony in one of those cases, later shown to be false, in the death penalty case of DNA exoneree Kennedy Brewer, here, on my resource website. The analyst concluded that Brewer’s teeth in fact left the marks: “Within reasonable medical certainty, the teeth of Kenneth—un, Mr. Kennedy Brewer inflicted the patterns described on the body” of the victim, and explained that reasonable medical certainty means “yes, he did” leave the marks.

March 26 Forensics, Statistics and Law conference at UVA

Forensics, Statistics and the Law

Experts in forensics, statistics and the law will convene for a conference at the University of Virginia School of Law on March 26 to mark the 25th anniversary of the U.S. Supreme Court’s decision in Daubert v. Merrell Dow Pharmaceuticals Inc., which reshaped how judges evaluate scientific and expert evidence.

Judge Jed Rakoff of the U.S. District Court for the Southern District of New York will deliver the keynote address at noon. The conference begins at 8:30 a.m. in the Law School’s Caplin Pavilion.

The Daubert ruling coincided with a surge in scientific research relevant to criminal cases, including the development of modern DNA testing that both exonerated hundreds of individuals and provided more accurate evidence of guilt.

“Leading scientific commissions have pointed out real shortcomings in the use of forensic evidence in the courtroom,” said professor Brandon Garrett, a participant in the conference and a principal investigator for the Law School’s Center for Statistics and Applications in Forensics Evidence, or CSAFE, projects. “The CSAFE collaboration, extending across four universities, including UVA, has been working with generous support from the National Institute of Standards and Technology to research these questions.”

Panelists will discuss how to develop better forensic evidence, how to analyze it more accurately in the crime lab and how to present it more effectively in criminal cases. Several contributions will be published in a special symposium issue of the Virginia Journal of Criminal Law.

The conference is sponsored by the Virginia Journal of Criminal Law and the Center for Statistics and Applications in Forensic Evidence.

The talks are free and open to the public. Attendees may contact Garrett at bgarrett@virginia.edu or (434) 924-4153 for more information.

Schedule

Monday, March 26

Caplin Pavilion

8:30-9:15 a.m.

Continental Breakfast


9:15-9:30 a.m.

Introduction/Welcome

  • Brandon Garrett, White Burkett Miller Professor of Law and Public Affairs and Justice Thurgood Marshall Distinguished Professor of Law, University of Virginia School of Law
  • Karen Kafadar, Commonwealth Professor and Chair, Department of Statistics, University of Virginia

9:30-10:30 a.m.

Introductory Remarks: The Importance of Statistics and Forensics

Statistics and Forensics

  • Susan M. Ballou, Program Manager, National Institute of Standards and Technology, American Academy of Forensic Sciences Fellow

Statistics and the Courts

  • Peter Neufeld, Co-Director, The Innocence Project, Benjamin N. Cardozo School of Law

10:45 a.m.-Noon

Statistics, Research and Forensics

  • Moderator: M. Chris Fabricant, Director of Strategic Litigation, The Innocence Project
  • Alicia Carriquiry, Distinguished Professor, Department of Statistics, Iowa State University
  • Hari Iyer, Statistical Design, Analysis, and Modeling Group, National Institute of Standards and Technology, U.S. Department of Commerce
  • Karen Kafadar, Commonwealth Professor and Chair, Department of Statistics, University of Virginia

Noon-1:15 p.m.

Lunch

Keynote Address: Judging Forensics

Jed S. Rakoff, Senior Judge, U.S. District Court for the Southern District of New York


1:30-2:45 p.m.

Statistics in the Crime Lab

  • Moderator: Brandon Garrett, White Burkett Miller Professor of Law and Public Affairs and Justice Thurgood Marshall Distinguished Professor of Law, University of Virginia School of Law
  • Linda C. Jackson, Director, Virginia Department of Forensic Science
  • Sharon Kelley, Assistant Professor, Department of Psychiatry and Neurobehavioral Sciences, University of Virginia
  • Peter Stout, President and CEO, Houston Forensic Science Center
  • Henry Swofford, Chief, Latent Print Branch, Defense Forensic Science Center

3-4:30 p.m.

Bringing Statistics into the Courtroom

  • Moderator: William C. Thompson, Professor of Criminology, Law, and Society; Psychology and Social Behavior; and Law, University of California, Irvine School of Social Ecology
  • David L. Faigman, Chancellor and Dean, John F. Digardi Distinguished Professor of Law, University of California Hastings College of Law
  • David H. Kaye, Distinguished Professor of Law, Weiss Family Scholar, Penn State Law
  • A.J. Kramer, Federal Public Defender’s Office, District of Columbia
  • Barbara A. Spellman, Professor of Law, Professor of Psychology, University of Virginia School of Law

The Myth of the Reliability Test

A new piece by Chris Fabricant and me is now posted on SSRN here. Below is the abstract:

The U.S. Supreme Court’s ruling in Daubert v. Merrell Dow Pharmaceuticals, Inc., and subsequent revisions to Federal Rule of Evidence 702, were supposed to usher in a reliability revolution. This modern test for admissibility of expert evidence is sometimes described as a reliability test. Critics, however, have pointed out that judges continue to routinely admit unreliable evidence, particularly in criminal cases, including flawed forensic techniques that have contributed to convictions of innocent people later exonerated by DNA testing. This Article examines whether Rule 702 is in fact functioning as a reliability test, focusing on forensic evidence used in criminal cases and detailing the use of that test in states that have adopted the language of the 2000 revisions to Rule 702. Surveying hundreds of state court cases, we find that courts have largely neglected the critical language concerning reliability in the Rule. Rule 702 states that an expert may testify if that testimony is “the product of reliable principles and methods,” which are “reliably applied” to the facts of a case. Or as the Advisory Committee puts it simply, judges are charged to “exclude unreliable expert testimony.” Judges have not done so in state or federal courts, and in this study, we detail how that has occurred, focusing on criminal cases.

We assembled a collection of 229 state criminal cases that quote and in some minimal fashion discuss the reliability requirement. This archive will hopefully be of use to litigators and evidence scholars. We find, however, that in the unusual cases in which state courts discuss reliability under Rule 702 they invariably admit the evidence, largely by citing to precedent and qualifications of the expert or by acknowledging but not acting upon the reliability concern. In short, the supposed reliability test adopted in Rule 702 is rarely applied to assess reliability. We call on judges to do far more to ensure reliability of expert evidence and recommend sharper Rule 702 requirements. We emphasize, though, that it is judicial inaction and not the language of Rule 702 that has made the reliability test a myth.

The Cadaver King and the Country Dentist

Read Tim Requarth’s piece in Slate here about the gripping and important new book by Radley Balko and Tucker Carrington.  Requarth quotes from the book:

The primary antagonists in this story are Steven Hayne, the state’s former de facto medical examiner, and Michael West, a prolific forensic dentist. A third is the state of Mississippi itself—not its people, but its institutions. In a larger sense, blame rests on courts—both state and federal—media, and professional organizations that not only failed to prevent this catastrophe but did little to nothing even after it was clear that something was terribly wrong. What you’re about to read didn’t happen by accident.

Requarth then says:

The Cadaver King and the Country Dentist is a densely reported book that highlights not only the cases of Brewer and Brooks but also a dizzying array of other wrongful convictions. The authors conducted more than 200 interviews and reviewed thousands of pages of court documents, letters, memos, case reports, and media accounts to trace the contours of a corrupted system. Hayne, they note, performed 80 percent of Mississippi’s state-ordered autopsies, or about 1,700 annually. This stands in contrast to guidelines from the National Association of Medical Examiners, which states that performing more than 325 annually is tantamount to malpractice. Hayne’s pace was likely a problem. In one autopsy report, Hayne described removing the uterus and ovaries—from a man. But quality, perhaps, wasn’t the point. With West as a sidekick, the duo could be counted on to deliver the “evidence” prosecutors needed for convictions. Hayne would discover “bite marks” on a victim’s body, and West would be called in to match them to the suspect’s teeth.

Testimonial Monitoring

I’ve long thought it extremely important that testimony in court by forensic analysts be routinely read and reviewed by supervisors, to ensure accuracy, consistency, and professionalism. The DOJ has just announced, in a memo here, a program to do just that. The introduction to the memo reads:

Testimony monitoring is a quality assurance measure by which Department of Justice forensic laboratories and digital analysis entities can ensure that results of forensic analyses are properly qualified and appropriately communicated in testimony. Its purpose is to provide examiners with ongoing assessments of their testimonial presentations and to highlight opportunities for continual improvement.

DOJ Approved Uniform Language for Latent Fingerprint Comparisons

On February 21, the DOJ released this document, setting out uniform language for latent print comparisons. It defines source identification, inconclusive, and exclusion conclusions, and sets out certain qualifications and limitations. Below are excerpts.

Source Identification

‘Source identification’ is an examiner’s conclusion that two friction ridge skin impressions originated from the same source. This conclusion is an examiner’s decision that the observed friction ridge skin features are in sufficient correspondence such that the examiner would not expect to see the same arrangement of features repeated in an impression that came from a different source and insufficient friction ridge skin features in disagreement to conclude that the impressions came from different sources.

The basis for a ‘source identification’ conclusion is an examiner’s decision that the observed corresponding friction ridge skin features provide extremely strong support for the proposition that the two impressions came from the same source and extremely weak support for the proposition that the two impressions came from different sources.

A source identification is a statement of an examiner’s belief (an inductive inference) that the probability that the two impressions were made by different sources is so small that it is negligible. A source identification is not based upon a statistically-derived or verified measurement or comparison of all friction ridge skin impression features in the world’s population.

And –

• An examiner shall not assert that two friction ridge impressions originated from the same source to the exclusion of all other sources or use the terms ‘individualize’ or ‘individualization.’ This may wrongly imply that a source identification is based upon a statistically-derived or verified measurement or comparison of all friction ridge skin impression features in the world’s population, rather than an examiner’s expert conclusion.

• An examiner shall not assert a 100% level of certainty in his/her conclusion, or otherwise assert that it is numerically calculated.

• An examiner shall not assert that latent print examination is infallible or has a zero error rate.

• An examiner shall not cite the number of latent print comparisons performed in his or her career as a measure for the accuracy of a conclusion offered in the instant case.

• An examiner shall not use the expressions ‘reasonable degree of scientific certainty,’ ‘reasonable scientific certainty,’ or similar assertions of reasonable certainty as a description of the confidence held in his or her conclusion in either reports or testimony unless required to do so by a judge or applicable law.

Forensics, Statistics and Law Conference at the University of Virginia School of Law

In honor of the 25th Anniversary of the landmark U.S. Supreme Court case Daubert v. Merrell Dow Pharmaceuticals Inc. that reshaped how judges evaluate scientific and expert evidence, join experts in the fields of forensics, statistics and law for a one-day conference on Monday, March 26, 2018 at the University of Virginia School of Law.

Panels will discuss how we can develop better forensic evidence, how we can analyze it more accurately in the crime lab, and how we can present it more effectively in criminal cases.  Speakers will also address the role of statistics in forensics, the crime lab and the court room. Several contributions will be published in a special symposium issue of the Virginia Journal of Criminal Law.

Speakers include Sue Ballou, the president-elect of the American Academy of Forensic Sciences, and Peter Neufeld, the founder and co-director of the Innocence Project, who will introduce the conference. Judge Jed Rakoff of the U.S. District Court for the Southern District of New York will deliver the keynote at noon, addressing the role of judges as gatekeepers, and retired federal district judge Nancy Gertner will conclude the conference.

The event is free and open to the public.

Attendees may contact Brandon Garrett at bgarrett@virginia.edu or (434) 924-4153 for more information.
