DOJ Approved Uniform Language for Latent Fingerprint Comparisons

On February 21, the DOJ released this document establishing uniform language for latent print comparisons. It defines source identification, inconclusive, and exclusion conclusions, and specifies certain qualifications and limitations. Below are excerpts.

Source Identification

‘Source identification’ is an examiner’s conclusion that two friction ridge skin impressions originated from the same source. This conclusion is an examiner’s decision that the observed friction ridge skin features are in sufficient correspondence such that the examiner would not expect to see the same arrangement of features repeated in an impression that came from a different source and insufficient friction ridge skin features in disagreement to conclude that the impressions came from different sources.

The basis for a ‘source identification’ conclusion is an examiner’s decision that the observed corresponding friction ridge skin features provide extremely strong support for the proposition that the two impressions came from the same source and extremely weak support for the proposition that the two impressions came from different sources.

A source identification is a statement of an examiner’s belief (an inductive inference) that the probability that the two impressions were made by different sources is so small that it is negligible. A source identification is not based upon a statistically-derived or verified measurement or comparison of all friction ridge skin impression features in the world’s population.

And –

• An examiner shall not assert that two friction ridge impressions originated from the same source to the exclusion of all other sources or use the terms ‘individualize’ or ‘individualization.’ This may wrongly imply that a source identification is based upon a statistically-derived or verified measurement or comparison of all friction ridge skin impression features in the world’s population, rather than an examiner’s expert conclusion.

• An examiner shall not assert a 100% level of certainty in his/her conclusion, or otherwise assert that it is numerically calculated.

• An examiner shall not assert that latent print examination is infallible or has a zero error rate.

• An examiner shall not cite the number of latent print comparisons performed in his or her career as a measure for the accuracy of a conclusion offered in the instant case.

• An examiner shall not use the expressions ‘reasonable degree of scientific certainty,’ ‘reasonable scientific certainty,’ or similar assertions of reasonable certainty as a description of the confidence held in his or her conclusion in either reports or testimony unless required to do so by a judge or applicable law.
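
An editorial aside, not part of the DOJ document: the ‘extremely strong support’ / ‘extremely weak support’ phrasing above mirrors the likelihood-ratio structure used in forensic evidence interpretation. A minimal sketch of that structure, writing E for the observed corresponding features, H_s for the same-source proposition, and H_d for the different-source proposition:

\[
LR = \frac{\Pr(E \mid H_s)}{\Pr(E \mid H_d)}
\]

On this reading, a source identification is the examiner’s judgment that the ratio is very large, while the DOJ language makes clear that no numerical value is computed or asserted for it.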

Forensics, Statistics and Law Conference at the University of Virginia School of Law

In honor of the 25th anniversary of the landmark U.S. Supreme Court case Daubert v. Merrell Dow Pharmaceuticals, Inc., which reshaped how judges evaluate scientific and expert evidence, join experts in the fields of forensics, statistics, and law for a one-day conference on Monday, March 26, 2018, at the University of Virginia School of Law.

Panels will discuss how we can develop better forensic evidence, how we can analyze it more accurately in the crime lab, and how we can present it more effectively in criminal cases. Speakers will also address the role of statistics in forensics, the crime lab, and the courtroom. Several contributions will be published in a special symposium issue of the Virginia Journal of Criminal Law.

Speakers include Sue Ballou, the president-elect of the American Academy of Forensic Sciences, and Peter Neufeld, the co-founder and co-director of the Innocence Project, who will introduce the conference. Judge Jed Rakoff of the U.S. District Court for the Southern District of New York will deliver the keynote at noon, addressing the role of judges as gatekeepers, and retired federal district judge Nancy Gertner will conclude the conference.

The event is free and open to the public.

Attendees may contact Brandon Garrett at bgarrett@virginia.edu or (434) 924-4153 for more information.

Daubert’s Failure

A wonderful new piece by Paul Giannelli, forthcoming in the Case Western Reserve Law Review. Here is a link, and the abstract is below:

In 2015, a federal judge noted that “[m]any defendants have been convicted and spent countless years in prison based on evidence by arson experts who were later shown to be little better than witch doctors.” In the same year, a White House science advisor observed: “Suggesting that bite marks [should] still be a seriously used technology is not based on science, on measurement, on something that has standards, but more of a gut-level reaction.” According to another judge, “[a]s matters currently stand, a certainty statement regarding toolmark pattern matching has the same probative value as the vision of a psychic.” A recent New York Times editorial echoed these sentiments: “And the courts have only made the problem worse by purporting to be scientifically literate, and allowing in all kinds of evidence that would not make it within shouting distance of a peer-reviewed journal. Of the 329 exonerations based on DNA testing since 1989, more than one-quarter involved convictions based on ‘pattern’ evidence — like hair samples, ballistics, tire tracks, and bite marks — testified to by so-called experts.”

These criticisms are valid — which raises a puzzling and consequential question: Why didn’t the Supreme Court’s “junk science” decision, Daubert v. Merrell Dow Pharmaceuticals, Inc., prevent or restrict the admissibility of testimony based on flawed forensic techniques? Daubert was decided in 1993, nearly twenty-five years ago.

This article examines the justice system’s failure by reviewing the status of several forensic techniques: (1) bite mark analysis, (2) microscopic hair comparisons, (3) firearms and toolmark identifications, (4) fingerprint examinations, (5) bullet lead analysis, and (6) arson investigations. It argues that the system’s failure can be traced back to its inability to demand and properly evaluate foundational research, i.e., Daubert’s first factor (empirical testing), and concludes that the courts may be institutionally incapable of applying Daubert in criminal cases.

A different paradigm is needed, one that assigns an independent agency the responsibility of evaluating foundational research. This approach was recently recommended by the National Commission on Forensic Science and the President’s Council of Advisors on Science and Technology. Both recommended that the National Institute of Standards and Technology evaluate all forensic disciplines on a continuing basis, thereby injecting much needed scientific expertise into the process.

Turkey DNA and Mesa Verde

For Thanksgiving – Science describes how mitochondrial DNA testing of wild turkeys is apparently being used to suggest what might have happened to the ancient Anasazi people.  “The researchers compared the genetic material from Mesa Verde turkeys to turkeys found in the northern Rio Grande region before and after the Ancestral Puebloans disappeared.” And “Before 1280, the two turkey populations were unrelated in the maternal line, the team found. But afterward, the northern Rio Grande turkeys carried Mesa Verde “haplogroups”—clusters of genes inherited together as a group—indicating they were descended at least in part from the Ancestral Puebloans’ stock.”

So – “The most likely explanation, the researchers argue in PLOS ONE, is that the Ancestral Puebloans left Mesa Verde around 1280 and brought their turkeys with them. This transplanted line of turkeys then replaced those that lived in northern Rio Grande before their arrival.”  Another researcher, though, calls the findings “a little weak.”

Trial Judges and Forensics

A great new piece by Stephanie Damon-Moore in NYU Law Review asks why trial judges so rarely exercise gatekeeping authority over forensic evidence.  A link is here.  Below is the abstract:

In the last decade, many fields within forensic science have been discredited by scientists, judges, legal commentators, and even the FBI. Many different factors have been cited as the cause of forensic science’s unreliability. Commentators have gestured toward forensic science’s unique development as an investigative tool, cited the structural incentives created when laboratories are either literally or functionally an arm of the district attorney’s office, accused prosecutors of being overzealous, and attributed the problem to criminal defense attorneys’ lack of funding, organization, or access to forensic experts. But none of these arguments explain why trial judges, who have an independent obligation to screen expert testimony presented in their courts, would routinely admit evidence devoid of scientific integrity. The project of this Note is to understand why judges, who effectively screen evidence proffered by criminal defendants and civil parties, fail to uphold their gatekeeping obligation when it comes to prosecutors’ forensic evidence, and how judges can overcome the obstacles in the path to keeping bad forensic evidence out of court.

John Oliver on Forensics

From Last Week Tonight (yesterday's episode):

https://www.youtube.com/watch?v=ScmJvmzDcG0&feature=youtu.be

Oliver even discusses the role of judicial precedent and “scientifically illiterate” judges, lawyers, and jurors.

All the best,

Brandon Garrett

Proficiency of Experts

A new paper by Greg Mitchell and me, forthcoming next year in Penn. L. Rev. and available as of today on SSRN here. The abstract is below, and our thesis can be summarized very briefly: expertise = proficiency.

Expert evidence plays a crucial role in civil and criminal litigation. Changes in the rules concerning expert admissibility, following the Supreme Court’s Daubert ruling, strengthened judicial review of the reliability and the validity of an expert’s methods. However, judges and scholars have neglected the threshold question for expert evidence: whether a person should be qualified as an expert in the first place. Judges traditionally focus on credentials or experience when qualifying experts without regard to whether those criteria are good proxies for true expertise. We argue that credentials and experience are often poor proxies for proficiency. Qualification of an expert presumes that the witness can perform in a particular domain with a proficiency that non-experts cannot achieve, yet many experts cannot provide empirical evidence that they do in fact perform at high levels of proficiency. To demonstrate the importance of proficiency data, we collect and analyze two decades of proficiency testing of latent fingerprint examiners. In this important domain, we found surprisingly high rates of false positive identifications for the period 1995 to 2016. These data would falsify the claims of many fingerprint examiners regarding their near infallibility, but unfortunately, judges do not seek out such information. We survey the federal and state case law and show how judges typically accept expert credentials as a proxy for proficiency in lieu of direct proof of proficiency. Indeed, judges often reject parties’ attempts to obtain and introduce at trial empirical data on an expert’s actual proficiency. We argue that any expert who purports to give falsifiable opinions can be subjected to proficiency testing, and proficiency testing is the only objective means of assessing the accuracy and reliability of experts who rely on subjective judgments to formulate their opinions (so-called “black-box experts”). Judges should use proficiency data to make expert qualification decisions when the data is available, should demand proof of proficiency before qualifying black-box experts, and should admit at trial proficiency data for any qualified expert. We seek to revitalize the standard for qualifying experts: expertise should equal proficiency.
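
To make the proficiency-rate idea concrete, here is a minimal sketch in Python (not the authors' code; the counts are hypothetical placeholders) of how a false positive identification rate and a rough binomial confidence interval could be computed from proficiency-test tallies:

from statistics import NormalDist

def false_positive_rate(false_positives, different_source_comparisons, confidence=0.95):
    """Return (rate, lower, upper) using a simple Wald binomial interval."""
    p = false_positives / different_source_comparisons
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    half_width = z * (p * (1 - p) / different_source_comparisons) ** 0.5
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Hypothetical illustration only: 12 erroneous identifications out of 1,200
# different-source comparisons reported across proficiency tests.
rate, low, high = false_positive_rate(12, 1200)
print(f"False positive rate: {rate:.2%} (95% CI roughly {low:.2%} to {high:.2%})")

The denominator is the point: an informative error rate is errors per known different-source comparison, not errors per career, which is one reason the DOJ language above bars examiners from citing career comparison counts as a measure of accuracy.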

ProPublica Seeks FST Software

From the Fair Punishment Project newsletter today:

ProPublica Seeks Source Code for New York City’s Disputed DNA Software. From 2011 to this year, the New York City Medical Examiner analyzed DNA from 1,350 criminal cases with the use of software called the Forensic Statistical Tool (FST). Though the office is known as a pioneer in analyzing the most difficult evidence from crime scenes, the software has come under intense scrutiny. A defense expert in Kevin Johnson’s case found serious issues with it, leading U.S. District Judge Valerie Caproni of the Southern District of New York to issue the first ruling requiring the city to provide the software’s source code to the defense. The Medical Examiner’s office has long kept the source code a secret, denying public information requests and successfully opposing motions in previous cases. Judge Caproni’s ruling includes a protective order preventing public access to the information, but the public wants answers about this potential source of wrongful convictions. ProPublica filed a motion requesting that Judge Caproni lift the protective order, and a coalition of defense attorneys sent a letter to New York State Inspector General Catherine Leahy Scott demanding an investigation. Politicians have also taken note: several New York City Council members have expressed concern, and State Assemblyman Joseph Lentol has proposed legislation mandating that membership in the New York State Commission on Forensic Science be restricted to scientists. Currently, the group—a subcommittee of which unanimously approved FST in 2010 even though it did not have access to FST’s source code in its evaluation process—also includes lawyers, law enforcement, and politicians. [Lauren Kirchner / ProPublica] See also: In Justice Today first covered questions about FST in its September 11 newsletter.

AAAS Fingerprint Report

The AAAS released a lengthy Latent Fingerprint Examination Report: https://mcmprodaaas.s3.amazonaws.com/s3fs-public/reports/Latent%20Fingerprint%20Report%20FINAL%209_14.pdf?i9xGS_EyMHnIPLG6INIUyZb66L5cLdlb

The report includes 14 recommendations.  Here they are:

1. Resources should be devoted to further research on possible quantitative methods for estimating the probative value or weight of fingerprint evidence.

2. Resources should also be devoted to further research on the performance of latent fingerprint examiners under typical laboratory conditions (as discussed in Section V).

3. Research is needed on how accurately latent print examiners can assess intra-finger variability, that is, the degree to which prints may be changed due to distortion. To the extent their assessments are imperfect, researchers should endeavor to determine whether that problem arises from inadequate understanding of the existing scientific literature (in which case better training is warranted) or whether it results from deficiencies in the existing literature (in which case more research on intra-finger variability may be needed).

4. Research is also needed on ways to reduce the probability of false exclusions.

5. NIST should continue to evaluate the performance of commercial AFIS systems, particularly their performance in identifying latent prints. Open tests in which vendors are invited to participate are important for spurring competition in order to assure continuing improvement in AFIS technology. Continued testing will help law enforcement agencies choose systems best suited for their needs and provide information to the broader scientific community on how well those systems work.

6. Developing better quantitative measures of the quality of latent prints should be a research priority. Such measures will be helpful for assessing and improving AFIS as well as for evaluating the performance of human examiners.

7. Law enforcement agencies and vendors should work together, perhaps with guidance from NIST, to better assure interoperability of AFIS systems and avoid compatibility problems that may result in the loss of valuable evidence or investigative leads.

8. Context management procedures should be adopted by all forensic laboratories in order to reduce the potential for contextual bias in latent print examination. Some examples of such procedures include blinding examiners to task-irrelevant information, using case managers, and sequencing workflow among analysts (i.e., using “sequential unmasking” or “linear ACE-V”). Laboratories that lack sufficient staff to administer context management procedures internally should deal with the problem through cooperative procedures with other laboratories.

9. Forensic laboratories should undertake programs of research on factors affecting the performance of latent print examiners. The research should be done by introducing known source prints into the flow of casework in a manner that makes test samples indistinguishable from casework samples.

10. Government funding agencies should facilitate research of this type by providing incentive funding to laboratories that undertake such programs and by funding the creation of research test sets, i.e., latent print specimens of known source that can be used for testing examiner performance. The research test sets should be designed with the help of practitioners, statisticians, and experts on human performance to ensure that the research is scientifically rigorous and that it addresses the issues most important to the field.

11. In research of this type, errors are to be expected and should be treated as opportunities for learning and improvement. It is not appropriate for examiners to be punished or to suffer other disadvantages if they make errors in research studies that are designed to test the limits of human performance. Nor is it appropriate for laboratories to suffer disadvantage as a consequence of engaging in such research. Accordingly, the criminal justice system should consider carefully whether the information about examiner performance in research studies should even be admissible as evidence. If the results of research studies are admitted as evidence in the courtroom, it should only be under narrow circumstances, and with careful explanation of the limitations of such data for establishing the probability of error in a given case.

12. Examiners should be careful not to make statements in reports or testimony that exaggerate the certainty of their conclusions. They can indicate that the differences between a known and latent print are such that the donor of the known print can be excluded as the source of the latent print. They can also indicate that the similarity between a latent and a known print is such that the donor of the known print cannot be excluded as the source of the latent print. But they should avoid statements that claim or imply that the pool of possible sources is limited to a single person. Terms like “match,” “identification,” “individualization,” and their synonyms imply more than the science can sustain. Because fingerprint impressions are known to be highly variable, examiners who observe a high degree of correspondence between a latent print and known print may be justified in making statements about the rarity of the shared features. For example, examiners might say something like the following:

“The latent print on Exhibit ## and the record fingerprint bearing the name XXXX have a great deal of corresponding ridge detail with no differences that would indicate they were made by different fingers. There is no way to determine how many other people might have a finger with a corresponding set of ridge features, but this degree of similarity is far greater than I have ever seen in non-matched comparisons.”

13. When latent print examiners testify, they should be prepared to discuss forthrightly the results of research studies that tested the accuracy of latent print examiners on realistic known-source samples.

14. Further research is also needed on how lay people, such as police officers, lawyers, judges, and jurors, evaluate and respond to fingerprint evidence. Research of this type would be helpful in evaluating how best to present fingerprint evidence in reports and expert testimony. It would help ensure that the statements made in reports and testimony will be interpreted in the intended manner.

Discovering Forensic Fraud

A great new piece by Jennifer Oliva and Valena Beety forthcoming in Northwestern U. L. Rev. describing the need to revamp discovery regarding forensic evidence in criminal cases.  A link is here and the abstract is below:

This piece posits that certain structural dynamics, which dominate criminal proceedings, significantly contribute to the admissibility of faulty forensic science in criminal trials. The authors believe that these dynamics are more insidious than questionable individual prosecutorial or judicial behavior in this context. Not only are judges likely to be former prosecutors, prosecutors are “repeat players” in criminal litigation and, as such, routinely support reduced pretrial protections for defendants. Therefore, we argue that the significant discrepancies between the civil and criminal pretrial discovery and disclosure rules warrant additional scrutiny.

In the criminal system, the near absence of any pretrial discovery means the criminal defendant has little to no realistic opportunity to challenge forensic evidence prior to the eve of trial. We identify the impact of pretrial disclosure by exploring the admission of expert evidence in criminal cases from a particular forensic discipline, specifically forensic odontology. Finally, this Essay proposes the adoption of pretrial civil discovery and disclosure rules in criminal proceedings to halt the flood of faulty forensic evidence routinely admitted against defendants in criminal prosecutions.
