Wieneke Law Office, LLC

Bias Plays a Role Even in Digital Forensics

Evidence collected through digital forensics is presented all the time in criminal cases. If you have handled a case involving cell phone extraction data, a Google alert to law enforcement regarding child pornography, or facial recognition technology, then you have encountered digital forensics.

We often think of digital evidence as objective and credible; we rarely consider that bias could play a role in its collection. Sure, bias can come into play in deciding how to interpret the extracted data, but we assume it has no impact on whether the data is actually collected in the first place.

As Nina Sunde and Itiel Dror note in their new study, however, the quality and outcome of the process used in a digital forensic investigation “is dependent on cognitive and human factors, which can lead to bias and error.” Sunde and Dror gave a hypothetical case scenario and an evidence file to 65 digital forensic examiners from eight countries. The scenario involved determining whether a particular user leaked confidential information. The evidence file contained files one would typically find on a work computer, including programs for handling documents, spreadsheets, presentations, and emails, as well as internet browsing history. The study sought to answer two questions: (1) whether digital forensic examiners were biased by contextual information; and (2) whether examiners were consistent with one another.

To answer the first question, the examiners were split into four groups: one received no contextual information at all, while the others received contextual information suggesting the user’s innocence, weak indications of the user’s guilt, or strong indications of the user’s guilt.

Sunde and Dror then analyzed whether the contextual information affected the number of traces/observations of evidence found, how those traces were interpreted, and what conclusions the examiners drew from those observations and interpretations.

Not surprisingly, examiners who received contextual information suggesting the user’s innocence found the fewest traces/observations, while examiners who received weak indications of the user’s guilt found the most. The contextual information did not appear to play a role in how those observations were interpreted or in the conclusions drawn from them.

Additional analysis revealed that examiners who received contextual information suggesting the user’s innocence either found some additional traces but did not include them in their reports, or simply stopped looking for traces sooner than the other examiners did. Examiners who received weak indications of the user’s guilt found the most traces because they kept looking for additional traces longer than examiners in the other groups.

Even more troubling, however, was the authors’ conclusion that examiners within each group exhibited low consistency at all three levels: the number of traces found, how those traces were interpreted, and the conclusions drawn.

Practically speaking, what does this all mean for legal practitioners? First, it is imperative that attorneys learn what contextual information the digital forensic examiner received. Second, where the evidence tends to establish your client’s guilt, have the evidence re-examined by a different digital forensic examiner. Given the low consistency the study found among examiners, there is a good chance a second examiner will reach different conclusions than the first.