What keeps e-Discovery professionals up at night? One challenge: did we produce the “right” information in the right format?
Redacting privileged, commercially sensitive, and personally identifiable information is one way we protect client data. A recent CBC article reported that the federal government mistakenly sent a lawyer redacted documents in which the text was merely hidden behind black boxes rather than removed. Unfortunately, this is an all-too-common occurrence, and there are many examples of information being inadvertently shared because redactions were not properly burned in.
It’s not surprising that this kind of mistake happens, which is why organizations need processes in place to ensure redactions are properly burned into records. This is something we do whenever we redact records for litigation and regulatory matters.
At MT>3 we use state-of-the-art redaction tools, but technology has its limitations. At the end of the day, you still need a human to check the records being produced and to confirm that the redactions are properly (and permanently!) burned in.
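One simple safeguard in that human check is to run the produced files through a text extractor and confirm that nothing supposedly hidden behind the black boxes survives in the text layer. A minimal sketch in Python (the helper name and the sample terms are illustrative; in practice the extracted text would come from a PDF text-extraction tool run against the production set):

```python
import re

def find_leaked_terms(extracted_text: str, redacted_terms: list[str]) -> list[str]:
    """Return any supposedly redacted terms that still appear in the
    text layer of a produced document -- a sign the redaction was drawn
    over the text but never burned in."""
    leaked = []
    for term in redacted_terms:
        # Case-insensitive whole-word search of the extracted text layer.
        if re.search(r"\b" + re.escape(term) + r"\b", extracted_text, re.IGNORECASE):
            leaked.append(term)
    return leaked

# Hypothetical example: the black box hid "Jane Doe" visually,
# but the underlying text survived extraction.
text_layer = "Agreement between ACME Corp and Jane Doe dated 2020-01-15."
print(find_leaked_terms(text_layer, ["Jane Doe", "SIN 123-456-789"]))
# → ['Jane Doe']
```

A non-empty result means the redaction failed and the document must be re-redacted before production.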
Susan Wortzman and Michael Lalande
Our clients hear from us all the time that we use AI and machine learning in our data analysis work. Powerful analytics, combined with lightning-fast evaluation, make these an essential part of our review toolkit.
Two recent articles highlight the challenges organizations can face when using AI. The first, on bias in medicine, shows how using historical data to train an AI system can introduce unintentional racial bias into the diagnosis of medical conditions. The second discusses how the Department of National Defence failed to follow the government’s privacy impact regulations when employing third-party AI technology, opening up the possibility that bias could be introduced into its recruitment process.
Bias in AI and machine learning is a real issue. In 2016, Microsoft technicians were developing a “conversational understanding” AI system designed to learn from chatting with people and eventually engage in conversations on its own. To speed up training, the technicians connected it to a Twitter account: people could tweet at the system, and the system would respond. Unfortunately, within 24 hours the system had developed a distinctly misogynistic and racist personality. It proved, once again, that garbage in equals garbage out. There was nothing wrong with the underlying AI technology; the problem was the data used to train it.
While AI and machine learning are powerful, organizations need to be aware of potential biases that can be introduced during the training and model development phase. These biases, if left unchecked, will dramatically affect the results. In document review, biases and errors are expected, so a robust, independent validation of results needs to be included in every review project.
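One common form of independent validation in document review is an elusion test: human reviewers check a random sample of the documents the model marked non-responsive to estimate how many responsive documents slipped through. A minimal sketch, with hypothetical data (the function name and the numbers are illustrative, not a description of any particular tool):

```python
import random

def elusion_test(discard_pile: list[bool], sample_size: int, seed: int = 42) -> float:
    """Estimate the elusion rate: the fraction of truly responsive
    documents hiding in the pile the model marked non-responsive.
    Each bool is the human-reviewed ground truth for one sampled doc."""
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    sample = rng.sample(discard_pile, min(sample_size, len(discard_pile)))
    return sum(sample) / len(sample)

# Hypothetical discard pile: 20 of 1,000 machine-rejected docs
# are actually responsive (a 2% true elusion rate).
pile = [True] * 20 + [False] * 980
rate = elusion_test(pile, sample_size=200)
print(f"Estimated elusion rate: {rate:.1%}")
```

If the estimated rate is higher than the project’s agreed threshold, the model needs retraining or the discard pile needs further review.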
At MT>3 we continually conduct quality control, testing, and validation of the Machine Learning and AI tools in order to verify the results.
Susan Wortzman, Chuck Rothman, Michael Lalande
On November 17th Navdeep Bains, the Minister of Innovation, Science and Industry, introduced Bill C-11, the Digital Charter Implementation Act, 2020. Bill C-11 seeks to modernize Canadian privacy legislation through the introduction of the new Consumer Privacy Protection Act (“CPPA”) and the creation of a new enforcement tribunal through the Personal Information and Data Protection Tribunal Act (“PIDPT”). This represents a significant overhaul of the existing Personal Information Protection and Electronic Documents Act (“PIPEDA”) that governs privacy in the private sector.
The proposed legislation introduces new record-keeping and data management concerns for companies. In particular,
Beyond these sections, data management will also be affected by the rights to data portability (the right to transfer personal information from one organization to another) and data disposal (the right to request permanent deletion of personal information), as well as the new data de-identification obligations, particularly as applied to the sharing of information in prospective business transactions.
When the Bill is passed, it will be crucial for companies to review their privacy practices and data governance plans. These changes come with teeth: the maximum penalty for violations is the higher of $25,000,000 or 5% of the organization’s gross global revenue. This is notably higher than the 4% maximum penalty imposed by the EU General Data Protection Regulation (“GDPR”), and on par with the recent draft Personal Data Protection Law in China.
Being able to identify and locate personal information, and to automate this process, will be key to ensuring compliance with these new laws. Contact MT>3 (Susan Wortzman or Gordon Lee) to discuss how to plan and update your data governance strategies and to learn more about the technological tools that can help.
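As a rough illustration of what automating this identification can look like, simple pattern matching can flag candidate personal information for human review. A minimal sketch (the patterns below are illustrative only; production tools need far broader coverage, context-aware validation, and checks such as SIN checksums):

```python
import re

# Illustrative patterns only -- real PII detection covers many more
# categories (names, addresses, health data) with validation logic.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "canadian_sin": re.compile(r"\b\d{3}[- ]\d{3}[- ]\d{3}\b"),
    "phone": re.compile(r"\b\d{3}[- .]\d{3}[- .]\d{4}\b"),
}

def locate_pii(text: str) -> dict[str, list[str]]:
    """Map each PII category to the candidate matches found in a document."""
    return {label: pattern.findall(text)
            for label, pattern in PII_PATTERNS.items()
            if pattern.findall(text)}

doc = "Contact jane.doe@example.com or 416-555-0199; SIN 046-454-286."
print(locate_pii(doc))
```

Flagged locations can then feed portability, disposal, or de-identification workflows, with a human confirming each hit.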
For more analysis on the new Bill and its changes, please see the McCarthy Tétrault TechLex blog post: Hello CPPA & PIDPT: The Federal Government Proposes Dramatic Evolution of PIPEDA.