ML:Integrity

October 19, 2022

The world is adopting AI at a feverish pace.
Join executives, policy makers, and academics to discuss the corporate and societal challenges and how to overcome them.

Unlike any other conference on machine learning

Machine learning models often produce erroneous predictions as a result of their sensitivity to subtle changes in data. As these models are used to automate critical decisions such as creditworthiness, healthcare coverage, and hiring, prediction errors can have dire consequences. In light of this, companies are actively building strategies and engineering paradigms to instill machine learning integrity in their systems.

ML:Integrity is the first event dedicated to advancing machine learning integrity, bringing together leading executives, policy makers, and academics to share their perspectives and best practices, as well as advocate for the adoption of a comprehensive set of standards throughout the AI community. Join us for a day of virtual sessions on ML fairness, security, scale, regulation, and more.

Learn more about the genesis of the conference in a blog by Robust Intelligence CEO Yaron Singer.

Sessions include

ML Failure Prevention
Silent failures in models making business-critical decisions often go unchecked, harming the company and those who rely on the technology. We will diagnose the origins of silent failures and ways to avoid them.
Open Source ML
The advent of complex large-scale NLP and CV models makes open source models an important development framework for organizations. In this session we will discuss the challenges and best engineering practices of development using open source model repositories.
Compliance & Regulation
Companies in highly regulated industries need to navigate a rapidly evolving policy landscape to meet both external and self-imposed regulations. In this session we will be joined by experts in AI compliance and regulation to discuss best practices in managing models for sensitive applications.
ML Quality Control
Developing and maintaining even a single production-ready model can be a challenge for data science teams, let alone hundreds of models. This session will focus on quality control for organizations that deal with deploying a large number of models at scale.
ML Security
Adversaries can evade machine learning models, steal their intellectual property, or manipulate the models and the software and data supply chains they rely on. In this session we invite industry leaders to share examples of model vulnerabilities and best practices to secure models against adversarial attacks.
Bias in ML
Data used to train models can be biased, and models may not properly generalize to the populations they are applied to. This session will cover the origins of bias in ML models and strategies for its mitigation.

On the agenda

October 19, 2022