NMSU alum gets international recognition for AI privacy research
LAS CRUCES — Ferdinando Fioretto, who earned a dual doctorate in computer science from New Mexico State University and the University of Udine in Italy, has won several awards over the past year for his influential research on privacy and fairness in machine learning algorithms, as well as on the integration of operations research with deep learning.
For his pivotal contributions, Fioretto, an assistant professor of computer science at Syracuse University, received an Early Career Researcher Award from the Association for Constraint Programming, a Young Investigator Award for Research in Computer Science from the Italian Scientists and Scholars in North America Foundation, a National Science Foundation CAREER Award, and a Google Research Scholar Award.
In addition to his NSF and early-career awards, Fioretto has won several best paper awards, including a Best AI Dissertation Award from the International Conference of the Italian Association for Artificial Intelligence.
“As we evolve as a society,” Fioretto said, “algorithms are replacing some of the decisions that were made by us humans.” These algorithms are used across many areas of society, such as federal funds allocation, energy systems, criminal justice, loan approvals, and filtering systems like those used in employment decisions.
The algorithms are fed historical data, which they analyze to build mathematical models that can then be applied to new data to make predictions or, as Fioretto put it, “an abstract reason to give us an outcome.” In this process, called “machine learning,” algorithms frequently introduce new biases or make use of existing ones, which in turn seep into the decisions they make. Those decisions can have enormous societal impacts.
For example, the so-called PATTERN algorithm is used to predict whether an inmate is likely to re-offend based on data from previous court cases, which were decided by people with their own biases, even unconscious ones. Unfortunately, the algorithm learns these biases when it builds a predictive model from the data.
These biases can come from latent factors, variables that aren’t directly observed but can be inferred, that are “interpreted or misinterpreted as relevant,” Fioretto said. These factors may be associated with the protected attributes of individuals, such as race and gender.
In 2006, there was a “breakthrough,” as Fioretto described it, in theoretical computer science called “differential privacy,” a mathematical guarantee that the outputs of an algorithm cannot be used to derive identifiable information about the individuals described in the data. Among others, the U.S. Census Bureau adopted this definition of privacy for its 2020 data release.
“This technology relies on adding noise to ensure that the algorithm’s outputs remain insensitive to any individual’s participation,” Fioretto explained. However, this noise affects the accuracy of the algorithm outputs. “Once you add noise, you change some properties of the data,” he said. “My group has found that the mechanisms adopted to satisfy privacy properties can make errors that are actually disproportionate to certain characteristics of the population and exacerbate existing biases toward minorities.”
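The noise injection Fioretto describes can be illustrated with the Laplace mechanism, a standard construction from the differential-privacy literature (a minimal sketch for a simple counting query, not his group's actual implementation; the function names and epsilon values here are illustrative):

```python
import math
import random

def laplace_noise(scale, rng):
    """Draw one sample from a Laplace(0, scale) distribution
    using inverse-transform sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count, epsilon, rng):
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one
    person changes the count by at most 1), so Laplace noise with
    scale 1/epsilon masks any single individual's participation.
    """
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(0)
true_count = 1000

# Smaller epsilon means stronger privacy but noisier answers,
# which is exactly the accuracy cost Fioretto points to.
for epsilon in (1.0, 0.1, 0.01):
    noisy = private_count(true_count, epsilon, rng)
    print(f"epsilon={epsilon}: noisy count = {noisy:.1f}")
```

The accuracy loss is not uniform in practice: for small subpopulations, the same absolute noise is a much larger fraction of the true count, which is one intuition behind the disproportionate errors Fioretto's group studies.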
In other words, the implementation of privacy protection measures on the data used for decision-making processes can have significant societal and economic repercussions.
Fioretto’s research focuses on how allocation decisions are affected. Title I, the federal program that funds low-income school districts, uses rule-based algorithms to decide where money is distributed and how much. He explained that, depending on how privacy protections are applied to the underlying data, school districts could receive much less money than they are entitled to.
Fioretto added that “the relationship between confidentiality, accuracy and fairness is complicated.” His research aims to untangle this relationship to design algorithms capable of guaranteeing the privacy of individuals without sacrificing too much on the accuracy of the data and the fairness of the decisions made.
Fioretto not only seeks solutions to the large-scale consequences of machine learning algorithms, but also engages the community on the importance of these issues. He co-organizes the annual AAAI workshop on Privacy-Preserving Artificial Intelligence and works with policymakers and data users to understand and address these concerns. Fioretto’s research and advocacy will ultimately bring us closer to fairer decisions for our communities.
This week’s Eye on Research was written by Jessica Brinegar of NMSU Marketing and Communications. She can be reached at email@example.com.