Leonard Berrada
Research Scientist, DeepMind
Verified email at google.com · Cited by 800
Unlocking high-accuracy differentially private image classification through scale
Differential Privacy (DP) provides a formal privacy guarantee preventing adversaries with
access to a machine learning model from extracting information about individual training points…
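For context, this line of work builds on the standard DP-SGD update: clip each per-example gradient and add Gaussian noise calibrated to the clipping norm. A minimal sketch of that general rule (not the paper's exact training recipe; `clip_norm`, `noise_multiplier`, and the plain-NumPy setup are illustrative assumptions):

```python
import numpy as np

def dp_sgd_step(w, per_example_grads, clip_norm, noise_multiplier, lr, rng):
    """One DP-SGD step: clip each per-example gradient to clip_norm,
    average, then add Gaussian noise scaled to the clipping norm."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise std for the *averaged* gradient over a batch of size n.
    n = len(per_example_grads)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / n, size=w.shape)
    return w - lr * (mean_grad + noise)
```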
Griffin: Mixing gated linear recurrences with local attention for efficient language models
Recurrent neural networks (RNNs) have fast inference and scale efficiently on long sequences,
but they are difficult to train and hard to scale. We propose Hawk, an RNN with gated …
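A gated linear recurrence keeps the state update linear in the hidden state, h_t = a_t ⊙ h_{t-1} + b_t ⊙ x_t, with input-dependent gates, which is what makes it scan-parallelizable. This is a generic sketch of that idea, not the exact RG-LRU gate parameterization used in Hawk/Griffin (the sigmoid gates here are an assumption):

```python
import numpy as np

def gated_linear_recurrence(x, Wa, Wb):
    """Generic gated linear scan: h_t = a_t * h_{t-1} + b_t * x_t,
    with elementwise input-dependent gates a_t, b_t in (0, 1).
    x: (seq_len, dim); Wa, Wb: (dim, dim)."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    h = np.zeros(x.shape[1])
    hs = []
    for x_t in x:
        a_t = sigmoid(x_t @ Wa)   # forget gate
        b_t = sigmoid(x_t @ Wb)   # input gate
        h = a_t * h + b_t * x_t   # linear in h, unlike a tanh RNN cell
        hs.append(h)
    return np.stack(hs)
```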
Smooth loss functions for deep top-k classification
The top-k error is a common measure of performance in machine learning and computer
vision. In practice, top-k classification is typically performed with deep neural networks trained …
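For reference, the top-k error counts a prediction as wrong only when the true label is absent from the k highest-scoring classes. A minimal NumPy version of the metric itself (function and variable names are mine; the paper's contribution is a smooth surrogate loss for it, not shown here):

```python
import numpy as np

def top_k_error(scores, labels, k):
    """Fraction of examples whose true label is not among the k
    highest-scoring classes. scores: (n, classes); labels: (n,)."""
    topk = np.argsort(scores, axis=1)[:, -k:]        # indices of the k best scores
    hits = (topk == labels[:, None]).any(axis=1)
    return 1.0 - hits.mean()
```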
Differentially private diffusion models generate useful synthetic images
The ability to generate privacy-preserving synthetic versions of sensitive image datasets
could unlock numerous ML applications currently constrained by data availability. Due to their …
ConvNets match vision transformers at scale
Many researchers believe that ConvNets perform well on small or moderately sized datasets,
but are not competitive with Vision Transformers when given access to datasets on the web…
Training neural networks for and by interpolation
L Berrada, A Zisserman… - … conference on machine …, 2020 - proceedings.mlr.press
In modern supervised learning, many deep neural networks are able to interpolate the data:
the empirical loss can be driven to near zero on all samples simultaneously. In this work, we …
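The interpolation setting, where the loss can be driven to near zero, is what makes step sizes of the classical Polyak form usable with f* taken as roughly zero. This sketch shows that classical rule as background, not the paper's exact algorithm (the `max_lr` cap and names are assumptions):

```python
import numpy as np

def polyak_step(w, loss, grad, max_lr=0.1, eps=1e-8):
    """Polyak step size with f* ~ 0 under interpolation:
    eta = min(loss / ||grad||^2, max_lr)."""
    eta = min(loss / (np.dot(grad, grad) + eps), max_lr)
    return w - eta * grad
```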
RecurrentGemma: Moving past transformers for efficient open language models
We introduce RecurrentGemma, a family of open language models which uses Google's
novel Griffin architecture. Griffin combines linear recurrences with local attention to achieve …
Operationalizing contextual integrity in privacy-conscious assistants
Advanced AI assistants combine frontier LLMs and tool access to autonomously perform
complex tasks on behalf of users. While the helpfulness of such assistants can increase …
Deep Frank-Wolfe for neural network optimization
Learning a deep neural network requires solving a challenging optimization problem: it is a
high-dimensional, non-convex and non-smooth minimization problem with a large number of …
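As background, a vanilla Frank-Wolfe iteration solves a linear minimization over the feasible set and moves toward the resulting vertex. Here it is on the probability simplex, a toy convex setting rather than the paper's deep-network variant (the quadratic objective is an illustrative assumption):

```python
import numpy as np

def frank_wolfe_simplex(grad_f, x0, steps=100):
    """Vanilla Frank-Wolfe on the probability simplex: the linear
    minimization oracle picks the coordinate with the smallest gradient."""
    x = x0.copy()
    for t in range(steps):
        g = grad_f(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0          # LMO: best vertex of the simplex
        gamma = 2.0 / (t + 2.0)        # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# Example: minimize ||x - c||^2 over the simplex.
c = np.array([0.2, 0.5, 0.3])
x = frank_wolfe_simplex(lambda x: 2.0 * (x - c), np.ones(3) / 3)
```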
Unlocking accuracy and fairness in differentially private image classification
Privacy-preserving machine learning aims to train models on private data without leaking
sensitive information. Differential privacy (DP) is considered the gold standard framework for …