Natacha Crooks, UC Berkeley
Pesto: Scaling Distributed Trust Through ACID Transactions
Fault tolerance is an essential part of any application. Database and consensus systems form the reliable, scalable core atop which application functionality can be built. They must have good performance, expressive APIs, and a fault model that reflects realistic failures. To this end, this talk will present Pesto, a new transactional Byzantine Fault Tolerant (BFT) database. Pesto leverages ACID transactions to scalably implement the abstraction of a trusted shared log in the presence of Byzantine actors. Unlike traditional BFT approaches, Pesto executes non-conflicting operations in parallel and commits transactions in a single round trip during fault-free executions. Pesto offers full SQL compatibility and can act as a drop-in replacement for traditional SQL databases.
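The commit flow hinted at above can be pictured with a toy sketch. The Python fragment below is not Pesto's protocol; it only illustrates, with invented names, replica behavior, and quorum sizes, the shape of a fast path in which a client broadcasts a SQL transaction to all replicas and commits once enough matching votes arrive within a single round trip.

    from collections import Counter

    N_REPLICAS = 4   # tolerates f = 1 Byzantine replica with n = 3f + 1
    FAST_QUORUM = 3  # illustrative quorum size only; a real protocol's quorums may differ

    def execute_on_replica(replica_id, sql):
        # Stand-in for a replica executing the transaction and voting on the result.
        if replica_id == 3:
            return "vote:abort"      # e.g., a faulty or conflicting replica
        return "vote:commit"

    def submit_transaction(sql):
        # One round trip: broadcast the transaction, gather votes, decide.
        votes = Counter(execute_on_replica(r, sql) for r in range(N_REPLICAS))
        if votes["vote:commit"] >= FAST_QUORUM:
            return "committed on the fast path"
        return "fall back to a slower path"

    print(submit_transaction("UPDATE accounts SET balance = balance - 10 WHERE id = 42"))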
Alexandra Silva, Cornell University
KATch (me if you can): Algebraic Network Verification
We present recent results on the development of data structures and algorithms for checking verification queries in NetKAT, a domain-specific language for specifying the behavior of network data planes. We introduce KATch, a verification engine for checking properties such as reachability and slice isolation, and outline an active learning algorithm for symbolic NetKAT automata which can be used to reverse engineer closed-box configurations. These recent advancements underscore NetKAT’s potential as a practical, declarative language for network specification and verification.
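To make the flavor of such queries concrete, here is a small, self-contained Python sketch of NetKAT-style packet-set semantics (an illustration only; it is not KATch, and none of these names come from its API). A reachability query becomes: run a policy under Kleene star and inspect which packets can come out.

    # A packet is a frozenset of (field, value) pairs; a policy maps one input
    # packet to the set of packets it may produce.

    def pkt(**fields):
        return frozenset(fields.items())

    def test(field, value):
        # Filter: pass the packet through iff field == value, else drop it.
        return lambda p: {p} if dict(p).get(field) == value else set()

    def assign(field, value):
        # Modification: rewrite one header field.
        return lambda p: {frozenset({**dict(p), field: value}.items())}

    def seq(f, g):
        # Sequential composition: run g on every packet f produces.
        return lambda p: {q for m in f(p) for q in g(m)}

    def star(f):
        # Kleene star: iterate f until no new packets appear (a fixed point).
        def run(p):
            seen, frontier = {p}, {p}
            while frontier:
                frontier = {q for m in frontier for q in f(m)} - seen
                seen |= frontier
            return seen
        return run

    # Toy policy: packets at switch 1 are forwarded to switch 2, port 2.
    hop = seq(test("sw", 1), seq(assign("sw", 2), assign("pt", 2)))
    policy = star(hop)

    # Reachability query: can a packet entering switch 1 reach switch 2?
    outputs = policy(pkt(sw=1, pt=1))
    print(any(dict(p).get("sw") == 2 for p in outputs))   # True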
Maria Christakis, TU Wien
Just Nail It: Systematic Testing for Complex Systems
With a well-stocked toolbox of program-analysis techniques, from random fuzzing to formal verification, I’ve had the opportunity to drive many kinds of nails: traditional software, smart contracts, heterogeneous systems, and more. But in this talk, I want to focus on one particularly versatile hammer: metamorphic testing. Metamorphic testing is especially powerful when test oracles are not easily available. We’ll see it in action on program analyzers, zero-knowledge systems, and machine-learning models. This talk is not just about where metamorphic testing works, but how to make it work. I’ll confront the challenges of adapting this technique to different contexts and share my insights along the way. Whether you’re into systems, security, formal methods, or machine learning, there’s something here for you. Come see what breaks when we swing the hammer.
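For readers new to the technique, the sketch below is a minimal, self-contained Python illustration of metamorphic testing (not material from the talk): instead of an oracle for the exact output, we check relations that must hold between outputs of related inputs. The system under test and the two relations are invented for brevity.

    import random

    def system_under_test(xs):
        # Stand-in for a black box without an easy oracle;
        # here, a routine that should return the 3 smallest elements.
        return sorted(xs)[:3]

    def metamorphic_test(trials=1000):
        for _ in range(trials):
            xs = [random.randint(-100, 100) for _ in range(20)]
            out = system_under_test(xs)
            # Relation 1: permuting the input must not change the output.
            random.shuffle(xs)
            assert system_under_test(xs) == out
            # Relation 2: shifting every element by c shifts the output by c.
            c = random.randint(-10, 10)
            assert system_under_test([x + c for x in xs]) == [y + c for y in out]

    metamorphic_test()
    print("all metamorphic relations held")

If the system violates either relation on some generated input, the test fails without anyone ever having specified what the correct output for that input is.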
Stefanie Jegelka, MIT EECS and TU Munich
Learning and Reasoning with Structure
The development of powerful, reliable AI models bears great promise for science, medicine, and numerous industries. Such broad applications require widely applicable, flexibly adaptive, and resource-efficient learning models that are robust under commonly occurring distribution shifts and able to solve complex tasks. A promising step towards this goal is to understand and exploit “structure”: in the data, the latent space, and the processing. In this talk, I will illustrate examples of exploiting such types of structure: graph or relational structure, geometry, and computational structure. For instance, applications as diverse as drug and materials design, traffic and weather forecasting, learnable simulations, learnable algorithms, recommender systems, and chip design can be cast as problems on graphs. But to fully leverage deep learning for graphs, the models need to be sufficiently expressive and robust. We will see how taking into account computational structure and addressing relevant symmetries can help achieve these goals, provably in theory and effectively in practice. Finally, many applications leverage foundation models that can flexibly adapt to different contexts and learn complex reasoning processes. Understanding and steering relevant representations and learning processes can help us obtain more capable models and employ them in different settings, from safety to graph tasks.
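As one concrete example of the symmetries involved (a sketch of the standard idea, not material from the talk), the Python fragment below implements a single message-passing layer and checks permutation equivariance: relabeling the graph's nodes permutes the layer's outputs in exactly the same way. Sizes, weights, and the adjacency matrix are arbitrary stand-ins.

    import numpy as np

    rng = np.random.default_rng(0)

    def gnn_layer(A, X, W_self, W_neigh):
        # Sum neighbor features via the adjacency matrix, combine with each
        # node's own features through weight matrices, apply a nonlinearity.
        return np.tanh(X @ W_self + A @ X @ W_neigh)

    n, d, h = 5, 4, 3
    A = rng.integers(0, 2, size=(n, n))
    A = np.triu(A, 1); A = A + A.T                  # symmetric adjacency, no self-loops
    X = rng.normal(size=(n, d))
    W_self, W_neigh = rng.normal(size=(d, h)), rng.normal(size=(d, h))

    # Equivariance: for any relabeling P, layer(P A P^T, P X) = P layer(A, X).
    P = np.eye(n)[rng.permutation(n)]
    lhs = gnn_layer(P @ A @ P.T, P @ X, W_self, W_neigh)
    rhs = P @ gnn_layer(A, X, W_self, W_neigh)
    assert np.allclose(lhs, rhs)
    print("message passing is permutation-equivariant")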
Zeynep Akata, TU Munich
Explainable Multimodal Intelligence: Bridging AI, Science, and Human Values
In an era of foundation models that learn from text, images, and more, how can we ensure these AI systems serve both science and humanity responsibly? Prof. Zeynep Akata’s talk addresses this question by bridging technical innovation with human-centered principles. She discusses how insights from her research on vision-language models, zero-shot learning, and explainable AI lay the foundation for AI that can drive discoveries in science and medicine while remaining accountable and fair. This forward-looking talk explores explainability techniques for complex multimodal systems and outlines a roadmap for human-centric AI that advances knowledge without compromising ethics. The result is a compelling vision of AI as a partner in science: transparent, equitable, and profoundly impactful.
Elissa Redmiles, Georgetown University
Defining AI Safety? Defense-in-Depth Against the Use of AI for Sexual Abuse
Image-based sexual abuse (IBSA) is a form of sexual violence that encompasses the non-consensual creation and/or sharing of sexual content. Decades of research in computer vision and AI have led to the broad availability of techniques such as inpainting and body mesh estimation, which can be used, for example, to edit a source image of a clothed individual into a nude image or even a video. While several countries have criminalized the rising use of generative AI to create and disseminate image-based sexual abuse content, the landscape of technical defenses is bleak. Drawing on over half a decade of research in Europe and the US on the use cases, threat models, and protections needed for intimate content and interactions, this talk will explore the past, present, and future of technology-powered IBSA. We will explore key threat models, potential lines of defense, and underlying open questions that can more broadly inform a scientific and humanistic definition of AI safety.