Events

  • Ben Fish
    Mila
    Please Note: Brown Login Required For This Talk
    Title: Defining and Ensuring Algorithmic Fairness in Artificial Intelligence
    Abstract: Artificial intelligence is increasingly used to make decisions about people in social domains. Failure to take into account its effects on people’s lives risks grave consequences, including enacting and perpetuating discrimination, and more broadly creating AI systems imbued with values we do not intend or desire. In this talk, I will detail the development over the last few years of increasingly sophisticated approaches to formalize and understand the normative impact of AI, and my own contributions to these approaches. Using examples from binary classification, influence maximization, and hiring markets, I will illustrate through theory and experiments the impact that considerations of fairness have in creating and analyzing algorithms. I will provide algorithms for ensuring group-level fairness in binary classification problems, algorithms for how to more equitably spread information in a social network, and a new approach to defining fairness in hiring markets. This work demonstrates how explicit mathematical modeling of the social impact of decision making can reveal new ways to capture the moral impacts of AI, and emphasizes that further progress in this area will be made by creating AI specifically for the surrounding social context in which it is embedded.
    Bio: Ben Fish is a postdoctoral fellow at Mila, hosted by Fernando Diaz; he joined Mila from the Fairness, Accountability, Transparency, and Ethics (FATE) Group at Microsoft Research Montréal, where he was also hosted by Fernando Diaz. His research develops methods for machine learning and other computational systems that incorporate human values and social context. This includes scholarship in fairness and ethics in machine learning and learning over social networks. He received his Ph.D. from the University of Illinois at Chicago as a member of the Mathematical Computer Science group. He was previously a visiting researcher at the University of Melbourne and the University of Utah, and earned a B.A. from Pomona College in Mathematics and Computer Science.
    Host: Seny Kamara
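As background for the group-level fairness notions mentioned in the abstract, one common formalization is demographic parity: equal positive-prediction rates across groups. The sketch below is illustrative only (the metric choice and toy data are not drawn from the talk):

```python
# Demographic parity: compare positive-prediction rates across groups.
# One common group-fairness criterion for binary classification; the
# talk itself may use different or more refined definitions.

def positive_rate(preds, groups, g):
    """Fraction of group g that receives a positive (1) prediction."""
    members = [p for p, grp in zip(preds, groups) if grp == g]
    return sum(members) / len(members)

def demographic_parity_gap(preds, groups):
    """Largest difference in positive rates between any two groups."""
    rates = [positive_rate(preds, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

preds  = [1, 0, 1, 1, 0, 0, 1, 0]                    # classifier outputs
groups = ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b']    # group membership
print(demographic_parity_gap(preds, groups))         # 0.75 - 0.25 = 0.5
```

A gap of 0 means both groups receive positive predictions at the same rate; fairness-aware training methods constrain or penalize this gap.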
  • Please join us on Thursday, February 25, at 4 p.m. for “Kinder-ready clinics: Emerging models of how clinics can support parents in early-childhood development through low-cost, scalable interventions,” presented by Susanna Loeb, PhD, and Lisa Chamberlain, MD, MPH.

    Dr. Loeb is director of the Annenberg Institute for School Reform and professor of education and of international and public affairs at Brown University. Dr. Loeb is a member of the Executive Council of The Policy Lab at Brown.

    Dr. Chamberlain is associate professor of pediatrics, associate chair of policy and community and the Arline and Pete Harman Faculty Scholar at Stanford Children’s Hospital. She founded and co-directs the Stanford Pediatric Advocacy Program to train leaders in community pediatrics and advocacy.

    Please register below to receive the Zoom link for this virtual event.

  • Riad Wahby
    Stanford University
    Please Note: Brown Login Required For This Talk
    Abstract: In the past decade, systems that use probabilistic proofs in real-world applications have seen explosive growth. These systems build upon some of the crown jewels of theoretical computer science—interactive proofs, probabilistically checkable proofs, and zero-knowledge proofs—to solve problems of trust and privacy in a wide range of settings.

    This talk describes my work building systems that answer questions ranging from “how can we build trustworthy hardware that uses untrusted components?” to “how can we reduce the cost of verifying smart contract execution in blockchains?” Along the way, I will discuss the pervasive challenges of efficiency, expressiveness, and scalability in this research area; my approach to addressing these challenges; and future directions that promise to bring this exciting technology to bear on an even wider range of applications.
    Bio: Riad S. Wahby is a Ph.D. candidate at Stanford, advised by Dan Boneh and Keith Winstein. His research interests include systems, computer security, and applied cryptography. Prior to attending Stanford, Riad spent ten years as an analog and mixed-signal integrated circuit designer. Riad and his collaborators received a 2016 IEEE Security and Privacy Distinguished Student Paper award; his work on hashing to elliptic curves is being standardized by the IETF.
    Host: Vasilis Kemerlis
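Proof systems like those in the abstract are assembled from cryptographic primitives such as commitments, which let a prover bind itself to a value without revealing it. As illustrative background only (a generic hash-based commit-and-reveal sketch, not code from the talk):

```python
import hashlib
import secrets

def commit(value: bytes) -> tuple[bytes, bytes]:
    """Commit to a value: publish the digest, keep (nonce, value) secret."""
    nonce = secrets.token_bytes(32)  # random nonce makes the commitment hiding
    return hashlib.sha256(nonce + value).digest(), nonce

def open_commitment(digest: bytes, nonce: bytes, value: bytes) -> bool:
    """Verify that (nonce, value) opens the published commitment."""
    return hashlib.sha256(nonce + value).digest() == digest

c, nonce = commit(b"secret input")
print(open_commitment(c, nonce, b"secret input"))  # True
print(open_commitment(c, nonce, b"tampered"))      # False
```

The commitment is binding (the prover cannot later open it to a different value without finding a hash collision) and hiding (the digest reveals nothing about the value thanks to the random nonce); real proof systems build far more structure on top of such primitives.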
  • Zhuoran Yang
    Princeton University
    Please Note: Brown Login Required For This Talk
    Abstract: Coupled with powerful function approximators such as deep neural networks, reinforcement learning (RL) has achieved tremendous empirical success. However, its theoretical understanding lags behind. In particular, it remains unclear how to provably attain the optimal policy with finite regret or sample complexity. In this talk, we present two sides of the same coin, demonstrating an intriguing duality between optimism and pessimism.
    - In the online setting, we aim to learn the optimal policy by actively interacting with an environment. To strike a balance between exploration and exploitation, we propose an optimistic least-squares value iteration algorithm, which achieves a √T regret in the presence of linear, kernel, and neural function approximators.
    - In the offline setting, we aim to learn the optimal policy from a dataset collected a priori. Because we cannot actively interact with the environment, the dataset may offer only insufficient coverage. To maximally exploit the dataset, we propose a pessimistic least-squares value iteration algorithm, which achieves a minimax-optimal sample complexity.
    Bio: Zhuoran Yang is a final-year Ph.D. student in the Department of Operations Research and Financial Engineering at Princeton University, advised by Professor Jianqing Fan and Professor Han Liu. Before attending Princeton, he obtained a Bachelor of Mathematics degree from Tsinghua University. His research interests lie at the interface of machine learning, statistics, and optimization. The primary goal of his research is to design a new generation of machine learning algorithms for large-scale and multi-agent decision-making problems, with both statistical and computational guarantees. He is also interested in applying learning-based decision-making algorithms to real-world problems in robotics, personalized medicine, and computational social science.
    Host: Roberta De Vito
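The optimism/pessimism duality in the abstract can be illustrated with a toy Q-value adjustment. This is a minimal sketch, not the talk's least-squares value iteration algorithms; the bonus form beta/√n and all numbers below are illustrative assumptions:

```python
import math

def adjusted_q(q_hat, counts, beta, mode="optimistic"):
    """Online (optimistic) RL adds an uncertainty bonus to estimated
    Q-values to encourage exploring rarely tried actions; offline
    (pessimistic) RL subtracts the bonus to avoid overvaluing
    state-actions the dataset covers poorly.  Here the bonus shrinks
    as beta / sqrt(visit count)."""
    sign = 1 if mode == "optimistic" else -1
    return {sa: q + sign * beta / math.sqrt(counts[sa])
            for sa, q in q_hat.items()}

q_hat  = {("s0", "left"): 1.0, ("s0", "right"): 1.1}   # estimated values
counts = {("s0", "left"): 100, ("s0", "right"): 4}     # visit counts

print(adjusted_q(q_hat, counts, beta=1.0, mode="optimistic"))
# the rarely tried "right" action receives the larger bonus
print(adjusted_q(q_hat, counts, beta=1.0, mode="pessimistic"))
# pessimism instead discounts the poorly covered "right" action
```

Acting greedily on the optimistic values drives exploration online; acting greedily on the pessimistic values keeps an offline learner inside the well-covered part of the dataset.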

To get notifications for all our events (and other data-related events and activities), please sign up for our newsletter.

Decoding Pandemic Data: A Series of Interactive Seminars:

These are lunchtime short talks by experts directly engaged in COVID-related data-driven research activities, with plenty of time for questions and answers. Details here.

Faculty for Faculty Research Talks:

Informal opportunities for faculty to present their data science–related research to other faculty. Our goal is to provide a networking venue that promotes research collaborations between faculty across all disciplines; awareness of the breadth of data science–related research at Brown; and a forum for faculty to share their expertise with one another. Details here.

Data Wednesdays:

Our weekly seminar, hosted by DSI, CCMB, and COBRE; 4-5 p.m. at 164 Angell, 3rd floor.

Data Science, Computing and Visualization Workshops:

[On hiatus] Weekly at noon on Fridays; see previous topics.