Merge pull request #160 from KedoKudo/main
update seminar and tutorial schedule
williamfgc committed Sep 3, 2024
2 parents a50fe5c + f521d6a commit a0900d3
Showing 2 changed files with 70 additions and 65 deletions.
68 changes: 36 additions & 32 deletions events/2024-AI-Seminar.md
@@ -22,38 +22,6 @@ Please reach out to the organizers if you would like to recommend a speaker or

# Next Presentation

**Causal representation learning in temporal settings**

<br> Microsoft Teams
<br> Time: 11:00 a.m. - 12 p.m. ET, Thursday, 08/22/2024
<br> [Dr. Sara Magliacane](https://saramagliacane.github.io)

**Abstract**

Causal inference reasons about the effect of unseen interventions or external manipulations on a system.
Similar to classic approaches to AI, it typically assumes that the causal variables of interest are given from the outset.
However, real-world data often comprises high-dimensional, low-level observations (e.g., pixels in a video) and is thus usually not structured into such meaningful causal units.
Causal representation learning aims at addressing this gap by learning high-level causal variables along with their causal relations directly from raw, unstructured data, e.g. images or videos.

In this talk I will focus on learning causal representations in temporal sequences, e.g. sequences of images.
In particular I will present some of our recent work on causal representation learning in environments in which we can perform interventions or actions.
I will start by presenting CITRIS (https://arxiv.org/abs/2202.03169), where we leveraged the knowledge of which variables are intervened on at each timestep to learn a provably disentangled representation of the potentially multidimensional ground truth causal variables, as well as a dynamic Bayesian network representing the causal relations between these variables.
I will then show iCITRIS (https://arxiv.org/abs/2206.06169), an extension that allows for instantaneous effects between variables.
Finally, I will focus on our most recent method, BISCUIT (https://arxiv.org/abs/2306.09643), which overcomes one of the biggest limitations of our previous methods: the need to know which variables are intervened on.
In BISCUIT we instead leverage actions with unknown effects on an environment.
Assuming that each causal variable has exactly two distinct causal mechanisms, we prove that we can recover each ground truth variable from a sequence of images and actions up to permutation and element-wise transformations.
This allows us to apply BISCUIT to realistic simulated environments for embodied AI, where we can learn a latent representation that lets us identify and manipulate each causal variable, as well as a mapping between each high-level action and its effects on the latent causal variables.

**Bio**

Dr. Sara Magliacane is an Assistant Professor at the Amsterdam Machine Learning Lab, University of Amsterdam.
Her research explores how causality can enhance machine learning (ML) algorithms in robustness, cross-domain generalization, and safety.
She focuses on causal representation learning, causal discovery, and causality-inspired ML, investigating how causal concepts help ML and reinforcement learning adapt to new domains and nonstationarity.

Previously, Dr. Magliacane was a Research Scientist at the MIT-IBM Watson AI Lab and a postdoctoral researcher at IBM Research NY.
She holds a PhD from VU Amsterdam, with internships at Google Zürich and NYC, and has a background in Computer Engineering from Politecnico di Milano, Politecnico di Torino, and the University of Trieste.
Her work is recognized through various publications and esteemed conference participation, establishing her as a leading expert in causality and machine learning.

# Upcoming Presentations

# Past Presentations
@@ -284,6 +252,42 @@ More details can be found on the homepage: https://sites.google.com/ncsu.edu/xia

<a href="#top"> &#10558; Back to top</a>

---

**Causal representation learning in temporal settings**

<br> Microsoft Teams
<br> Time: 11:00 a.m. - 12 p.m. ET, Thursday, 08/22/2024
<br> [Dr. Sara Magliacane](https://saramagliacane.github.io)

**Abstract**

Causal inference reasons about the effect of unseen interventions or external manipulations on a system.
Similar to classic approaches to AI, it typically assumes that the causal variables of interest are given from the outset.
However, real-world data often comprises high-dimensional, low-level observations (e.g., pixels in a video) and is thus usually not structured into such meaningful causal units.
Causal representation learning aims at addressing this gap by learning high-level causal variables along with their causal relations directly from raw, unstructured data, e.g. images or videos.

In this talk I will focus on learning causal representations in temporal sequences, e.g. sequences of images.
In particular I will present some of our recent work on causal representation learning in environments in which we can perform interventions or actions.
I will start by presenting CITRIS (https://arxiv.org/abs/2202.03169), where we leveraged the knowledge of which variables are intervened on at each timestep to learn a provably disentangled representation of the potentially multidimensional ground truth causal variables, as well as a dynamic Bayesian network representing the causal relations between these variables.
I will then show iCITRIS (https://arxiv.org/abs/2206.06169), an extension that allows for instantaneous effects between variables.
Finally, I will focus on our most recent method, BISCUIT (https://arxiv.org/abs/2306.09643), which overcomes one of the biggest limitations of our previous methods: the need to know which variables are intervened on.
In BISCUIT we instead leverage actions with unknown effects on an environment.
Assuming that each causal variable has exactly two distinct causal mechanisms, we prove that we can recover each ground truth variable from a sequence of images and actions up to permutation and element-wise transformations.
This allows us to apply BISCUIT to realistic simulated environments for embodied AI, where we can learn a latent representation that lets us identify and manipulate each causal variable, as well as a mapping between each high-level action and its effects on the latent causal variables.
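
As a rough illustration of the action-conditioned transition idea sketched above, the following minimal PyTorch snippet mixes two candidate mechanisms per latent variable, with a learned gate predicting from the previous latent state and the action which mechanism is active. All class names, layer sizes, and the Gaussian parameterization are illustrative assumptions, not the BISCUIT implementation.

```python
import torch
import torch.nn as nn


class ActionConditionedTransition(nn.Module):
    """Per-latent transition prior p(z_t | z_{t-1}, a_t) -- a hypothetical sketch.

    Each latent dimension mixes two candidate mechanisms; a small gate network
    predicts from (z_{t-1}, a_t) which mechanism is active, echoing the
    two-mechanism assumption described in the abstract. Dimensions and layer
    choices are illustrative, not the paper's exact model.
    """

    def __init__(self, latent_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        # Gate: probability that each latent variable is acted upon at time t.
        self.gate = nn.Sequential(
            nn.Linear(latent_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim), nn.Sigmoid(),
        )
        # Two mechanism heads, each producing (mean, log-variance) per latent dim.
        self.mech_default = nn.Linear(latent_dim, 2 * latent_dim)
        self.mech_acted = nn.Linear(latent_dim + action_dim, 2 * latent_dim)

    def forward(self, z_prev: torch.Tensor, action: torch.Tensor):
        za = torch.cat([z_prev, action], dim=-1)
        g = self.gate(za)                                     # (batch, latent_dim)
        mu_d, logvar_d = self.mech_default(z_prev).chunk(2, dim=-1)
        mu_a, logvar_a = self.mech_acted(za).chunk(2, dim=-1)
        # Soft mixture of the two mechanisms, one gate value per latent variable.
        mu = (1 - g) * mu_d + g * mu_a
        logvar = (1 - g) * logvar_d + g * logvar_a
        return mu, logvar


if __name__ == "__main__":
    prior = ActionConditionedTransition(latent_dim=6, action_dim=4)
    mu, logvar = prior(torch.randn(8, 6), torch.randn(8, 4))
    # The KL term of a sequence-VAE objective would compare the encoder's
    # posterior against this prior; here we only check the output shapes.
    print(mu.shape, logvar.shape)  # torch.Size([8, 6]) torch.Size([8, 6])
```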

**Bio**

Dr. Sara Magliacane is an Assistant Professor at the Amsterdam Machine Learning Lab, University of Amsterdam.
Her research explores how causality can enhance machine learning (ML) algorithms in robustness, cross-domain generalization, and safety.
She focuses on causal representation learning, causal discovery, and causality-inspired ML, investigating how causal concepts help ML and reinforcement learning adapt to new domains and nonstationarity.

Previously, Dr. Magliacane was a Research Scientist at the MIT-IBM Watson AI Lab and a postdoctoral researcher at IBM Research NY.
She holds a PhD from VU Amsterdam, with internships at Google Zürich and NYC, and has a background in Computer Engineering from Politecnico di Milano, Politecnico di Torino, and the University of Trieste.
Her work is recognized through various publications and esteemed conference participation, establishing her as a leading expert in causality and machine learning.

<a href="#top"> &#10558; Back to top</a>

# Schedule

<!---
67 changes: 34 additions & 33 deletions events/2024-AI-Tutorial.md
@@ -38,36 +38,6 @@ Therefore, please request an entry pass only if you need to attend virtually.

# Next Tutorial

**VAE-NCDE: a generative time series model for probabilistic multivariate time series forecasting**

<br>Time: 07/26/2024, 9:00 AM - 12:00 PM (ET)
<br> Virtual (Microsoft Teams)
<br>[William L Gurecky](https://www.ornl.gov/staff-profile/william-l-gurecky)
<br>Oak Ridge National Laboratory

| |
| ------- |
| [![WLG](https://www.ornl.gov/sites/default/files/styles/staff_profile_image_style/public/2023-02/2023-P01618.jpg?h=fe2061c8&itok=GnLFc3Sv)](https://www.ornl.gov/staff-profile/william-l-gurecky)|
| William L Gurecky<br> Research Scientist<br>Computing and Computational Sciences Directorate, ORNL |

**Abstract**

TBA

**Bio**

William is an R&D associate in the Nuclear Energy and Fuel Cycle Division at Oak Ridge National Laboratory.
He currently works with the Power Reactor Modeling group, developing performant software solutions to address industry-identified challenge problems.
His research interests include applying Bayesian statistics and machine learning to reactor physics problems.
As the principal investigator of a laboratory-directed research and development project, William developed a flexible parallel optimization package, ML-PSA, that joins modern machine learning methods with multi-fidelity physics tools to solve multi-constrained and combinatorial optimization problems encountered in reactor core design.

William co-developed the crud simulation code, MAMBA, that is a coupled component of the Virtual Environment for Reactor Applications (VERA).
MAMBA enables prediction of crud-induced power shifts, a key capability in the technical portfolio of the Consortium for Advanced Simulation of Light Water Reactors (CASL).
Additionally, he develops reduced order models and applies statistical inference techniques to a variety of model calibration tasks in MAMBA and CTF.

William holds a B.Sc. in Mechanical Engineering (2013), an M.S.E. in Mechanical Engineering (2015) and a Ph.D. in Nuclear & Radiation Engineering (2018) from the University of Texas at Austin.
In his doctoral work, William developed a statistical downscaling method to capture the influence of fine-scale flow details downstream of spacer grids on the growth rate of crud in PWRs.

# Past Tutorial

**Material Property Prediction with Large Scale GNNs**
@@ -81,7 +51,6 @@ In his doctoral work, William developed a statistical downscaling method to capt
| [![MLP](https://www.ornl.gov/sites/default/files/styles/staff_profile_image_style/public/2019-11/MaxPortrait.jpg?h=e67f39ca&itok=0HIXThn1)](https://www.ornl.gov/staff-profile/massimiliano-lupo-pasini)|
| Massimiliano Lupo Pasini<br> Staff Scientist<br>Computing and Computational Sciences Directorate, ORNL |


**Abstract**

During the tutorial, we will show how to use HydraGNN (https://github.com/ORNL/HydraGNN), our scalable implementation of multi-task learning graph neural networks developed at Oak Ridge National Laboratory (ORNL) as part of the ORNL Artificial Intelligence Initiative.
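
As a rough, hypothetical sketch of the multi-task pattern that HydraGNN scales up (and not HydraGNN's actual API), the following PyTorch Geometric snippet shares one message-passing trunk across a graph-level head and a node-level head; the layer types, head names, and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv, global_mean_pool


class MultiTaskGNN(nn.Module):
    """Shared message-passing trunk with one output head per property.

    This only illustrates the shared-trunk / multi-head idea behind multi-task
    GNNs; it is not HydraGNN code.
    """

    def __init__(self, node_feat_dim: int, hidden: int = 64):
        super().__init__()
        self.conv1 = GCNConv(node_feat_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.graph_head = nn.Linear(hidden, 1)  # e.g. a graph-level property such as energy
        self.node_head = nn.Linear(hidden, 1)   # e.g. a per-node property such as atomic charge

    def forward(self, data: Data):
        x = torch.relu(self.conv1(data.x, data.edge_index))
        x = torch.relu(self.conv2(x, data.edge_index))
        # Pool node embeddings per graph; a single unbatched graph gets batch index 0.
        batch = getattr(data, "batch", None)
        if batch is None:
            batch = torch.zeros(x.size(0), dtype=torch.long, device=x.device)
        graph_pred = self.graph_head(global_mean_pool(x, batch))  # one value per graph
        node_pred = self.node_head(x)                             # one value per node
        return graph_pred, node_pred


if __name__ == "__main__":
    # A toy 3-node graph with 8-dimensional node features and undirected edges.
    x = torch.randn(3, 8)
    edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]], dtype=torch.long)
    model = MultiTaskGNN(node_feat_dim=8)
    g_pred, n_pred = model(Data(x=x, edge_index=edge_index))
    print(g_pred.shape, n_pred.shape)  # torch.Size([1, 1]) torch.Size([3, 1])
```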
@@ -179,6 +148,38 @@ Siyan Liu is a research scientist at Oak Ridge National Laboratory (ORNL), worki
He finished his Ph.D. degree in Chemical & Petroleum Engineering and worked as a postdoc at ORNL before his current role.
He has broad research interests, including developing various AI and foundation models, uncertainty quantification, HPC and scalable AI model training, AI for science applications, numerical simulation of fluid flow in porous media, computational fluid dynamics (CFD), reservoir simulation, applying data analytics and artificial intelligence to other science and engineering problems, and high-performance parallel computing.

---

**VAE-NCDE: a generative time series model for probabilistic multivariate time series forecasting**

<br>Time: 07/26/2024, 9:00 AM - 12:00 PM (ET)
<br> Virtual (Microsoft Teams)
<br>[William L Gurecky](https://www.ornl.gov/staff-profile/william-l-gurecky)
<br>Oak Ridge National Laboratory

| |
| ------- |
| [![WLG](https://www.ornl.gov/sites/default/files/styles/staff_profile_image_style/public/2023-02/2023-P01618.jpg?h=fe2061c8&itok=GnLFc3Sv)](https://www.ornl.gov/staff-profile/william-l-gurecky)|
| William L Gurecky<br> Research Scientist<br>Computing and Computational Sciences Directorate, ORNL |

**Abstract**

TBA

**Bio**

William is an R&D associate in the Nuclear Energy and Fuel Cycle Division at Oak Ridge National Laboratory.
He currently works with the Power Reactor Modeling group, developing performant software solutions to address industry-identified challenge problems.
His research interests include applying Bayesian statistics and machine learning to reactor physics problems.
As the principal investigator of a laboratory-directed research and development project, William developed a flexible parallel optimization package, ML-PSA, that joins modern machine learning methods with multi-fidelity physics tools to solve multi-constrained and combinatorial optimization problems encountered in reactor core design.

William co-developed the crud simulation code, MAMBA, that is a coupled component of the Virtual Environment for Reactor Applications (VERA).
MAMBA enables prediction of crud-induced power shifts, a key capability in the technical portfolio of the Consortium for Advanced Simulation of Light Water Reactors (CASL).
Additionally, he develops reduced order models and applies statistical inference techniques to a variety of model calibration tasks in MAMBA and CTF.

William holds a B.Sc. in Mechanical Engineering (2013), an M.S.E. in Mechanical Engineering (2015) and a Ph.D. in Nuclear & Radiation Engineering (2018) from the University of Texas at Austin.
In his doctoral work, William developed a statistical downscaling method to capture the influence of fine-scale flow details downstream of spacer grids on the growth rate of crud in PWRs.

# Schedule

Please reach out if you are interested in presenting at a future event
@@ -189,10 +190,10 @@
| 04/25/2024 | ExpM+NF: Are normalizing flows the key to unlocking the exponential mechanism? Our efforts to advance differentially private machine learning | [Robert A Bridges](https://www.ornl.gov/staff-profile/robert-bridges) |
| 06/28/2024 | PI3NN: Uncertainty Quantification for ML models | [Dan Lu](https://www.ornl.gov/staff-profile/dan-lu) <br/> [Siyan Liu](https://www.ornl.gov/staff-profile/siyan-liu) |
| 07/26/2024 | VAE-NCDE: a generative time series model for probabilistic multivariate time series forecasting | [William L Gurecky](https://www.ornl.gov/staff-profile/william-l-gurecky) |
| 08/23/2024 | ML-HSIR: Machine Learning based Hyperspectral Image Reconstruction the edge | [Narasinga Rao Miniskar](https://www.ornl.gov/staff-profile/narasinga-r-miniskar) |
| 09/27/2024 | Massively parallel training for large transformers | [Xiao Wang](https://www.ornl.gov/staff-profile/xiao-wang) |
| 10/04/2024 | PPSD: Privacy preservation for streaming data | [Olivera Kotevska](https://www.ornl.gov/staff-profile/olivera-kotevska) |
| 10/18/2024 | ML-HSIR: Machine Learning based Hyperspectral Image Reconstruction the edge | [Narasinga Rao Miniskar](https://www.ornl.gov/staff-profile/narasinga-r-miniskar) |
| 11/15/2024 | Intro to causal and explainable models in materials science | [Ayana Ghosh](https://www.ornl.gov/staff-profile/ayana-ghosh) |
| 12/13/2024 | PPSD: Privacy preservation for streaming data | [Olivera Kotevska](https://www.ornl.gov/staff-profile/olivera-kotevska) |

<a href="#top"> &#10558; Back to top</a>


