Variational Causal Inference

This repository is the official implementation of Counterfactual Generative Modeling with Variational Causal Inference (ICLR 2025).

@article{wu2024counterfactual,
  title={Counterfactual Generative Modeling with Variational Causal Inference},
  author={Wu, Yulun and McConnell, Louie and Iriondo, Claudia},
  journal={International Conference on Learning Representations},
  year={2025}
}
1. Create Conda Environment
conda config --append channels conda-forge
conda create -n vci-env python=3.9
conda activate vci-env
pip install -r requirements.txt
2. Install Learning Libraries

  * Make sure to install the right versions for your toolkit (see the example sketched below).
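
For example, assuming this repository uses PyTorch as its learning library and your machine has a CUDA 11.8 toolkit, a typical install might look like the line below; consult the official PyTorch installation guide for the command that matches your exact setup.

pip install torch --index-url https://download.pytorch.org/whl/cu118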

Visit our resource site to download the datasets.

Single-cell Perturbation Dataset

Download the contents of cell/ into datasets. To see how to process your own dataset, download the contents of raw/ into datasets and follow the examples. A clean example of data preparation can be found in SciplexPrep.ipynb. For an example of data preparation on a messier dataset with thorough analysis and visualizations, see MarsonPrep.ipynb.

In summary, the preparation procedure follows the steps demonstrated in these notebooks.
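
As a rough illustration only (not the exact procedure used in the notebooks), preparing an .h5ad single-cell dataset with scanpy/anndata typically looks like the sketch below; the file path and annotation column below are hypothetical placeholders, and the actual field names expected by the training code are shown in SciplexPrep.ipynb and MarsonPrep.ipynb.

import scanpy as sc

# Load the raw single-cell dataset (path is a placeholder)
adata = sc.read_h5ad("datasets/raw/example.h5ad")

# Basic quality control and normalization
sc.pp.filter_cells(adata, min_genes=200)
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)

# Keep a manageable number of informative genes
sc.pp.highly_variable_genes(adata, n_top_genes=2000, subset=True)

# Hypothetical annotation step: the training code expects perturbation and
# covariate labels in adata.obs; the exact field names are in the notebooks
adata.obs["perturbation"] = "control"  # placeholder label

adata.write("datasets/example_prepared.h5ad")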

Once the environment is set up and the data are prepared, launch training with the provided run script.

A list of flags for experimenting with different network parameters can be found in the run files and in main.py. The run log and models are saved under *artifact_path*/saves, and the tensorboard log is saved under *artifact_path*/runs.
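
For example, assuming *artifact_path* points to your output directory, training can be monitored by launching tensorboard against the runs folder:

tensorboard --logdir <artifact_path>/runs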

The VCI framework centers on its training loss and propagation workflow rather than on any specific model architecture. Practitioners are therefore free to use the latest developments in vision models, for example, or any architecture of their liking, by simply replacing the encoder $q_\phi$ in the encoder constructor and the decoder $p_\theta$ in the decoder constructor with the desired models. Note that if the desired models have a different output format, other class methods might also need to be updated; for example, our hierarchical models return a tuple instead of a tensor, and several methods are adapted accordingly in the corresponding wrapper.
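
As a rough sketch of the idea (the actual constructor names and expected output formats are defined in this repository's model code, so treat the class and method names below as hypothetical), a drop-in PyTorch encoder only needs to preserve the output format the wrapper expects:

import torch
import torch.nn as nn

# Hypothetical drop-in encoder q_phi: any architecture works as long as it
# returns outputs in the format the wrapper class expects
class MyEncoder(nn.Module):
    def __init__(self, in_dim, latent_dim, hidden_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.mean = nn.Linear(hidden_dim, latent_dim)
        self.log_var = nn.Linear(hidden_dim, latent_dim)

    def forward(self, x):
        h = self.backbone(x)
        # Returns a (mean, log-variance) pair; if your model returns a
        # different structure, adapt the wrapper's other methods as noted above
        return self.mean(h), self.log_var(h)

A custom decoder $p_\theta$ would be swapped in the same way.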

Alternatively, researchers who would like to incorporate our training loss and propagation workflow into their own codebase can do so by adapting the loss method and the forward method into their own module.
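
Structurally, such an adaptation amounts to exposing a forward pass that produces the quantities the objective needs and a loss method computed from them. The skeleton below only illustrates where the two methods plug into an ordinary PyTorch training step; its forward and loss bodies are generic placeholders, not the VCI objective.

import torch
import torch.nn as nn

# Skeleton wrapper: replace forward() and loss() with the logic adapted
# from this repository's module; the bodies here are generic placeholders
class WrappedModel(nn.Module):
    def __init__(self, in_dim=32, latent_dim=8):
        super().__init__()
        self.encoder = nn.Linear(in_dim, latent_dim)
        self.decoder = nn.Linear(latent_dim, in_dim)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

    def loss(self, outputs, x):
        recon, _ = outputs
        return nn.functional.mse_loss(recon, x)  # placeholder objective

model = WrappedModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(16, 32)            # toy batch
outputs = model(x)                 # adapted forward pass goes here
loss = model.loss(outputs, x)      # adapted training loss goes here
optimizer.zero_grad()
loss.backward()
optimizer.step()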

Contributions are welcome! All content here is licensed under the MIT license.

