arviz.InferenceData.add_groups — ArviZ dev documentation

arviz.InferenceData.add_groups
InferenceData.add_groups(group_dict=None, coords=None, dims=None, warn_on_custom_groups=False, **kwargs)

Add new groups to the InferenceData object.

Parameters:
group_dict : dict of {str: dict or xarray.Dataset}, optional

Groups to be added.

coords : dict of {str: array_like}, optional

Coordinates for the dataset.

dims : dict of {str: list of str}, optional

Dimensions of each variable. The keys are variable names, the values are lists of dimension names.

warn_on_custom_groups : bool, default False

Emit a warning when custom groups are present in the InferenceData. A “custom group” is any group whose name is not defined in the InferenceData schema specification.

kwargs : dict, optional

The keyword arguments form of group_dict. One of group_dict or kwargs must be provided.
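
As a rough sketch of how these parameters fit together (the group, variable, and dimension names below are hypothetical and do not come from any ArviZ example dataset), a dict of raw arrays can be turned into a labeled group by passing coords and dims alongside it:

import numpy as np
import arviz as az
rng = np.random.default_rng(0)
idata = az.from_dict(posterior={"mu": rng.normal(size=(4, 100))})
# hypothetical per-observation values with shape (chain, draw, obs_id)
values = rng.normal(size=(4, 100, 3))
idata.add_groups(
    {"predictions": {"y_pred": values}},
    coords={"obs_id": ["a", "b", "c"]},
    dims={"y_pred": ["obs_id"]},
)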

See also

extend

Extend InferenceData with groups from another InferenceData.

concat

Concatenate InferenceData objects.

Examples

Add a log_likelihood group to the “rugby” example InferenceData after loading.

import arviz as az
idata = az.load_arviz_data("rugby")
del idata.log_likelihood  # remove the existing group so it can be added back below
idata2 = idata.copy()  # second copy, used in the labeled-data example further down
post = idata.posterior
obs = idata.observed_data
idata

Knowing the model, we could compute it manually. In this case, however, we will generate random samples with the right shape.

import numpy as np
rng = np.random.default_rng(73)
# random values standing in for pointwise log likelihoods, shape (chain, draw, match)
ary = rng.normal(size=(post.sizes["chain"], post.sizes["draw"], obs.sizes["match"]))
idata.add_groups(
    log_likelihood={"home_points": ary},
    dims={"home_points": ["match"]},
)
idata

This is fine if we have raw data, but it is a bit inconvenient if we already start from labeled data. Why provide dims and coords manually again? Let’s generate a fake log likelihood (it doesn’t match the model, but it serves the same illustrative purpose here) directly from the posterior and observed_data groups:

import xarray as xr
from xarray_einstats.stats import XrDiscreteRV
from scipy.stats import poisson
dist = XrDiscreteRV(poisson, np.exp(post["atts"]))
# the result is a labeled Dataset, so its dims and coords come along automatically
log_lik = dist.logpmf(obs["home_points"]).to_dataset(name="home_points")
idata2.add_groups({"log_likelihood": log_lik})
idata2

Note that in the first example we have used the kwargs argument and in the second we have used the group_dict one.
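
As a side-by-side illustration of those two calling styles (shown here only for comparison; on a given object run just one of them, since the group is only added once), both calls below would add the same group:

# kwargs form, as in the first example
idata.add_groups(log_likelihood={"home_points": ary}, dims={"home_points": ["match"]})
# group_dict form, as in the second example
idata.add_groups({"log_likelihood": {"home_points": ary}}, dims={"home_points": ["match"]})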

