probabilistic-programming
Here are 357 public repositories matching this topic...
Are there any plans to add Zero-Inflated Poisson (ZIP) and Zero-Inflated Negative Binomial (ZINB) distributions to TFP? These are very common distributions in other packages, and they shouldn't be hard to implement.
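For reference, the ZIP pmf is easy to write down: it mixes a point mass at zero with an ordinary Poisson. A minimal pure-Python sketch, not TFP code — the parameter name `pi` for the inflation probability is just illustrative:

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-Inflated Poisson pmf: with probability pi emit a structural zero,
    otherwise draw from Poisson(lam). `pi` is an illustrative parameter name."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi * (k == 0) + (1 - pi) * poisson
```

ZINB follows the same pattern with the negative-binomial pmf in place of the Poisson term.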
Ankit Shah and I are trying to use Gen to support a project and would love the addition of a Dirichlet distribution.
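For what it's worth, a Dirichlet draw can be built from Gamma variates, which most libraries already provide. A generic Python sketch of the construction (not Gen code — Gen models are written in Julia):

```python
import random

def dirichlet_sample(alphas, rng=random):
    # Draw independent Gamma(alpha_i, 1) variates and normalize;
    # the normalized vector is Dirichlet(alphas)-distributed.
    draws = [rng.gammavariate(a, 1.0) for a in alphas]
    total = sum(draws)
    return [d / total for d in draws]
```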
TruncatedDistribution has both `low` and `high`. Why do TruncatedNormal and TruncatedCauchy only have `low`?
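For two-sided truncation of a normal, the standard inverse-CDF construction is short; a sketch using only the Python standard library (illustrative helper, not the library's API):

```python
import random
from statistics import NormalDist

def truncated_normal_sample(low, high, mu=0.0, sigma=1.0, rng=random):
    # Inverse-CDF sampling: draw uniformly on [Phi(low), Phi(high)] and map
    # the draw back through the normal quantile function.
    nd = NormalDist(mu, sigma)
    u = rng.uniform(nd.cdf(low), nd.cdf(high))
    return nd.inv_cdf(u)
```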
The current MDN example from the Edward tutorials needs small modifications to run on edward2. Documentation covering these modifications would be appreciated.
Hi,
It looks like there is support for lots of common distributions. There are a handful of other distributions which are not presently supported but could (fingers crossed) be easily implemented. Looking at [Stan's Function Reference] I see...
- Beta Binomial
- [Chi-Square](https://mc-stan.org/docs/2
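As a data point, the Beta-Binomial pmf is straightforward via log-gamma. A quick sketch, not tied to any particular library's API:

```python
import math

def log_beta(a, b):
    # log of the Beta function, via log-gamma.
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def beta_binomial_pmf(k, n, alpha, beta):
    # BetaBinomial(n, alpha, beta): a Binomial whose success probability is
    # itself Beta(alpha, beta)-distributed; computed in log space for stability.
    log_choose = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
    return math.exp(log_choose + log_beta(k + alpha, n - k + beta) - log_beta(alpha, beta))
```

With alpha = beta = 1 this reduces to the uniform distribution on {0, ..., n}, a handy sanity check.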
Improve tests
There are a variety of interesting optimisations that can be performed on kernels of the form
k(x, z) = w_1 * k_1(x, z) + w_2 * k_2(x, z) + ... + w_L * k_L(x, z)
A naive recursive implementation in terms of the current Sum and Scaled kernels hides opportunities for parallelism, both in the computation of each term and in the summation over terms.
Notable examples of kernels with th
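One way to expose that structure is to flatten the recursion into a single batched contraction. A rough NumPy sketch, where the base kernels are illustrative stand-ins rather than the library's Sum/Scaled types:

```python
import numpy as np

def sum_kernel(kernels, weights, X, Z):
    # Evaluate every base kernel's Gram matrix once, stack them, and contract
    # with the weight vector in one tensordot -- the per-term work and the
    # final summation are then exposed as batched operations.
    grams = np.stack([k(X, Z) for k in kernels])             # shape (L, n, m)
    return np.tensordot(np.asarray(weights), grams, axes=1)  # shape (n, m)

# Illustrative base kernels on 1-D inputs.
rbf = lambda X, Z: np.exp(-0.5 * (X[:, None] - Z[None, :]) ** 2)
linear = lambda X, Z: X[:, None] * Z[None, :]
```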
Plotting Docs
GPU Support
See discussion #134. It is not clear enough what integration_steps means in the context of NUTS. This is, however, very important for anyone who wants to know how "efficient" a sampler is in terms of the number of gradient evaluations. The docstring should be improved, and the quantity renamed to match what is done in the rest of the library.
See the Wikipedia table for example patterns: https://en.wikipedia.org/wiki/List_of_integrals_of_Gaussian_functions
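When implementing patterns from that table, a crude quadrature makes a handy regression check; for example, verifying the identity ∫ exp(-x²) dx = √π over the real line (illustrative helper, not an existing API):

```python
import math

def midpoint_integral(f, a, b, n=100_000):
    # Midpoint-rule quadrature, accurate enough to sanity-check the
    # closed forms in the table.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# The Gaussian tails beyond |x| = 8 are negligible (~exp(-64)), so a
# truncated interval suffices.
approx = midpoint_integral(lambda x: math.exp(-x * x), -8.0, 8.0)
```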
Pyro's HMC and NUTS implementations are feature-complete and well-tested, but they are quite slow in models like the one in our Bayesian regression tutorial that operate on small tensors, for reasons that are largely beyond our control (mostly having to do with the design and implementation of torch.autograd), which is unfortunate because these