Blog category: Julia
How Probabilistic Programming Allows Automatic Causal Inference

Generative models can be written with causal assumptions included. Comparing predictions from such a model gives valid estimates of causal effects, similar to more complicated estimators derived from theories of causal inference, but with a more accurate accounting of uncertainty.
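The idea in this teaser can be sketched in a few lines: once a generative model encodes the causal assumptions, the average causal effect is just the difference between its predictions under two interventions. This is a minimal sketch with a hypothetical linear data-generating process (the names `simulate`, `do_x`, and the coefficients are illustrative, not from the post):

```julia
using Statistics, Random

Random.seed!(42)

# Hypothetical generative model with causal structure z → x → y and z → y.
# `do_x` forces the treatment to a fixed value (an intervention) instead of
# letting it depend on the confounder z.
function simulate(n; do_x = nothing)
    z = randn(n)                                              # confounder
    x = do_x === nothing ? Float64.(z .+ randn(n) .> 0) : fill(Float64(do_x), n)
    y = 2.0 .* x .+ 1.5 .* z .+ randn(n)                      # true effect of x is 2.0
    return y
end

# Average causal effect: compare the model's predictions under do(x = 1) vs do(x = 0).
ace = mean(simulate(100_000; do_x = 1)) - mean(simulate(100_000; do_x = 0))
```

Because the comparison is made inside the model, where the treatment is set by intervention rather than observed, the confounder `z` cannot bias the contrast.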
The Basic Idea of Causal Inference in Julia (GLM)

Causal inference should be done in a mathematically rigorous way. A valid inference requires a long list of assumptions, like consistency, positivity, and exchangeability within levels of confounders, and procedures like standardization, inverse-probability weighting, and g-estimation.
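Of the procedures named above, standardization is the easiest to show with GLM.jl: fit an outcome model, predict every unit's outcome with the treatment set to 1 and then to 0, and average the difference. A minimal sketch on simulated data (the data-generating coefficients are hypothetical, chosen so the true effect is 2.0):

```julia
using DataFrames, GLM, Statistics, Random

Random.seed!(1)
n = 5_000
z = randn(n)                              # confounder
x = Float64.(z .+ randn(n) .> 0)          # treatment depends on z
y = 2.0 .* x .+ 1.5 .* z .+ randn(n)      # true causal effect of x is 2.0
df = DataFrame(x = x, z = z, y = y)

fit = lm(@formula(y ~ x + z), df)

# Standardization (g-formula): predict under x = 1 and x = 0 for everyone,
# keeping each unit's own confounder value, then average the difference.
y1 = predict(fit, DataFrame(x = 1.0, z = z))
y0 = predict(fit, DataFrame(x = 0.0, z = z))
ace = mean(y1 .- y0)
```

A naive comparison of treated vs untreated means would be biased upward here, because `z` raises both the chance of treatment and the outcome.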
The Basic Idea of Probabilistic Programming in Julia (Turing)

Uncertainty can be fully expressed in a mathematically rigorous way using probability distributions and Bayes' rule. Combining this with general-purpose programming, one can freely describe almost any kind of model with uncertainty that can be updated using observations.
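In Turing.jl this looks like ordinary Julia code with `~` statements for the uncertain parts. A minimal sketch, assuming a simple linear data-generating story (the model name `line` and the priors are illustrative):

```julia
using Turing, Random

Random.seed!(0)

# Priors express uncertainty about intercept, slope, and noise;
# conditioning on observed y updates them via Bayes' rule.
@model function line(x, y)
    α ~ Normal(0, 10)
    β ~ Normal(0, 10)
    σ ~ truncated(Normal(0, 1); lower = 0)
    for i in eachindex(y)
        y[i] ~ Normal(α + β * x[i], σ)
    end
end

x = collect(0.0:0.1:1.0)
y = 2 .* x .+ 1 .+ 0.1 .* randn(length(x))   # data with true slope 2, intercept 1
chain = sample(line(x, y), NUTS(), 1_000)    # posterior draws for α, β, σ
```

The result is not a point estimate but a whole posterior distribution, so every downstream prediction carries its uncertainty along.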
The Basic Idea of Deep Learning in Julia (Flux)

Deep learning methods seem to be able to fit almost any function when given enough data, parameters, and optimization cycles. And the first step is just to start nesting linear models and following the gradients of loss functions.
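"Nesting linear models and following the gradients" is almost literally what Flux.jl code looks like: `Dense` layers composed in a `Chain`, and one gradient step on a loss. A minimal sketch (layer sizes and learning rate are arbitrary choices for illustration):

```julia
using Flux

# Nested linear models: two Dense layers with a nonlinearity in between.
model = Chain(Dense(1 => 8, relu), Dense(8 => 1))

loss(m, x, y) = Flux.mse(m(x), y)

x = rand(Float32, 1, 16)           # 16 samples of a 1-dim input
y = 2 .* x .+ 1                    # toy target: a line

# Follow the gradient of the loss for one optimization step.
opt   = Flux.setup(Adam(0.01), model)
grads = Flux.gradient(m -> loss(m, x, y), model)
Flux.update!(opt, model, grads[1])
```

Repeating the last three lines over batches of data is, at its core, the whole training loop.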