Jack's Car Rental as a Gym Environment

In this blog post, we solve a famous sequential decision problem called Jack's Car Rental by first turning it into a Gym environment and then using an RL algorithm called Policy Iteration (a form of Dynamic Programming) to find the optimal decisions to take in this environment.
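As a rough illustration of the Gym interface (not the post's actual code), here is a minimal gym-style class for Jack's Car Rental using the classic Sutton & Barto parameters; a real version would subclass `gym.Env` and declare observation and action spaces:

```python
import numpy as np

class JacksCarRental:
    """Gym-style sketch of Jack's Car Rental (Sutton & Barto, Ch. 4).
    Two locations, up to 20 cars each; $10 per rental, $2 per car moved."""
    MAX_CARS, MAX_MOVE = 20, 5
    RENT_REWARD, MOVE_COST = 10, 2

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.state = None

    def reset(self):
        self.state = (10, 10)
        return self.state

    def step(self, action):
        # action in [-5, 5]: net cars moved overnight from location 1 to 2,
        # clipped to the cars actually available at the sending location
        c1, c2 = self.state
        moved = max(-min(c2, self.MAX_MOVE), min(action, min(c1, self.MAX_MOVE)))
        c1 = min(c1 - moved, self.MAX_CARS)
        c2 = min(c2 + moved, self.MAX_CARS)
        reward = -self.MOVE_COST * abs(moved)
        # Poisson rental requests and returns at each location
        new_counts = []
        for cars, lam_req, lam_ret in [(c1, 3, 3), (c2, 4, 2)]:
            rented = min(cars, self.rng.poisson(lam_req))
            reward += self.RENT_REWARD * rented
            new_counts.append(min(cars - rented + self.rng.poisson(lam_ret),
                                  self.MAX_CARS))
        self.state = tuple(new_counts)
        return self.state, reward, False, {}
```

Policy Iteration then sweeps over all 21 × 21 states rather than sampling trajectories, so in practice the environment also exposes its transition probabilities, which this sketch omits.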

Using posterior predictive distributions to get the Average Treatment Effect (ATE) with uncertainty

Here we show how to use Stan and the brms R package to calculate the posterior predictive distribution of a covariate-adjusted average treatment effect (ATE).

Building TensorFlow 2.2 on an old PC

With the commoditization of deep learning in the form of Keras, I felt it was about time that I jumped on the Deep Learning bandwagon.

Simulating Fake Data in R

This blog post is on simulating fake data using the R package simstudy. Motivation comes from my interest in converting real datasets into synthetic ones.

Exploring Process Mining in R

In this post, we’ll explore the bupaR suite of Process Mining packages created by Gert Janssenswillen of Hasselt University.

Designing an introductory course on Causal Inference

In this blog post, I describe the introductory course on Causal Inference I pieced together using various materials available online. It combines Pearl’s causal graph approach with statistics in the style of Gelman and McElreath.

The validation set approach in caret

In this blog post, we explore how to implement the validation set approach in caret. This is the most basic form of the train/test approach in machine learning.
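For readers outside R, the validation set approach the post implements with caret boils down to a single random split: fit on one part, evaluate on the held-out part. A minimal NumPy sketch (data and split sizes are illustrative, not from the post):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100
X = rng.normal(size=(n, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=n)

# Validation set approach: one random 80/20 train/test split
idx = rng.permutation(n)
train, test = idx[:80], idx[80:]

# Fit OLS on the training set only
coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)

# Evaluate on the held-out test set
test_mse = np.mean((y[test] - X[test] @ coef) ** 2)
```

Because the estimate rests on a single split, it is noisier than k-fold cross-validation, which is the usual next step.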

Arduino Weather Station with datalogging

In this post, I show how to create an Arduino-based atmospheric sensor prototype capable of storing large amounts of data on a microSD card.

BART vs Causal forests showdown

In this post, we test both Bayesian Additive Regression Trees (BART) and causal forests (grf) on four simulated datasets of increasing complexity. May the best method win!

Improving a parametric regression model using machine learning

In this post, I explore how we can improve a parametric regression model by comparing its predictions to those of a Random Forest model. This comparison might tell us in what ways the OLS model fails to capture non-linearities and interactions between the predictors.
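The idea can be sketched outside R as well; here is a hypothetical Python version (scikit-learn in place of the post's tooling) where the gap between Random Forest and OLS predictions tracks a non-linearity the linear model misses:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(-2, 2, size=(n, 2))
# True outcome contains a squared term that OLS on raw predictors misses
y = X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.1, size=n)

ols = LinearRegression().fit(X, y)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Large systematic gaps between the two prediction sets flag regions
# where the parametric model is misspecified
gap = rf.predict(X) - ols.predict(X)
```

Plotting `gap` against each predictor (here it traces a parabola in the second predictor) suggests which transformed terms, such as a squared term, to add back into the parametric model.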