Exploratory experiment class refactor, focussing on InterruptedTimeSeries
#524
base: main
**Codecov Report** ❌

Additional details and impacted files:

```diff
@@           Coverage Diff            @@
##             main     #524    +/-   ##
========================================
  Coverage   95.19%   95.20%
========================================
  Files          28       28
  Lines        2457     2462     +5
========================================
+ Hits         2339     2344     +5
  Misses        118      118
```
At the moment this is a bit of an experiment. I'm trying out a number of different ideas for refactoring the experiment class. Just to test out the idea I'm focussing on the `InterruptedTimeSeries` class. The main things I've done are:

- Moved the core estimation logic out of `__init__` to the `algorithm` method. This is not only more pythonic, but it also gives us a very nice and mostly readable method that captures the core logic of this quasi-experimental method.
- Moved the data preparation out of `__init__` to the `_build_data` method. This increases modularity and testability, and tidies things up.
- The data now lives in `self.data`, which is an `xarray.Dataset`. This keeps things tidy but also aids discoverability of the information that people want.
- `__init__` is nice and minimal. We still automatically trigger the model fitting by calling `self.algorithm`, but there is the potential to not do this if we want to enable a more traditional Bayesian workflow where we build a model and do prior/prior predictive checks before fitting the model. I'm not doing that in this refactor because it's a major workflow/API change.
- Results now carry named dimensions: `self.impact`, for example, has a `period` dimension, so if we want the post-intervention impact we can get that with `result.impact.sel(period="post")`. Mostly this will be invisible to the user, but for those doing manual interrogation of results there might be slight changes in the API to document in the notebooks. I'm not wedded to this, and we could always have temporary accessor properties to replicate previous behaviour, which we could then deprecate. (There's a small sketch of this structure just after this list.)
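To make the shape of this a bit more concrete, here's a minimal, illustrative sketch (not the actual diff): only `__init__`, `_build_data`, `algorithm`, `self.data`, `self.impact` and the `period` dimension correspond to the points above; the class name, constructor signature and the toy "pre-period mean" counterfactual are placeholders.

```python
# Illustrative sketch only, not the actual CausalPy implementation.
import numpy as np
import pandas as pd
import xarray as xr


class ITSSketch:
    def __init__(self, data: pd.DataFrame, treatment_time) -> None:
        # __init__ stays minimal: build the data, then trigger the fit.
        self.data = self._build_data(data, treatment_time)
        self.algorithm()

    def _build_data(self, data: pd.DataFrame, treatment_time) -> xr.Dataset:
        # Label each observation as pre or post intervention and keep
        # everything in one labelled Dataset for discoverability.
        period = np.where(data.index < treatment_time, "pre", "post")
        return xr.Dataset(
            {"y": ("obs_ind", data["y"].to_numpy())},
            coords={"obs_ind": data.index, "period": ("obs_ind", period)},
        )

    def algorithm(self) -> None:
        # Core quasi-experimental logic: estimate a counterfactual from the
        # pre period (here just its mean, as a placeholder for a real model)
        # and store the impact with a named "period" dimension.
        y = self.data["y"]
        counterfactual = y.where(self.data["period"] == "pre").mean()
        self.impact = (y - counterfactual).groupby(self.data["period"]).mean()


# Usage on toy data: the post-intervention impact is a labelled selection.
rng = np.random.default_rng(0)
df = pd.DataFrame({"y": rng.normal(size=20)}, index=range(20))
result = ITSSketch(df, treatment_time=15)
post_impact = result.impact.sel(period="post")
```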
On the plotting side:

a. I've separated the computation/processing of results from the plotting. So we have `get_plot_data_bayesian` and `get_plot_data_ols`, which both return data frames, and the plot functions now only ingest these data frames.

b. We now have just one `plot` method, which deals with Bayesian vs OLS models with conditional logic. The motivation for that was to avoid massive duplication, because the plots for each were so similar.

c. What I have not yet done is make the plot function ingest only the raw dataframe. At the moment it still gets a bunch of self attributes, but it would probably be better for the plot functions to just operate on data objects. I think the next step here would be to make this data an `xarray.Dataset` rather than a dataframe for greater flexibility (e.g. you can add metadata), and it also comes with some good save/load functionality from xarray. This plot refactoring is inspired by what seems to work quite well on some client projects. (A rough sketch of the split is just below.)
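Roughly, the split looks like this. Again purely illustrative: only `plot`, `get_plot_data_bayesian` and `get_plot_data_ols` are names from this PR; the attributes read off `result`, the Bayesian/OLS flag, and writing things as free functions rather than methods are all placeholders for brevity.

```python
# Illustrative sketch of the data-prep / plotting split, not the actual code.
import matplotlib.pyplot as plt
import pandas as pd


def get_plot_data_bayesian(result) -> pd.DataFrame:
    # Summarise posterior predictions into the columns the plot needs.
    return pd.DataFrame(
        {
            "obs_ind": result.data["obs_ind"].values,
            "y": result.data["y"].values,
            "prediction": result.prediction_mean,      # assumed attribute
            "hdi_lower": result.prediction_hdi_lower,  # assumed attribute
            "hdi_upper": result.prediction_hdi_upper,  # assumed attribute
        }
    )


def get_plot_data_ols(result) -> pd.DataFrame:
    # Point predictions only; no credible intervals in the OLS case.
    return pd.DataFrame(
        {
            "obs_ind": result.data["obs_ind"].values,
            "y": result.data["y"].values,
            "prediction": result.prediction,  # assumed attribute
        }
    )


def plot(result, is_bayesian: bool):
    # One plot function; conditional logic picks the data-prep helper and
    # adds the HDI band only when it exists.
    df = get_plot_data_bayesian(result) if is_bayesian else get_plot_data_ols(result)
    fig, ax = plt.subplots()
    ax.plot(df["obs_ind"], df["y"], "k.", label="observed")
    ax.plot(df["obs_ind"], df["prediction"], label="prediction")
    if is_bayesian:
        ax.fill_between(df["obs_ind"], df["hdi_lower"], df["hdi_upper"], alpha=0.25)
    ax.legend()
    return fig, ax
```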
📚 Documentation preview 📚: https://causalpy--524.org.readthedocs.build/en/524/