library(dplyr)  # for %>%, select(), and slice()

df <- readRDS('../../Data/Session_6_models.rds')
# html_df() is the slides' helper for rendering tables
head(df) %>% select(-pred_F, -pred_S) %>% slice(1:2) %>% html_df()
| Test | AAER | pred_FS | pred_BCE | pred_lmin | pred_l1se | pred_xgb |
|------|------|---------|----------|-----------|-----------|----------|
| 0 | 0 | 0.0395418 | 0.0661011 | 0.0301550 | 0.0296152 | 0.0478672 |
| 0 | 0 | 0.0173693 | 0.0344585 | 0.0328011 | 0.0309861 | 0.0616048 |
library(xgboost)
# Prep data
train_x <- model.matrix(AAER ~ ., data=df[df$Test==0,-1])[,-1]
train_y <- model.frame(AAER ~ ., data=df[df$Test==0,])[,"AAER"]
test_x <- model.matrix(AAER ~ ., data=df[df$Test==1,-1])[,-1]
test_y <- model.frame(AAER ~ ., data=df[df$Test==1,])[,"AAER"]
set.seed(468435)  # for reproducibility
xgbCV <- xgb.cv(max_depth = 5, eta = 0.10, gamma = 5, min_child_weight = 4,
                subsample = 0.57, objective = "binary:logistic", data = train_x,
                label = train_y, nrounds = 100, eval_metric = "auc", nfold = 10,
                stratified = TRUE, verbose = 0)
fit_ens <- xgboost(params = xgbCV$params, data = train_x, label = train_y,
                   nrounds = which.max(xgbCV$evaluation_log$test_auc_mean),
                   verbose = 0)
aucs # Out of sample
## Ensemble Logit (BCE) Lasso (lambda.min)
## 0.8271003 0.7599594 0.7290185
## XGBoost
## 0.8083503
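The `aucs` vector above was computed earlier in the session; a minimal sketch of how it could be reproduced from the test-set predictions is below, assuming the ModelMetrics package for AUC (the slides' own AUC code is not shown here).

library(ModelMetrics)

test <- df[df$Test == 1, ]            # out-of-sample observations
pred_ens <- predict(fit_ens, test_x)  # ensemble predictions on the test set

aucs <- c("Ensemble"           = auc(test_y, pred_ens),
          "Logit (BCE)"        = auc(test_y, test$pred_BCE),
          "Lasso (lambda.min)" = auc(test_y, test$pred_lmin),
          "XGBoost"            = auc(test_y, test$pred_xgb))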
# Build a DMatrix and pull the feature names for the importance calculation
xgb.train.data <- xgb.DMatrix(train_x, label = train_y, missing = NA)
col_names <- colnames(train_x)
imp <- xgb.importance(col_names, fit_ens)
# Variable importance
xgb.plot.importance(imp)
Recall the tradeoff between complexity and accuracy!
Example: In 2009, Netflix awarded a $1M prize to the BellKor’s Pragmatic Chaos team for beating Netflix’s own user-preference algorithm by more than 10%. The winning algorithm was so complex that Netflix never used it; instead, it deployed a simpler algorithm that achieved an 8% improvement.
Dark knowledge
How did they do this?
There are many ways to ensemble models, and there is no definitive guide as to which works best. Ensembling may prove useful in the group project, however; a simple averaging sketch follows.
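As one hypothetical example (not the stacked XGBoost approach used above), a simple equal-weight average of the individual models' out-of-sample predictions can itself serve as an ensemble. The prediction columns are those shown in the earlier table, and ModelMetrics::auc() is assumed for scoring.

# Hypothetical sketch: an equal-weight average of the individual models' predictions
test <- df[df$Test == 1, ]
pred_avg <- rowMeans(test[, c("pred_BCE", "pred_lmin", "pred_l1se", "pred_xgb")])
ModelMetrics::auc(test$AAER, pred_avg)  # out-of-sample AUC of the simple average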
What was the issue here? Where might similar issues crop up in business?
Fairness requires considering different perspectives and identifying which of those perspectives matter most from an ethical standpoint
A good article with examples of the above: Algorithms are great and all, but they can also ruin lives
Compares a variety of unintended associations (top) and intended associations (bottom) across Global Vectors (GloVe) and the Universal Sentence Encoder (USE)
What risks does such a system pose?
How would you feel if a similar system was implemented in Singapore?
[Withheld from all public copies]
What could go wrong if the Uber data wasn’t anonymized?
Both Allman & Paxson, and Partridge warn against relying on the anonymisation of data since deanonymisation techniques are often surprisingly powerful. Robust anonymisation of data is difficult, particularly when it has high dimensionality, as the anonymisation is likely to lead to an unacceptable level of data loss [3]. – TPHCB 2017
Also, note the existence of Singapore’s Personal Data Protection Act (PDPA)
What risks does this pose? Consider contexts outside Singapore as well.
By iterating repeatedly, the generative network can find a strategy that generally circumvents the discriminative network
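A rough toy illustration of that iterative dynamic (hypothetical, not from the slides): a one-parameter “generator” repeatedly shifts its output until a logistic-regression “discriminator” can no longer separate fake from real data.

# Toy adversarial loop (illustrative only): the generator's single parameter
# g_mu is nudged toward whatever the discriminator currently labels as "real"
set.seed(1)
real <- rnorm(1000, mean = 2)   # "real" data
g_mu <- -2                      # generator's starting parameter

for (i in 1:50) {
  fake <- rnorm(1000, mean = g_mu)                       # generator output
  d_data <- data.frame(x = c(real, fake), y = rep(c(1, 0), each = 1000))
  disc <- glm(y ~ x, data = d_data, family = binomial)   # discriminator
  # Move g_mu in the direction the discriminator associates with "real"
  g_mu <- g_mu + 0.2 * unname(sign(coef(disc)["x"]))
}
g_mu  # ends up near 2: the fakes are now hard to distinguish from the real data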
“The collection, or use, of a dataset of illicit origin to support research can be advantageous. For example, legitimate access to data may not be possible, or the reuse of data of illicit origin is likely to require fewer resources than collecting data again from scratch. In addition, the sharing and reuse of existing datasets aids reproducibility, an important scientific goal. The disadvantage is that ethical and legal questions may arise as a result of the use of such data” (source)
For experiments, see The Belmont Report; for electronic data, see The Menlo Report
Today, we: