vignette: >
  %\VignetteIndexEntry{Discover your data}
  %\VignetteEngine{knitr::rmarkdown}
  \usepackage[utf8]{inputenc}
---
Understand your dataset with XGBoost
====================================
Introduction
------------
The purpose of this vignette is to show you how to use **Xgboost** to discover and understand your own dataset better.
This vignette is not about predicting anything (see [Xgboost presentation](https://github.com/dmlc/xgboost/blob/master/R-package/vignettes/xgboostPresentation.Rmd)). We will explain how to use **Xgboost** to highlight the *link* between the *features...
Package loading:
```{r libLoading, results='hold', message=F, warning=F}
require(xgboost)
require(Matrix)
require(data.table)
if (!require('vcd')) install.packages('vcd')
```
> The **vcd** package is used only for one of its embedded datasets.
Preparation of the dataset
--------------------------
### Numeric vs. categorical variables
**Xgboost** manages only `numeric` vectors.
What to do when you have *categorical* data?
A *categorical* variable has a fixed number of different values. For instance, if a variable called *Colour* can have only one of these three values, *red*, *blue* or *green*, then *Colour* is a *categorical* variable.
> In **R**, a *categorical* variable is called `factor`.
>
> Type `?factor` in the console for more information.
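As a quick toy illustration (the vector below is made up and is not part of the dataset used later):
```{r}
# A character vector turned into a factor: a fixed set of levels,
# stored internally as integer codes
colours <- factor(c("red", "blue", "green", "blue"))
levels(colours)      # the fixed set of possible values
as.integer(colours)  # internal codes; their numeric order carries no meaning
```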
To answer the question above we will convert *categorical* variables to `numeric` ones.
### Conversion from categorical to numeric variables
#### Looking at the raw data
In this vignette we will see how to transform a *dense* `data.frame` (*dense* = few zeros in the matrix) with *categorical* variables into a very *sparse* matrix (*sparse* = lots of zeros in the matrix) of `numeric` features.
The method we are going to see is usually called [one-hot encoding](http://en.wikipedia.org/wiki/One-hot).
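Before applying it to a real dataset, here is a minimal sketch of what one-hot encoding produces, using a made-up `colour` factor and base **R**'s `model.matrix()`:
```{r}
# One-hot encoding: one binary (0/1) column per level of the factor.
# The "-1" removes the intercept so every level gets its own column.
colour <- factor(c("red", "blue", "green", "red"))
model.matrix(~ colour - 1)
```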
The first step is to load the `Arthritis` dataset into memory and wrap it with the `data.table` package.
```{r, results='hide'}
data(Arthritis)
df <- data.table(Arthritis, keep.rownames = F)
```
> `data.table` is 100% compliant with **R** `data.frame` but its syntax is more consistent and its performance for large datasets is [best in class](http://stackoverflow.com/questions/21435339/data-table-vs-dplyr-can-one-do-something-well-the-other-ca...
The first thing we want to do is to have a look at the first few lines of the `data.table`:
```{r}
head(df)
```
Now we will check the format of each column.
```{r}
str(df)
```
2 columns have `factor` type, one has `ordinal` type.
> `ordinal` variable :
>
> * can take a limited number of values (like `factor`) ;
> * these values are ordered (unlike `factor`). Here these ordered values are: `Marked > Some > None`
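A minimal illustration of an ordered factor (toy values mirroring the levels above):
```{r}
# An ordered factor: like a factor, but its levels have a rank
improved <- factor(c("None", "Some", "Marked"),
                   levels = c("None", "Some", "Marked"), ordered = TRUE)
improved
improved[1] < improved[3]  # comparisons are allowed because the levels are ordered
```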
#### Creation of new features based on old ones
We will add some new *categorical* features and see if they help.
##### Grouping per 10 years
For the first feature we create groups of age by rounding the real age.
Note that we transform it to `factor` so the algorithm treats these age groups as independent values.
Therefore, 20 is not closer to 30 than to 60. In short, the distance between ages is lost in this transformation.
```{r}
head(df[,AgeDiscret := as.factor(round(Age/10,0))])
```
##### Random split into two groups
Following is an even stronger simplification of the real age with an arbitrary split at 30 years old. We choose this value **based on nothing**. We will see later if simplifying the information based on arbitrary values is a good strategy (you may al...
```{r}
head(df[,AgeCat:= as.factor(ifelse(Age > 30, "Old", "Young"))])
```
##### Risks in adding correlated features
These new features are highly correlated with the `Age` feature because they are simple transformations of it.
For many machine learning algorithms, using correlated features is not a good idea. It may sometimes make predictions less accurate, and most of the time it makes interpretation of the model almost impossible. GLM, for instance, assumes that the features ...
Fortunately, decision tree algorithms (including boosted trees) are very robust to correlated features. Therefore, we don't need to do anything special to handle them.
##### Cleaning data
We remove ID as there is nothing to learn from this feature (it would just add some noise).
```{r, results='hide'}
df[,ID:=NULL]
```
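The excerpt above skips the intermediate steps that produce the objects used below (`importance` and `output_vector`). The following is a minimal sketch of those steps, assuming the usual workflow: one-hot encode with `sparse.model.matrix()`, build a binary label vector, train a small boosted model and compute the feature importance. The parameters are illustrative, not tuned.
```{r, results='hide'}
# One-hot encode every column except the label (Improved); "-1" drops the intercept
sparse_matrix <- sparse.model.matrix(Improved ~ . - 1, data = df)
# Binary label: did the treatment lead to a "Marked" improvement?
output_vector <- as.numeric(df[, Improved] == "Marked")
# Train a small boosted tree model (illustrative parameters)
bst <- xgboost(data = sparse_matrix, label = output_vector, max_depth = 4,
               eta = 1, nrounds = 10, objective = "binary:logistic")
# Per-feature importance matrix used in the plot below
importance <- xgb.importance(feature_names = colnames(sparse_matrix), model = bst)
```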
All these things are nice, but it would be even better to plot the results.
```{r, fig.width=8, fig.height=5, fig.align='center'}
xgb.plot.importance(importance_matrix = importance)
```
Features have automatically been divided into 2 clusters: the interesting features... and the others.
> Depending on the dataset and the learning parameters you may have more than two clusters. The default is to limit them to `10`, but you can increase this limit. Look at the function documentation for more information.
According to the plot above, the most important features in this dataset for predicting whether the treatment will work are:
* the Age;
* having received a placebo or not;
* Sex comes third, but it is already part of the *not interesting* features group;
* then come our generated features (AgeDiscret). We can see that their contribution is very low.
### Do these results make sense?
Let's check the **Chi2** statistic between each of these features and the label.
A higher **Chi2** value means a stronger relationship between the feature and the label.
```{r, warning=FALSE, message=FALSE}
c2 <- chisq.test(df$Age, output_vector)
print(c2)
```
The **Chi2** statistic between Age and the illness disappearing is **`r round(c2$statistic, 2)`**.
```{r, warning=FALSE, message=FALSE}
c2 <- chisq.test(df$AgeDiscret, output_vector)
print(c2)
```
Our first simplification of Age gives a **Chi2** statistic of **`r round(c2$statistic, 2)`**.
```{r, warning=FALSE, message=FALSE}
c2 <- chisq.test(df$AgeCat, output_vector)
print(c2)
```
The perfectly random split I did between young and old at 30 years old has a low **Chi2** of **`r round(c2$statistic, 2)`**. It's a result we might expect, as maybe in my mind being > 30 years means being old (I am 32 and starting to feel old, this may expl...
Moral of the story: don't let your *gut* lower the quality of your model.
The expression *data science* contains the word *science* :-)
Conclusion
----------
As you can see, in general *destroying information by simplifying it won't improve your model*. **Chi2** just demonstrates that.
But in more complex cases, creating a new feature from an existing one which makes the link with the outcome more obvious may help the algorithm and improve the model.
The case studied here is not complex enough to show that. Check the [Kaggle website](http://www.kaggle.com/) for some challenging datasets. However, it's almost always worse when you add some arbitrary rules.
Moreover, you can notice that even though we added some new features that are not useful and are highly correlated with other features, the boosted tree algorithm was still able to choose the best one, which in this case is the Age.
A linear model may not be that smart in this scenario.
Special Note: What about Random Forests™?
-----------------------------------------
As you may know, the [Random Forests™](http://en.wikipedia.org/wiki/Random_forest) algorithm is a cousin of boosting and both are part of the [ensemble learning](http://en.wikipedia.org/wiki/Ensemble_learning) family.
Both train several decision trees for one dataset. The *main* difference is that in Random Forests™, trees are independent, whereas in boosting, tree `N+1` focuses its learning on the loss (<=> what has not been well modeled by tree `N`).
This difference has an impact on a corner case of feature importance analysis: *correlated features*.
Imagine two perfectly correlated features, feature `A` and feature `B`. For one specific tree, if the algorithm needs one of them, it will choose randomly (true in both boosting and Random Forests™).
However, in Random Forests™ this random choice will be made for each tree, because each tree is independent from the others. Therefore, approximately, depending on your parameters, 50% of the trees will choose feature `A` and the other 50% will c...
In boosting, when a specific link between feature and outcome has been learned by the algorithm, it will try not to refocus on it (in theory that is what happens; reality is not always that simple). Therefore, all the importance will be on feature `A`...
If you want to try the Random Forests™ algorithm, you can tweak **Xgboost** parameters!
For instance, to compute a model with 1000 trees, with a 0.5 factor on sampling rows and columns:
```{r, warning=FALSE, message=FALSE}
data(agaricus.train, package='xgboost')
data(agaricus.test, package='xgboost')
train <- agaricus.train
test <- agaricus.test
#Random Forest™ - 1000 trees
bst <- xgboost(data = train$data, label = train$label, max_depth = 4, num_parallel_tree = 1000, subsample = 0.5, colsample_bytree =0.5, nrounds = 1, objective = "binary:logistic")
#Boosting - 3 rounds
bst <- xgboost(data = train$data, label = train$label, max_depth = 4, nrounds = 3, objective = "binary:logistic")
```
> Note that for the Random Forests™ model the parameter `nrounds` is set to `1`.
> [**Random Forests™**](https://www.stat.berkeley.edu/~breiman/RandomForests/cc_papers.htm) is a trademark of Leo Breiman and Adele Cutler and is licensed exclusively to Salford Systems for the commercial release of the software.