Alien-XGBoost
xgboost/plugin/updater_gpu/README.md
Training time on 1,000,000 rows x 50 columns with 500 boosting iterations and a 0.25/0.75 test/train split, on an i7-6700K CPU @ 4.00GHz and a Pascal Titan X.
| tree_method | Time (s) |
| --- | --- |
| gpu_hist | 13.87 |
| hist | 63.55 |
| gpu_exact | 161.08 |
| exact | 1082.20 |
[See here](http://dmlc.ml/2016/12/14/GPU-accelerated-xgboost.html) for additional performance benchmarks of the 'gpu_exact' tree_method.
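The tree_method values above are ordinary booster parameters, so switching between CPU and GPU tree construction is a one-parameter change. As a rough sketch (the mushroom demo, its relative paths, and the key=value override syntax are assumptions about the standard xgboost repository layout and CLI, not part of this README):
```bash
# From the xgboost demo/binary_classification directory, after building the CLI binary
$ ../../xgboost mushroom.conf tree_method=gpu_hist
```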
## Test
To run the Python tests:
```bash
$ python -m nose test/python/
```
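If nose is not already installed, it can be added with pip first (a generic Python packaging step, not specific to this plugin):
```bash
$ pip install nose
```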
Google tests can be enabled by passing `-DGOOGLE_TEST=ON` when building with CMake.
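For example, a plausible configure-and-build sequence that enables both the GPU plugin and the Google tests (only flags already shown in this README are used; the rest mirrors the Linux build steps below):
```bash
$ mkdir build && cd build
$ cmake .. -DPLUGIN_UPDATER_GPU=ON -DGOOGLE_TEST=ON
$ make -j
```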
## Dependencies
A CUDA-capable GPU with compute capability 3.5 or higher.
Building the plug-in requires CUDA Toolkit 7.5 or later (https://developer.nvidia.com/cuda-downloads).
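A quick way to sanity-check these requirements (nvcc and nvidia-smi ship with the CUDA Toolkit and the NVIDIA driver respectively; neither command is specific to this plugin):
```bash
# Print the installed CUDA Toolkit version; it should report 7.5 or later
$ nvcc --version
# List visible GPUs; check their compute capability against NVIDIA's documentation
$ nvidia-smi
```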
## Build
On Linux, from the command line in the xgboost directory:
```bash
$ mkdir build
$ cd build
$ cmake .. -DPLUGIN_UPDATER_GPU=ON
$ make -j
```
On Windows, first list the generators available to CMake and choose one with [arch] replaced by Win64:
```bash
$ cmake --help
```
Then run cmake as:
```bash
$ mkdir build
$ cd build
$ cmake .. -G"Visual Studio 14 2015 Win64" -DPLUGIN_UPDATER_GPU=ON
```
CMake will create an xgboost.sln solution file in the build directory. Build this solution in Release mode as an x64 build.
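If you prefer to stay on the command line instead of opening the solution in the IDE, the generated project can also be built through CMake itself (cmake --build is a standard CMake facility rather than anything specific to this plugin):
```bash
$ cmake --build . --config Release
```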
Visual Studio Community 2015, which is supported by the CUDA Toolkit (http://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/#axzz4isREr2nS), can be downloaded from https://my.visualstudio.com/Downloads?q=Visual%20Studio%20Community%202015 . You ...
### For other NCCL libraries
On some systems the NCCL libraries are specific to the platform (IBM Power or nvidia-docker) and can enable the use of NVLink (between GPUs, or even between GPUs and system memory). In that case, you will want to avoid the static NCCL library by changing...
### For Developers!
If you want to build only for specific GPUs, e.g. GP100 and GP102,
whose compute capabilities are 6.0 and 6.1 (passed as 60 and 61) respectively:
```bash
$ cmake .. -DPLUGIN_UPDATER_GPU=ON -DGPU_COMPUTE_VER="60;61"
```
### Using make
The usual 'make' flow is also supported for building the GPU-enabled tree construction plugin. It is currently only tested on Linux. From the xgboost directory:
```bash
# make sure CUDA SDK bin directory is in the 'PATH' env variable
$ make -j PLUGIN_UPDATER_GPU=ON
```
As with cmake, if you want to build only for specific GPUs:
```bash
$ make -j PLUGIN_UPDATER_GPU=ON GPU_COMPUTE_VER="60 61"
```
## Changelog
##### 2017/8/14
* Added GPU accelerated prediction. Considerably improved performance when using test/eval sets.
##### 2017/7/10
* Memory performance improved 4x for gpu_hist
##### 2017/6/26
* Change API to use tree_method parameter
* Increase required cmake version to 3.5
* Add compute arch 3.5 to default archs
* Set default n_gpus to 1
##### 2017/6/5
* Multi-GPU support for histogram method using NVIDIA NCCL.
##### 2017/5/31
* Faster version of the grow_gpu plugin
* Added support for building gpu plugin through 'make' flow too
##### 2017/5/19
* Further performance enhancements for histogram method.
##### 2017/5/5
* Histogram performance improvements
* Fix gcc build issues
##### 2017/4/25
* Add fast histogram algorithm
* Fix Linux build
* Add 'gpu_id' parameter
## References
[Mitchell, Rory, and Eibe Frank. Accelerating the XGBoost algorithm using GPU computing. No. e2911v1. PeerJ Preprints, 2017.](https://peerj.com/preprints/2911/)
## Author
Rory Mitchell
Jonathan C. McKinney
Shankara Rao Thejaswi Nanditale
Vinay Deshpande
... and the rest of the H2O.ai and NVIDIA team.
Please report bugs to the xgboost/issues page.