Alien-XGBoost
xgboost/dmlc-core/src/io/s3_filesys.h
namespace dmlc {
namespace io {
/*! \brief AWS S3 filesystem */
class S3FileSystem : public FileSystem {
public:
/*! \brief destructor */
virtual ~S3FileSystem() {}
/*!
 * \brief Set the AWS access credentials used by this filesystem
 * \param aws_access_id The AWS Access Key ID
 * \param aws_secret_key The AWS Secret Access Key
 */
void SetCredentials(const std::string& aws_access_id,
                    const std::string& aws_secret_key);
/*!
* \brief get information about a path
* \param path the path to the file
xgboost/dmlc-core/tracker/yarn/src/main/java/org/apache/hadoop/yarn/dmlc/ApplicationMaster.java
private int numServer = 0;
// total number of tasks
private int numTasks;
// maximum number of attempts per task
private int maxNumAttempt = 3;
// command to launch
private String command = "";
// username
private String userName = "";
// user credentials
private Credentials credentials = null;
// application tracker hostname
private String appHostName = "";
// tracker URL
private String appTrackerUrl = "";
// tracker port
private int appTrackerPort = 0;
// whether we start to abort the application, due to whatever fatal reasons
private boolean startAbort = false;
// worker resources
xgboost/doc/tutorials/aws_yarn.md
This is a step-by-step tutorial on how to setup and run distributed [XGBoost](https://github.com/dmlc/xgboost)
on an AWS EC2 cluster. Distributed XGBoost runs on various platforms such as MPI, SGE and Hadoop YARN.
In this tutorial, we use YARN as an example since this is a widely used solution for distributed computing.
Prerequisites
-------------
We need an [AWS key-pair](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html)
to access the AWS services. Let us assume that we are using a key ```mykey``` and the corresponding permission file ```mypem.pem```.
We also need [AWS credentials](http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSGettingStartedGuide/AWSCredentials.html),
which include an `ACCESS_KEY_ID` and a `SECRET_ACCESS_KEY`.
Finally, we need an S3 bucket to host the data and the model, e.g. ```s3://mybucket/```.
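The credentials are typically handed to the job through environment variables. As a minimal sketch, using the placeholder key values from the AWS documentation (substitute your own credentials):

```shell
# Placeholder values from the AWS docs -- substitute your own credentials.
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXWEtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```

Environment variables keep the secrets out of the command line and out of checked-in scripts; make sure they are exported in the shell that launches the job.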
Setup a Hadoop YARN Cluster
---------------------------
This section shows how to start a Hadoop YARN cluster from scratch.
You can skip this step if you already have one.
We will be using [yarn-ec2](https://github.com/tqchen/yarn-ec2) to start the cluster.