Last Updated on August 21, 2019
XGBoost is a library for developing very fast and accurate gradient boosting models.
It is a library at the center of many winning solutions in Kaggle data science competitions.
In this tutorial, you will discover how to install the XGBoost library for Python on macOS.
Kick-start your project with my new book XGBoost With Python, including step-by-step tutorials and the Python source code files for all examples.
Let’s get started.

How to Install XGBoost for Python on macOS
Photo by auntjojo, some rights reserved.
Tutorial Overview
This tutorial is divided into 3 parts; they are:
- Install MacPorts
- Build XGBoost
- Install XGBoost
Note: I have used this procedure for years on a range of different macOS versions and it has not changed. This tutorial was written and tested on macOS High Sierra (10.13.1).
1. Install MacPorts
You need GCC and a Python environment installed in order to build and install XGBoost for Python.
I recommend GCC 7 and Python 3.6, both installed using MacPorts.
- 1. For help installing MacPorts and a Python environment step-by-step, see this tutorial:
>> How to Install a Python 3 Environment on Mac OS X for Machine Learning and Deep Learning
- 2. After MacPorts and a working Python environment are installed, you can install and select GCC 7 as follows:
sudo port install gcc7
sudo port select --set gcc mp-gcc7
- 3. Confirm your GCC installation was successful as follows:
gcc -v
You should see the version of GCC printed; for example:
..
gcc version 7.2.0 (MacPorts gcc7 7.2.0_0)
What version did you see?
Let me know in the comments below.
2. Build XGBoost
The next step is to download and compile XGBoost for your system.
- 1. First, check out the code repository from GitHub:
git clone --recursive https://github.com/dmlc/xgboost
- 2. Change into the xgboost directory.
cd xgboost/
- 3. Copy the configuration we intend to use to compile XGBoost into position.
cp make/config.mk ./config.mk
- 4. Compile XGBoost; this requires that you specify the number of cores on your system (e.g. 8, change as needed).
make -j8
The build process may take a minute and should not produce any error messages, although you may see some warnings that you can safely ignore.
For example, the last snippet of the compilation might look as follows:
...
a - build/learner.o
a - build/logging.o
a - build/c_api/c_api.o
a - build/c_api/c_api_error.o
a - build/common/common.o
a - build/common/hist_util.o
a - build/data/data.o
a - build/data/simple_csr_source.o
a - build/data/simple_dmatrix.o
a - build/data/sparse_page_dmatrix.o
a - build/data/sparse_page_raw_format.o
a - build/data/sparse_page_source.o
a - build/data/sparse_page_writer.o
a - build/gbm/gblinear.o
a - build/gbm/gbm.o
a - build/gbm/gbtree.o
a - build/metric/elementwise_metric.o
a - build/metric/metric.o
a - build/metric/multiclass_metric.o
a - build/metric/rank_metric.o
a - build/objective/multiclass_obj.o
a - build/objective/objective.o
a - build/objective/rank_obj.o
a - build/objective/regression_obj.o
a - build/predictor/cpu_predictor.o
a - build/predictor/predictor.o
a - build/tree/tree_model.o
a - build/tree/tree_updater.o
a - build/tree/updater_colmaker.o
a - build/tree/updater_fast_hist.o
a - build/tree/updater_histmaker.o
a - build/tree/updater_prune.o
a - build/tree/updater_refresh.o
a - build/tree/updater_skmaker.o
a - build/tree/updater_sync.o
c++ -std=c++11 -Wall -Wno-unknown-pragmas -Iinclude -Idmlc-core/include -Irabit/include -I/include -O3 -funroll-loops -msse2 -fPIC -fopenmp -o xgboost build/cli_main.o build/learner.o build/logging.o build/c_api/c_api.o build/c_api/c_api_error.o build/common/common.o build/common/hist_util.o build/data/data.o build/data/simple_csr_source.o build/data/simple_dmatrix.o build/data/sparse_page_dmatrix.o build/data/sparse_page_raw_format.o build/data/sparse_page_source.o build/data/sparse_page_writer.o build/gbm/gblinear.o build/gbm/gbm.o build/gbm/gbtree.o build/metric/elementwise_metric.o build/metric/metric.o build/metric/multiclass_metric.o build/metric/rank_metric.o build/objective/multiclass_obj.o build/objective/objective.o build/objective/rank_obj.o build/objective/regression_obj.o build/predictor/cpu_predictor.o build/predictor/predictor.o build/tree/tree_model.o build/tree/tree_updater.o build/tree/updater_colmaker.o build/tree/updater_fast_hist.o build/tree/updater_histmaker.o build/tree/updater_prune.o build/tree/updater_refresh.o build/tree/updater_skmaker.o build/tree/updater_sync.o dmlc-core/libdmlc.a rabit/lib/librabit.a -pthread -lm -fopenmp
Did this step work for you?
Let me know in the comments below.
3. Install XGBoost
You are now ready to install XGBoost on your system.
- 1. Change directory into the Python package of the xgboost project.
cd python-package
- 2. Install the Python XGBoost package.
sudo python setup.py install
The installation is very fast.
For example, at the end of the installation, you may see messages like the following:
...
Installed /opt/local/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/xgboost-0.6-py3.6.egg
Processing dependencies for xgboost==0.6
Searching for scipy==1.0.0
Best match: scipy 1.0.0
Adding scipy 1.0.0 to easy-install.pth file
Using /opt/local/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
Searching for numpy==1.13.3
Best match: numpy 1.13.3
Adding numpy 1.13.3 to easy-install.pth file
Using /opt/local/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages
Finished processing dependencies for xgboost==0.6
- 3. Confirm that the installation was successful by printing the xgboost version, which requires the library to be loaded.
Save the following code to a file called version.py.
import xgboost
print("xgboost", xgboost.__version__)
Run the script from the command line:
python version.py
You should see the XGBoost version printed to screen:
xgboost 0.6
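If you want to go one step further, you can run a quick smoke test that trains a tiny model on random data. This is a minimal sketch; the file name smoke_test.py and the synthetic data are just examples, and only NumPy is assumed (which XGBoost already requires).
# smoke_test.py: train a tiny model on random data to confirm XGBoost works end to end
import numpy as np
import xgboost as xgb

# 100 rows, 5 features, random binary labels
X = np.random.rand(100, 5)
y = np.random.randint(2, size=100)

dtrain = xgb.DMatrix(X, label=y)
params = {"objective": "binary:logistic", "max_depth": 3}
booster = xgb.train(params, dtrain, num_boost_round=10)
print("predictions:", booster.predict(dtrain)[:5])
If this runs without error and prints five probabilities, the library is installed and working.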
How did you do?
Post your results in the comments below.
Further Reading
This section provides more resources on the topic if you are looking to go deeper.
- How to Install a Python 3 Environment on Mac OS X for Machine Learning and Deep Learning
- MacPorts Installation Guide
- XGBoost Installation Guide
Summary
In this tutorial, you discovered how to install XGBoost for Python on macOS step-by-step.
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.
Can you provide instructions for Windows?
Sorry, I don’t know about the Windows OS.
For Windows Installation,
https://ampersandacademy.com/tutorials/python-data-science/install-xgboost-on-windows-10-for-python-programming
Thanks for sharing.
Or you can simply use it in Docker: https://github.com/petronetto/machine-learning-alpine
Thanks.
that was my choice.
After:
$ gcc -v
I did not get gcc version but:
Configured with: --prefix=/Library/Developer/CommandLineTools/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 9.0.0 (clang-900.0.39.2)
Target: x86_64-apple-darwin17.3.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin
Strange…
I didn’t try further (still)
Looks like you are still using the Apple compiler.
You may need to tell MacPorts that you want to use the newly installed compiler.
I had the same issue. However, I resolved it by using the command “hash -r” to get Terminal to recognize the change. According to this, https://stackoverflow.com/questions/8361002/how-to-use-the-gcc-installed-in-macports, opening a new Terminal window would have worked as well. After using “hash -r”, “gcc -v” gave the same result shown in this tutorial.
Nice.
Hi Jason
All worked great till
make -j8
I received a fatal error and then some other errors:
/opt/local/include/gcc7/c++/cwchar:44:10: fatal error: wchar.h: No such file or directory
#include
…..
Do you have any ideas?
Oh dear. No good ideas sorry.
Perhaps try posting to stack overflow or the xgboost issue tracker:
https://github.com/dmlc/xgboost/issues
Or try checking out the v0.6 version in case something changed in the recent 0.7 release?
Same error
Hi guys,
I have the same errors. Have you figured out how to resolve this please? Thank you in advance!
Perhaps try installing via pip instead, e.g. pip install xgboost
Hi Jason,
I just used,
conda install -c conda-forge xgboost
So I did not follow steps 1, 2 and 3. However, it is a matter of taste.
Cheers
Great tip, thanks!
Yes, this worked.
I upvote this one! It worked for me as well. Saved many steps!
Thanks.
Upvote! it worked for me, thank you cgironda!
Happy to hear that.
Great tip! Thanks
Thanks a lot!
Jason,
Why do you think that python is a better option for machine learning than R or Matlab?
They are all great, but more people want to use Python and are asking developers to use it in the workplace:
https://machinelearningmastery.com/python-growing-platform-applied-machine-learning/
Thank you very much! Worked brilliantly.
You’re welcome.
Brilliant! I was previously trying to use homebrew to install gcc and then editing the config.mk file, as basically all other tutorials tell you to, but was constantly getting errors. This is the first method that worked for me (on mac OS Sierra 10.12.6); thank you!
Glad to hear it, well done Matt!
cgironda you are a star! You could spend a day trying to answer all the questions out there related to
unsupported option '-fopenmp'
and you wouldn’t even come close to getting to all of them. You rescued me after about 4 hours of fussing with this. Thanks!
Hi Jason,
Have followed your instructions up to step 2 build XG boost and this is the message I saw on my terminal.
a - build/metric/metric.o
a - build/metric/multiclass_metric.o
a - build/metric/rank_metric.o
a - build/objective/multiclass_obj.o
a - build/objective/objective.o
a - build/objective/rank_obj.o
a - build/objective/regression_obj.o
a - build/predictor/cpu_predictor.o
a - build/predictor/predictor.o
a - build/tree/tree_model.o
a - build/tree/tree_updater.o
a - build/tree/updater_colmaker.o
a - build/tree/updater_fast_hist.o
a - build/tree/updater_histmaker.o
a - build/tree/updater_prune.o
a - build/tree/updater_refresh.o
a - build/tree/updater_skmaker.o
a - build/tree/updater_sync.o
ld: can’t open output file for writing: xgboost, errno=21 for architecture x86_64
collect2: error: ld returned 1 exit status
make: *** [xgboost] Error 1
make: *** Waiting for unfinished jobs….
Alvics-MacBook-Pro-2:xgboost alviceugenejosol$ a – b
Ouch. I have not seen this.
Perhaps try a make clean, then try building again?
Perhaps try posting the error to the xgboost user group?
Hi Jason,
Worked like a charm. Thanks for your detailed instructions.
-A
Great, I’m happy to hear that!
Step 2 did not work …
Pauls-MacBook-Pro:xgboost pmw$ make -j8
Makefile:31: MAKE [/Library/Developer/CommandLineTools/usr/bin/make] - checked OK
c++ -DDMLC_LOG_CUSTOMIZE=1 -std=c++11 -Wall -Wno-unknown-pragmas -Iinclude -Idmlc-core/include -Irabit/include -I/include -O3 -funroll-loops -msse2 -fPIC -fopenmp -MM -MT build/c_api/c_api.o src/c_api/c_api.cc >build/c_api/c_api.d
c++ -DDMLC_LOG_CUSTOMIZE=1 -std=c++11 -Wall -Wno-unknown-pragmas -Iinclude -Idmlc-core/include -Irabit/include -I/include -O3 -funroll-loops -msse2 -fPIC -fopenmp -MM -MT build/logging.o src/logging.cc >build/logging.d
c++ -DDMLC_LOG_CUSTOMIZE=1 -std=c++11 -Wall -Wno-unknown-pragmas -Iinclude -Idmlc-core/include -Irabit/include -I/include -O3 -funroll-loops -msse2 -fPIC -fopenmp -MM -MT build/learner.o src/learner.cc >build/learner.d
c++ -DDMLC_LOG_CUSTOMIZE=1 -std=c++11 -Wall -Wno-unknown-pragmas -Iinclude -Idmlc-core/include -Irabit/include -I/include -O3 -funroll-loops -msse2 -fPIC -fopenmp -MM -MT build/c_api/c_api_error.o src/c_api/c_api_error.cc >build/c_api/c_api_error.d
c++ -DDMLC_LOG_CUSTOMIZE=1 -std=c++11 -Wall -Wno-unknown-pragmas -Iinclude -Idmlc-core/include -Irabit/include -I/include -O3 -funroll-loops -msse2 -fPIC -fopenmp -MM -MT build/common/common.o src/common/common.cc >build/common/common.d
c++ -DDMLC_LOG_CUSTOMIZE=1 -std=c++11 -Wall -Wno-unknown-pragmas -Iinclude -Idmlc-core/include -Irabit/include -I/include -O3 -funroll-loops -msse2 -fPIC -fopenmp -MM -MT build/common/hist_util.o src/common/hist_util.cc >build/common/hist_util.d
c++ -DDMLC_LOG_CUSTOMIZE=1 -std=c++11 -Wall -Wno-unknown-pragmas -Iinclude -Idmlc-core/include -Irabit/include -I/include -O3 -funroll-loops -msse2 -fPIC -fopenmp -MM -MT build/common/host_device_vector.o src/common/host_device_vector.cc >build/common/host_device_vector.d
c++ -DDMLC_LOG_CUSTOMIZE=1 -std=c++11 -Wall -Wno-unknown-pragmas -Iinclude -Idmlc-core/include -Irabit/include -I/include -O3 -funroll-loops -msse2 -fPIC -fopenmp -MM -MT build/data/data.o src/data/data.cc >build/data/data.d
clang: clangclang: : error: errorerrorunsupported option '-fopenmp': : unsupported option '-fopenmp'unsupported option '-fopenmp'
clang: clang: error: unsupported option '-fopenmp'error:
unsupported option '-fopenmp'
clang: error: unsupported option '-fopenmp'
clang: error: unsupported option '-fopenmp'
clang: error: unsupported option '-fopenmp'
make: *** [build/common/common.o] Error 1
make: *** Waiting for unfinished jobs….
make: *** [build/c_api/c_api_error.o] Error 1
make: *** [build/learner.o] Error 1
make: *** [build/logging.o] Error 1
make: *** [build/c_api/c_api.o] Error 1
make: *** [build/common/host_device_vector.o] Error 1
make: *** [build/common/hist_util.o] Error 1
make: *** [build/data/data.o] Error 1
Perhaps try installing with pip on your workstation instead, e.g. pip install xgboost
After installing MacPorts, I tried port install gcc but got an error that no port named gcc7 existed. Tried several variants. Finally had to do a port install self update, and then the rest worked like a charm. So maybe add the self update as step one in the Installing MacPorts section. Everything else worked great. Thanks for this and all your other articles.
Thanks for the tip Chris!
When I ran the step:
make -j4
I got an error:
clang: error: unsupported option '-fopenmp'
clang: error: unsupported option '-fopenmp'
clang: clang: error: unsupported option '-fopenmp'error: unsupported option '-fopenmp'
make: *** [build/c_api/c_api_error.o] Error 1
make: *** Waiting for unfinished jobs….
make: *** [build/c_api/c_api.o] Error 1
make: *** [build/learner.o] Error 1
make: *** [build/logging.o] Error 1
any thoughts for how to fix that?
Perhaps change compiler from clang to gcc?
Hi Jason,
Thank you for your great jobs!
At the last step I got the error below. Any suggestions for a fix?
[email protected] python-package % sudo python setup.py install
Traceback (most recent call last):
File "setup.py", line 277, in
encoding='utf-8').read(),
TypeError: 'encoding' is an invalid keyword argument for this function
Perhaps ensure you are using Python 3.6 or higher?
Thank you very much Jason.
After your comment I got the issue.
sudo python3 setup.py install solved the issue.
Best
Well done!
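As a side note for anyone unsure which interpreter the bare python command points to, you can check from within Python itself (a minimal sketch):
# Report which Python interpreter is running and its version
import sys
print(sys.executable)
print(sys.version)
If this prints the MacPorts path (e.g. under /opt/local) and version 3.6 or higher, the plain python command is the one you want; otherwise use python3 as above.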
Hi Jason,
Thank you for your great jobs!
At the last step I got the error below. Any suggestions for a fix?
INFO:XGBoost build_ext:Building from source. /Users/smehta27/Desktop/Dental/xgboost/lib/libxgboost.dylib
error: [Errno 2] No such file or directory: 'cmake': 'cmake'
Looks like you need to install cmake, perhaps via MacPorts or brew.
Hi Jason,
Great tutorial as always!
I have Mac OS Catalina and I had problems installing MacPorts, but this link was helpful:
https://trac.macports.org/wiki/CatalinaProblems
Thank you
Well done!
Hi Jason, Thanks for sharing.
I worked through steps 2 and 3.
For step 2, I did not see the same output as yours. Instead, I got:
Makefile:23: MAKE [/Library/Developer/CommandLineTools/usr/bin/make] - checked OK
c++ -c -DDMLC_LOG_CUSTOMIZE=1 -std=c++11 -Wall -Wno-unknown-pragmas -Iinclude -Idmlc-core/include -Irabit/include -I/include -O3 -funroll-loops amalgamation/xgboost-all0.cc -o amalgamation/xgboost-all0.o
In file included from amalgamation/xgboost-all0.cc:76:
amalgamation/../src/common/io.cc:102:8: warning: unused variable ‘ReadErr’
[-Wunused-variable]
auto ReadErr = [&fname]() {
^
In file included from amalgamation/xgboost-all0.cc:13:
In file included from amalgamation/../src/metric/metric.cc:6:
In file included from dmlc-core/include/dmlc/./registry.h:9:
In file included from /Library/Developer/CommandLineTools/usr/bin/../include/c++/v1/map:479:
In file included from /Library/Developer/CommandLineTools/usr/bin/../include/c++/v1/__tree:15:
/Library/Developer/CommandLineTools/usr/bin/../include/c++/v1/memory:2338:5: warning:
delete called on non-final ‘xgboost::tree::QuantileHistMaker::Builder’
that has virtual functions but non-virtual destructor
[-Wdelete-non-abstract-non-virtual-dtor]
delete __ptr;
^
/Library/Developer/CommandLineTools/usr/bin/../include/c++/v1/memory:2651:7: note:
in instantiation of member function
‘std::__1::default_delete::operator()’
requested here
__ptr_.second()(__tmp);
^
amalgamation/../src/tree/updater_quantile_hist.cc:74:14: note: in instantiation
of member function
‘std::__1::unique_ptr<xgboost::tree::QuantileHistMaker::Builder,
std::__1::default_delete
>::reset’ requested here
builder_.reset(new Builder(
^
In file included from amalgamation/xgboost-all0.cc:35:
In file included from amalgamation/../src/data/data.cc:24:
In file included from amalgamation/../src/data/./sparse_page_dmatrix.h:17:
In file included from amalgamation/../src/data/ellpack_page_source.h:13:
amalgamation/../src/metric/../common/hist_util.h:695:10: warning: private field
‘nthread_’ is not used [-Wunused-private-field]
size_t nthread_ { 0 };
^
3 warnings generated.
Then, when doing step 3: error message occurs:
INFO:XGBoost build_ext:Building from source. /Users/jingtan.wang/python-virtual-environments/xgboost/lib/libxgboost.dylib
INFO:XGBoost build_ext:Run CMake command: [‘cmake’, ‘xgboost’, ‘-GUnix Makefiles’, ‘-DUSE_OPENMP=1’, ‘-DUSE_CUDA=0’, ‘-DUSE_NCCL=0’, ‘-DBUILD_WITH_SHARED_NCCL=0’, ‘-DHIDE_CXX_SYMBOLS=1’, ‘-DUSE_HDFS=0’, ‘-DUSE_AZURE=0’, ‘-DUSE_S3=0’, ‘-DPLUGIN_LZ4=0’, ‘-DPLUGIN_DENSE_PARSER=0’]
error: [Errno 2] No such file or directory: ‘cmake’
I’m sorry to hear that. Perhaps try posting/searching on stackoverflow or the github issues for the xgboost project?
Hello, thank you for sharing. I encounter a problem within the second section.
I can’t find a solution for this. I am using Catalina and wonder if this is part of the problem? I did see your previous comment to someone else’s post on installing make via port or brew. Brew repeatedly fails on my system (internet time out problems). Do you happen to know how I can install make via macports?
Many thanks!
Danielles-MacBook-Pro:Desktop danielleturvill$ cd xgboost/
Danielles-MacBook-Pro:xgboost danielleturvill$ cp make/config.mk ./config.mk
cp: make/config.mk: No such file or directory
Danielles-MacBook-Pro:xgboost danielleturvill$ ls
CITATION README.md gputreeshap
CMakeLists.txt amalgamation include
CONTRIBUTORS.md appveyor.yml jvm-packages
Jenkinsfile cmake plugin
Jenkinsfile-win64 cub python-package
LICENSE demo rabit
Makefile dev src
NEWS.md dmlc-core tests
R-package doc
Newer versions do seem to cause problems. I recommend using an older version of XGBoost, such as 1.0.2.
You can check out this version directly or install it via pip, e.g. pip install xgboost==1.0.2
Hi Jason,
I have been trying to install XGBoost with GPU support on macOS Mojave (10.14.6) for the last 3 days, with no success. I tried 2 approaches:
1. pip install xgboost
xgboost is installed and runs successfully without the GPU option (i.e., without tree_method='gpu_hist').
I want to run with gpu_hist by setting tree_method='gpu_hist' in the tree parameters. When I did, the following error came up:
XGBoostError: [12:10:34] /Users/travis/build/dmlc/xgboost/src/gbm/../common/common.h:153: XGBoost version not compiled with GPU support.
Stack trace:
[bt] (0) 1 libxgboost.dylib 0x000000012256ba60 dmlc::LogMessageFatal::~LogMessageFatal() + 112
[bt] (1) 2 libxgboost.dylib 0x00000001225f92b3 xgboost::gbm::GBTree::ConfigureUpdaters() + 531
[bt] (2) 3 libxgboost.dylib 0x00000001225f8b97 xgboost::gbm::GBTree::Configure(std::__1::vector<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator >, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator >, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > > > > const&) + 967
[bt] (3) 4 libxgboost.dylib 0x0000000122611a0c xgboost::LearnerConfiguration::Configure() + 1500
[bt] (4) 5 libxgboost.dylib 0x0000000122611e68 xgboost::LearnerImpl::UpdateOneIter(int, std::__1::shared_ptr) + 120
[bt] (5) 6 libxgboost.dylib 0x000000012256331d XGBoosterUpdateOneIter + 157
[bt] (6) 7 libffi.7.dylib 0x0000000102102ead ffi_call_unix64 + 85
[bt] (7) 8 ??? 0x00007ffeee291da0 0x0 + 140732894092704
2. My second approach:
git clone --recursive https://github.com/dmlc/xgboost
cd xgboost/
make -j4
cd python-package
python3 setup.py install
Though it installs xgboost, it throws the following error while running this statement:
dtrain=xgb.DMatrix(df_train_features,label=df_train_label)#,missing=-999)
AttributeError: dlsym(0x7ffe9aed62f0, XGDMatrixSetDenseInfo): symbol not found
Can you please help in installing xgboost with GPU support?
Try checking out an older version, such as 1.0.1.
I find the older version compile easily and work just as well.
Jason, xgboost 1.0.1 is installed with pip install xgboost==1.0.1, and it works as well.
However, it is without GPU support, and I want to install xgboost with GPU support.
I got the following error when I ran xgboost with tree_method='gpu_hist':
(import xgboost as xgb
bst=xgb.XGBClassifier(n_estimators=20, max_depth=9, learning_rate=0.05,n_jobs=4,gamma=5,min_child_weight=20, eval_metric='rmse', missing=-999, tree_method='gpu_hist')
)
XGBoostError: [15:58:10] /private/var/folders/4n/s3z85_zs0z7_12y8mmg2yqd00000gn/T/pip-install-xq6d26qe/xgboost_a072b8d552e84085a86dcf21271bcc77/xgboost/include/xgboost/gbm.h:166: XGBoost version not compiled with GPU support.
Can you please help in installing xgboost with GPU support?
Sorry, I am not an expert on GPU support in XGBoost, perhaps contact the XGBoost developers directly, e.g. post an issue on their github project.
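For reference, one way to check programmatically whether an installed XGBoost build was compiled with GPU support is to wrap a tiny gpu_hist training run in a try/except. This is only a sketch on synthetic data, not part of the original tutorial; it simply surfaces the same XGBoostError reported above when GPU support is missing.
# Sketch: detect whether the installed XGBoost build supports tree_method='gpu_hist'
import numpy as np
import xgboost as xgb
from xgboost.core import XGBoostError

X = np.random.rand(50, 4)
y = np.random.randint(2, size=50)
dtrain = xgb.DMatrix(X, label=y)

try:
    xgb.train({"tree_method": "gpu_hist"}, dtrain, num_boost_round=1)
    print("This build accepts tree_method='gpu_hist'")
except XGBoostError as err:
    # Typically: "XGBoost version not compiled with GPU support."
    print("GPU training not available in this build:", err)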