<?xml version="1.0" encoding="utf-8" ?><rss version="2.0"><channel><title>Bing: Xgboost Machine Learning Tree</title><link>http://www.bing.com:80/search?q=Xgboost+Machine+Learning+Tree</link><description>Search results</description><image><url>http://www.bing.com:80/s/a/rsslogo.gif</url><title>Xgboost Machine Learning Tree</title><link>http://www.bing.com:80/search?q=Xgboost+Machine+Learning+Tree</link></image><copyright>Copyright © 2026 Microsoft. All rights reserved. These XML results may not be used, reproduced or transmitted in any manner or for any purpose other than rendering Bing results within an RSS aggregator for your personal, non-commercial use. Any other use of these results requires express written permission from Microsoft Corporation. By accessing this web page or using these results in any manner whatsoever, you agree to be bound by the foregoing restrictions.</copyright><item><title>How to get feature importance in xgboost? - Stack Overflow</title><link>https://stackoverflow.com/questions/37627923/how-to-get-feature-importance-in-xgboost</link><description>According to this post there are 3 different ways to get feature importance from Xgboost: use built-in feature importance, use permutation based importance, use shap based importance. Built-in feature importance Code example: ... Please be aware of what type of feature importance you are using. There are several types of importance, see the docs.</description><pubDate>Sun, 26 Apr 2026 02:04:00 GMT</pubDate></item><item><title>multioutput regression by xgboost - Stack Overflow</title><link>https://stackoverflow.com/questions/39540123/multioutput-regression-by-xgboost</link><description>Is it possible to train a model by xgboost that has multiple continuous outputs (multi-regression)? 
What would be the objective of training such a model?</description><pubDate>Fri, 24 Apr 2026 20:21:00 GMT</pubDate></item><item><title>Perform xgboost prediction with pyspark dataframe - Stack Overflow</title><link>https://stackoverflow.com/questions/77320042/perform-xgboost-prediction-with-pyspark-dataframe</link><description>From what I can see, you are trying to use the xgboost algorithm of the xgboost library in a spark context. Please note that there is a dedicated spark implementation within the xgboost library, which your code does not seem to use (from your predict_udf function I understand that you are trying to wrangle your pyspark data, perform predictions, and convert the predictions back into a pyspark ...</description><pubDate>Sat, 25 Apr 2026 18:26:00 GMT</pubDate></item><item><title>XGBoost for multiclassification and imbalanced data</title><link>https://stackoverflow.com/questions/67868420/xgboost-for-multiclassification-and-imbalanced-data</link><description>sample_weight parameter is useful for handling imbalanced data while using XGBoost for training the data. You can compute sample weights by using compute_sample_weight() of sklearn library.</description><pubDate>Sat, 25 Apr 2026 17:00:00 GMT</pubDate></item><item><title>How to install xgboost package in python (windows platform)?</title><link>https://stackoverflow.com/questions/33749735/how-to-install-xgboost-package-in-python-windows-platform</link><description>download xgboost whl file from here (make sure to match your python version and system architecture, e.g. 
"xgboost-0.6-cp35-cp35m-win_amd64.whl" for python 3.5 on 64-bit machine) open command prompt cd to your Downloads folder (or wherever you saved the whl file) pip install xgboost-0.6-cp35-cp35m-win_amd64.whl (or whatever your whl file is named)</description><pubDate>Thu, 23 Apr 2026 22:24:00 GMT</pubDate></item><item><title>XGBOOST Model predicting, with nan Input values - Stack Overflow</title><link>https://stackoverflow.com/questions/77000845/xgboost-model-predicting-with-nan-input-values</link><description>I am facing a weird behavior in the xgboost classifier. Reproducing the code from a response to this post import xgboost as xgb import numpy as np from sklearn.datasets import make_moons from sklearn.</description><pubDate>Sat, 25 Apr 2026 05:11:00 GMT</pubDate></item><item><title>GridSearchCV - XGBoost - Early Stopping - Stack Overflow</title><link>https://stackoverflow.com/questions/42993550/gridsearchcv-xgboost-early-stopping</link><description>GridSearchCV - XGBoost - Early Stopping Asked 9 years, 1 month ago Modified 1 year, 5 months ago Viewed 38k times</description><pubDate>Sat, 25 Apr 2026 03:24:00 GMT</pubDate></item><item><title>XGBoost Categorical Variables: Dummification vs encoding</title><link>https://stackoverflow.com/questions/34265102/xgboost-categorical-variables-dummification-vs-encoding</link><description>"When using XGBoost we need to convert categorical variables into numeric." Not always, no. If booster=='gbtree' (the default), then XGBoost can handle categorical variables encoded as numeric directly, without needing dummifying/one-hotting. Whereas if the label is a string (not an integer) then yes we need to convert it.</description><pubDate>Sat, 25 Apr 2026 09:22:00 GMT</pubDate></item><item><title>How to check if XGBoost uses the GPU - Stack Overflow</title><link>https://stackoverflow.com/questions/70507099/how-to-check-if-xgboost-uses-the-gpu</link><description>For Tensorflow I can check this with tf.config.list_physical_devices(). 
For XGBoost I've so far checked it by looking at GPU utilization (nvidia-smi) while running my software. But how can I check this in a simple test? Something similar to the test I have for Tensorflow would do.</description><pubDate>Sun, 26 Apr 2026 05:39:00 GMT</pubDate></item><item><title>python - How to grid search parameter for XGBoost with ...</title><link>https://stackoverflow.com/questions/60942564/how-to-grid-search-parameter-for-xgboost-with-multioutputregressor-wrapper</link><description>I'm trying to build a regressor to predict from a 6D input to a 6D output using XGBoost with the MultiOutputRegressor wrapper. I'm not sure how to do the parameter search. My code looks like this: ...</description><pubDate>Tue, 21 Apr 2026 03:14:00 GMT</pubDate></item></channel></rss>