<?xml version="1.0" encoding="utf-8" ?><rss version="2.0"><channel><title>Bing: Dimensionality Reduction Machine Learning</title><link>http://www.bing.com:80/search?q=Dimensionality+Reduction+Machine+Learning</link><description>Search results</description><image><url>http://www.bing.com:80/s/a/rsslogo.gif</url><title>Dimensionality Reduction Machine Learning</title><link>http://www.bing.com:80/search?q=Dimensionality+Reduction+Machine+Learning</link></image><copyright>Copyright © 2026 Microsoft. All rights reserved. These XML results may not be used, reproduced or transmitted in any manner or for any purpose other than rendering Bing results within an RSS aggregator for your personal, non-commercial use. Any other use of these results requires express written permission from Microsoft Corporation. By accessing this web page or using these results in any manner whatsoever, you agree to be bound by the foregoing restrictions.</copyright><item><title>What's the meaning of dimensionality and what is it for this data?</title><link>https://stats.stackexchange.com/questions/149845/whats-the-meaning-of-dimensionality-and-what-is-it-for-this-data</link><description>I've been told that dimensionality is usually referred to attributes or columns of the dataset. But in this case, does it include Class1 and Class2? and does dimensionality mean, the number of columns or, does it mean the names of columns?</description><pubDate>Fri, 17 Apr 2026 12:16:00 GMT</pubDate></item><item><title>Dimensionality reduction (SVD or PCA) on a large, sparse matrix</title><link>https://stats.stackexchange.com/questions/35185/dimensionality-reduction-svd-or-pca-on-a-large-sparse-matrix</link><description>Dimensionality reduction (SVD or PCA) on a large, sparse matrix Ask Question Asked 13 years, 7 months ago Modified 8 years, 4 months ago</description><pubDate>Mon, 20 Apr 2026 22:06:00 GMT</pubDate></item><item><title>Does SVM suffer from curse of high dimensionality? If no, Why?</title><link>https://stats.stackexchange.com/questions/484289/does-svm-suffer-from-curse-of-high-dimensionality-if-no-why</link><description>While I know that some of the classification techniques such as k-nearest neighbour classifier suffer from the curse of high dimensionality, I wonder does the same apply to the support vector machi...</description><pubDate>Fri, 17 Apr 2026 01:32:00 GMT</pubDate></item><item><title>What is the curse of dimensionality? - Cross Validated</title><link>https://stats.stackexchange.com/questions/15971/what-is-the-curse-of-dimensionality</link><description>I cannot expound, but I believe I've heard what sound like three different versions of the curse: 1) higher dimensions mean an exponentially-increasing amount of work, and 2) in higher dimensions you will get fewer and fewer examples in any part of your sample space, and 3) in high dimensions everything tends to be basically equi-distant making it hard to make any distinctions.</description><pubDate>Fri, 17 Apr 2026 23:22:00 GMT</pubDate></item><item><title>Why is Euclidean distance not a good metric in high dimensions?</title><link>https://stats.stackexchange.com/questions/99171/why-is-euclidean-distance-not-a-good-metric-in-high-dimensions</link><description>I read that 'Euclidean distance is not a good distance in high dimensions'. I guess this statement has something to do with the curse of dimensionality, but what exactly? 
Besides, what is 'high</description><pubDate>Thu, 23 Apr 2026 00:13:00 GMT</pubDate></item><item><title>Why is t-SNE not used as a dimensionality reduction technique for ...</title><link>https://stats.stackexchange.com/questions/340175/why-is-t-sne-not-used-as-a-dimensionality-reduction-technique-for-clustering-or</link><description>So often rather than adding a dimensionality reduction step as preprocessing before clustering/classification, one is better to use a different classifier/cluster-er that incorperates a useful projection.</description><pubDate>Tue, 21 Apr 2026 05:45:00 GMT</pubDate></item><item><title>Why is dimensionality reduction always done before clustering?</title><link>https://stats.stackexchange.com/questions/256172/why-is-dimensionality-reduction-always-done-before-clustering</link><description>I learned that it's common to do dimensionality reduction before clustering. But, is there any situation that it is better to do clustering first, and then do dimensionality reduction?</description><pubDate>Thu, 16 Apr 2026 09:47:00 GMT</pubDate></item><item><title>dimensionality reduction - Relationship between SVD and PCA. How to use ...</title><link>https://stats.stackexchange.com/questions/134282/relationship-between-svd-and-pca-how-to-use-svd-to-perform-pca</link><description>However, it can also be performed via singular value decomposition (SVD) of the data matrix $\mathbf X$. How does it work? What is the connection between these two approaches? What is the relationship between SVD and PCA? Or in other words, how to use SVD of the data matrix to perform dimensionality reduction?</description><pubDate>Thu, 23 Apr 2026 06:32:00 GMT</pubDate></item><item><title>machine learning - Why is dimensionality reduction used if it almost ...</title><link>https://stats.stackexchange.com/questions/559808/why-is-dimensionality-reduction-used-if-it-almost-always-reduces-the-explained-v</link><description>Why is dimensionality reduction used if it almost always reduces the explained variation? Ask Question Asked 4 years, 2 months ago Modified 4 years ago</description><pubDate>Tue, 31 Mar 2026 02:20:00 GMT</pubDate></item><item><title>What are the implications of the curse of dimensionality for ordinary ...</title><link>https://stats.stackexchange.com/questions/243446/what-are-the-implications-of-the-curse-of-dimensionality-for-ordinary-least-squa</link><description>I'm trying to determine how the number of data points needed for a statistically significant estimate in the context of an ordinary least squares linear regression varies with respect to the number of covariates. I'm wondering if it increases exponentially, or if there is perhaps some other function that relates the two things.</description><pubDate>Thu, 16 Apr 2026 21:35:00 GMT</pubDate></item></channel></rss>
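
The "Relationship between SVD and PCA" result above only poses the how-to question; the linked page holds the full answer. As a minimal sketch, not part of the feed and assuming NumPy and one common sign/scaling convention, PCA scores and explained variances can be read off the SVD of the centered data matrix:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))          # toy data: 200 samples, 10 features

Xc = X - X.mean(axis=0)                 # center each column (required for PCA)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 2                                   # keep the first two principal components
scores_svd = U[:, :k] * s[:k]           # projections onto the top-k components
scores_pca = Xc @ Vt[:k].T              # equivalent: project onto right singular vectors

explained_var = s**2 / (len(Xc) - 1)    # eigenvalues of the sample covariance matrix
print(np.allclose(scores_svd, scores_pca))      # True: both routes give the same scores
print(explained_var[:k] / explained_var.sum())  # fraction of variance kept by k components

Here the rows of Vt play the role of the principal axes (eigenvectors of the covariance matrix), so truncating to the top k singular values is exactly the rank-k dimensionality reduction the question asks about.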
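
The curse-of-dimensionality and Euclidean-distance results above both rest on the claim that in high dimensions everything tends to be roughly equidistant. A quick simulation, again an illustrative sketch with NumPy rather than content from the feed, shows the relative gap between the nearest and farthest neighbour shrinking as the dimension grows:

import numpy as np

rng = np.random.default_rng(0)

# For points drawn uniformly in the unit hypercube, compare the nearest and
# farthest Euclidean distance from one query point as the dimension d grows.
for d in (2, 10, 100, 1000):
    X = rng.uniform(size=(500, d))
    q = rng.uniform(size=d)
    dist = np.linalg.norm(X - q, axis=1)
    # relative contrast: how much farther the farthest point is than the nearest
    print(d, (dist.max() - dist.min()) / dist.min())

The printed contrast drops sharply with d, which is the sense in which Euclidean nearest-neighbour distinctions become uninformative in high-dimensional spaces.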