<?xml version="1.0" encoding="utf-8" ?><rss version="2.0"><channel><title>Bing: Autoencoder Deep Learning a Visual Approach</title><link>http://www.bing.com:80/search?q=Autoencoder+Deep+Learnign+a+Visual+Approach</link><description>Search results</description><image><url>http://www.bing.com:80/s/a/rsslogo.gif</url><title>Autoencoder Deep Learning a Visual Approach</title><link>http://www.bing.com:80/search?q=Autoencoder+Deep+Learnign+a+Visual+Approach</link></image><copyright>Copyright © 2026 Microsoft. All rights reserved. These XML results may not be used, reproduced or transmitted in any manner or for any purpose other than rendering Bing results within an RSS aggregator for your personal, non-commercial use. Any other use of these results requires express written permission from Microsoft Corporation. By accessing this web page or using these results in any manner whatsoever, you agree to be bound by the foregoing restrictions.</copyright><item><title>What is an autoencoder? - Data Science Stack Exchange</title><link>https://datascience.stackexchange.com/questions/80389/what-is-an-autoencoder</link><description>The autoencoder then works by storing inputs in terms of where they lie on the linear image of . Observe that, absent the non-linear activation functions, an autoencoder essentially becomes equivalent to PCA, up to a change in basis. A useful exercise might be to consider why this is.</description><pubDate>Mon, 13 Apr 2026 11:09:00 GMT</pubDate></item><item><title>Graph Autoencoder with PyTorch-Geometric - Stack Overflow</title><link>https://stackoverflow.com/questions/70905627/graph-autoencoder-with-pytorch-geometric</link><description>I would recommend checking out the graph autoencoder example, but this aims to reconstruct the adjacency matrix. 
Another point could be to exchange the GAT with models specifically designed for point clouds, see here.</description><pubDate>Sun, 05 Apr 2026 23:00:00 GMT</pubDate></item><item><title>Using a simple autoencoder for feature selection - Data Science Stack ...</title><link>https://datascience.stackexchange.com/questions/86460/using-simple-autoencoder-for-feature-selection</link><description>I am using a simple autoencoder to extract the informative features and I have multiple questions: I know that the features extracted will be a linear combination of the original features so I consider th...</description><pubDate>Mon, 13 Apr 2026 16:10:00 GMT</pubDate></item><item><title>Autoencoder for multi-label classification task - Stack Overflow</title><link>https://stackoverflow.com/questions/79463813/autoencoder-for-multi-label-classification-task</link><description>I'm working on a multi-label classification problem using an autoencoder-based neural network built in PyTorch. The overall idea of my approach is as follows: I load my dataset from a CSV file, per...</description><pubDate>Mon, 06 Apr 2026 01:02:00 GMT</pubDate></item><item><title>Extract encoder and decoder from trained autoencoder</title><link>https://stackoverflow.com/questions/52271644/extract-encoder-and-decoder-from-trained-autoencoder</link><description>Use this best model (manually selected by filename) and plot the original image, the encoded representation produced by the encoder of the autoencoder, and the reconstruction produced by the decoder of the autoencoder. 
I have problems (see the second step) extracting the encoder and decoder layers from the trained and saved autoencoder.</description><pubDate>Sat, 11 Apr 2026 10:07:00 GMT</pubDate></item><item><title>machine learning - How to reverse max pooling layer in autoencoder to ...</title><link>https://stackoverflow.com/questions/70572507/how-to-reverse-max-pooling-layer-in-autoencoder-to-return-the-original-shape-in</link><description>How to reverse a max pooling layer in an autoencoder to return the original shape in the decoder?</description><pubDate>Sun, 12 Apr 2026 09:01:00 GMT</pubDate></item><item><title>Pytorch MNIST autoencoder to learn 10-digit classification</title><link>https://stackoverflow.com/questions/66667949/pytorch-mnist-autoencoder-to-learn-10-digit-classification</link><description>Autoencoders are generally not used as classifiers. They learn how to encode a given image into a short vector and reconstruct the same image from the encoded vector; it is a way of compressing an image into a short vector. Since you want to train an autoencoder with classification capabilities, we need to make some changes to the model.</description><pubDate>Mon, 13 Apr 2026 10:19:00 GMT</pubDate></item><item><title>CNN Autoencoder takes a very long time to train - Stack Overflow</title><link>https://stackoverflow.com/questions/79515545/cnn-autoencoder-takes-a-very-long-time-to-train</link><description>I have been training a CNN Autoencoder on binary images (pixels are either 0 or 1) of size 64x64. 
The model is shown below: import torch import torch.nn as nn import torch.nn.functional as F class</description><pubDate>Tue, 14 Apr 2026 23:04:00 GMT</pubDate></item><item><title>python - LSTM Autoencoder problems - Stack Overflow</title><link>https://stackoverflow.com/questions/65205506/lstm-autoencoder-problems</link><description>TLDR: The autoencoder underfits the timeseries reconstruction and just predicts the average value. Question set-up: Here is a summary of my attempt at a sequence-to-sequence autoencoder. This image was taken f...</description><pubDate>Tue, 07 Apr 2026 12:42:00 GMT</pubDate></item><item><title>machine learning - Why use Variational Autoencoders VAE instead of ...</title><link>https://datascience.stackexchange.com/questions/48533/why-use-variational-autoencoders-vae-instead-of-autoencoders-ae-in-anomaly-detec</link><description>My work is based on Anomaly Detection for Skin Disease Images Using Variational Autoencoder but I have a very small data set (about 5 pictures) that I am replicating to have about 8000 inputs.</description><pubDate>Tue, 17 Mar 2026 07:23:00 GMT</pubDate></item></channel></rss>