<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0">
  <channel>
    <title>Bing: Autoencoder Visual Presentation</title>
    <link>http://www.bing.com:80/search?q=Autoencoder+Visual+Presentation</link>
    <description>Search results</description>
    <image>
      <url>http://www.bing.com:80/s/a/rsslogo.gif</url>
      <title>Autoencoder Visual Presentation</title>
      <link>http://www.bing.com:80/search?q=Autoencoder+Visual+Presentation</link>
    </image>
    <copyright>Copyright © 2026 Microsoft. All rights reserved. These XML results may not be used, reproduced or transmitted in any manner or for any purpose other than rendering Bing results within an RSS aggregator for your personal, non-commercial use. Any other use of these results requires express written permission from Microsoft Corporation. By accessing this web page or using these results in any manner whatsoever, you agree to be bound by the foregoing restrictions.</copyright>
    <item>
      <title>What is an autoencoder? - Data Science Stack Exchange</title>
      <link>https://datascience.stackexchange.com/questions/80389/what-is-an-autoencoder</link>
      <description>The autoencoder then works by storing inputs in terms of where they lie on the linear image of . Observe that, absent the non-linear activation functions, an autoencoder essentially becomes equivalent to PCA, up to a change in basis. A useful exercise might be to consider why this is.</description>
      <pubDate>Sun, 05 Apr 2026 10:43:00 GMT</pubDate>
    </item>
    <item>
      <title>Extract encoder and decoder from trained autoencoder</title>
      <link>https://stackoverflow.com/questions/52271644/extract-encoder-and-decoder-from-trained-autoencoder</link>
      <description>Use this best model (manually selected by filename) and plot the original image, the encoded representation produced by the encoder of the autoencoder, and the prediction produced by the decoder of the autoencoder. I have problems (see the second step) extracting the encoder and decoder layers from the trained and saved autoencoder.</description>
      <pubDate>Sun, 29 Mar 2026 07:52:00 GMT</pubDate>
    </item>
    <item>
      <title>Why my autoencoder model is not learning? - Stack Overflow</title>
      <link>https://stackoverflow.com/questions/61211861/why-my-autoencoder-model-is-not-learning</link>
      <description>If you want to create an autoencoder, you need to understand that you are going to reverse the process after encoding. That means that if you have three convolutional layers with filters in the order 64, 32, 16, the next group of convolutional layers should do the inverse: 16, 32, 64. That is why your algorithm is not learning.</description>
      <pubDate>Mon, 30 Mar 2026 01:53:00 GMT</pubDate>
    </item>
    <item>
      <title>Graph Autoencoder with PyTorch-Geometric - Stack Overflow</title>
      <link>https://stackoverflow.com/questions/70905627/graph-autoencoder-with-pytorch-geometric</link>
      <description>I would recommend checking out the graph autoencoder example, but it aims to reconstruct the adjacency matrix. Another option could be to exchange the GAT with models specifically designed for point clouds; see here.</description>
      <pubDate>Sun, 05 Apr 2026 06:39:00 GMT</pubDate>
    </item>
    <item>
      <title>python - Debugging autoencoder training (loss is low but reconstructed ...</title>
      <link>https://stackoverflow.com/questions/77006924/debugging-autoencoder-training-loss-is-low-but-reconstructed-image-is-all-black</link>
      <description>Debugging autoencoder training (the loss is low but the reconstructed image is all black).</description>
      <pubDate>Wed, 01 Apr 2026 06:01:00 GMT</pubDate>
    </item>
    <item>
      <title>python - LSTM Autoencoder problems - Stack Overflow</title>
      <link>https://stackoverflow.com/questions/65205506/lstm-autoencoder-problems</link>
      <description>TL;DR: the autoencoder underfits the time-series reconstruction and just predicts the average value. Question set-up: here is a summary of my attempt at a sequence-to-sequence autoencoder. This image was taken f...</description>
      <pubDate>Sat, 28 Mar 2026 21:58:00 GMT</pubDate>
    </item>
    <item>
      <title>CNN Autoencoder takes a very long time to train - Stack Overflow</title>
      <link>https://stackoverflow.com/questions/79515545/cnn-autoencoder-takes-a-very-long-time-to-train</link>
      <description>I have been training a CNN autoencoder on binary images (pixels are either 0 or 1) of size 64x64. The model is shown below: import torch import torch.nn as nn import torch.nn.functional as F class</description>
      <pubDate>Tue, 31 Mar 2026 19:03:00 GMT</pubDate>
    </item>
    <item>
      <title>What is the difference between an autoencoder and an encoder-decoder?</title>
      <link>https://datascience.stackexchange.com/questions/53979/what-is-the-difference-between-an-autoencoder-and-an-encoder-decoder</link>
      <description>I want to know if there is a difference between an autoencoder and an encoder-decoder.</description>
      <pubDate>Sun, 29 Mar 2026 17:03:00 GMT</pubDate>
    </item>
    <item>
      <title>How is a linear autoencoder equal to PCA? - Stack Overflow</title>
      <link>https://stackoverflow.com/questions/42607471/how-is-a-linear-autoencoder-equal-to-pca</link>
      <description>How is a linear autoencoder equal to PCA?</description>
      <pubDate>Wed, 01 Apr 2026 15:19:00 GMT</pubDate>
    </item>
    <item>
      <title>keras variational autoencoder loss function - Stack Overflow</title>
      <link>https://stackoverflow.com/questions/60327520/keras-variational-autoencoder-loss-function</link>
      <description>From appendix C of the original variational autoencoder paper: in variational autoencoders, neural networks are used as probabilistic encoders and decoders. There are many possible choices of encoders and decoders, depending on the type of data and model. You can use a variational autoencoder (VAE) with continuous variables or with binary ...</description>
      <pubDate>Sun, 05 Apr 2026 06:18:00 GMT</pubDate>
    </item>
  </channel>
</rss>
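<!--
For the "What is an autoencoder?" and "How is a linear autoencoder equal to PCA?" results above, a small
numpy sketch of the claim that a purely linear autoencoder trained with squared error ends up spanning the
same subspace as PCA, up to a change of basis. The dimensions, learning rate, and step count below are
arbitrary illustrative choices, not anything taken from the linked posts.

import numpy as np

rng = np.random.default_rng(0)
n, d, k = 2000, 10, 3

# Data that lies mostly in a 3-dimensional subspace, plus a little noise, then centered.
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, d)) + 0.1 * rng.normal(size=(n, d))
X = X - X.mean(axis=0)

# PCA: projector onto the span of the top-k right singular vectors.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P_pca = Vt[:k].T @ Vt[:k]

# Linear autoencoder x -> (x @ W_enc) @ W_dec, trained with plain gradient descent on the squared error.
W_enc = 0.1 * rng.normal(size=(d, k))
W_dec = 0.1 * rng.normal(size=(k, d))
lr = 5e-3
for _ in range(4000):
    Z = X @ W_enc
    R = Z @ W_dec - X                    # reconstruction residual
    g_dec = (Z.T @ R) / n                # gradient of 0.5 * mean squared residual w.r.t. W_dec
    g_enc = (X.T @ (R @ W_dec.T)) / n    # gradient w.r.t. W_enc
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

# Projector onto the subspace spanned by the decoder, compared with the PCA projector.
Q, _ = np.linalg.qr(W_dec.T)
P_ae = Q @ Q.T
print("projector difference:", np.linalg.norm(P_pca - P_ae))  # should be close to zero
-->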
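<!--
For the "Extract encoder and decoder from trained autoencoder" result above, a minimal sketch of one common
way to split a trained Keras autoencoder: reuse the original input up to a named bottleneck layer for the
encoder, and feed a fresh latent input through the remaining layers for the decoder. The architecture, the
layer name "bottleneck", the sizes, and the random training data are hypothetical stand-ins, not the model
from the linked question.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy dense autoencoder on flattened 28x28 images.
inputs = keras.Input(shape=(784,))
h = layers.Dense(128, activation="relu")(inputs)
code = layers.Dense(32, activation="relu", name="bottleneck")(h)
h = layers.Dense(128, activation="relu")(code)
outputs = layers.Dense(784, activation="sigmoid")(h)
autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

x = np.random.rand(256, 784).astype("float32")
autoencoder.fit(x, x, epochs=1, batch_size=64, verbose=0)

# Encoder: same input, output taken at the bottleneck layer.
encoder = keras.Model(inputs, autoencoder.get_layer("bottleneck").output)

# Decoder: a new latent-sized input pushed through the remaining, already trained layers.
latent = keras.Input(shape=(32,))
y = autoencoder.layers[-2](latent)       # Dense(128)
y = autoencoder.layers[-1](y)            # Dense(784)
decoder = keras.Model(latent, y)

codes = encoder.predict(x[:5], verbose=0)
recon = decoder.predict(codes, verbose=0)
print(codes.shape, recon.shape)          # (5, 32) (5, 784)
-->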
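<!--
For the "Why my autoencoder model is not learning?" and "CNN Autoencoder takes a very long time to train"
results above, a sketch of the mirrored encoder/decoder idea (filter counts 64, 32, 16 on the way down and
16, 32, 64 on the way back up), written as a small PyTorch module for 64x64 single-channel images. The
kernel sizes, strides, and activations are illustrative assumptions, not the code from either question.

import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: 1 -> 64 -> 32 -> 16 channels, halving the spatial size at each step.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 16, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder mirrors the encoder: 16 -> 32 -> 64 -> 1, upsampling back to the input size.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 64, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
x = torch.rand(4, 1, 64, 64)             # toy batch the size of the binary images in the question
print(model(x).shape)                    # torch.Size([4, 1, 64, 64])
-->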
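<!--
For the "keras variational autoencoder loss function" result above, a framework-agnostic numpy sketch of the
two terms in the usual VAE objective (the negative ELBO): a reconstruction term, Bernoulli for binary data or
a squared-error surrogate for continuous data, plus the KL divergence from the approximate posterior
N(mu, diag(exp(logvar))) to the standard normal prior. The shapes and toy inputs are hypothetical.

import numpy as np

def kl_to_standard_normal(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, 1) ), summed over latent dimensions, averaged over the batch.
    return np.mean(np.sum(-0.5 * (1.0 + logvar - mu**2 - np.exp(logvar)), axis=1))

def bernoulli_reconstruction(x, x_hat, eps=1e-7):
    # Negative log-likelihood for binary data; the decoder outputs per-pixel probabilities.
    x_hat = np.clip(x_hat, eps, 1.0 - eps)
    return np.mean(np.sum(-(x * np.log(x_hat) + (1 - x) * np.log(1 - x_hat)), axis=1))

def gaussian_reconstruction(x, x_hat):
    # Squared-error surrogate for continuous data (unit-variance Gaussian decoder, up to constants).
    return np.mean(np.sum(0.5 * (x - x_hat) ** 2, axis=1))

# Toy usage with hypothetical shapes: a batch of 8, data dimension 784, latent dimension 2.
rng = np.random.default_rng(0)
x = (rng.random((8, 784)) > 0.5).astype(float)
x_hat = rng.random((8, 784))
mu = rng.normal(size=(8, 2))
logvar = 0.1 * rng.normal(size=(8, 2))

loss = bernoulli_reconstruction(x, x_hat) + kl_to_standard_normal(mu, logvar)
print("negative ELBO (binary decoder):", loss)
-->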