<?xml version="1.0" encoding="utf-8" ?><rss version="2.0"><channel><title>Bing: Autoencoder Diagram with Encoder/Decoder</title><link>http://www.bing.com:80/search?q=Autoencoder+Diagram+with+Encoder%2fDecoder</link><description>Search results</description><image><url>http://www.bing.com:80/s/a/rsslogo.gif</url><title>Autoencoder Diagram with Encoder/Decoder</title><link>http://www.bing.com:80/search?q=Autoencoder+Diagram+with+Encoder%2fDecoder</link></image><copyright>Copyright © 2026 Microsoft. All rights reserved. These XML results may not be used, reproduced or transmitted in any manner or for any purpose other than rendering Bing results within an RSS aggregator for your personal, non-commercial use. Any other use of these results requires express written permission from Microsoft Corporation. By accessing this web page or using these results in any manner whatsoever, you agree to be bound by the foregoing restrictions.</copyright><item><title>What is an autoencoder? - Data Science Stack Exchange</title><link>https://datascience.stackexchange.com/questions/80389/what-is-an-autoencoder</link><description>The autoencoder then works by storing inputs in terms of where they lie on the linear image of . Observe that, absent the non-linear activation functions, an autoencoder essentially becomes equivalent to PCA, up to a change in basis. A useful exercise might be to consider why this is.</description><pubDate>Mon, 13 Apr 2026 11:09:00 GMT</pubDate></item><item><title>Extract encoder and decoder from trained autoencoder</title><link>https://stackoverflow.com/questions/52271644/extract-encoder-and-decoder-from-trained-autoencoder</link><description>Use this best model (manually selected by filename) and plot the original image, the encoded representation made by the encoder of the autoencoder, and the prediction made using the decoder of the autoencoder. 
I have problems (see the second step) extracting the encoder and decoder layers from the trained and saved autoencoder.</description><pubDate>Sat, 11 Apr 2026 10:07:00 GMT</pubDate></item><item><title>Reconstruction error per feature for autoencoders? - Stack Overflow</title><link>https://stackoverflow.com/questions/76204877/reconstruction-error-per-feature-for-autoencoders</link><description>A great resource for learning about autoencoders is the Deep Learning book (Goodfellow). Your code is not working because reconstructed_output = model.predict(x_test) is actually not the reconstructed output but the encoded output at the latent-space level.</description><pubDate>Sat, 11 Apr 2026 12:44:00 GMT</pubDate></item><item><title>How is a linear autoencoder equal to PCA? - Stack Overflow</title><link>https://stackoverflow.com/questions/42607471/how-is-a-linear-autoencoder-equal-to-pca</link><description>How is a linear autoencoder equal to PCA? Asked 9 years, 1 month ago Modified 7 years, 11 months ago Viewed 14k times</description><pubDate>Tue, 14 Apr 2026 22:35:00 GMT</pubDate></item><item><title>Graph Autoencoder with PyTorch-Geometric - Stack Overflow</title><link>https://stackoverflow.com/questions/70905627/graph-autoencoder-with-pytorch-geometric</link><description>I would recommend checking out the graph autoencoder example, but this aims to reconstruct the adjacency matrix. Another option could be to replace the GAT with models specifically designed for point clouds, see here.</description><pubDate>Sun, 05 Apr 2026 23:00:00 GMT</pubDate></item><item><title>python - LSTM Autoencoder - Stack Overflow</title><link>https://stackoverflow.com/questions/44647258/lstm-autoencoder</link><description>I'm trying to build an LSTM autoencoder with the goal of getting a fixed-size vector from a sequence, which represents the sequence as well as possible. 
This autoencoder consists of two parts: LSTM</description><pubDate>Mon, 13 Apr 2026 13:11:00 GMT</pubDate></item><item><title>Does it make sense to train a CNN as an autoencoder?</title><link>https://datascience.stackexchange.com/questions/17737/does-it-make-sense-to-train-a-cnn-as-an-autoencoder</link><description>So, does anyone know if I could just pretrain a CNN as if it were a "crippled" autoencoder, or would that be pointless? Should I be considering some other architecture, like a deep belief network, for instance?</description><pubDate>Fri, 10 Apr 2026 23:58:00 GMT</pubDate></item><item><title>What is the difference between an autoencoder and an encoder-decoder?</title><link>https://datascience.stackexchange.com/questions/53979/what-is-the-difference-between-an-autoencoder-and-an-encoder-decoder</link><description>I want to know if there is a difference between an autoencoder and an encoder-decoder.</description><pubDate>Sat, 11 Apr 2026 07:22:00 GMT</pubDate></item><item><title>python - Autoencoder general questions and poor loss - Data Science ...</title><link>https://datascience.stackexchange.com/questions/112130/autoencoder-general-questions-and-poor-loss</link><description>I'm trying to get a simple autoencoder working on the iris dataset to explore autoencoders at a basic level. However, I'm running into an issue where the model's loss is extremely high (&amp;gt;20). Can</description><pubDate>Fri, 10 Apr 2026 19:34:00 GMT</pubDate></item><item><title>How to extract features from the encoded layer of an autoencoder?</title><link>https://datascience.stackexchange.com/questions/64412/how-to-extract-features-from-the-encoded-layer-of-an-autoencoder</link><description>Basically, my idea was to use the autoencoder to extract the most relevant features from the original data set. 
However, so far I have only managed to get the autoencoder to compress the data, without really understanding what the most important features are.</description><pubDate>Sat, 11 Apr 2026 03:48:00 GMT</pubDate></item></channel></rss>