<?xml version="1.0" encoding="utf-8" ?><rss version="2.0"><channel><title>Bing: Quantization Using a DC</title><link>http://www.bing.com:80/search?q=Quantization+Using+a+DC</link><description>Search results</description><image><url>http://www.bing.com:80/s/a/rsslogo.gif</url><title>Quantization Using a DC</title><link>http://www.bing.com:80/search?q=Quantization+Using+a+DC</link></image><copyright>Copyright © 2026 Microsoft. All rights reserved. These XML results may not be used, reproduced or transmitted in any manner or for any purpose other than rendering Bing results within an RSS aggregator for your personal, non-commercial use. Any other use of these results requires express written permission from Microsoft Corporation. By accessing this web page or using these results in any manner whatsoever, you agree to be bound by the foregoing restrictions.</copyright><item><title>Quantization (signal processing) - Wikipedia</title><link>https://en.wikipedia.org/wiki/Quantization_(signal_processing)</link><description>In mathematics and digital signal processing, quantization is the process of mapping input values from a large set (often a continuous set) to output values in a (countable) smaller set, often with a finite number of elements. 
Rounding and truncation are typical examples of quantization processes.</description><pubDate>Wed, 08 Apr 2026 00:16:00 GMT</pubDate></item><item><title>What is Quantization - GeeksforGeeks</title><link>https://www.geeksforgeeks.org/deep-learning/quantization-in-deep-learning/</link><description>Quantization is a model optimization technique that reduces the precision of numerical values such as weights and activations in models to make them faster and more efficient.</description><pubDate>Mon, 20 Apr 2026 04:13:00 GMT</pubDate></item><item><title>Model Quantization: Concepts, Methods, and Why It Matters</title><link>https://developer.nvidia.com/blog/model-quantization-concepts-methods-and-why-it-matters/</link><description>Quantization has emerged as a crucial technique to address this challenge, enabling resource-intensive models to run on constrained hardware. The NVIDIA TensorRT and Model Optimizer tools simplify the quantization process, maintaining model accuracy while improving efficiency.</description><pubDate>Tue, 21 Apr 2026 22:34:00 GMT</pubDate></item><item><title>What Is Quantization? | How It Works &amp; Applications</title><link>https://www.mathworks.com/discovery/quantization.html</link><description>Quantization is the process of mapping continuous infinite values to a smaller set of discrete finite values. In the context of simulation and embedded computing, it is about approximating real-world values with a digital representation that introduces limits on the precision and range of a value.</description><pubDate>Wed, 22 Apr 2026 05:15:00 GMT</pubDate></item><item><title>What is quantization? - IBM</title><link>https://www.ibm.com/think/topics/quantization</link><description>Quantization is the process of reducing the precision of a digital signal, typically from a higher-precision format to a lower-precision format. 
This technique is widely used in various fields, including signal processing, data compression and machine learning.</description><pubDate>Wed, 22 Apr 2026 05:08:00 GMT</pubDate></item><item><title>Quantization from the ground up | ngrok blog</title><link>https://ngrok.com/blog/quantization</link><description>A complete guide to what quantization is, how it works, and how it's used to compress large language models</description><pubDate>Tue, 21 Apr 2026 04:04:00 GMT</pubDate></item><item><title>LLM Quantization Explained: INT4, INT8, FP8, AWQ, and GPTQ in 2026</title><link>https://vrlatech.com/llm-quantization-explained-int4-int8-fp8-awq-and-gptq-in-2026/</link><description>Quantization Methods: AWQ, GPTQ, GGUF, and More AWQ — Activation-Aware Weight Quantization AWQ analyzes activation patterns during calibration to identify the most important weights — those that have the highest impact on output quality — and protects them from aggressive quantization. Less important weights are quantized more aggressively.</description><pubDate>Tue, 21 Apr 2026 15:24:00 GMT</pubDate></item><item><title>What Is Quantization? Optimizing Data Compression - Coursera</title><link>https://www.coursera.org/articles/quantization</link><description>Quantization converts high-precision data into lower-precision data by compressing it to reduce data loss. By optimizing quantization, you can reduce your model's computational burden while keeping key functions intact.</description><pubDate>Mon, 20 Apr 2026 17:20:00 GMT</pubDate></item><item><title>Digital Communication - Quantization - Online Tutorials Library</title><link>https://www.tutorialspoint.com/digital_communication/digital_communication_quantization.htm</link><description>The quantizing of an analog signal is done by discretizing the signal with a number of quantization levels. 
Quantization is representing the sampled values of the amplitude by a finite set of levels, which means converting a continuous-amplitude sample into a discrete-amplitude value.</description><pubDate>Tue, 21 Apr 2026 21:01:00 GMT</pubDate></item><item><title>Fundamentals of Quantization</title><link>https://ee.stanford.edu/~gray/shortcourse.pdf</link><description>Quantization ideas with weighted quadratic distortion measures have applications outside of traditional data compression, especially to statistical classification, clustering, and machine learning.</description><pubDate>Wed, 22 Apr 2026 03:49:00 GMT</pubDate></item></channel></rss>