<?xml version="1.0" encoding="utf-8" ?><rss version="2.0"><channel><title>Bing: Active Learning Graph</title><link>http://www.bing.com:80/search?q=Active+Learning+Graph</link><description>Search results</description><image><url>http://www.bing.com:80/s/a/rsslogo.gif</url><title>Active Learning Graph</title><link>http://www.bing.com:80/search?q=Active+Learning+Graph</link></image><copyright>Copyright © 2026 Microsoft. All rights reserved. These XML results may not be used, reproduced or transmitted in any manner or for any purpose other than rendering Bing results within an RSS aggregator for your personal, non-commercial use. Any other use of these results requires express written permission from Microsoft Corporation. By accessing this web page or using these results in any manner whatsoever, you agree to be bound by the foregoing restrictions.</copyright><item><title>Class-Balanced and Reinforced Active Learning on Graphs</title><link>https://arxiv.org/abs/2402.10074</link><description>Graph neural networks (GNNs) have demonstrated significant success in various applications, such as node classification, link prediction, and graph classification. Active learning for GNNs aims to query the valuable samples from the unlabeled data for annotation to maximize the GNNs' performance at a lower cost. However, most existing algorithms for reinforced active learning in GNNs may lead ...</description><pubDate>Wed, 25 Mar 2026 20:36:00 GMT</pubDate></item><item><title>Disentangled Active Learning on Graphs - ScienceDirect</title><link>https://www.sciencedirect.com/science/article/pii/S0893608025000097</link><description>Active learning on graphs (ALG) has emerged as a compelling research field due to its capacity to address the challenge of label scarcity. 
Existing ALG methods incorporate diversity into their query strategies to maximize the gains from node sampling, improving robustness and reducing redundancy in graph learning.</description><pubDate>Thu, 02 Apr 2026 15:04:00 GMT</pubDate></item><item><title>When Contrastive Learning Meets Active Learning: A Novel Graph Active ...</title><link>https://arxiv.org/abs/2010.16091</link><description>This paper studies active learning (AL) on graphs, whose purpose is to discover the most informative nodes to maximize the performance of graph neural networks (GNNs). Previously, most graph AL methods focused on learning node representations from a carefully selected labeled dataset, with a large amount of unlabeled data neglected. Motivated by the success of contrastive learning (CL), we propose ...</description><pubDate>Wed, 01 Oct 2025 23:56:00 GMT</pubDate></item><item><title>Improving Graph Neural Networks by combining active learning ... - Springer</title><link>https://link.springer.com/article/10.1007/s10618-023-00959-z</link><description>In this paper, we propose a novel framework, called STAL, which makes use of unlabeled graph data, through a combination of Active Learning and Self-Training, in order to improve node labeling by Graph Neural Networks (GNNs). GNNs have been shown to perform well on many tasks when sufficient labeled data are available. Such data, however, are often scarce, leading to the need for methods that ...</description><pubDate>Mon, 30 Mar 2026 08:27:00 GMT</pubDate></item><item><title>Alfa: active learning for graph neural network-based semantic schema ...</title><link>https://link.springer.com/article/10.1007/s00778-023-00822-z</link><description>Semantic schema alignment aims to match elements across a pair of schemas based on their semantic representation. It is a key primitive for data integration that facilitates the creation of a common data fabric across heterogeneous data sources. 
Deep learning approaches such as graph representation learning have shown promise for effective alignment of semantically rich schemas, often captured ...</description><pubDate>Fri, 03 Apr 2026 11:49:00 GMT</pubDate></item><item><title>Active Learning for Graph Neural Networks via Node Feature Propagation</title><link>https://grlearning.github.io/papers/46.pdf</link><description>Graph Neural Networks (GNNs) for prediction tasks like node classification or edge prediction have received increasing attention in recent machine learning on graph-structured data. However, a large quantity of node labels is difficult to obtain, which significantly limits the success of GNNs. Although active learning has been widely studied for addressing label-sparse ...</description><pubDate>Sat, 04 Apr 2026 18:15:00 GMT</pubDate></item><item><title>GANDALF: Graph-based transformer and Data Augmentation Active Learning ...</title><link>https://www.sciencedirect.com/science/article/pii/S1361841523003353</link><description>We also anticipate that transformers will play a greater role in graph-based interpretability and active learning tasks. Hence, our future focus will be on exploiting the properties of graph attention transformers to learn more powerful graph representations on multi-omics datasets combining imaging and non-imaging information.</description><pubDate>Sun, 01 Mar 2026 21:47:00 GMT</pubDate></item><item><title>Partition and Learned Clustering with joined-training: Active learning ...</title><link>https://www.sciencedirect.com/science/article/pii/S0950705122011431</link><description>To address these problems, we propose a novel cluster-based active learning method for GNNs on large-scale graphs, called Partition and Learned Clustering with Joined-training (PLCJ). 
PLCJ first partitions the graph into several subgraphs, then clusters the nodes within each subgraph and selects the cluster centers.</description><pubDate>Thu, 12 Mar 2026 06:32:00 GMT</pubDate></item><item><title>Attention-based Graph Coreset Labeling for Active Learning</title><link>https://openreview.net/forum?id=t7vXubuady</link><description>Existing graph active learning methods employ different heuristic approaches; while sometimes efficient, they fail to explicitly explore the influence of labeled data on unlabeled data, thus limiting the generalizability of graph models to various types of graph data.</description><pubDate>Fri, 20 Mar 2026 01:08:00 GMT</pubDate></item><item><title>Information Gain Propagation: A New Way to Graph Active Learning with ...</title><link>https://machinelearning.apple.com/research/gain-propagation</link><description>Graph Neural Networks (GNNs) have achieved great success in various tasks, but their performance relies heavily on a large number of labeled nodes, which typically requires considerable human effort. GNN-based Active Learning (AL) methods have been proposed to improve labeling efficiency by selecting the most valuable nodes to label. Existing methods assume an oracle can correctly categorize all ...</description><pubDate>Sun, 05 Apr 2026 19:32:00 GMT</pubDate></item></channel></rss>