
Alongside text-based large language models (LLMs) such as ChatGPT in various industrial fields, GNN (Graph Neural Network)-based graph AI models that analyze unstructured data such as financial transactions, stocks, social media, and patient records in graph form are widely used. However, there is a limitation in that full-graph learning, which trains on the entire graph at once, requires massive memory and GPU servers.
A KAIST research team has now succeeded in developing software technology that can train large-scale GNN models at maximum speed using only a single GPU server. The study is published in the Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2.
The research team, led by Professor Min-Soo Kim of the School of Computing, developed FlexGNN, a GNN system that, unlike existing methods requiring multiple GPU servers, can quickly train and infer large-scale full-graph AI models on a single GPU server. FlexGNN improves training speed by up to 95 times compared to existing technologies.
Recently, in various fields such as climate, finance, medicine, pharmaceuticals, manufacturing, and distribution, there has been a growing number of cases where data is converted into graph form, consisting of nodes and edges, for analysis and prediction.
While the full-graph approach, which uses the entire graph for training, achieves higher accuracy, it has the drawback of frequently running out of memory due to the large intermediate data generated during training, as well as prolonged training times caused by data communication between multiple servers.
To overcome these problems, FlexGNN performs optimal AI model training on a single GPU server by utilizing SSDs (solid-state drives) and main memory instead of multiple GPU servers.
By applying query optimization techniques of the kind used to improve the performance of database systems, the team developed a new training optimization technology that moves and computes model parameters, training data, and intermediate data across the GPU, main memory, and SSD layers at the optimal time and in the optimal manner.
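The article does not detail the exact data-placement mechanism, but the general idea of spilling tensors across GPU memory, main memory, and SSD can be sketched as below. This is a minimal, hypothetical PyTorch illustration (the class, budgets, and policy are invented for this example), not FlexGNN's actual implementation.

```python
# Hypothetical sketch: keep tensors on the GPU while there is room,
# fall back to main memory, and finally spill to an SSD-backed file.
import os
import tempfile
import torch


class TieredTensorStore:
    def __init__(self, gpu_budget_bytes, cpu_budget_bytes, ssd_dir=None):
        self.gpu_budget = gpu_budget_bytes
        self.cpu_budget = cpu_budget_bytes
        self.ssd_dir = ssd_dir or tempfile.mkdtemp(prefix="gnn_spill_")
        self.gpu_used = 0
        self.cpu_used = 0
        self.items = {}  # key -> ("gpu" | "cpu" | "ssd", tensor or file path)

    def put(self, key, tensor):
        nbytes = tensor.element_size() * tensor.nelement()
        if torch.cuda.is_available() and self.gpu_used + nbytes <= self.gpu_budget:
            self.items[key] = ("gpu", tensor.cuda())       # fastest tier
            self.gpu_used += nbytes
        elif self.cpu_used + nbytes <= self.cpu_budget:
            self.items[key] = ("cpu", tensor.cpu())         # main memory
            self.cpu_used += nbytes
        else:
            path = os.path.join(self.ssd_dir, f"{key}.pt")
            torch.save(tensor.cpu(), path)                  # spill to SSD
            self.items[key] = ("ssd", path)

    def get(self, key, device="cpu"):
        tier, obj = self.items[key]
        tensor = torch.load(obj) if tier == "ssd" else obj
        return tensor.to(device)
```

The trade-off such a scheme makes is extra data movement in exchange for fitting training that would otherwise need multiple GPU servers onto one machine, which is why the timing of those transfers has to be planned carefully.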
As a result, FlexGNN flexibly generates optimal training execution plans according to factors such as data size, model scale, and available GPU memory, thereby achieving high resource efficiency and training speed.
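As a toy illustration of how an execution plan might adapt to available resources, the greedy sketch below assigns each layer's intermediate data to the fastest tier that still has room. The sizes, budgets, and policy are invented for this example and are not taken from the paper.

```python
# Toy planner (not FlexGNN's actual algorithm): decide, per layer, whether
# its intermediate data lives on the GPU, in main memory, or on SSD.
def plan_placement(layer_bytes, gpu_budget_bytes, cpu_budget_bytes):
    """Greedily place each layer's data in the fastest tier with room left."""
    plan, gpu_used, cpu_used = {}, 0, 0
    for layer, nbytes in sorted(layer_bytes.items(), key=lambda kv: kv[1]):
        if gpu_used + nbytes <= gpu_budget_bytes:
            plan[layer], gpu_used = "gpu", gpu_used + nbytes
        elif cpu_used + nbytes <= cpu_budget_bytes:
            plan[layer], cpu_used = "cpu", cpu_used + nbytes
        else:
            plan[layer] = "ssd"
    return plan


# Example with made-up per-layer sizes and budgets (in bytes).
print(plan_placement({"layer1": 8e9, "layer2": 4e9, "layer3": 2e9},
                     gpu_budget_bytes=6e9, cpu_budget_bytes=8e9))
```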
Consequently, it became possible to train GNN models on data far exceeding main memory capacity, and training could be up to 95 times faster even on a single GPU server. In particular, full-graph AI capable of more precise analysis than supercomputers in applications such as climate prediction has now become a reality.
Professor Kim stated, "As full-graph GNN models are actively used to solve complex problems such as weather prediction and new material discovery, the importance of related technologies continues to grow."
He added, "Since FlexGNN has dramatically solved the longstanding problems of training scale and speed in graph AI models, we expect it to be widely used across various industries."
In this research, Jeongmin Bae, a doctoral student in the School of Computing at KAIST, participated as the first author; Donghyoung Han, CTO of GraphAI Co. (founded by Professor Kim), participated as the second author; and Professor Kim served as the corresponding author.
More information:
Jeongmin Bae et al, FlexGNN: A High-Performance, Large-Scale Full-Graph GNN System with Best-Effort Training Plan Optimization, Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2 (2025). DOI: 10.1145/3711896.3736964
The Korea Advanced Institute of Science and Technology (KAIST)
Citation:
Graph analysis AI model achieves training up to 95 times faster on a single GPU (2025, August 15)
retrieved 21 August 2025
from https://techxplore.com/news/2025-08-graph-analysis-ai-faster-gpu.html