KEYNOTE SPEAKERS

 

Ce Zhu, University of Electronic Science & Technology of China
IEEE/Optica/IET/AAIA Fellow, Changjiang Distinguished Professor

Dean, Glasgow College, University of Electronic Science and Technology of China (UESTC), China

 

Ce Zhu is a Changjiang (Cheung Kong) Distinguished Professor at the University of Electronic Science and Technology of China (UESTC), China, and has served as the Dean of Glasgow College, a joint school between the University of Glasgow, UK, and UESTC, since 2022. He has also been an Affiliate Professor at the James Watt School of Engineering, University of Glasgow, UK, since 2023. His research interests lie in the general areas of visual information processing and multimedia signal processing and systems, specializing in image/video coding and processing, 3D video, and (AI-enabled) visual analysis, perception, and applications.

He is a Fellow of IEEE (2017) and a Fellow of Optica (2024). He was an APSIPA Distinguished Lecturer (2021-2022), and also an IEEE Distinguished Lecturer of Circuits and Systems Society (2019-2020). He is now serving as the Chair of IEEE ICME Steering Committee (2024-2025), and the Chair of IEEE Chengdu Section (2024-2028). He is a co-recipient of over 10 paper/demo awards at international conferences, including the most recent Best Demo Award in IEEE ICME 2025 and in IEEE MMSP 2022, Best Paper Award in IEEE BMSB 2025, and Best Paper Runner-Up Award in IEEE ICME 2020.

 

Title: Deep-Learning-Empowered Super-Resolution: Architectures and Efficiency

Abstract: The pursuit of higher performance in deep-learning-empowered super-resolution has led to increasingly complex models, creating a central challenge of balancing reconstruction quality with computational efficiency. The talk begins with a systematic review of the field's evolution, highlighting the key architectural shifts and the resulting trade-offs between model performance and complexity. It then presents a novel architecture designed to improve this trade-off, achieving higher reconstruction quality with greater efficiency. Finally, the talk explores model compression, introducing an effective post-training quantization strategy that minimizes performance loss, thereby improving the practicality of super-resolution models.
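To give a feel for the post-training quantization idea mentioned in the abstract, the sketch below shows the simplest variant: symmetric per-tensor int8 quantization of a trained weight tensor, with no retraining. This is a generic illustration, not the speaker's actual strategy; the function names are hypothetical.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor post-training quantization to 8-bit integers."""
    max_abs = float(np.abs(w).max())
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map the int8 weights back to float for inference-time use."""
    return q.astype(np.float32) * scale

# Round-trip a toy weight tensor and bound the quantization error.
w = np.random.randn(64, 64).astype(np.float32)
q, scale = quantize_int8(w)
err = float(np.abs(w - dequantize(q, scale)).max())
assert err <= scale / 2 + 1e-6  # rounding error is at most half a step
```

Real schemes improve on this with per-channel scales and calibration data to pick clipping ranges, which is where most of the accuracy preservation comes from.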


Hongyan Zhang, China University of Geosciences, Wuhan, China

Dean, School of Computer Science, CUG (Wuhan)

Hongyan Zhang, born in 1983, received his Ph.D. from Wuhan University, where he served as a professor and doctoral supervisor, and is a Young Scholar of the Ministry of Education's Changjiang Scholars Award Program. His research focuses on high-resolution remote sensing, intelligent remote sensing information processing, and agricultural remote sensing. He has led four National Natural Science Foundation of China projects and two provincial- and ministerial-level research projects. He has published or had accepted more than 90 papers in academic journals and conferences at home and abroad, including 48 SCI-indexed papers, 26 EI-indexed papers, 2 ESI hot papers (top 0.1% globally in geosciences), and 5 ESI highly cited papers (top 1% globally in geosciences and Elsevier annual hot papers). He has published one academic monograph and applied for or been granted five national invention patents, and his papers have been cited more than 2,300 times. His research results won the First Prize of the 2017 National Surveying and Mapping Science and Technology Progress Award, the Second Prize of the 2018 Hubei Natural Science Award, and the 2019 IEEE Geoscience and Remote Sensing Society Data Fusion Contest. He has been selected for the Ministry of Education's Changjiang Scholars Award Program, the first cohort of the China Scholarship Council's Future Scientist Program, and Wuhan University's 351 Talent Plan. A Senior Member of IEEE, he serves as an associate editor of SCI journals such as PE&RS, Computers & Geosciences, and IEEE Access, as a chair of international conferences such as IEEE IGARSS and IEEE WHISPERS, and as a reviewer for 38 international SCI journals including IEEE TIP, IEEE TGRS, IEEE TCYB, and IEEE JSTARS.


INVITED SPEAKERS


Yao Lu

Sun Yat-sen University, China

Yao Lu is a full professor at the School of Computer Science and Engineering, Sun Yat-sen University in Guangzhou, China. He is a director of the Institute of Scientific Computing and a member of the Standing Committee of the 10th Guangdong Provincial Association for Science and Technology. Prof. Yao Lu studied at the University of Science and Technology of China, the Institute of Mathematics and Systems Science of the Chinese Academy of Sciences, and Syracuse University. After graduating with a Ph.D., he conducted research on breast cancer medical imaging at the University of Michigan, where he was a Postdoctoral Research Fellow and Research Investigator at the Medical School. His research interests are in medical imaging, image processing, and medical image analysis. His research has been supported by 16 grants, including from the National Natural Science Foundation of China, the Department of Education of China, and the Department of Science and Technology of Guangdong Province, China. He has published more than 140 journal and conference papers. He has been selected for the National Talent Plan Youth Project and as a "Yixian Outstanding Scholar" of Sun Yat-sen University.

Invited Speech Title: Applications of Generative AI in Medical Imaging
Abstract: Thanks to its superior soft-tissue contrast and absence of ionizing radiation, magnetic resonance imaging (MRI) is widely used in tumor diagnosis and MR-guided radiotherapy. However, clinically routine 1.5 T images often suffer from insufficient quality, and the lack of contrast-enhanced sequences limits accurate contouring. To address this, we introduce a generative-AI approach that synthesizes 3.0 T MR images from standard 1.5 T acquisitions and generates contrast-enhanced images from non-contrast scans, while exploiting complementary multi-sequence features to enhance model robustness. By integrating feature-consistency constraints, attention mechanisms, and multi-task learning, the proposed method significantly improves both tumor delineation and image quality, offering a viable technical route for MR-guided adaptive radiotherapy.


Hongyuan Jing

Beijing Union University, China

Hongyuan Jing is currently an Associate Professor at the School of Artificial Intelligence, Beijing Union University, China. He received his Ph.D. in Engineering from the Communications and Embedded Systems Laboratory at the University of Leicester, UK, in 2019. He serves as a technical program committee member and session chair for several international conferences such as ICGIP and ICCCS. His research interests include computer vision, deep learning, object detection, image restoration, and multi-sensor fusion. He has published more than 30 papers in SCI-indexed journals including IEEE TIM and IoT. He has led or participated in multiple research projects funded by the EU FP7 program, China's National Key R&D Program, the Beijing Natural Science Foundation, and industry partners. He won the championship in the CVPR-NTIRE Multi-Scenario Raindrop Removal Challenge. He also serves as a reviewer for journals such as MS and TVC, among others.

Invited Speech Title: KA-DMSF: Kernel Attention and Dynamic Multi-Shape Filtering for Real-World Image Dehazing
Abstract: This paper proposes an efficient image dehazing network named KA-DMSF. By introducing depth-wise separable convolutions and channel attention mechanisms, this work significantly reduces the number of parameters and computational cost while maintaining strong feature representation capabilities. Additionally, we design a composite loss function that combines multi-scale L1 loss and frequency domain (FFT) loss, enabling the model to simultaneously recover detailed textures and global structural information in images. Experimental results on multiple public datasets demonstrate that the proposed method achieves state-of-the-art performance in both subjective visual quality and objective metrics (such as PSNR and SSIM), while also providing faster inference speed. This offers an efficient and practical solution for image dehazing tasks.
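The composite loss described above (multi-scale L1 plus a frequency-domain FFT term) can be sketched in a few lines. This is a generic reconstruction of the general recipe, not the authors' exact formulation; the scale count and `fft_weight` are assumed placeholder values.

```python
import numpy as np

def l1(a, b):
    """Mean absolute error between two arrays."""
    return float(np.mean(np.abs(a - b)))

def downsample(x):
    """2x2 average pooling (assumes even height and width)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def composite_loss(pred, target, scales=3, fft_weight=0.1):
    """Multi-scale L1 loss plus an L1 penalty on FFT magnitude spectra."""
    loss, p, t = 0.0, pred, target
    for _ in range(scales):
        loss += l1(p, t)                      # spatial term at this scale
        p, t = downsample(p), downsample(t)   # move to the next coarser scale
    # Frequency-domain term: matching magnitude spectra encourages the
    # model to recover global structure, not just local textures.
    loss += fft_weight * l1(np.abs(np.fft.fft2(pred)),
                            np.abs(np.fft.fft2(target)))
    return loss
```

The design intuition is complementary coverage: the pyramid of L1 terms supervises detail at several resolutions, while the FFT term penalizes errors in the global frequency content that pixel-wise losses weight only weakly.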


Yuanyuan Wu

Chengdu University of Technology, China

Yuanyuan Wu received the Ph.D. degree in Communication and Information Systems from Sichuan University, Chengdu, China, in 2016. From 2012 to 2014, she was an exchange student with the University of California, San Diego, CA, USA. She is an Associate Professor with the College of Computer Science and Cyber Security (Model Software College), Chengdu University of Technology. Her current research interests include computer vision, video and image processing, and microwave plasma visual diagnosis.

Invited Speech Title: Research on Fine-Grained Image Recognition Based on Collaboration of Large and Small Models
Abstract: Commodity image recognition in retail environments presents significant challenges due to the extensive diversity of product categories, high visual similarity among items, and various environmental disturbances. Conventional unimodal vision-based approaches often inadequately utilize textual information on product packaging, resulting in limited recognition robustness. This paper proposes a large model-guided multi-modal collaborative recognition paradigm, which employs a coarse-to-fine two-stage strategy. Initially, a large-scale vision model performs object localization and coarse-grained classification. Subsequently, for samples with low confidence scores, a lightweight adaptive fusion module is activated to integrate visual features with semantically processed OCR-derived text. A dynamic weighting mechanism evaluates the reliability of each modality to achieve optimal decision-making, thereby significantly enhancing recognition accuracy in complex scenarios. Experimental results on both the public RP2K dataset and our self-constructed large-scale commodity dataset demonstrate that the proposed method maintains high operational efficiency while substantially improving recognition accuracy and generalization capability, thereby providing an effective solution for fine-grained recognition in open-world environments.
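The coarse-to-fine gating and dynamic modality weighting described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the paper's implementation: `recognize` and `reliability` are hypothetical names, the confidence threshold is a placeholder, and reliability is modeled here as one minus the normalized entropy of each modality's class distribution.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reliability(p):
    """1 minus normalized entropy: near 1 for a peaked distribution,
    near 0 for a uniform (uninformative) one."""
    ent = float(-(p * np.log(p + 1e-12)).sum())
    return 1.0 - ent / np.log(len(p))

def recognize(vision_logits, text_logits, conf_threshold=0.8):
    """Coarse-to-fine decision: accept the vision-only prediction when it
    is confident; otherwise fuse vision and OCR-text scores with weights
    proportional to each modality's reliability."""
    pv = softmax(np.asarray(vision_logits, dtype=float))
    if pv.max() >= conf_threshold:
        return pv                      # coarse stage is confident enough
    pt = softmax(np.asarray(text_logits, dtype=float))
    wv, wt = reliability(pv), reliability(pt)
    return (wv * pv + wt * pt) / (wv + wt + 1e-12)
```

The key property is that the expensive (or noisier) text branch is consulted only for low-confidence samples, and a modality that produces a near-uniform distribution contributes little to the fused decision.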


Philippe Durand

CNAM- Paris, France

Philippe Durand is a Senior Lecturer in the Mathematics and Statistics Department of the National Conservatory of Arts and Crafts (CNAM), within the Mathematical and Numerical Modeling group (M2N). He works on the interaction between mathematical engineering and theoretical tools such as algebraic topology, whose use has grown steadily since the introduction of modern mathematics in the early sixties. He is interested in the mathematization of gauge theories in physics and string theory, and in the application of topological and statistical methods to image processing. In the past he worked on various pattern recognition methods, in particular the tools of mathematical morphology for the extraction of texture information. His recent work concerns the use of topological data analysis and different approaches to applying classical or quantum neural networks to image processing. He has published his results in various journals of applied mathematics, mathematical engineering, and computer science, and in the proceedings of several image processing conferences.

Invited Speech Title: Prediction of Meteorological Phenomena by a Quantum Neural Network and TDA Approach

Abstract: In this presentation, we propose original methods to predict the evolution of an exceptional storm disturbance, which evolved in the form of an arcus cloud between 6 a.m. and 8 a.m. and caused extensive damage on the west coast of Corsica on August 18, 2022. We compare the results of a hybrid architecture consisting of an LSTM neural network and a classical CNN against a quantum neural network (QNN) that exploits quantum circuits, applied to a series of EUMETSAT satellite data from Météo-France and C.A.P.E. data. The prediction from the hybrid quantum neural network is of better quality than that of the classical approach, and above all faster. This approach is intended to be combined with our previous work on topological data analysis (TDA) and persistence diagrams.


Shuaifeng Zhi

National University of Defense Technology (NUDT), China

Homepage: https://shuaifengzhi.

Shuaifeng Zhi is currently a Lecturer (Assistant Professor) at the Department of Electronic Science and Technology, National University of Defense Technology (NUDT), China. He was selected for the 10th Youth Talent Support Program of the China Association for Science and Technology (CAST) and is a grantee of the Hunan Provincial Natural Science Foundation's Excellent Young Scientists Fund. He received his Ph.D. in computing research from the Dyson Robotics Laboratory, Imperial College London, in 2021, supervised by Prof. Andrew J. Davison and Prof. Stefan Leutenegger. He obtained his M.Sc.Eng. and B.Eng. degrees from NUDT in 2017 and 2015, respectively. He was also a CSC-funded six-month visiting student at the 5GIC, University of Surrey, in 2015. His current research interests focus on robot vision, particularly scene understanding, neural scene representation, and semantic SLAM. He has published more than 10 papers at venues including ICCV (Oral), CVPR, ICLR, IEEE T-PAMI, and IEEE RA-L.

Invited Speech Title: Neural 3D Scene Reconstruction and Understanding from Sparse Images

Abstract: Practical 3D scene perception of autonomous agents has long been hindered by a fundamental reality: our visual observations are inherently sparse and noisy. In this talk, we will present our recent investigations in how to use limited in-situ observations as well as imperfect priors to infer dense semantic maps and conduct novel view synthesis. We will also discuss outlooks in combining foundation models and geometric reasoning to achieve robust and generalizable scene understanding.


Lin Gao

Institute of Computing Technology, Chinese Academy of Sciences, China

Homepage: http://www.geometrylearning.com/

Lin Gao serves as the Deputy Director of the Ubiquitous Computing Systems Research Center at the Institute of Computing Technology, Chinese Academy of Sciences (ICT, CAS), where he is a Professor and Doctoral Supervisor. He also holds a professorship at the University of Chinese Academy of Sciences. His research focuses on computer graphics and 3D computer vision. He has published over 100 papers in journals and conferences such as SIGGRAPH, TPAMI, and TVCG. DeepFaceDrawing, developed by his team, is used in more than 180 countries and regions worldwide. He currently serves or has served as Co-Chair of the GDC Conference, Co-Chair of the SGP Conference, Co-Chair of the China 3DV Program Committee, a member of the SIGGRAPH Technical Papers Program Committee, Area Chair of CVPR and NeurIPS, an editorial board member of IEEE TVCG, and Secretary-General of the Intelligent Graphics Professional Committee of CSIG (China Society of Image and Graphics). His honors include the National Natural Science Foundation of China (NSFC) Outstanding Young Scholar Award, the Beijing Municipal Outstanding Young Scholar Award, and the Royal Society Newton Advanced Fellowship. He has also won the Asia Graphics Association (AG) Young Researcher Award, the Wu Wenjun Artificial Intelligence Outstanding Young Scholar Award, the CCF (China Computer Federation) Technology Invention First Prize, and the CCF CAD&CG Open Source Software Award, among other honors.

Invited Speech Title: Modeling and Neural Rendering with Hybrid Representations
Abstract: In this talk, we will analyze the latest advancements and corresponding challenges in Neural Radiance Fields (NeRF) and 3D Gaussian Splatting. We will explore methods for geometric reconstruction, large-scale editing, and material disentanglement of these representations. Additionally, the talk will examine the opportunities that video generation models present for 3D reconstruction and generation, sharing some implicit modeling approaches based on these video generation models.


Suphongsa Khetkeeree

Mahanakorn University of Technology, Thailand

Dr. Suphongsa Khetkeeree is the Director of the Excellence Center on Satellite and Communication and a Physics Lecturer at Mahanakorn University of Technology (MUT), Thailand. He specializes in remote sensing, satellite development, UAV and satellite applications. Additionally, he serves as a consultant for both the space industry and government agencies, providing expertise in strategic planning, technology development, and policy implementation.
Dr. Khetkeeree holds a Ph.D. and was promoted to Assistant Professor in Electrical Engineering at MUT. He also earned certifications in small satellite development from Beihang University (BUAA), Shanghai Jiao Tong University (SJTU), and Middle East Technical University (METU). He has served on numerous technical committees focused on image and signal processing as well as remote sensing, and has published over 30 international research papers. His academic and professional contributions continue to advance innovation and practical applications in space technology and related fields.

Actively engaged in satellite internet research in Thailand, Dr. Khetkeeree contributes to the enhancement of the nation’s digital infrastructure. He has also participated in multiple international space cooperation projects.

Invited Speech Title: Forest Fire & Agricultural Burning Mitigation: An Integrated AIoT and Remote Sensing Approach to Combat PM 2.5 in Thailand
Abstract: Wildfires and pervasive open agricultural burning are critical contributors to PM 2.5 air pollution in Thailand. This project introduces a comprehensive, evolving framework leveraging Artificial Intelligence of Things (AIoT) and advanced Remote Sensing to address these challenges across three wildfire management phases: before, during, and after, while providing dedicated solutions for agricultural burning identification and enforcement. This holistic approach aims to foster healthier, safer communities.
For wildfire management, our multi-faceted approach combines tiered detection with dynamic operational support. Before fires, we use open-source satellite data for broad-area hotspot monitoring, complemented by rough, real-time fire prediction from IoT gas/dust sensors via LoRa. For precise, localized alerts, we are developing AI-powered deep learning algorithms on edge computing devices connected to CCTV cameras, transmitting notifications and snapshots via LoRa communication. Real-time insights from these ground sensors and hotspot data guide targeted optical and SAR satellite imagery searches for fire movement and burn area maps. To overcome satellite limitations, we are developing drone-based systems for on-demand, real-time fire mapping from suitable sensors. During active wildfires, IoT devices support ground operations with firefighter tracking, voice communication, and real-time environmental data (wind, temperature, humidity, water reserve) via multi-band communication, including a 78 MHz radio communication system. Post-wildfire, our system supports damage assessment and future prevention planning.
A significant focus targets open agricultural burning, with a distinct approach centered on identification, monitoring, and legal enforcement. Our research on sugarcane field monitoring uses open-source optical and SAR imagery (i.e., Sentinel-1 and Sentinel-2) to identify post-harvest burned areas. These outputs serve as vital evidence for compliance monitoring and legal enforcement to mitigate PM 2.5. Select real-time AIoT detection systems from wildfire management are adaptable for immediate agricultural burning alerts.
Looking ahead, we aim to integrate all components into a seamless, interconnected system. A cornerstone is the planned LoRa satellite gateway, serving as a backbone to collect and relay data from all LoRa-enabled sensors (gas/dust, field support, AI-CCTV) to central command. This comprehensive communication infrastructure will support enhanced monitoring and alert dissemination in remote areas for various disaster missions, including wildfires and floods. Ultimately, this holistic, data-driven project transforms how we predict, monitor, respond to, and prevent open burning, directly contributing to mitigating pervasive PM 2.5 air pollution in Thailand.


WeChat Secretary: scan the QR code and send "ICGIP 2025"


Categories

Paper Template | Call for Papers | Leaflet | Nanjing (南京) | Submission

Contact ICGIP 2025

  • Contact Secretary: Ms. Robin Luo
  • Email : icgip_conf@vip.163.com
  • Tel : +86-182-2760-9313

Copyright Reserved - ICGIP - www.icgip.org