Query:
Scholar name: 冯金超 (Feng Jinchao)
Abstract :
Addressing the challenges posed by colorectal polyp variability and imaging inconsistencies in endoscopic images, we propose the multiscale feature fusion booster network (MFFB-Net), a novel deep learning (DL) framework for the semantic segmentation of colorectal polyps to aid in early colorectal cancer detection. Unlike prior models, such as the pyramid vision transformer-based cascaded attention decoder (PVT-CASCADE) and the parallel reverse attention network (PraNet), MFFB-Net enhances segmentation accuracy and efficiency through a unique fusion of multiscale feature extraction in both the encoder and decoder stages, coupled with a booster module for refining fine-grained details and a bottleneck module for efficient feature compression. The network leverages multipath feature extraction with skip connections, capturing both local and global contextual information, and is rigorously evaluated on seven benchmark datasets, including Kvasir, CVC-ClinicDB, CVC-ColonDB, ETIS, CVC-300, BKAI-IGH, and EndoCV2020. MFFB-Net achieves state-of-the-art (SOTA) performance, with Dice scores of 94.38%, 91.92%, 91.21%, 80.34%, 82.67%, 76.92%, and 74.29% on CVC-ClinicDB, Kvasir, CVC-300, ETIS, CVC-ColonDB, EndoCV2020, and BKAI-IGH, respectively, outperforming existing models in segmentation accuracy and computational efficiency. MFFB-Net achieves real-time processing speeds of 26 FPS with only 1.41 million parameters, making it well suited for real-world clinical applications. The results underscore the robustness of MFFB-Net, demonstrating its potential for real-time deployment in computer-aided diagnosis systems and setting a new benchmark for automated polyp segmentation.
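The abstract does not include implementation details, so the following is only a minimal PyTorch sketch of the general idea of multiscale feature fusion with a lightweight "booster" refinement; the parallel dilation rates, channel counts, and residual booster design are illustrative assumptions, not MFFB-Net's actual modules.

```python
# Hedged sketch: parallel multiscale paths fused by a 1x1 conv, followed by a
# residual "booster" refinement. All hyperparameters below are assumptions.
import torch
import torch.nn as nn

class MultiscaleFusionBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Parallel paths with different receptive fields (assumed dilations 1, 2, 4).
        self.paths = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d) for d in (1, 2, 4)
        ])
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, 1)
        # "Booster": a residual 3x3 refinement of the fused features.
        self.booster = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi = torch.cat([p(x) for p in self.paths], dim=1)
        fused = self.fuse(multi)
        return fused + self.booster(fused)  # residual refinement of fine details

if __name__ == "__main__":
    block = MultiscaleFusionBlock(64, 64)
    print(block(torch.randn(1, 64, 88, 88)).shape)  # torch.Size([1, 64, 88, 88])
```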
Keyword :
booster module; semantic segmentation; multiscale network; deep learning; polyp segmentation
Cite:
GB/T 7714 | Manan, Malik Abdul , Feng, Jinchao , Ahmed, Shahzad et al. Multiscale Feature Fusion Booster Network for Segmentation of Colorectal Polyp [J]. | INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY , 2025 , 35 (2) . |
MLA | Manan, Malik Abdul et al. "Multiscale Feature Fusion Booster Network for Segmentation of Colorectal Polyp" . | INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY 35 . 2 (2025) . |
APA | Manan, Malik Abdul , Feng, Jinchao , Ahmed, Shahzad , Raheem, Abdul . Multiscale Feature Fusion Booster Network for Segmentation of Colorectal Polyp . | INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY , 2025 , 35 (2) . |
Abstract :
Magnetic Resonance Imaging (MRI) is a crucial tool in medical diagnostics, yet reconstructing high-quality images from under-sampled k-space data poses significant challenges. This study introduces Federated Adversarial MRI Enhancement (FAME), a novel framework combining Federated Learning (FL) with Generative Adversarial Networks (GANs) to enhance MRI reconstruction while maintaining patient privacy. FAME utilizes a hybrid model aggregation strategy that dynamically weights updates from local generators, ensuring a balanced contribution based on dataset size and quality. Each local generator is trained on site-specific data, while a global discriminator evaluates and refines the aggregated updates to improve image quality. FAME addresses key issues in medical imaging, including data privacy, model generalization, and robustness, by integrating advanced GAN components such as multi-scale convolutions, attention mechanisms, and Graph Neural Networks (GNNs). Differential privacy and secure aggregation protocols are implemented to protect sensitive data during training. Extensive experiments using the fastMRI Brain and Knee datasets, along with the BraTS 2020 and IXI datasets, show that FAME outperforms existing models, achieving superior PSNR and SSIM values. This decentralized framework offers scalable, privacy-preserving MRI reconstruction, making it a promising solution for diverse clinical applications.
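As a rough illustration of the aggregation idea described above (weighting local generator updates by dataset size and a quality score), here is a hedged Python sketch; the weighting formula, function names, and state-dict averaging are assumptions, not FAME's actual aggregation rule.

```python
# Hedged sketch of weighted federated aggregation of client model weights.
from typing import Dict, List
import torch

def aggregate(states: List[Dict[str, torch.Tensor]],
              n_samples: List[int],
              quality: List[float]) -> Dict[str, torch.Tensor]:
    """Weighted average of client state_dicts (assumed weighting: size * quality)."""
    raw = [n * q for n, q in zip(n_samples, quality)]
    coeffs = [r / sum(raw) for r in raw]
    global_state = {}
    for key in states[0]:
        global_state[key] = sum(c * s[key].float() for c, s in zip(coeffs, states))
    return global_state
```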
Keyword :
Generative adversarial networks; Federated learning; Medical imaging; MRI reconstruction; Data privacy
Cite:
GB/T 7714 | Ahmed, Shahzad , Feng, Jinchao , Ferzund, Javed et al. FAME: A Federated Adversarial Learning Framework for Privacy-Preserving MRI Reconstruction [J]. | APPLIED MAGNETIC RESONANCE , 2025 . |
MLA | Ahmed, Shahzad et al. "FAME: A Federated Adversarial Learning Framework for Privacy-Preserving MRI Reconstruction" . | APPLIED MAGNETIC RESONANCE (2025) . |
APA | Ahmed, Shahzad , Feng, Jinchao , Ferzund, Javed , Yaqub, Muhammad , Ali, Muhammad Usman , Manan, Malik Abdul et al. FAME: A Federated Adversarial Learning Framework for Privacy-Preserving MRI Reconstruction . | APPLIED MAGNETIC RESONANCE , 2025 . |
Abstract :
Magnetic Resonance Imaging (MRI) is essential for high-resolution soft-tissue imaging but suffers from long acquisition times, limiting its clinical efficiency. Accelerating MRI through undersampling k-space data leads to ill-posed inverse problems, introducing noise and artifacts that degrade image quality. Conventional deep learning models, including conditional and unconditional approaches, often face challenges in generalization, particularly with variations in imaging operators or domain shifts. In this study, we propose PINN-DADif, a Physics-Informed Neural Network integrated with deep adaptive diffusion priors, to address these challenges in MRI reconstruction. PINN-DADif employs a two-phase inference strategy: an initial rapid-diffusion phase for fast preliminary reconstructions, followed by an adaptive phase where the diffusion prior is refined to ensure consistency with MRI physics and data fidelity. The inclusion of physics-based regularization through PINNs enhances the model's adherence to k-space constraints and gradient smoothness, leading to more accurate reconstructions. This adaptive approach reduces the number of iterations required compared to traditional diffusion models, improving both speed and image quality. We validated PINN-DADif on a private MRI dataset and the public fastMRI dataset, where it outperformed state-of-the-art methods. The model achieved PSNR values of 41.2, 39.5, and 41.5, and SSIM values of 98.7, 98.0, and 98.5 for T1, T2, and Proton Density-weighted images at R = 4x on the private dataset. Similar high performance was observed on the fastMRI dataset, even in scenarios involving domain shifts. PINN-DADif marks a significant advancement in MRI reconstruction by providing an efficient, adaptive, and physics-informed solution.
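The physics/data-fidelity constraint mentioned above typically takes the form of k-space data consistency. The PyTorch sketch below illustrates only that generic fidelity step under assumed inputs (single-coil Cartesian sampling, a binary mask); it is not PINN-DADif's full regularization.

```python
# Hedged sketch of a k-space data-consistency step for undersampled MRI.
import torch

def data_consistency(x_est: torch.Tensor, k_meas: torch.Tensor,
                     mask: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
    """x_est: complex image estimate; k_meas: measured k-space; mask: float sampling mask (1 = sampled)."""
    k_est = torch.fft.fft2(x_est)
    # Soft blend at sampled locations; lam -> infinity approaches hard replacement.
    k_dc = mask * (k_est + lam * k_meas) / (1.0 + lam) + (1.0 - mask) * k_est
    return torch.fft.ifft2(k_dc)
```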
Keyword :
Domain shift; MRI reconstruction; Physics-informed neural networks (PINN); Adaptive diffusion; Undersampled k-space
Cite:
GB/T 7714 | Ahmed, Shahzad , Feng, Jinchao , Mehmood, Atif et al. PINN-DADif: Physics-informed deep adaptive diffusion network for robust and efficient MRI reconstruction [J]. | DIGITAL SIGNAL PROCESSING , 2025 , 160 . |
MLA | Ahmed, Shahzad et al. "PINN-DADif: Physics-informed deep adaptive diffusion network for robust and efficient MRI reconstruction" . | DIGITAL SIGNAL PROCESSING 160 (2025) . |
APA | Ahmed, Shahzad , Feng, Jinchao , Mehmood, Atif , Ali, Muhammad Usman , Yaqub, Muhammad , Manan, Malik Abdul et al. PINN-DADif: Physics-informed deep adaptive diffusion network for robust and efficient MRI reconstruction . | DIGITAL SIGNAL PROCESSING , 2025 , 160 . |
Abstract :
Diffuse optical tomography (DOT) is a promising non-invasive optical imaging technology that can provide functional information about biological tissues. Because the diffused light undergoes multiple scattering in tissue and the boundary measurements are limited, the inverse problem of DOT is ill-posed and ill-conditioned. These limitations are often mitigated with regularization techniques, which combine a data-fitting term with a regularization term to suppress the effects of measurement noise and modeling errors. Tikhonov regularization, which uses the L2 norm as its regularization term, often leads to images that are excessively smooth. In recent years, with the continuous development of deep learning algorithms, many researchers have adopted model-based deep learning methods for reconstruction. However, DOT reconstruction is solved on a mesh arising from the finite element method, which makes it difficult to apply convolutional networks directly. Therefore, we propose a model-based graph convolutional network (Model-GCN). Overall, Model-GCN achieves better image reconstruction than Tikhonov regularization, with lower absolute bias error (ABE). Specifically, for total hemoglobin (HbT) and water, the average reduction in ABE is 68.3% and 77.3%, respectively, and the peak signal-to-noise ratio (PSNR) increases on average by 6.4 dB and 7.0 dB.
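For reference, the Tikhonov baseline that Model-GCN is compared against amounts to an L2-regularized linearized update. Below is a minimal NumPy sketch, assuming a precomputed Jacobian J and measurement residual dy; the Model-GCN itself is not reproduced here.

```python
# Minimal sketch of a Tikhonov-regularized Gauss-Newton update for DOT.
import numpy as np

def tikhonov_update(J: np.ndarray, dy: np.ndarray, lam: float) -> np.ndarray:
    """Solve (J^T J + lam*I) dx = J^T dy for the parameter update dx."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ dy)
```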
Keyword :
diffuse optical tomography; graph convolutional network; model-based
Cite:
GB/T 7714 | Wei, Chengpu , Li, Zhe , Hu, Ting et al. Model-based graph convolutional network for diffuse optical tomography [J]. | MULTIMODAL BIOMEDICAL IMAGING XIX , 2024 , 12834 . |
MLA | Wei, Chengpu et al. "Model-based graph convolutional network for diffuse optical tomography" . | MULTIMODAL BIOMEDICAL IMAGING XIX 12834 (2024) . |
APA | Wei, Chengpu , Li, Zhe , Hu, Ting , Sun, Zhonghua , Jia, Kebin , Feng, Jinchao . Model-based graph convolutional network for diffuse optical tomography . | MULTIMODAL BIOMEDICAL IMAGING XIX , 2024 , 12834 . |
Abstract :
Colorectal polyps are structural abnormalities of the gastrointestinal tract that can become cancerous in some cases. This study introduces a novel framework for colorectal polyp segmentation, the Multi-Scale and Multi-Path Cascaded Convolution Network (MMCC-Net), which addresses limitations of existing models, such as inadequate representation of spatial dependence and the absence of multi-level feature integration during decoding, by integrating multi-scale and multi-path cascaded convolutions and enhancing feature aggregation through dual attention modules, skip connections, and a feature enhancer. MMCC-Net achieves superior pixel-level identification of polyp areas. The proposed MMCC-Net was tested on six public datasets and compared against eight SOTA models to demonstrate its efficiency in polyp segmentation. Across the six databases, MMCC-Net's Dice scores, with confidence intervals, range from 77.43 ± 0.12 (77.08, 77.56) to 94.45 ± 0.12 (94.19, 94.71), and its Mean Intersection over Union (MIoU) scores range from 72.71 ± 0.19 (72.20, 73.00) to 90.16 ± 0.16 (89.69, 90.53). These results highlight the model's potential as a powerful tool for accurate and efficient polyp segmentation, contributing to early detection and prevention strategies in colorectal cancer.
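The Dice and MIoU figures quoted above are the standard overlap metrics for binary segmentation masks. A small NumPy sketch of how they are typically computed; the smoothing constant is an illustrative assumption, not the paper's exact evaluation code.

```python
# Dice and IoU for a binary polyp mask versus ground truth.
import numpy as np

def dice_iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7):
    """pred, gt: binary masks of the same shape; returns (dice, iou)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return dice, iou
```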
Keyword :
Feature aggregation; Semantic segmentation; Colorectal polyp; Attention modules; Cascaded convolution network
Cite:
GB/T 7714 | Manan, Malik Abdul , Feng, Jinchao , Yaqub, Muhammad et al. Multi-scale and multi-path cascaded convolutional network for semantic segmentation of colorectal polyps [J]. | ALEXANDRIA ENGINEERING JOURNAL , 2024 , 105 : 341-359 . |
MLA | Manan, Malik Abdul et al. "Multi-scale and multi-path cascaded convolutional network for semantic segmentation of colorectal polyps" . | ALEXANDRIA ENGINEERING JOURNAL 105 (2024) : 341-359 . |
APA | Manan, Malik Abdul , Feng, Jinchao , Yaqub, Muhammad , Ahmed, Shahzad , Imran, Syed Muhammad Ali , Chuhan, Imran Shabir et al. Multi-scale and multi-path cascaded convolutional network for semantic segmentation of colorectal polyps . | ALEXANDRIA ENGINEERING JOURNAL , 2024 , 105 , 341-359 . |
Abstract :
Purpose: This study introduces GraFMRI, a novel framework designed to address the challenges of reconstructing high-quality MRI images from undersampled k-space data. Traditional methods often suffer from noise amplification and loss of structural detail, leading to suboptimal image quality. GraFMRI leverages Graph Neural Networks (GNNs) to transform multi-modal MRI data (T1, T2, PD) into a graph-based representation, enabling the model to capture intricate spatial relationships and inter-modality dependencies.
Methods: The framework integrates Graph-Based Non-Local Means (NLM) Filtering for effective noise suppression and Adversarial Training to reduce artifacts. A dynamic attention mechanism enables the model to focus on key anatomical regions, even when fully-sampled reference images are unavailable. GraFMRI was evaluated on the IXI and fastMRI datasets using Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) as metrics for reconstruction quality.
Results: GraFMRI consistently outperforms traditional and self-supervised reconstruction techniques. Significant improvements in multi-modal fusion were observed, with better preservation of information across modalities. Noise suppression through NLM filtering and artifact reduction via adversarial training led to higher PSNR and SSIM scores across both datasets. The dynamic attention mechanism further enhanced the accuracy of the reconstructions by focusing on critical anatomical regions.
Conclusion: GraFMRI provides a scalable, robust solution for multi-modal MRI reconstruction, addressing noise and artifact challenges while enhancing diagnostic accuracy. Its ability to fuse information from different MRI modalities makes it adaptable to various clinical applications, improving the quality and reliability of reconstructed images.
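PSNR and SSIM, the metrics reported above, can be computed with scikit-image as in the sketch below; the image normalization and data range are assumptions, not the paper's exact evaluation protocol.

```python
# Hedged sketch of PSNR/SSIM evaluation for a reconstructed MRI slice.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(recon: np.ndarray, reference: np.ndarray):
    """recon, reference: 2-D magnitude images scaled to [0, 1]."""
    psnr = peak_signal_noise_ratio(reference, recon, data_range=1.0)
    ssim = structural_similarity(reference, recon, data_range=1.0)
    return psnr, ssim
```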
Keyword :
MRI reconstruction; Generative adversarial network; Medical imaging; Zero-shot learning; Graph neural network
Cite:
GB/T 7714 | Ahmed, Shahzad , Jinchao, Feng , Ferzund, Javed et al. GraFMRI: A graph-based fusion framework for robust multi-modal MRI reconstruction [J]. | MAGNETIC RESONANCE IMAGING , 2024 , 116 . |
MLA | Ahmed, Shahzad et al. "GraFMRI: A graph-based fusion framework for robust multi-modal MRI reconstruction" . | MAGNETIC RESONANCE IMAGING 116 (2024) . |
APA | Ahmed, Shahzad , Jinchao, Feng , Ferzund, Javed , Ali, Muhammad Usman , Yaqub, Muhammad , Manan, Malik Abdul et al. GraFMRI: A graph-based fusion framework for robust multi-modal MRI reconstruction . | MAGNETIC RESONANCE IMAGING , 2024 , 116 . |
Abstract :
Brain tumors, which are uncontrolled growths of brain cells, pose a threat to people worldwide. However, accurately classifying brain tumors with computerized methods has been difficult due to differences in tumor size, shape, and location, as well as limitations in the medical field. Improved precision is critical in detecting brain tumors, as small errors in human judgment can result in increased mortality rates. This paper proposes a new method for improving early detection and decision-making regarding brain tumor severity using learning methodologies. Clinical datasets are used to obtain benchmark images of brain tumors, which undergo pre-processing, data augmentation with a Generative Adversarial Network, and classification with an Adaptive Layer Cascaded ResNet (ALCResNet) optimized with Improved Border Collie Optimization (IBCO). The abnormal images are then segmented using the DeepLabV3 model and fed into the ALCResNet for final classification into Meningioma, Glioma, or Pituitary. The IBCO-based ALCResNet model outperforms other heuristic classifiers for brain tumor classification and severity estimation, with improvements ranging from 1.3% to 4.4% over COA-ALCResNet, DHOA-ALCResNet, MVO-ALCResNet, and BCO-ALCResNet. It also achieves higher accuracy than non-heuristic classifiers such as CNN, DNN, SVM, and ResNet, with improvements ranging from 2.4% to 3.6% for brain tumor classification and 0.9% to 3.8% for severity estimation. The proposed method offers an automated classification and grading system for brain tumors and improves the accuracy of brain tumor classification and severity estimation, promoting more precise decision-making regarding diagnosis and treatment.
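The two-stage flow described above (segment the abnormal region, then classify it) can be sketched with stock torchvision models as below; the paper's ALCResNet and IBCO optimization are not reproduced, and the stock DeepLabV3/ResNet stand-ins, class counts, and masking step are assumptions (requires torchvision >= 0.13).

```python
# Hedged sketch: DeepLabV3 segmentation followed by ResNet classification.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50
from torchvision.models import resnet50

segmenter = deeplabv3_resnet50(weights=None, weights_backbone=None, num_classes=2).eval()  # tumor vs background
classifier = resnet50(weights=None, num_classes=3).eval()  # meningioma / glioma / pituitary (stand-in)

with torch.no_grad():
    mri = torch.randn(1, 3, 224, 224)                      # placeholder MRI slice
    mask = segmenter(mri)["out"].argmax(1, keepdim=True)   # (1, 1, 224, 224) segmentation mask
    roi = mri * mask                                        # keep only the segmented region
    logits = classifier(roi)
print(logits.shape)  # torch.Size([1, 3])
```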
Keyword :
Adaptive Layer Cascaded ResNet; Generative Adversarial Network; Brain Tumor Grading System; DeepLabV3; Brain Tumor Classification System; Improved Border Collie Optimization
Cite:
GB/T 7714 | Yaqub, Muhammad , Jinchao, Feng , Ahmed, Shahzad et al. DeepLabV3, IBCO-based ALCResNet: A fully automated classification, and grading system for brain tumor [J]. | ALEXANDRIA ENGINEERING JOURNAL , 2023 , 76 : 609-627 . |
MLA | Yaqub, Muhammad et al. "DeepLabV3, IBCO-based ALCResNet: A fully automated classification, and grading system for brain tumor" . | ALEXANDRIA ENGINEERING JOURNAL 76 (2023) : 609-627 . |
APA | Yaqub, Muhammad , Jinchao, Feng , Ahmed, Shahzad , Mehmood, Atif , Chuhan, Imran Shabir , Manan, Malik Abdul et al. DeepLabV3, IBCO-based ALCResNet: A fully automated classification, and grading system for brain tumor . | ALEXANDRIA ENGINEERING JOURNAL , 2023 , 76 , 609-627 . |
Abstract :
Exudates are a common sign of diabetic retinopathy, a disease that affects the blood vessels in the retina. Early detection of exudates is critical to avoiding vision problems through continuous screening and treatment. In traditional clinical practice, the involved lesions are manually detected from fundus photographs. However, this task is cumbersome and time-consuming and requires intense effort due to the small size of the lesions and the low contrast of the images. Thus, computer-assisted diagnosis of retinal disease based on lesion detection has been actively explored recently. In this paper, we present a comparison of deep convolutional neural network (CNN) architectures and propose a residual CNN with residual skip connections to reduce the parameter count for the semantic segmentation of exudates in retinal images. A suitable image augmentation technique is used to improve the performance of the network architecture. The proposed network can robustly segment exudates with high accuracy, which makes it suitable for diabetic retinopathy screening. A comparative performance analysis on three benchmark databases, E-ophtha, DIARETDB1, and Hamilton Ophthalmology Institute's Macular Edema, is presented. The proposed method achieves a precision of 0.95, 0.92, 0.97; accuracy of 0.98, 0.98, 0.98; sensitivity of 0.97, 0.95, 0.95; specificity of 0.99, 0.99, 0.99; and area under the curve of 0.97, 0.94, and 0.96, respectively.
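A minimal PyTorch sketch of the residual skip-connection idea attributed to the encoder-decoder above; channel counts, normalization, and layer order are illustrative assumptions rather than the paper's exact blocks.

```python
# Hedged sketch of a residual block with an identity skip connection.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(x + self.body(x))  # identity skip keeps gradients flowing
```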
Keyword :
data augmentation; semantic segmentation; residual network; diabetic retinopathy; retinal image; convolution neural network; exudates
Cite:
GB/T 7714 | Manan, Malik Abdul , Jinchao, Feng , Khan, Tariq M. M. et al. Semantic segmentation of retinal exudates using a residual encoder-decoder architecture in diabetic retinopathy [J]. | MICROSCOPY RESEARCH AND TECHNIQUE , 2023 , 86 (11) : 1443-1460 . |
MLA | Manan, Malik Abdul et al. "Semantic segmentation of retinal exudates using a residual encoder-decoder architecture in diabetic retinopathy" . | MICROSCOPY RESEARCH AND TECHNIQUE 86 . 11 (2023) : 1443-1460 . |
APA | Manan, Malik Abdul , Jinchao, Feng , Khan, Tariq M. M. , Yaqub, Muhammad , Ahmed, Shahzad , Chuhan, Imran Shabir . Semantic segmentation of retinal exudates using a residual encoder-decoder architecture in diabetic retinopathy . | MICROSCOPY RESEARCH AND TECHNIQUE , 2023 , 86 (11) , 1443-1460 . |
Abstract :
This invention discloses a reconstruction method for near-infrared spectral tomography (NIRST) based on a graph convolutional neural network. The proposed deep learning framework builds a graph model of the irregularly structured imaging domain and incorporates the graph structure information into a graph convolutional neural network with an attention mechanism to extract features of the optical parameters at the graph nodes. The collected optical signals are used as the network input for end-to-end training, simultaneously recovering the concentrations of three chromophores: oxygenated hemoglobin, deoxygenated hemoglobin, and water. Experimental results show that the invention achieves accurate reconstruction of NIRST images.
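A minimal PyTorch sketch of a single graph-convolution layer over mesh nodes, of the kind the patent describes; the attention mechanism, graph construction, and full reconstruction pipeline are not reproduced, and the mean-aggregation rule here is an assumption.

```python
# Hedged sketch of one graph-convolution layer using a row-normalized adjacency matrix.
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    def __init__(self, in_feats: int, out_feats: int):
        super().__init__()
        self.lin = nn.Linear(in_feats, out_feats)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        """x: (N, in_feats) node features; adj: (N, N) adjacency with self-loops."""
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.lin((adj / deg) @ x))  # mean aggregation over neighbors
```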
Cite:
GB/T 7714 | 冯金超 , 苏琳轩 , 魏承朴 et al. 基于图卷积神经网络的近红外光谱层析成像重建方法 : CN202310513333.3[P]. | 2023-05-09 . |
MLA | 冯金超 et al. "基于图卷积神经网络的近红外光谱层析成像重建方法" : CN202310513333.3. | 2023-05-09 . |
APA | 冯金超 , 苏琳轩 , 魏承朴 , 贾克斌 , 李哲 , 孙中华 . 基于图卷积神经网络的近红外光谱层析成像重建方法 : CN202310513333.3. | 2023-05-09 . |
Abstract :
This invention discloses a deep-learning-based method for non-invasive continuous blood pressure monitoring with diffuse correlation spectroscopy (DCS). Specifically, light-intensity autocorrelation function data are first acquired from the subject's arm using DCS, and the tissue blood flow index is computed with a conventional nonlinear fitting method. A U-net is then trained on the fitted blood flow index data to establish an end-to-end network model from the tissue blood flow index to blood pressure. Finally, test data are fed into the trained model to predict blood pressure and obtain a continuous blood pressure waveform. The invention directly establishes an end-to-end relationship between the tissue blood flow index and blood pressure, providing a new method for non-invasive continuous blood pressure monitoring. It overcomes the drawbacks of existing approaches, such as cumbersome operation and discomfort caused by cuff inflation, and makes it convenient to track blood pressure fluctuations.
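A hedged sketch of the end-to-end mapping described above, from a tissue blood flow index (BFI) time series to a continuous blood pressure waveform; the patent uses a U-net, whereas the toy 1-D convolutional model below (layer sizes, kernel widths) is purely illustrative.

```python
# Hedged sketch: sequence-to-sequence mapping from BFI segment to BP waveform.
import torch
import torch.nn as nn

bfi_to_bp = nn.Sequential(
    nn.Conv1d(1, 16, 9, padding=4), nn.ReLU(),
    nn.Conv1d(16, 16, 9, padding=4), nn.ReLU(),
    nn.Conv1d(16, 1, 9, padding=4),          # same-length BP waveform output
)

bfi = torch.randn(1, 1, 1024)   # one BFI segment (batch, channel, samples)
bp = bfi_to_bp(bfi)
print(bp.shape)                 # torch.Size([1, 1, 1024])
```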
Cite:
GB/T 7714 | 李哲 , 白江涛 , 姜敏楠 et al. 一种基于深度学习的扩散相关光谱无创血压连续监测方法 : CN202310317145.3[P]. | 2023-03-26 . |
MLA | 李哲 et al. "一种基于深度学习的扩散相关光谱无创血压连续监测方法" : CN202310317145.3. | 2023-03-26 . |
APA | 李哲 , 白江涛 , 姜敏楠 , 冯金超 , 贾克斌 . 一种基于深度学习的扩散相关光谱无创血压连续监测方法 : CN202310317145.3. | 2023-03-26 . |