Query:

Scholar name: 冯金超 (Feng, Jinchao)

FAME: A Federated Adversarial Learning Framework for Privacy-Preserving MRI Reconstruction SCIE
Journal article | 2025 | APPLIED MAGNETIC RESONANCE

Abstract:

Magnetic Resonance Imaging (MRI) is a crucial tool in medical diagnostics, yet reconstructing high-quality images from under-sampled k-space data poses significant challenges. This study introduces Federated Adversarial MRI Enhancement (FAME), a novel framework combining Federated Learning (FL) with Generative Adversarial Networks (GANs) to enhance MRI reconstruction while maintaining patient privacy. FAME utilizes a hybrid model aggregation strategy that dynamically weights updates from local generators, ensuring a balanced contribution based on dataset size and quality. Each local generator is trained on site-specific data, while a global discriminator evaluates and refines the aggregated updates to improve image quality. FAME addresses key issues in medical imaging, including data privacy, model generalization, and robustness, by integrating advanced GAN architectures such as multi-scale convolutions, attention mechanisms, and Graph Neural Networks (GNNs). Differential privacy and secure aggregation protocols are implemented to protect sensitive data during training. Extensive experiments using the fastMRI Brain and Knee datasets, along with the BraTS 2020 and IXI datasets, show that FAME outperforms existing models, achieving superior PSNR and SSIM values. This decentralized framework offers scalable, privacy-preserving MRI reconstruction, making it a promising solution for diverse clinical applications.
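
The hybrid aggregation described above (weighting local generator updates by dataset size and quality) can be pictured with a minimal sketch. This is an illustrative guess at the general scheme, not the authors' implementation; the per-site `quality` score is a hypothetical input.

```python
# Minimal sketch of size- and quality-weighted aggregation of local generator
# parameters. Illustrative only; FAME's exact weighting rule is not given in the
# abstract, and `quality` here is a hypothetical per-site score in (0, 1].
from typing import Dict, List
import numpy as np

def aggregate_generators(local_params: List[Dict[str, np.ndarray]],
                         n_samples: List[int],
                         quality: List[float]) -> Dict[str, np.ndarray]:
    """Return a weighted average of per-site generator parameters."""
    raw = np.asarray(n_samples, dtype=float) * np.asarray(quality, dtype=float)
    weights = raw / raw.sum()                        # normalized contributions
    return {name: np.tensordot(weights,
                               np.stack([p[name] for p in local_params]), axes=1)
            for name in local_params[0]}

# Toy round: three sites with different dataset sizes and quality scores.
sites = [{"conv1.weight": np.random.randn(4, 4)} for _ in range(3)]
global_generator = aggregate_generators(sites, [120, 80, 200], [0.9, 0.7, 1.0])
```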

Keywords:

Generative adversarial networks; Federated learning; Medical imaging; MRI reconstruction; Data privacy

Citation:

Ahmed, Shahzad; Feng, Jinchao; Ferzund, Javed; Yaqub, Muhammad; Ali, Muhammad Usman; Manan, Malik Abdul; et al. FAME: A Federated Adversarial Learning Framework for Privacy-Preserving MRI Reconstruction. APPLIED MAGNETIC RESONANCE, 2025.
PINN-DADif: Physics-informed deep adaptive diffusion network for robust and efficient MRI reconstruction SCIE
Journal article | 2025, 160 | DIGITAL SIGNAL PROCESSING

Abstract:

Magnetic Resonance Imaging (MRI) is essential for high-resolution soft-tissue imaging but suffers from long acquisition times, limiting its clinical efficiency. Accelerating MRI through undersampling k-space data leads to ill-posed inverse problems, introducing noise and artifacts that degrade image quality. Conventional deep learning models, including conditional and unconditional approaches, often face challenges in generalization, particularly with variations in imaging operators or domain shifts. In this study, we propose PINN-DADif, a Physics-Informed Neural Network integrated with deep adaptive diffusion priors, to address these challenges in MRI reconstruction. PINN-DADif employs a two-phase inference strategy: an initial rapid-diffusion phase for fast preliminary reconstructions, followed by an adaptive phase where the diffusion prior is refined to ensure consistency with MRI physics and data fidelity. The inclusion of physics-based regularization through PINNs enhances the model's adherence to k-space constraints and gradient smoothness, leading to more accurate reconstructions. This adaptive approach reduces the number of iterations required compared to traditional diffusion models, improving both speed and image quality. We validated PINN-DADif on a private MRI dataset and the public fastMRI dataset, where it outperformed state-of-the-art methods. The model achieved PSNR values of 41.2, 39.5, and 41.5, and SSIM values of 98.7, 98.0, and 98.5 for T1, T2, and Proton Density-weighted images at R = 4x on the private dataset. Similar high performance was observed on the fastMRI dataset, even in scenarios involving domain shifts. PINN-DADif marks a significant advancement in MRI reconstruction by providing an efficient, adaptive, and physics-informed solution.
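
The "data fidelity" ingredient mentioned above is commonly enforced in undersampled MRI as a hard k-space data-consistency step; the toy sketch below shows that generic operation only and is not PINN-DADif's physics-informed loss.

```python
# Generic hard data-consistency step for undersampled MRI: re-insert the measured
# k-space samples into the current image estimate. Illustrates data fidelity in
# general; it is not the PINN-based regularization used by PINN-DADif.
import numpy as np

def data_consistency(image_est, kspace_meas, mask):
    """Keep measured k-space values where mask is True, estimate elsewhere."""
    k_est = np.fft.fft2(image_est)
    k_mixed = np.where(mask, kspace_meas, k_est)
    return np.fft.ifft2(k_mixed).real

# Toy usage with a random undersampling mask at roughly R = 4.
img = np.random.rand(64, 64)
mask = np.random.rand(64, 64) < 0.25
kspace = np.fft.fft2(img) * mask
refined = data_consistency(np.zeros_like(img), kspace, mask)
```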

Keywords:

Domain shift; MRI reconstruction; Physics-informed neural networks (PINN); Adaptive diffusion; Undersampled k-space

Citation:

Ahmed, Shahzad; Feng, Jinchao; Mehmood, Atif; Ali, Muhammad Usman; Yaqub, Muhammad; Manan, Malik Abdul; et al. PINN-DADif: Physics-informed deep adaptive diffusion network for robust and efficient MRI reconstruction. DIGITAL SIGNAL PROCESSING, 2025, 160.
Model-based graph convolutional network for diffuse optical tomography CPCI-S
Journal article | 2024, 12834 | MULTIMODAL BIOMEDICAL IMAGING XIX

Abstract:

Diffuse optical tomography (DOT) is a promising non-invasive optical imaging technology that can provide functional information about biological tissues. Since the diffused light undergoes multiple scattering in biological tissues and the boundary measurements are limited, the inverse problem of DOT is ill-posed and ill-conditioned. To overcome these limitations, inverse problems in DOT are often mitigated using regularization techniques, which combine data-fitting and regularization terms to suppress the effects of measurement noise and modeling errors. Tikhonov regularization, which uses the L2 norm as its regularization term, often leads to images that are excessively smooth. In recent years, with the continuous development of deep learning algorithms, many researchers have adopted model-based deep learning methods for reconstruction. However, DOT reconstruction is solved on a mesh arising from the finite element method used for the inverse problem, which makes it difficult to apply convolutional networks directly. Therefore, we propose a model-based graph convolutional network (Model-GCN). Overall, Model-GCN achieves better image reconstruction results than Tikhonov regularization, with lower absolute bias error (ABE). Specifically, for total hemoglobin (HbT) and water, the average reduction in ABE is 68.3% and 77.3%, respectively. Additionally, the peak signal-to-noise ratio (PSNR) values are increased by 6.4 dB and 7.0 dB on average.
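
For reference, the Tikhonov baseline mentioned above has the closed form x = (JᵀJ + λI)⁻¹Jᵀy for a linearized sensitivity matrix J; a toy numerical sketch (problem sizes and λ chosen arbitrarily) follows.

```python
# Toy L2 (Tikhonov) reconstruction for a linearized DOT problem y = J x + noise,
# the baseline that Model-GCN is compared against. J, lam, and sizes are arbitrary.
import numpy as np

def tikhonov(J, y, lam):
    """Closed-form minimizer of ||J x - y||^2 + lam * ||x||^2."""
    return np.linalg.solve(J.T @ J + lam * np.eye(J.shape[1]), J.T @ y)

rng = np.random.default_rng(0)
J = rng.normal(size=(32, 128))               # under-determined sensitivity matrix
x_true = np.zeros(128)
x_true[40:48] = 1.0                          # a small absorbing inclusion
y = J @ x_true + 0.01 * rng.normal(size=32)
x_rec = tikhonov(J, y, lam=0.1)              # typically over-smoothed, as noted above
```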

Keywords:

diffuse optical tomography; graph convolutional network; model-based

Citation:

Wei, Chengpu; Li, Zhe; Hu, Ting; Sun, Zhonghua; Jia, Kebin; Feng, Jinchao. Model-based graph convolutional network for diffuse optical tomography. MULTIMODAL BIOMEDICAL IMAGING XIX, 2024, 12834.
GraFMRI: A graph-based fusion framework for robust multi-modal MRI reconstruction SCIE
Journal article | 2024, 116 | MAGNETIC RESONANCE IMAGING

Abstract:

Purpose: This study introduces GraFMRI, a novel framework designed to address the challenges of reconstructing high-quality MRI images from undersampled k-space data. Traditional methods often suffer from noise amplification and loss of structural detail, leading to suboptimal image quality. GraFMRI leverages Graph Neural Networks (GNNs) to transform multi-modal MRI data (T1, T2, PD) into a graph-based representation, enabling the model to capture intricate spatial relationships and inter-modality dependencies. Methods: The framework integrates Graph-Based Non-Local Means (NLM) Filtering for effective noise suppression and Adversarial Training to reduce artifacts. A dynamic attention mechanism enables the model to focus on key anatomical regions, even when fully-sampled reference images are unavailable. GraFMRI was evaluated on the IXI and fastMRI datasets using Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) as metrics for reconstruction quality. Results: GraFMRI consistently outperforms traditional and self-supervised reconstruction techniques. Significant improvements in multi-modal fusion were observed, with better preservation of information across modalities. Noise suppression through NLM filtering and artifact reduction via adversarial training led to higher PSNR and SSIM scores across both datasets. The dynamic attention mechanism further enhanced the accuracy of the reconstructions by focusing on critical anatomical regions. Conclusion: GraFMRI provides a scalable, robust solution for multi-modal MRI reconstruction, addressing noise and artifact challenges while enhancing diagnostic accuracy. Its ability to fuse information from different MRI modalities makes it adaptable to various clinical applications, improving the quality and reliability of reconstructed images.
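
The abstract does not spell out how the graph or the non-local-means weights are built, so the sketch below is only an assumed illustration of the general idea: patches as nodes, Gaussian patch-similarity as edge weights.

```python
# Assumed illustration of non-local-means-style graph weights over image patches:
# nodes are flattened patches, edge weights decay with patch dissimilarity.
# GraFMRI's actual graph construction is not described in the abstract.
import numpy as np

def nlm_graph_weights(patches, h=0.1):
    """patches: (N, P) array of N flattened patches -> row-normalized (N, N) weights."""
    d2 = ((patches[:, None, :] - patches[None, :, :]) ** 2).mean(axis=-1)
    W = np.exp(-d2 / h ** 2)                    # Gaussian similarity
    np.fill_diagonal(W, 0.0)                    # no self-loops
    return W / W.sum(axis=1, keepdims=True)

patches = np.random.rand(50, 25)                # e.g. fifty 5x5 patches
W = nlm_graph_weights(patches)
denoised = W @ patches                          # each patch as a weighted average of the rest
```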

Keywords:

MRI reconstruction; Generative adversarial network; Medical imaging; Zero-shot learning; Graph neural network

Citation:

Ahmed, Shahzad; Jinchao, Feng; Ferzund, Javed; Ali, Muhammad Usman; Yaqub, Muhammad; Manan, Malik Abdul; et al. GraFMRI: A graph-based fusion framework for robust multi-modal MRI reconstruction. MAGNETIC RESONANCE IMAGING, 2024, 116.
Multi-scale and multi-path cascaded convolutional network for semantic segmentation of colorectal polyps SCIE
Journal article | 2024, 105, 341-359 | ALEXANDRIA ENGINEERING JOURNAL
WoS CC Cited Count: 1

Abstract:

Colorectal polyps are structural abnormalities of the gastrointestinal tract that can potentially become cancerous in some cases. This study introduces a novel framework for colorectal polyp segmentation named the Multi-Scale and Multi-Path Cascaded Convolution Network (MMCC-Net), aimed at addressing the limitations of existing models, such as inadequate representation of spatial dependence and the absence of multi-level feature integration during the decoding stage. It does so by integrating multi-scale and multi-path cascaded convolutional techniques and enhancing feature aggregation through dual attention modules, skip connections, and a feature enhancer. MMCC-Net achieves superior performance in identifying polyp areas at the pixel level. The proposed MMCC-Net was tested across six public datasets and compared against eight state-of-the-art (SOTA) models to demonstrate its efficiency in polyp segmentation. MMCC-Net's performance shows Dice scores with confidence intervals ranging from 77.43 ± 0.12 (77.08, 77.56) to 94.45 ± 0.12 (94.19, 94.71) and Mean Intersection over Union (MIoU) scores with confidence intervals ranging from 72.71 ± 0.19 (72.20, 73.00) to 90.16 ± 0.16 (89.69, 90.53) on the six databases. These results highlight the model's potential as a powerful tool for accurate and efficient polyp segmentation, contributing to early detection and prevention strategies in colorectal cancer.
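
The Dice and MIoU figures quoted above follow the standard pixel-level definitions; a generic reference computation (not the MMCC-Net code) is:

```python
# Standard pixel-level Dice and IoU for binary masks, the metrics reported above.
# Generic reference implementation, not code from the MMCC-Net paper.
import numpy as np

def dice_and_iou(pred, gt, eps=1e-7):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = (2 * inter + eps) / (pred.sum() + gt.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return dice, iou

pred = np.random.rand(128, 128) > 0.5           # dummy prediction and ground truth
gt = np.random.rand(128, 128) > 0.5
d, i = dice_and_iou(pred, gt)                   # MIoU averages IoU over classes/images
```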

Keywords:

Feature aggregation; Semantic segmentation; Colorectal polyp; Attention modules; Cascaded convolution network

Citation:

Manan, Malik Abdul; Feng, Jinchao; Yaqub, Muhammad; Ahmed, Shahzad; Imran, Syed Muhammad Ali; Chuhan, Imran Shabir; et al. Multi-scale and multi-path cascaded convolutional network for semantic segmentation of colorectal polyps. ALEXANDRIA ENGINEERING JOURNAL, 2024, 105: 341-359.
Semantic segmentation of retinal exudates using a residual encoder-decoder architecture in diabetic retinopathy SCIE
Journal article | 2023, 86 (11), 1443-1460 | MICROSCOPY RESEARCH AND TECHNIQUE
WoS CC Cited Count: 4

Abstract:

Exudates are a common sign of diabetic retinopathy, a disease that affects the blood vessels in the retina. Early detection of exudates is critical to avoiding vision problems through continuous screening and treatment. In traditional clinical practice, the involved lesions are manually detected from photographs of the fundus. However, this task is cumbersome and time-consuming and requires intense effort due to the small size of the lesions and the low contrast of the images. Thus, computer-assisted diagnosis of retinal disease based on the detection of red lesions has been actively explored recently. In this paper, we present a comparison of deep convolutional neural network (CNN) architectures and propose a residual CNN with residual skip connections to reduce the number of parameters for the semantic segmentation of exudates in retinal images. A suitable image augmentation technique is used to improve the performance of the network architecture. The proposed network can robustly segment exudates with high accuracy, which makes it suitable for diabetic retinopathy screening. A comparative performance analysis on three benchmark databases, E-ophtha, DIARETDB1, and the Hamilton Ophthalmology Institute's Macular Edema dataset, is presented. The proposed method achieves a precision of 0.95, 0.92, 0.97, accuracy of 0.98, 0.98, 0.98, sensitivity of 0.97, 0.95, 0.95, specificity of 0.99, 0.99, 0.99, and area under the curve of 0.97, 0.94, and 0.96, respectively.

Keywords:

data augmentation; semantic segmentation; residual network; diabetic retinopathy; retinal image; convolution neural network; exudates

Citation:

Manan, Malik Abdul; Jinchao, Feng; Khan, Tariq M. M.; Yaqub, Muhammad; Ahmed, Shahzad; Chuhan, Imran Shabir. Semantic segmentation of retinal exudates using a residual encoder-decoder architecture in diabetic retinopathy. MICROSCOPY RESEARCH AND TECHNIQUE, 2023, 86 (11): 1443-1460.
DeepLabV3, IBCO-based ALCResNet: A fully automated classification, and grading system for brain tumor SCIE
Journal article | 2023, 76, 609-627 | ALEXANDRIA ENGINEERING JOURNAL
WoS CC Cited Count: 14

Abstract:

Brain tumors, which are uncontrolled growths of brain cells, pose a threat to people worldwide. However, accurately classifying brain tumors through computerized methods has been difficult due to differences in size, shape, and location of the tumors and limitations in the medical field. Improved precision is critical in detecting brain tumors, as small errors in human judgment can result in increased mortality rates. This paper proposes a new method for improving early detection and decision-making on brain tumor severity using learning methodologies. Clinical datasets are used to obtain benchmark images of brain tumors, which undergo pre-processing, data augmentation with a Generative Adversarial Network, and classification with an Adaptive Layer Cascaded ResNet (ALCResNet) optimized with Improved Border Collie Optimization (IBCO). The abnormal images are then segmented using the DeepLabV3 model and fed into the ALCResNet for final classification into Meningioma, Glioma, or Pituitary. The IBCO algorithm-based ALCResNet model outperforms other heuristic classifiers for brain tumor classification and severity estimation, with improvements ranging from 1.3% to 4.4% over COA-ALCResNet, DHOA-ALCResNet, MVO-ALCResNet, and BCO-ALCResNet. The IBCO algorithm-based ALCResNet model also achieves higher accuracy than non-heuristic classifiers such as CNN, DNN, SVM, and ResNet, with improvements ranging from 2.4% to 3.6% for brain tumor classification and 0.9% to 3.8% for severity estimation. The proposed method offers an automated classification and grading system for brain tumors and improves the accuracy of brain tumor classification and severity estimation, promoting more precise decision-making regarding diagnosis and treatment.

Keywords:

Adaptive Layer Cascaded ResNet; Generative Adversarial Network; Brain Tumor Grading System; DeepLabV3; Brain Tumor Classification System; Improved Border Collie Optimization

Citation:

Yaqub, Muhammad; Jinchao, Feng; Ahmed, Shahzad; Mehmood, Atif; Chuhan, Imran Shabir; Manan, Malik Abdul; et al. DeepLabV3, IBCO-based ALCResNet: A fully automated classification, and grading system for brain tumor. ALEXANDRIA ENGINEERING JOURNAL, 2023, 76: 609-627.
Deep Learning-Based Image Reconstruction for Different Medical Imaging Modalities SCIE
Journal article | 2022, 2022 | COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE
WoS CC Cited Count: 21

Abstract:

Image reconstruction in magnetic resonance imaging (MRI) and computed tomography (CT) is a mathematical process that generates images from measurements acquired at many different angles around the patient. Image reconstruction has a fundamental impact on image quality. In recent years, the literature has focused on deep learning and its applications in medical imaging, particularly image reconstruction. Due to the performance of deep learning models in a wide variety of vision applications, a considerable amount of work has recently been carried out on image reconstruction in medical images. In the current age of technology, MRI and CT are among the most scientifically appropriate imaging modalities for identifying and diagnosing different diseases. This study reviews a number of deep learning image reconstruction approaches and provides a comprehensive overview of the most widely used databases. We also discuss the challenges and promising future directions for medical image reconstruction.

Citation:

Yaqub, Muhammad; Jinchao, Feng; Arshid, Kaleem; Ahmed, Shahzad; Zhang, Wenqian; Nawaz, Muhammad Zubair; et al. Deep Learning-Based Image Reconstruction for Different Medical Imaging Modalities. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE, 2022, 2022.
A novelty Convolutional Neural Network Based Direct Reconstruction for MRI Guided Diffuse Optical Tomography CPCI-S
Journal article | 2022, 11952 | MULTIMODAL BIOMEDICAL IMAGING XVII

Abstract:

Diffuse Optical Tomography (DOT) is a promising non-invasive and relatively low-cost biomedical imaging technology. The aim of DOT is to reconstruct the optical properties of tissue from boundary measurements. However, DOT reconstruction is a severely ill-posed problem. To reduce the ill-posedness of DOT and to improve image quality, image-guided DOT has attracted increasing attention. In this paper, a reconstruction algorithm for DOT is proposed based on a convolutional neural network (CNN). It uses both optical measurements and magnetic resonance imaging (MRI) images as the input of the CNN, and directly reconstructs the distribution of the absorption coefficient. The merits of the proposed algorithm are that it requires neither segmentation of the MRI images nor modeling of light propagation. The performance of the proposed algorithm is evaluated using numerical simulation experiments. Our results reveal that the proposed method can achieve superior performance compared with conventional reconstruction algorithms and other deep learning methods. The average SSIM of the reconstructed images is above 0.88, and the average PSNR is more than 35 dB.
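
The SSIM and PSNR figures here (and in the other reconstruction papers above) follow the usual definitions; PSNR in particular is 10·log10(MAX²/MSE). A quick reference computation, not the authors' evaluation code:

```python
# Peak signal-to-noise ratio in dB, as reported for the reconstructed images above.
# Generic definition (10 * log10(MAX^2 / MSE)), not the authors' evaluation script.
import numpy as np

def psnr(ref, rec, data_range=1.0):
    mse = np.mean((ref - rec) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

ref = np.random.rand(64, 64)                    # dummy reference image in [0, 1]
rec = ref + 0.01 * np.random.randn(64, 64)      # noisy "reconstruction"
print(f"PSNR = {psnr(ref, rec):.1f} dB")
```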

Keywords:

image-guided reconstruction; Diffuse optical tomography; convolutional neural network; deep learning

Citation:

Zhang, Wanlong; Li, Zhe; Sun, Zhonghua; Jia, Kebin; Feng, Jinchao. A novelty Convolutional Neural Network Based Direct Reconstruction for MRI Guided Diffuse Optical Tomography. MULTIMODAL BIOMEDICAL IMAGING XVII, 2022, 11952.
Diffusion equation engine deep learning for diffuse optical tomography CPCI-S
Journal article | 2022, 11952 | MULTIMODAL BIOMEDICAL IMAGING XVII

Abstract:

Diffuse optical tomography (DOT) is a promising non-invasive optical imaging technique that can provide functional information about biological tissues. Since diffuse light undergoes multiple scattering in biological tissues and boundary measurements are limited, DOT reconstruction is ill-posed and ill-conditioned. Tikhonov regularization is the most popular algorithm for overcoming these limitations. Recently, deep learning based reconstruction methods have attracted increasing attention, and promising results have been reported. However, they lack generalization for unstructured physical models. Therefore, a model-based convolutional neural network framework (Model-CNN) is developed. It is composed of two layers, a data-consistency layer and a depth layer, which increases the interpretability of the model. Its performance is evaluated with numerical simulations. Our results demonstrate that Model-CNN obtains better reconstructed results than Tikhonov regularization in terms of ABE, MSE, and PSNR.

Keywords:

Convolutional neural network; diffuse optical tomography; model-engine

Citation:

Wei, Chengpu; Li, Zhe; Sun, Zhonghua; Jia, Kebin; Feng, Jinchao. Diffusion equation engine deep learning for diffuse optical tomography. MULTIMODAL BIOMEDICAL IMAGING XVII, 2022, 11952.