Query:

Scholar name: Wu Lifang (毋立芳)

EIKA: Explicit & Implicit Knowledge-Augmented Network for entity-aware sports video captioning SCIE
Journal Article | 2025, 274 | EXPERT SYSTEMS WITH APPLICATIONS

Abstract:

Sports video captioning in real application scenarios requires both entities and specific scenes. However, it is difficult to extract this fine-grained information solely from the video content. This paper introduces an Explicit & Implicit Knowledge-Augmented Network for entity-aware sports video captioning (EIKA), which leverages both explicit game-related knowledge (i.e., the set of involved player entities) and implicit visual scene knowledge extracted from the training set. Our Entity-Video Interaction Module (EVIM) and Video-Knowledge Interaction Module (VKIM) enhance the extraction of entity-related and scene-specific video features, respectively. A Spatial-Temporal Modeling Module (STMM) encodes the spatiotemporal information in the video, and the designed Scene-To-Entity (STE) decoder fully exploits the two kinds of knowledge to generate informative captions with a distributed decoding approach. Extensive evaluations on the VC-NBA-2022, Goal and NSVA datasets demonstrate that our method achieves leading performance compared with existing methods.

Keyword:

Explicit knowledge; Implicit knowledge; Entity-aware sports video captioning

Cite:

GB/T 7714: Xi, Zeyu, Shi, Ge, Sun, Haoying, et al. EIKA: Explicit & Implicit Knowledge-Augmented Network for entity-aware sports video captioning [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2025, 274.
MLA: Xi, Zeyu, et al. "EIKA: Explicit & Implicit Knowledge-Augmented Network for entity-aware sports video captioning." EXPERT SYSTEMS WITH APPLICATIONS 274 (2025).
APA: Xi, Zeyu, Shi, Ge, Sun, Haoying, Zhang, Bowen, Li, Shuyi, Wu, Lifang. EIKA: Explicit & Implicit Knowledge-Augmented Network for entity-aware sports video captioning. EXPERT SYSTEMS WITH APPLICATIONS, 2025, 274.
Hybrid physical model and status data-driven approach for quality-reliable digital light processing 3D printing SCIE
Journal Article | 2025, 20 (1) | VIRTUAL AND PHYSICAL PROTOTYPING

Abstract:

Existing methods for detecting anomalies in digital light processing (DLP) 3D printing and performing in-situ repairs can reduce most defects and improve success rates. However, since printing control parameters cannot adapt to real-time printing conditions, anomalies may persist across successive layers, and continuous repairs could ultimately lead to printing failure. Therefore, achieving stable printing quality requires integrating anomaly detection with the dynamic adjustment of control parameters. In this paper, we propose a hybrid approach that combines physical models with real-time status data to achieve quality-reliable DLP 3D printing. We developed a status data acquisition scheme to monitor printing status, including the downward force exerted on the printing platform, curing temperatures, resin levels, and surface morphology. Analyzing the collected data provides both status and anomaly information, enabling in-situ repair strategies to address abnormalities with minimal disruption to the printing process. Additionally, an Extended Kalman Filter integrates status data with physical models to dynamically optimise printing parameters. Experimental results show that the proposed scheme effectively addresses typical anomalies, optimises printing times, and significantly improves success rates while preserving the mechanical performance of printed models. Furthermore, the approach adapts to varying printing status, resin materials, and models.
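The Extended Kalman Filter step described above fuses a physical-model prediction with real-time sensor readings. A minimal scalar update illustrates the idea (the temperatures, variances, and state model below are hypothetical placeholders, not the paper's actual filter):

```python
# Illustrative scalar Kalman-filter measurement update: fuse a
# physical-model prediction with a noisy sensor reading. All numbers
# here are made up for demonstration.

def kf_update(x_pred, P_pred, z, R):
    """Fuse prediction x_pred (variance P_pred) with measurement z
    (noise variance R); return posterior mean and variance."""
    K = P_pred / (P_pred + R)            # Kalman gain in [0, 1]
    x_post = x_pred + K * (z - x_pred)   # corrected state estimate
    P_post = (1.0 - K) * P_pred          # uncertainty shrinks after fusing
    return x_post, P_post

# e.g. model predicts a curing temperature of 52.0 (variance 4.0),
# a sensor reads 50.0 (variance 1.0)
x, P = kf_update(52.0, 4.0, 50.0, 1.0)   # -> (50.4, 0.8)
```

The posterior leans toward the more trusted (lower-variance) source, which is how status data can correct a drifting physical model between layers.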

Keyword:

status monitoring; control parameters; physical model; Digital light processing; dynamic control

Cite:

GB/T 7714: Zhao, Lidong, Zhang, Xueyun, Zhao, Zhi, et al. Hybrid physical model and status data-driven approach for quality-reliable digital light processing 3D printing [J]. VIRTUAL AND PHYSICAL PROTOTYPING, 2025, 20 (1).
MLA: Zhao, Lidong, et al. "Hybrid physical model and status data-driven approach for quality-reliable digital light processing 3D printing." VIRTUAL AND PHYSICAL PROTOTYPING 20.1 (2025).
APA: Zhao, Lidong, Zhang, Xueyun, Zhao, Zhi, Ma, Limin, Wu, Lifang. Hybrid physical model and status data-driven approach for quality-reliable digital light processing 3D printing. VIRTUAL AND PHYSICAL PROTOTYPING, 2025, 20 (1).
Quality-Invariant Domain Generalization for Face Anti-Spoofing SCIE
Journal Article | 2024, 132 (11), 5239-5254 | INTERNATIONAL JOURNAL OF COMPUTER VISION

Abstract:

Face Anti-Spoofing (FAS) plays a critical role in safeguarding face recognition systems, yet previous FAS methods suffer from poor generalization when applied to unseen domains. Although recent methods have made progress via domain generalization technology, they are still sensitive to variations in face quality caused by task-irrelevant factors like camera and illumination. In this paper, we propose a novel Quality-Invariant Domain Generalization method (QIDG) with a teacher-student architecture, which aligns liveness features into a quality-invariant space to alleviate interference from task-irrelevant factors. Specifically, QIDG utilizes the teacher model to produce face quality representations, which guide the student model in exploring the quality-invariant space. To seek this space, the student model devises two novel modules, i.e., a dual adversarial learning module (DAL) and a quality feature assembly module (QFA). The former produces domain-invariant liveness features and task-irrelevant quality features, while the latter assembles these two kinds of features from the same faces into complete quality representations, and likewise assembles them from living faces across different domains. In this way, QIDG not only aligns the domain-invariant liveness features to the quality-invariant space, but also promotes compactness of living faces from different domains in the feature space. Extensive cross-domain experiments demonstrate the superiority of our method on five public databases.

Keyword:

Quality-invariant space; Adversarial learning; Face anti-spoofing; Domain generalization

Cite:

GB/T 7714: Liu, Yongluo, Li, Zun, Xu, Yaowen, et al. Quality-Invariant Domain Generalization for Face Anti-Spoofing [J]. INTERNATIONAL JOURNAL OF COMPUTER VISION, 2024, 132 (11): 5239-5254.
MLA: Liu, Yongluo, et al. "Quality-Invariant Domain Generalization for Face Anti-Spoofing." INTERNATIONAL JOURNAL OF COMPUTER VISION 132.11 (2024): 5239-5254.
APA: Liu, Yongluo, Li, Zun, Xu, Yaowen, Guo, Zhizhi, Zou, Zhaofan, Wu, Lifang. Quality-Invariant Domain Generalization for Face Anti-Spoofing. INTERNATIONAL JOURNAL OF COMPUTER VISION, 2024, 132 (11), 5239-5254.
Multi-view and region reasoning semantic enhancement for image-text retrieval SCIE
Journal Article | 2024, 30 (4) | MULTIMEDIA SYSTEMS

Abstract:

Image-text retrieval is a crucial topic in the fields of language and vision. The key to successful image-text retrieval is achieving accurate cross-modal representation and capturing the essential correlations between image and sentence or words and regions. While existing work has designed intricate interactions to capture these correlations, challenges remain due to inadequate feature representations, such as insufficient text descriptions of images and ambiguous region representations. To address these challenges, we propose a novel approach, multi-view and region reasoning semantic enhancement, for image-text retrieval, which aims to enhance the semantic representation of features from both textual and visual modalities. Specifically, considering that an image can have multiple corresponding texts from different perspectives, with each text describing a single view, we devise a multi-view textual semantic enhancement module. This module takes advantage of the positive textual cues provided by the corresponding image to compensate for the limited knowledge in single-text views and produce a comprehensive image-based textual representation. Then, to address the semantic diversity of an image, we design a region reasoning semantic enhancement module that employs a graph structure to integrate both semantic and spatial reasoning knowledge from different regions, thereby clarifying the semantics of image regions and enhancing the overall semantic understanding of these areas. Extensive experiments and analyses demonstrate the superior performance of the proposed method on the Flickr30K and MSCOCO datasets, validating the effectiveness of the proposed solution.

Keyword:

Semantic enhancement; Image-text retrieval; Graph Reasoning

Cite:

GB/T 7714: Cheng, Wengang, Han, Ziyi, He, Di, et al. Multi-view and region reasoning semantic enhancement for image-text retrieval [J]. MULTIMEDIA SYSTEMS, 2024, 30 (4).
MLA: Cheng, Wengang, et al. "Multi-view and region reasoning semantic enhancement for image-text retrieval." MULTIMEDIA SYSTEMS 30.4 (2024).
APA: Cheng, Wengang, Han, Ziyi, He, Di, Wu, Lifang. Multi-view and region reasoning semantic enhancement for image-text retrieval. MULTIMEDIA SYSTEMS, 2024, 30 (4).
Developing the optimized control scheme for digital light processing 3D printing by combining numerical simulation and machine learning-guided temperature prediction SCIE
Journal Article | 2024, 132, 363-374 | JOURNAL OF MANUFACTURING PROCESSES

Abstract:

Digital light processing (DLP) 3D printing has attracted significant attention for its rapid printing speed, high accuracy, and diverse applications. However, the continuous DLP printing process releases substantial heat, resulting in a swift temperature rise in the curing area, which may lead to printing failures. Due to the lack of effective means to measure real-time temperature changes of the curing surface during continuous DLP 3D printing, the prevailing approach is to predict temperature variations during printing via numerical simulation. Nevertheless, temperature prediction methods relying solely on numerical simulation tend to be slow and overlook heat exchange dynamics during printing, potentially resulting in prediction inaccuracies, particularly for complex models. To address these issues, this paper proposes a method to combine numerical simulation and a machine learning approach for temperature prediction in the DLP 3D printing process, along with a printing control scheme generation method. Firstly, the (m+n)th-order autocatalytic kinetic model considering the light intensity and the Beer-Lambert law are employed to formulate the heat calculation equation for the photopolymer resin curing reaction. Subsequently, a heat exchange calculation equation is established based on the Fourier heat conduction law and Newton's cooling equation. A numerical simulation model for temperature changes during the printing process is then developed by integrating the heat calculation equation, the heat exchange calculation equation, and measurement data from Photo-DSC. Furthermore, a temperature measurement device for the printing process is designed to validate the accuracy of the numerical simulation. Following this, an improved Long Short-Term Memory (LSTM) network is proposed, using temperature change data generated by the numerical simulation model to train the network for rapid (2 × 10⁻⁴ s/layer) prediction of temperature changes during printing. Finally, aiming for the shortest printing time, an optimized control scheme planning algorithm and a target function are designed based on the model's temperature change data and the monomer's flash point to ensure the temperature remains below this threshold. This algorithm can automatically generate the optimal printing control scheme for any model. Experimental results demonstrate that the proposed temperature prediction method can predict temperature variation accurately. Based on this, the generated printing control scheme can guarantee efficient and high-quality manufacturing for any model.
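As a rough illustration of the heat-source modelling above, an autocatalytic cure-rate law driven by Beer-Lambert light attenuation can be sketched as follows. This uses a Kamal-type form as one common instance of (m+n)th-order autocatalytic kinetics; the constants k1, k2, m, n, I0, and Dp are hypothetical placeholders, whereas the paper fits its kinetic parameters from Photo-DSC data:

```python
import math

# Kamal-type autocatalytic cure kinetics under Beer-Lambert light
# attenuation. All constants are illustrative, not fitted values.

def cure_rate(alpha, I, k1=0.1, k2=0.5, m=0.3, n=1.7):
    """d(alpha)/dt for degree of cure alpha under light intensity I."""
    return I * (k1 + k2 * alpha ** m) * (1.0 - alpha) ** n

def intensity(z_mm, I0=10.0, Dp=0.2):
    """Beer-Lambert attenuation with depth: I(z) = I0 * exp(-z / Dp)."""
    return I0 * math.exp(-z_mm / Dp)

# cure rate at the resin surface, at the start of the reaction
r0 = cure_rate(0.0, intensity(0.0))   # -> 1.0 with these constants
```

Integrating this rate over depth and time, together with the reaction enthalpy, yields the heat-source term that feeds the simulation.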

Keyword:

Temperature prediction; Numerical simulation; Machine learning; Digital light processing

Cite:

GB/T 7714: Zhao, Lidong, Zhao, Zhi, Ma, Limin, et al. Developing the optimized control scheme for digital light processing 3D printing by combining numerical simulation and machine learning-guided temperature prediction [J]. JOURNAL OF MANUFACTURING PROCESSES, 2024, 132: 363-374.
MLA: Zhao, Lidong, et al. "Developing the optimized control scheme for digital light processing 3D printing by combining numerical simulation and machine learning-guided temperature prediction." JOURNAL OF MANUFACTURING PROCESSES 132 (2024): 363-374.
APA: Zhao, Lidong, Zhao, Zhi, Ma, Limin, Li, Shuyi, Men, Zening, Wu, Lifang. Developing the optimized control scheme for digital light processing 3D printing by combining numerical simulation and machine learning-guided temperature prediction. JOURNAL OF MANUFACTURING PROCESSES, 2024, 132, 363-374.
Developing the optimized control scheme for continuous and layer-wise DLP 3D printing by CFD simulation SCIE
Journal Article | 2023, 125 (3-4), 1511-1529 | INTERNATIONAL JOURNAL OF ADVANCED MANUFACTURING TECHNOLOGY
WoS CC Cited Count: 10

Abstract:

In recent years, a series of continuous fabrication technologies based on digital light processing (DLP) 3D printing have emerged, significantly improving printing speed. However, limited by the resin filling speed, those technologies are only suitable for printing hollow structures. In this paper, an optimized protocol for developing a continuous and layer-wise hybrid DLP 3D printing mode is proposed based on computational fluid dynamics (CFD). The volume-of-fluid method is used to simulate the behavior of resin flow, while Poiseuille flow, the Jacobs working curve, and the Beer-Lambert law are used to optimize the key control parameters for continuous and layer-wise printing. This strategy provides a novel simulation-based development scenario for establishing printing control parameters applicable to arbitrary structures. Experiments verified that the printing control parameters obtained by simulation can effectively improve the printing efficiency and the applicability of DLP 3D printing.
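The Jacobs working curve invoked above relates cure depth to exposure dose via Cd = Dp·ln(E/Ec), where Dp is the resin's penetration depth and Ec its critical exposure. A minimal sketch (the resin constants below are hypothetical, not values from the paper):

```python
import math

# Jacobs working curve: cure depth as a function of exposure dose.
# Dp (penetration depth, mm) and Ec (critical exposure, mJ/cm^2)
# are illustrative resin constants.

def cure_depth(E, Dp=0.15, Ec=8.0):
    """Cure depth (mm) for exposure E (mJ/cm^2); zero below threshold Ec."""
    return Dp * math.log(E / Ec) if E > Ec else 0.0

def exposure_for_depth(Cd, Dp=0.15, Ec=8.0):
    """Invert the working curve: E = Ec * exp(Cd / Dp)."""
    return Ec * math.exp(Cd / Dp)
```

The inverse form is the practically useful one: given a target layer thickness, it returns the exposure dose (and hence exposure time at a known intensity) that the control scheme should apply.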

Keyword:

Computational fluid dynamics; Resin filling; Control parameters; DLP 3D printing; Continuous printing; Layer-wise printing

Cite:

GB/T 7714: Zhao, Lidong, Zhang, Yan, Wu, Lifang, et al. Developing the optimized control scheme for continuous and layer-wise DLP 3D printing by CFD simulation [J]. INTERNATIONAL JOURNAL OF ADVANCED MANUFACTURING TECHNOLOGY, 2023, 125 (3-4): 1511-1529.
MLA: Zhao, Lidong, et al. "Developing the optimized control scheme for continuous and layer-wise DLP 3D printing by CFD simulation." INTERNATIONAL JOURNAL OF ADVANCED MANUFACTURING TECHNOLOGY 125.3-4 (2023): 1511-1529.
APA: Zhao, Lidong, Zhang, Yan, Wu, Lifang, Zhao, Zhi, Men, Zening, Yang, Feng. Developing the optimized control scheme for continuous and layer-wise DLP 3D printing by CFD simulation. INTERNATIONAL JOURNAL OF ADVANCED MANUFACTURING TECHNOLOGY, 2023, 125 (3-4), 1511-1529.
Improving Face Anti-spoofing via Advanced Multi-perspective Feature Learning SCIE
Journal Article | 2023, 19 (6) | ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS

Abstract:

Face anti-spoofing (FAS) plays a vital role in securing face recognition systems. Previous approaches usually learn spoofing features from a single perspective, in which only universal cues shared by all attack types are explored. However, such single-perspective-based approaches ignore the differences among various attacks and the commonness between certain attacks and bona fides, thus tending to neglect some non-universal cues that are strongly discriminative against certain types. As a result, when dealing with multiple types of attacks, the above approaches may suffer from an incomplete representation of bona fides and spoof faces. In this work, we propose a novel Advanced Multi-Perspective Feature Learning network (AMPFL), in which multiple perspectives are adopted to learn discriminative features, to improve the performance of FAS. Specifically, the proposed network first learns universal cues and several perspective-specific cues from multiple perspectives, then aggregates and further enhances these features to perform face anti-spoofing. In this way, AMPFL obtains features that are difficult to capture with single-perspective-based methods and provides more comprehensive information on bona fides and spoof faces, thus achieving better performance for FAS. Experimental results show that AMPFL achieves promising results on public databases and effectively addresses the issues of single-perspective-based approaches.

Keyword:

multi-perspective; universal cues; Face anti-spoofing

Cite:

GB/T 7714: Wang, Zhuming, Xu, Yaowen, Wu, Lifang, et al. Improving Face Anti-spoofing via Advanced Multi-perspective Feature Learning [J]. ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2023, 19 (6).
MLA: Wang, Zhuming, et al. "Improving Face Anti-spoofing via Advanced Multi-perspective Feature Learning." ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS 19.6 (2023).
APA: Wang, Zhuming, Xu, Yaowen, Wu, Lifang, Han, Hu, Ma, Yukun, Li, Zun. Improving Face Anti-spoofing via Advanced Multi-perspective Feature Learning. ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2023, 19 (6).
End-to-end global and local motion estimation method based on deep learning (基于深度学习的端到端全局和局部运动估计方法) incoPat zhihuiya
Patent | 2023-01-09 | CN202310029285.0

Abstract:

An end-to-end global and local motion estimation method based on deep learning, belonging to the field of image processing. Estimating global and local motion from raw video is necessary, yet no existing global and local motion estimation method can estimate both kinds of motion in video frames simultaneously in an end-to-end manner. This invention proposes a three-module motion estimation network that performs global and local motion estimation separately, together with a global motion estimator based on feature-dimension transformation and global motion bases, which constrains the global motion estimation module to focus on global low-rank information and excludes interference from non-global information. The network is trained via unsupervised deep learning with three loss functions: a hybrid reconstruction loss, a global reconstruction loss, and a local reconstruction loss. The effectiveness of the invention is verified on the homography estimation dataset DHE and the action recognition dataset NCAA. Experimental results show that the invention achieves better performance than previous methods.

Cite:

GB/T 7714: 毋立芳, 郑祎豪, 李尊, et al. 基于深度学习的端到端全局和局部运动估计方法: CN202310029285.0 [P]. 2023-01-09.
MLA: 毋立芳, et al. "基于深度学习的端到端全局和局部运动估计方法." CN202310029285.0, 2023-01-09.
APA: 毋立芳, 郑祎豪, 李尊, 相叶. 基于深度学习的端到端全局和局部运动估计方法: CN202310029285.0. 2023-01-09.
Improved Dual Attention for Anchor-Free Object Detection SCIE
Journal Article | 2022, 22 (13) | SENSORS
WoS CC Cited Count: 2

Abstract:

In anchor-free object detection, the center regions of bounding boxes are often highly weighted to enhance detection quality. However, the central area may become less significant in some situations. In this paper, we propose a novel dual attention-based approach for adaptive weight assignment within bounding boxes. The proposed improved dual attention mechanism allows us to thoroughly disentangle spatial and channel attention and resolve the confusion between them, making it easier to obtain the proper attention weights. Specifically, we build an end-to-end network consisting of backbone, feature pyramid, adaptive weight assignment based on dual attention, regression, and classification. In the adaptive weight assignment module based on dual attention, a parallel framework with a depthwise convolution for spatial attention and a 1D convolution for channel attention is applied. The depthwise convolution, instead of standard convolution, helps prevent interference between spatial and channel attention. The 1D convolution, instead of a fully connected layer, is experimentally proved to be both efficient and effective. With adaptive and proper attention, detection accuracy can be further improved. On the public MS-COCO dataset, our approach obtains an average precision of 52.7%, a substantial improvement over other anchor-free object detectors.
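The 1D-convolution channel-attention branch described above can be sketched in numpy: global average pooling, a small 1D convolution across the channel dimension, and a sigmoid gate that reweights each channel. The kernel weights here are fixed toy values rather than learned end-to-end, and the parallel depthwise-convolution spatial branch is omitted for brevity:

```python
import numpy as np

# ECA-style channel attention via a k-tap 1-D conv over pooled
# channel statistics. Illustrative only; the paper learns the
# kernel jointly with the detector.

def channel_attention(x, w):
    """x: (C, H, W) feature map; w: (k,) 1-D conv kernel over channels."""
    c = x.shape[0]
    pooled = x.mean(axis=(1, 2))               # (C,) global average pool
    pad = len(w) // 2
    padded = np.pad(pooled, pad, mode="edge")  # same-length 1-D convolution
    conv = np.array([padded[i:i + len(w)] @ w for i in range(c)])
    gate = 1.0 / (1.0 + np.exp(-conv))         # sigmoid gate in (0, 1)
    return x * gate[:, None, None]             # reweight each channel

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))
out = channel_attention(feat, np.array([0.2, 0.6, 0.2]))
```

Because the 1D kernel slides over neighboring channels, it captures local cross-channel interaction with only k parameters, versus C × C for a fully connected layer.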

Keyword:

dual attention; anchor-free object detection; adaptive weight assignment

Cite:

GB/T 7714: Xiang, Ye, Zhao, Boxuan, Zhao, Kuan, et al. Improved Dual Attention for Anchor-Free Object Detection [J]. SENSORS, 2022, 22 (13).
MLA: Xiang, Ye, et al. "Improved Dual Attention for Anchor-Free Object Detection." SENSORS 22.13 (2022).
APA: Xiang, Ye, Zhao, Boxuan, Zhao, Kuan, Wu, Lifang, Wang, Xiangdong. Improved Dual Attention for Anchor-Free Object Detection. SENSORS, 2022, 22 (13).
Object-Level Visual-Text Correlation Graph Hashing for Unsupervised Cross-Modal Retrieval SCIE
Journal Article | 2022, 22 (8) | SENSORS
WoS CC Cited Count: 4

Abstract:

The core of cross-modal hashing methods is to map high-dimensional features into binary hash codes, which can then efficiently utilize the Hamming distance metric to enhance retrieval efficiency. Recent development emphasizes the advantages of the unsupervised cross-modal hashing technique, since it only relies on the relevant information of paired data, making it more applicable to real-world applications. However, two problems, namely intra-modality correlation and inter-modality correlation, still have not been fully considered. Intra-modality correlation describes the complex overall concept of a single modality and provides semantic relevance for retrieval tasks, while inter-modality correlation refers to the relationship between different modalities. From our observation and hypothesis, the dependency relationships within a modality and between different modalities can be constructed at the object level, which can further improve cross-modal hashing retrieval accuracy. To this end, we propose an Object-level Visual-text Correlation Graph Hashing (OVCGH) approach to mine the fine-grained object-level similarity in cross-modal data while suppressing noise interference. Specifically, a novel intra-modality correlation graph is designed to learn graph-level representations of different modalities, obtaining the dependency relationships of image region to image region and tag to tag in an unsupervised manner. Then, we design a visual-text dependency building module that can capture correlated semantic information between different modalities by modeling the dependency relationship between image object regions and text tags. Extensive experiments on two widely used datasets verify the effectiveness of our proposed approach.
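The Hamming-distance retrieval step mentioned at the start of the abstract can be illustrated with a toy example: once features are hashed to binary codes, distance is just a popcount of the XOR of two codes. The 8-bit codes below are arbitrary examples; real systems use longer learned codes:

```python
# Toy Hamming-distance retrieval over binary hash codes.

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hash codes."""
    return bin(a ^ b).count("1")

def retrieve(query, database):
    """Rank database codes by Hamming distance to the query."""
    return sorted(database, key=lambda code: hamming(query, code))

ranked = retrieve(0b10110010, [0b10110011, 0b01001101, 0b10010010])
# ranked[0] == 0b10110011 (Hamming distance 1)
```

XOR plus popcount is a handful of CPU instructions per comparison, which is why binary codes make large-scale cross-modal retrieval efficient compared with floating-point similarity search.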

Keyword:

hashing retrieval; deep model; cross-modal hash learning

Cite:

GB/T 7714: Shi, Ge, Li, Feng, Wu, Lifang, et al. Object-Level Visual-Text Correlation Graph Hashing for Unsupervised Cross-Modal Retrieval [J]. SENSORS, 2022, 22 (8).
MLA: Shi, Ge, et al. "Object-Level Visual-Text Correlation Graph Hashing for Unsupervised Cross-Modal Retrieval." SENSORS 22.8 (2022).
APA: Shi, Ge, Li, Feng, Wu, Lifang, Chen, Yukun. Object-Level Visual-Text Correlation Graph Hashing for Unsupervised Cross-Modal Retrieval. SENSORS, 2022, 22 (8).