Query:
Scholar name: Zuo Guoyu (左国玉)
Abstract :
Many of the detection methods used in robotic grasping tasks do not fully consider the multiscale information of graspable objects, which reduces the accuracy of grasp detection in complex scenarios. To overcome this problem, we propose a new hybrid grasp-detection network model with a skip-connected encoder-decoder structure, called HBGNet. To realize the complete fusion of feature information across dimensions, a multiscale skip-connection mechanism is designed that adds the features of the various convolutional neural network (CNN) blocks in the encoder to the corresponding concat blocks in the decoder. The HBGNet encoder is designed as a multiscale hybrid (MCT) encoder that combines multiple CNN blocks and transformer layers to acquire high- and low-level features simultaneously. The proposed HBGNet was trained and tested on the publicly available Cornell and Jacquard grasping datasets, achieving accuracies of 99.75% and 97.36%, respectively. Its performance was further demonstrated via comparison and ablation experiments. A real-world grasping experiment on the AUBO-i5 robotic platform verified the generalization ability of HBGNet, with an average grasp success rate of 97.1%. The experimental results indicate that HBGNet can fully acquire the multiscale information of grasped objects using hybrid networks and can effectively complete grasp detection in cluttered scenes in a generalizable manner.
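As a rough illustration of the additive multiscale skip connections described above, the following numpy sketch fuses encoder features of several scales into the decoder path; the shapes, the nearest-neighbor upsampling, and all function names are illustrative assumptions, not HBGNet's implementation.

```python
import numpy as np

def upsample_nearest(x, factor):
    # Nearest-neighbor upsampling of a (C, H, W) feature map (illustrative).
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

def decode_with_skips(bottleneck, encoder_feats):
    """Fuse encoder features of several scales into each decoder stage.

    encoder_feats: list of (C, H, W) maps from shallow (large H, W) to deep.
    The decoder walks back from deep to shallow, upsampling and adding the
    matching encoder map at every stage (additive skip connection).
    """
    x = bottleneck
    for feat in reversed(encoder_feats):
        factor = feat.shape[1] // x.shape[1]
        if factor > 1:
            x = upsample_nearest(x, factor)
        x = x + feat  # skip connection: add encoder features to decoder stage
    return x

# Toy example: three encoder scales, 4-channel features.
feats = [np.ones((4, 16, 16)), np.ones((4, 8, 8)), np.ones((4, 4, 4))]
out = decode_with_skips(np.zeros((4, 4, 4)), feats)
print(out.shape)  # (4, 16, 16)
```

Each decoder stage thus sees the sum of its own (upsampled) features and the encoder features of the matching resolution, which is what lets low- and high-level information mix.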
Keyword :
grasp detection; Real-time systems; Accuracy; hybrid networks; multiscale information; Grasping; skip connection; Feature extraction; Image segmentation; Transformers; Robots; Convolutional neural networks; Grippers; Cluttered scene; Kernel
Cite:
Copy from the list or export to your reference manager.
GB/T 7714 | Zuo, Guoyu, Shen, Zhihui, Yu, Shuangyue, et al. HBGNet: Robotic Grasp Detection Using a Hybrid Network [J]. | IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2025, 74. |
MLA | Zuo, Guoyu, et al. "HBGNet: Robotic Grasp Detection Using a Hybrid Network." | IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT 74 (2025). |
APA | Zuo, Guoyu, Shen, Zhihui, Yu, Shuangyue, Luo, Yongkang, Zhao, Min. HBGNet: Robotic Grasp Detection Using a Hybrid Network. | IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2025, 74. |
Abstract :
Purpose: Workers face a notable risk of musculoskeletal injuries when performing squatting tasks. Knee exoskeletons offer a promising solution to mitigate muscle strain through squat assistance. However, existing studies on knee exoskeletons lack a comprehensive treatment of the multifaceted requirements of squatting assistance in terms of portability, efficiency, and muscle-strain mitigation. Furthermore, an open research question concerns the control strategy for squat assistance, which should adapt to various postures and cadences across individuals. In particular, the effect of controlling negative power assistance during the squat-down phase has not been studied.
Methods: To fill these two gaps, we first develop a simple (computationally efficient and implementable on a microcontroller) and generalizable (across postures, cadences, and individuals) torque controller for portable knee exoskeletons that delivers both negative and positive power. Our portable knee exoskeleton can benefit users by enhancing efficiency (reducing metabolic cost, heart rate, and breathing ventilation), mitigating muscle strain (reducing EMG), and reducing perceived exertion (on the Borg 6-20 scale) during squatting. Second, we study the effect of three levels of negative power assistance during the squat-down phase.
Results: This study integrates comprehensive biomechanics and physiology analyses that evaluate the exoskeleton's effectiveness using four objective and two subjective metrics with a group of able-bodied subjects (n = 7). The exoskeleton reduced metabolic cost by 12.8%, heart rate by 13.8%, breathing ventilation by 8.9%, extensor muscle activity by 39.4-43.2%, flexor muscle activity by 18.9-20.3%, and Borg perceived exertion by 1.8 points during squatting compared with not wearing the robot.
Conclusion: In contrast to musculoskeletal model predictions, which suggest increasing benefit with higher levels of negative power assistance, we find that the best performance was achieved with a moderate level of negative power assistance, followed by no assistance and then high assistance.
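The negative/positive power distinction above comes down to the sign of joint power P = τ·ω: negative while squatting down (energy absorption), positive while standing up (energy generation). A minimal sketch of sign-dependent torque scaling follows; the gains and function names are hypothetical, not the authors' controller.

```python
def assist_torque(tau_bio, omega, k_pos=0.3, k_neg=0.15):
    """Scale assistance with the sign of biological joint power.

    tau_bio : estimated biological knee torque (N*m)
    omega   : knee angular velocity (rad/s)
    Joint power P = tau * omega; P < 0 during squat-down (negative power),
    P > 0 during stand-up (positive power). Gains k_pos and k_neg are
    hypothetical tuning knobs, not values from the paper.
    """
    power = tau_bio * omega
    gain = k_pos if power >= 0 else k_neg
    return gain * tau_bio

# Stand-up (positive power) gets the larger share of assistance here:
print(assist_torque(40.0, 1.0))   # 12.0
print(assist_torque(40.0, -1.0))  # 6.0
```

The paper's finding that a moderate negative-power level works best would correspond, in this sketch, to tuning `k_neg` to an intermediate value rather than maximizing it.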
Keyword :
Negative power; Wearable robotics; Squatting; Dynamic model; Portable knee exoskeleton
Cite:
GB/T 7714 | Yu, Shuangyue, Liu, Lu, Zhang, Sainan, et al. Controlling Negative and Positive Power for Efficiency Enhancement and Muscle Strain Mitigation During Squatting with a Portable Knee Exoskeleton [J]. | ANNALS OF BIOMEDICAL ENGINEERING, 2025. |
MLA | Yu, Shuangyue, et al. "Controlling Negative and Positive Power for Efficiency Enhancement and Muscle Strain Mitigation During Squatting with a Portable Knee Exoskeleton." | ANNALS OF BIOMEDICAL ENGINEERING (2025). |
APA | Yu, Shuangyue, Liu, Lu, Zhang, Sainan, Di Lallo, Antonio, Zhu, Junxi, Wu, Qifei, et al. Controlling Negative and Positive Power for Efficiency Enhancement and Muscle Strain Mitigation During Squatting with a Portable Knee Exoskeleton. | ANNALS OF BIOMEDICAL ENGINEERING, 2025. |
Abstract :
Category-level 6D pose estimation aims to accurately predict the spatial position, orientation, and scale of unseen objects belonging to a specific category. Existing methods often fall into two camps: prior-based approaches, which typically utilize the Umeyama algorithm and achieve high accuracy but suffer from training limitations and computational overhead, and end-to-end methods, which offer efficient training but often underperform owing to a lack of category-specific prior knowledge. To bridge this gap, we propose a novel framework, SCA-Pose, that leverages the advantages of both approaches. SCA-Pose consists of a main network for efficient inference and an auxiliary network for enhanced accuracy. The main network, featuring local and global feature-fusion modules (CNN and HS-Net) and a trainable pose regressor, enables end-to-end learning for real-time applications. The auxiliary network further refines the pose prediction by incorporating intrinsic geometric-consistency constraints between Normalized Object Coordinate Space (NOCS) coordinates and object pose and size, while adaptively accounting for intra-class shape variations. Experimental results on the REAL275 and CAMERA25 datasets show that SCA-Pose achieves a significant performance improvement over the existing baseline method (RBP-Pose) and can run in real time (30 FPS).
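The Umeyama algorithm mentioned above recovers the scale, rotation, and translation that align predicted NOCS coordinates with observed camera-space points. A standard numpy implementation, independent of SCA-Pose's code, looks like this:

```python
import numpy as np

def umeyama(src, dst):
    """Similarity transform (s, R, t) minimizing ||dst - (s R src + t)||^2.

    src, dst: (N, 3) corresponding point sets (e.g. NOCS coordinates vs.
    observed camera points). Returns scale s, rotation R (3x3), translation t.
    """
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                      # reflection correction
    R = U @ S @ Vt
    var_s = (xs ** 2).sum() / len(src)      # variance of the source cloud
    s = np.trace(np.diag(D) @ S) / var_s
    t = mu_d - s * R @ mu_s
    return s, R, t

# Round-trip check with a known similarity transform:
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
dst = 2.0 * src @ Rz.T + np.array([1.0, 2.0, 3.0])
s, R, t = umeyama(src, dst)
print(round(s, 6))  # 2.0
```

Prior-based estimators run this closed-form solve at inference time, which is one source of the computational overhead the abstract contrasts with end-to-end regression.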
Keyword :
Category-level; Adaptive shape prior; 6D pose estimation; End-to-end
Cite:
GB/T 7714 | Zuo, Guoyu, Yu, Shan, Yu, Shuangyue, et al. SCA-Pose: category-level 6D pose estimation with adaptive shape prior based on CNN and graph convolution [J]. | INTELLIGENT SERVICE ROBOTICS, 2025. |
MLA | Zuo, Guoyu, et al. "SCA-Pose: category-level 6D pose estimation with adaptive shape prior based on CNN and graph convolution." | INTELLIGENT SERVICE ROBOTICS (2025). |
APA | Zuo, Guoyu, Yu, Shan, Yu, Shuangyue, Liu, Hong, Zhao, Min. SCA-Pose: category-level 6D pose estimation with adaptive shape prior based on CNN and graph convolution. | INTELLIGENT SERVICE ROBOTICS, 2025. |
Abstract :
Trajectory planning is an important stage in robot operation. Many imitation-learning methods have been researched for learning operation skills from demonstrated trajectories. However, it remains a challenge to use the learned skill models to generate motion trajectories suited to various changing conditions. In this paper, a closed-loop dynamical evaluation and optimization mechanism is proposed for imitation-learning models to generate optimal trajectories that can adapt to multiple conditions. The mechanism integrates the following parts: (1) imitation learning based on an improved dynamic motion primitive; (2) construction of a trajectory-similarity evaluation function; and (3) an enhanced whale optimization algorithm (EWOA) that introduces a piecewise decay rate and an inertia weight to avoid getting stuck in local optima. The EWOA iteratively optimizes the key parameter of the skill-learning model based on the cost function of the trajectory-similarity evaluation, generating the trajectory with the highest similarity to the teaching trajectory. The effectiveness of the EWOA is validated on 10 benchmark functions by comparison with two other methods, and the feasibility of the dynamical optimization mechanism is demonstrated under different motion primitives and various generation conditions.
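A whale-optimization loop with a piecewise decay and an inertia weight can be sketched as below. The decay schedule, the weight form, and all parameter values are illustrative guesses at the kind of modification described, not the EWOA's exact formulas.

```python
import numpy as np

def ewoa(f, dim, n=20, iters=100, lb=-5.0, ub=5.0, seed=1):
    """Minimize f with a whale-optimization-style search.

    `a` follows a piecewise decay (fast early, slow late) and an inertia
    weight `w` scales the leader position -- both illustrative stand-ins
    for the paper's EWOA modifications.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))
    best = X[np.argmin([f(x) for x in X])].copy()
    hist = []
    for t in range(iters):
        frac = t / iters
        a = 2 - 3 * frac if frac < 0.5 else 1 - frac  # piecewise decay of a
        w = 0.9 - 0.5 * frac                          # inertia weight on leader
        for i in range(n):
            A = 2 * a * rng.random() - a
            C = 2 * rng.random(dim)
            if rng.random() < 0.5:
                if abs(A) < 1:                        # encircle the leader
                    X[i] = w * best - A * np.abs(C * best - X[i])
                else:                                 # explore a random whale
                    rand = X[rng.integers(n)]
                    X[i] = rand - A * np.abs(C * rand - X[i])
            else:                                     # logarithmic spiral move
                l = rng.uniform(-1, 1)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + w * best
            X[i] = np.clip(X[i], lb, ub)
        cand = X[np.argmin([f(x) for x in X])]
        if f(cand) < f(best):
            best = cand.copy()
        hist.append(f(best))
    return best, hist

sphere = lambda x: float((x ** 2).sum())
best, hist = ewoa(sphere, dim=5)
print(hist[-1] <= hist[0])  # True: best-so-far cost never increases
```

In the paper's setting, `f` would be the trajectory-similarity cost and the search variable the key parameter of the skill-learning model rather than a benchmark function.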
Keyword :
Trajectory dynamic optimization; Enhanced whale optimization algorithm; Evaluation mechanism; Operational skill learning
Cite:
GB/T 7714 | Liu, Chunfang, Li, Changfeng, Li, Xiaoli, et al. An effective dynamical evaluation and optimization mechanism for accurate motion primitives learning [J]. | APPLIED INTELLIGENCE, 2025, 55 (2). |
MLA | Liu, Chunfang, et al. "An effective dynamical evaluation and optimization mechanism for accurate motion primitives learning." | APPLIED INTELLIGENCE 55.2 (2025). |
APA | Liu, Chunfang, Li, Changfeng, Li, Xiaoli, Zuo, Guoyu, Yu, Pan. An effective dynamical evaluation and optimization mechanism for accurate motion primitives learning. | APPLIED INTELLIGENCE, 2025, 55 (2). |
Abstract :
Gradual correction with parallel external fixators (PEFs) is a common treatment strategy for foot-ankle deformities. Designing appropriate PEFs and developing correction-assisted software (CAS) with comprehensive functions are clinically important to ensure successful treatment and proper recovery of the patient. However, existing PEFs are inadequately targeted at specific types of foot-ankle deformity, and no CAS is available for detecting multiple interference phenomena; thus, the safety and accuracy of the corrective process cannot be guaranteed. In this article, an electromechanically driven PEF prototype with good deformity-targeting properties and five corrective degrees of freedom is proposed for a common type of foot-ankle deformity. The system is supported by CAS that incorporates digital reconstruction, trajectory planning, and safety-inspection technology. For safety inspection, collision checking is carried out with the oriented-bounding-box method, and inspection of distraction-rod overstroke and of singularities is realized based on the kinematic model. In this way, an interference-free correction scheme can be generated and executed by the proposed PEF system. A case simulation demonstrates the feasibility of the correction-assisted technologies, the applicability of the prototype, and the postural synergy between the virtual model and the physical device.
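Oriented-bounding-box collision inspection is typically a separating-axis test: two boxes are disjoint exactly when some candidate axis separates their projections. A 2D numpy sketch follows (the 3D case used for fixator components adds edge-cross-product axes); the interface is illustrative, not the CAS implementation.

```python
import numpy as np

def obb_overlap_2d(c1, axes1, ext1, c2, axes2, ext2):
    """Separating-axis test for two 2D oriented bounding boxes.

    c: center (2,); axes: 2x2 matrix whose rows are the box's unit local
    axes; ext: half-extents (2,). Boxes overlap iff no candidate axis
    separates their projections.
    """
    d = np.asarray(c2, float) - np.asarray(c1, float)
    for L in np.vstack([axes1, axes2]):        # candidate separating axes
        r1 = np.sum(ext1 * np.abs(axes1 @ L))  # projection radius of box 1
        r2 = np.sum(ext2 * np.abs(axes2 @ L))  # projection radius of box 2
        if abs(d @ L) > r1 + r2:               # gap found: boxes are separated
            return False
    return True

I = np.eye(2)  # axis-aligned boxes for the demo
print(obb_overlap_2d([0, 0], I, [1, 1], [1.5, 0], I, [1, 1]))  # True
print(obb_overlap_2d([0, 0], I, [1, 1], [3.0, 0], I, [1, 1]))  # False
```

Running this test for every pair of moving parts along a planned trajectory is one plausible way a CAS could flag interference before the correction scheme is executed.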
Keyword :
Assembly; Medical services; Correction-assisted software (CAS); external fixator; Safety; mechatronic orthopedic system; foot-ankle deformity correction; Inspection; Mechatronics; safety inspection technology; Visualization; Trajectory
Cite:
GB/T 7714 | Ju, Jie, Dong, Mingjie, Zuo, Guoyu, et al. Mechatronic 5-DOF Parallel External Fixator With Correction-Assisted Software for Correcting Foot-Ankle Deformities [J]. | IEEE-ASME TRANSACTIONS ON MECHATRONICS, 2024. |
MLA | Ju, Jie, et al. "Mechatronic 5-DOF Parallel External Fixator With Correction-Assisted Software for Correcting Foot-Ankle Deformities." | IEEE-ASME TRANSACTIONS ON MECHATRONICS (2024). |
APA | Ju, Jie, Dong, Mingjie, Zuo, Guoyu, Li, Jianfeng, Zuo, Shiping. Mechatronic 5-DOF Parallel External Fixator With Correction-Assisted Software for Correcting Foot-Ankle Deformities. | IEEE-ASME TRANSACTIONS ON MECHATRONICS, 2024. |
Abstract :
Learning from visual observations is a significant yet challenging problem in Reinforcement Learning (RL). Two related problems, representation learning and task learning, need to be solved to infer an optimal policy. Some methods have been proposed that utilize data augmentation in reinforcement learning to learn directly from images. Although these methods can improve generalization in RL, they are often found to make task learning unstable and can even lead to divergence. We investigate the causes of this instability and find that it is usually rooted in the high variance of the Q-functions. In this paper, we propose an easy-to-implement and unified method to solve the above problems, Data-augmented Reinforcement Learning with Ensemble Exploration and Exploitation (DAR-EEE). Bootstrap ensembles are incorporated into data-augmented reinforcement learning and provide uncertainty estimates for both original and augmented states, which can be utilized to stabilize and accelerate task learning. Specifically, a novel strategy called uncertainty-weighted exploitation is designed for the rational utilization of transition tuples. Moreover, an efficient exploration method using the highest upper confidence bound is used to balance exploration and exploitation. Our experimental evaluation demonstrates the improved sample efficiency and final performance of our method on a range of difficult image-based control tasks. In particular, our method achieves new state-of-the-art performance on the Reacher-easy and Cheetah-run tasks.
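The bootstrap-ensemble machinery described above can be sketched with plain arrays: exploration picks the action with the highest upper confidence bound over the ensemble, and exploitation down-weights transitions on which the ensemble disagrees. The exponential weighting form and the `beta` parameter are illustrative assumptions, not DAR-EEE's exact design.

```python
import numpy as np

def ucb_action(q_values):
    """Pick the action with the highest upper confidence bound.

    q_values: (ensemble, actions) array of bootstrap Q-estimates for
    one state. Mean rewards exploitation; std rewards exploration.
    """
    mean, std = q_values.mean(0), q_values.std(0)
    return int(np.argmax(mean + std))

def uncertainty_weight(q_values, beta=1.0):
    """Down-weight transitions whose Q-values the ensemble disagrees on.

    Returns a scalar weight in (0, 1]; `beta` is a hypothetical
    temperature controlling how harshly disagreement is penalized.
    """
    return float(np.exp(-beta * q_values.std(0).mean()))

q = np.array([[1.0, 2.0], [1.0, 4.0], [1.0, 3.0]])  # 3 heads, 2 actions
print(ucb_action(q))                                # 1
print(uncertainty_weight(np.ones((3, 2))))          # 1.0 (heads agree)
```

In training, such a weight would multiply the TD loss of each sampled transition, so that high-variance (unreliable) targets contribute less to the Q-update.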
Keyword :
Bootstrap ensembles; Reinforcement learning from images; Robot learning; Data augmentation
Cite:
GB/T 7714 | Zuo, Guoyu, Tian, Zhipeng, Huang, Gao. A stable data-augmented reinforcement learning method with ensemble exploration and exploitation [J]. | APPLIED INTELLIGENCE, 2023, 53 (21): 24792-24803. |
MLA | Zuo, Guoyu, et al. "A stable data-augmented reinforcement learning method with ensemble exploration and exploitation." | APPLIED INTELLIGENCE 53.21 (2023): 24792-24803. |
APA | Zuo, Guoyu, Tian, Zhipeng, Huang, Gao. A stable data-augmented reinforcement learning method with ensemble exploration and exploitation. | APPLIED INTELLIGENCE, 2023, 53 (21), 24792-24803. |
Abstract :
The positional information of objects is crucial for enabling robots to perform grasping and pushing manipulations in clutter. To perform these manipulations effectively, robots need to perceive the position information of objects, including their coordinates and the spatial relationships between them (e.g., proximity, adjacency). The authors propose an end-to-end position-aware deep Q-learning framework to achieve efficient collaborative pushing and grasping in clutter. Specifically, a pair of conjugate pushing and grasping attention modules is proposed to capture the position information of objects and generate high-quality affordance maps of operating positions with features of the pushing and grasping operations. In addition, the authors propose an object-isolation metric and a clutter metric based on instance segmentation to measure the spatial relationships between objects in cluttered environments. To further enhance the perception of the objects' position information, the change in the isolation and clutter metrics before and after performing an action is associated with the reward function. A series of experiments carried out in simulation and in the real world indicates that the method improves sample efficiency, task completion rate, grasping success rate, and action efficiency compared with state-of-the-art end-to-end methods. Note that the authors' system can be robustly applied to real-world use and extended to novel objects. Supplementary material is available at .
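One plausible form of an object-isolation measure computed from instance-segmentation masks is the mean nearest-neighbor distance between mask centroids; this is an illustrative proxy only, not necessarily the paper's definition of its metric.

```python
import numpy as np

def isolation_metric(masks):
    """Mean nearest-neighbor centroid distance between instance masks.

    masks: list of boolean (H, W) arrays from instance segmentation.
    Larger values mean objects are more isolated (illustrative proxy for
    how graspable the scene is; the paper's definition may differ).
    """
    cents = np.array([np.argwhere(m).mean(0) for m in masks])
    dists = np.linalg.norm(cents[:, None] - cents[None, :], axis=-1)
    np.fill_diagonal(dists, np.inf)     # ignore self-distances
    return float(dists.min(1).mean())

# Two single-pixel objects 3 px apart, plus one far away:
m = [np.zeros((10, 10), bool) for _ in range(3)]
m[0][0, 0] = m[1][0, 3] = m[2][9, 9] = True
print(round(isolation_metric(m), 3))  # 5.606
```

A reward shaped on the change of such a metric would pay the agent for pushes that spread objects apart before a grasp is attempted.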
Keyword :
deep neural networks; deep learning; intelligent robots
Cite:
GB/T 7714 | Zhao, Min, Zuo, Guoyu, Yu, Shuangyue, et al. Position-aware pushing and grasping synergy with deep reinforcement learning in clutter [J]. | CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY, 2023, 9 (3): 738-755. |
MLA | Zhao, Min, et al. "Position-aware pushing and grasping synergy with deep reinforcement learning in clutter." | CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY 9.3 (2023): 738-755. |
APA | Zhao, Min, Zuo, Guoyu, Yu, Shuangyue, Gong, Daoxiong, Wang, Zihao, Sie, Ouattara. Position-aware pushing and grasping synergy with deep reinforcement learning in clutter. | CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY, 2023, 9 (3), 738-755. |
Abstract :
Grasping in cluttered scenes is an important issue in robotic manipulation. The cooperation of grasping and pushing actions based on reinforcement learning is an effective means of obtaining a target object when it is completely blocked or there is no suitable grasping position around it. When exploring invisible objects, many existing methods depend excessively on model design and redundant grasping actions. We propose a graph-based deep reinforcement learning model to efficiently explore invisible objects and improve the performance of cooperative grasping and pushing tasks. Our model first extracts the state features and then estimates the Q value with different graph Q-Nets according to whether the target object has been found. The graph-based Q-learning model contains an encoder, a graph reasoning module, and a decoder. The encoder integrates the state features such that the features of one region include those of other regions. The graph reasoning module captures the internal relationships of features between different regions through graph convolution networks. The decoder maps the features transformed by reasoning back to the original state features. Our method achieves a 100% success rate in the task of exploring the target object and a success rate of more than 90% in the task of cooperative grasping and pushing in simulation experiments, outperforming many existing state-of-the-art methods. It is thus an effective means of helping robots obtain completely occluded objects through grasping and pushing cooperation in cluttered scenes. A verification experiment on a real robot further shows the generalization ability and practicability of the proposed model.
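A graph reasoning step of the kind described (graph convolution over region features) reduces to normalized neighborhood aggregation. A minimal numpy layer is shown below; the adjacency, features, and weight matrix are toy stand-ins, not the paper's architecture.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).

    A: (n, n) adjacency between feature regions; H: (n, f) region features;
    W: (f, f') weight matrix (trainable in practice, fixed here for the demo).
    Each region's output mixes its own features with its neighbors'.
    """
    A_hat = A + np.eye(len(A))                 # add self-loops
    d = A_hat.sum(1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))   # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)     # ReLU activation

A = np.array([[0.0, 1.0], [1.0, 0.0]])         # two mutually connected regions
H = np.eye(2)                                  # one-hot region features
out = gcn_layer(A, H, np.eye(2))
print(out.shape)  # (2, 2)
```

Stacking a few such layers is what lets information about one region's features propagate into the Q-value estimate for actions in another region.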
Keyword :
Reinforcement learning; Graph convolution network; Robotic manipulation; Fully occluded object; Grasping and pushing
Cite:
GB/T 7714 | Zuo, Guoyu, Tong, Jiayuan, Wang, Zihao, et al. A Graph-Based Deep Reinforcement Learning Approach to Grasping Fully Occluded Objects [J]. | COGNITIVE COMPUTATION, 2022, 15 (1): 36-49. |
MLA | Zuo, Guoyu, et al. "A Graph-Based Deep Reinforcement Learning Approach to Grasping Fully Occluded Objects." | COGNITIVE COMPUTATION 15.1 (2022): 36-49. |
APA | Zuo, Guoyu, Tong, Jiayuan, Wang, Zihao, Gong, Daoxiong. A Graph-Based Deep Reinforcement Learning Approach to Grasping Fully Occluded Objects. | COGNITIVE COMPUTATION, 2022, 15 (1), 36-49. |
Abstract :
Efficient and collision-free coordination of two robot arms is increasingly needed in various service-oriented robotic applications. This paper proposes a dual-arm coordination algorithm that improves coordination efficiency by considering both the robot's actions and the operating sequences of tasks that require two arms to complete complex operations. Teleoperation demonstration is first performed to obtain human-like motion trajectories for the robot, reducing the probability of collisions between the two arms. A coordination diagram in the time domain is then designed to represent trajectory-collision situations more clearly and to find a collision-free coordination action law. A Coordination Pair Generator (CPG) is designed to reorganize the operating sequences according to the characteristics of the input trajectories and the action coordination. The effectiveness and efficiency of our method are verified in simulation and physical experiments executing a drug-sorting task for nursing homes, on the ABB YuMi robot model and a self-developed robot system, respectively. According to the experimental results, operation time was reduced by 9% and the collision area by 7.5%.
Keyword :
Dual arm coordination; teleoperation demonstration; coordination diagram
Cite:
GB/T 7714 | Zuo, Guoyu, Xu, Zichen, Huang, Gao. Dual Arm Coordination with Coordination Diagram Based on Teleoperation Demonstration [J]. | INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS, 2022, 19 (04). |
MLA | Zuo, Guoyu, et al. "Dual Arm Coordination with Coordination Diagram Based on Teleoperation Demonstration." | INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS 19.04 (2022). |
APA | Zuo, Guoyu, Xu, Zichen, Huang, Gao. Dual Arm Coordination with Coordination Diagram Based on Teleoperation Demonstration. | INTERNATIONAL JOURNAL OF HUMANOID ROBOTICS, 2022, 19 (04). |
Abstract :
The ability of robots to perceive the position information of objects is important for pushing and grasping tasks in clutter. We propose a Coordinate Attention Grasping and Pushing Network (Ca-GPNet) to learn synergistic grasping and pushing strategies in cluttered environments. We use two fully convolutional networks to predict actions: one predicts the grasping direction and position of the gripper, and the other predicts the initial push position and direction. A coordinate attention module captures the position information of objects along the horizontal and vertical spatial directions and aggregates the features: it extracts long-range interdependencies in one dimension while preserving location information in the other. A pair of attention maps perceiving the location and orientation features of objects is then generated to reinforce the network's perception of object locations. In simulation, the system achieves a grasping success rate of 73.7% and an action efficiency of 71.4%. Our approach can also be applied to real-world pushing and grasping tasks, where Ca-GPNet achieves a grasping success rate of 79.2% and an action efficiency of 75.3%. Compared with baseline methods, our system rapidly learns the cooperative pushing and grasping policy in a cluttered environment, with higher sample efficiency, grasping success rate, and action efficiency.
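The core of coordinate attention is pooling along one spatial direction at a time, so that precise position along the other direction is preserved in the attention map. A stripped-down numpy sketch follows; it omits the learned 1x1 convolutions of the published coordinate-attention module and uses plain mean pooling.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def coordinate_attention(x):
    """Simplified coordinate attention on a (C, H, W) feature map.

    Pools along each spatial direction separately, so each attention factor
    keeps exact position information in the direction it did NOT pool over.
    The original module passes these pooled maps through learned 1x1 convs;
    that step is omitted here for brevity.
    """
    pool_h = x.mean(axis=2, keepdims=True)  # (C, H, 1): position along height
    pool_w = x.mean(axis=1, keepdims=True)  # (C, 1, W): position along width
    return x * sigmoid(pool_h) * sigmoid(pool_w)

x = np.random.default_rng(0).normal(size=(8, 6, 6))
y = coordinate_attention(x)
print(y.shape)  # (8, 6, 6)
```

The two broadcast factors play the role of the "pair of attention maps" the abstract describes: one encodes where objects lie vertically, the other horizontally.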
Keyword :
coordinate attention; clutter; robots
Cite:
GB/T 7714 | Zhao, Min, Zuo, Guoyu, Huang, Gao. Collaborative Learning of Deep Reinforcement Pushing and Grasping based on Coordinate Attention in Clutter [J]. | 2022 INTERNATIONAL CONFERENCE ON VIRTUAL REALITY, HUMAN-COMPUTER INTERACTION AND ARTIFICIAL INTELLIGENCE, VRHCIAI, 2022: 156-161. |
MLA | Zhao, Min, et al. "Collaborative Learning of Deep Reinforcement Pushing and Grasping based on Coordinate Attention in Clutter." | 2022 INTERNATIONAL CONFERENCE ON VIRTUAL REALITY, HUMAN-COMPUTER INTERACTION AND ARTIFICIAL INTELLIGENCE, VRHCIAI (2022): 156-161. |
APA | Zhao, Min, Zuo, Guoyu, Huang, Gao. Collaborative Learning of Deep Reinforcement Pushing and Grasping based on Coordinate Attention in Clutter. | 2022 INTERNATIONAL CONFERENCE ON VIRTUAL REALITY, HUMAN-COMPUTER INTERACTION AND ARTIFICIAL INTELLIGENCE, VRHCIAI, 2022, 156-161. |