
Query:

Scholar name: Zhang Haibin

Convergence analysis of generalized ADMM with majorization for linearly constrained composite convex optimization SCIE
Journal Article | 2023 | OPTIMIZATION LETTERS

Abstract:

The generalized alternating direction method of multipliers (ADMM) of Xiao et al. (Math Prog Comput 10:533-555, 2018) targets the two-block linearly constrained composite convex programming problem in which each block has the form "nonsmooth + quadratic". However, when the smooth part is not quadratic, the method may fail unless the favorable "nonsmooth + smooth" structure is abandoned. This paper remedies this defect by using a majorization technique to approximate the augmented Lagrangian function, so that the corresponding subproblems can be decomposed into smaller problems and solved separately. Furthermore, the recent symmetric Gauss-Seidel (sGS) decomposition theorem guarantees the equivalence between the larger subproblem and these smaller ones. This paper focuses on convergence analysis: we prove that the sequence generated by the proposed method converges globally to a Karush-Kuhn-Tucker point of the considered problem. Finally, numerical experiments on simulated convex composite optimization problems illustrate that the proposed method is efficient.
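For a concrete handle on the generic scheme being generalized, here is a minimal two-block ADMM sketch on a lasso instance, min_x (1/2)||Ax - b||^2 + lam*||x||_1, split as f(x) + g(z) with x - z = 0. This is plain textbook ADMM only; the paper's majorized augmented Lagrangian and sGS decomposition are not reproduced, and rho, lam, and the iteration count are illustrative assumptions.

```python
# Minimal two-block ADMM for the lasso split f(x) + g(z), x - z = 0.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=300):
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    # Factor (A^T A + rho*I) once; it is reused by every x-update.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        # x-update: quadratic subproblem solved via the cached factorization.
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # z-update: proximal step on the nonsmooth l1 term.
        z = soft_threshold(x + u, lam / rho)
        # scaled dual (multiplier) update.
        u = u + x - z
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(40)
print(np.nonzero(np.abs(admm_lasso(A, b, lam=0.5)) > 1e-3)[0])  # recovers the support
```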

Keywords:

Composite convex programming; Alternating direction method of multipliers; Majorization; Proximal point term; Symmetric Gauss-Seidel iteration

Cite:


GB/T 7714 Li, Hongwu , Zhang, Haibin , Xiao, Yunhai et al. Convergence analysis of generalized ADMM with majorization for linearly constrained composite convex optimization [J]. | OPTIMIZATION LETTERS , 2023 .
MLA Li, Hongwu et al. "Convergence analysis of generalized ADMM with majorization for linearly constrained composite convex optimization" . | OPTIMIZATION LETTERS (2023) .
APA Li, Hongwu , Zhang, Haibin , Xiao, Yunhai , Li, Peili . Convergence analysis of generalized ADMM with majorization for linearly constrained composite convex optimization . | OPTIMIZATION LETTERS , 2023 .
An Improved Robust Sparse Convex Clustering SCIE CPCI-S
Journal Article | 2023, 28 (6), 989-998 | TSINGHUA SCIENCE AND TECHNOLOGY

Abstract:

Convex clustering, which turns clustering into a convex optimization problem, has drawn wide attention. It overcomes a shortcoming of traditional clustering methods such as k-means, Density-Based Spatial Clustering of Applications with Noise (DBSCAN), and hierarchical clustering, which can easily fall into local optima. However, convex clustering is vulnerable to outlier features, as it uses the Frobenius norm to measure the distance between data points and their corresponding cluster centers and to evaluate clusters. To accurately identify outlier features, this paper decomposes the data into a clustering-structure component and a normalized component that captures the outlier features. Unlike existing convex clustering, which evaluates features with an exact measurement, the proposed model can overcome vast differences in the magnitudes of different features, so outlier features can be efficiently identified and removed. To solve the proposed model, we design an efficient algorithm and prove its global convergence. Experiments on both synthetic and UCI datasets demonstrate that the proposed method outperforms the compared convex clustering approaches.
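As background, the standard convex-clustering objective the abstract builds on couples a Frobenius-norm fidelity term with a fused penalty over pairs of centers; a minimal sketch of its evaluation follows. The paper's decomposition into clustering and outlier-feature components and its solver are not reproduced, and the Gaussian pair weights are an illustrative assumption.

```python
# Evaluate (1/2)*||X - U||_F^2 + lam * sum_{i<j} w_ij * ||U_i - U_j||_2,
# where row U_i is the center assigned to data point X_i; fusing centers
# together is what produces clusters.
import numpy as np

def convex_clustering_objective(X, U, lam, phi=0.5):
    fidelity = 0.5 * np.linalg.norm(X - U, "fro") ** 2
    fusion = 0.0
    n = X.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            w_ij = np.exp(-phi * np.linalg.norm(X[i] - X[j]) ** 2)  # assumed weights
            fusion += w_ij * np.linalg.norm(U[i] - U[j])
    return fidelity + lam * fusion

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 2))
print(convex_clustering_objective(X, U=X.copy(), lam=1.0))  # zero fidelity here
```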

Keywords:

outlier features; Newton's method; convex clustering; block coordinate descent

Cite:


GB/T 7714 Ma, Jinyao , Zhang, Haibin , Yang, Shanshan et al. An Improved Robust Sparse Convex Clustering [J]. | TSINGHUA SCIENCE AND TECHNOLOGY , 2023 , 28 (6) : 989-998 .
MLA Ma, Jinyao et al. "An Improved Robust Sparse Convex Clustering" . | TSINGHUA SCIENCE AND TECHNOLOGY 28 . 6 (2023) : 989-998 .
APA Ma, Jinyao , Zhang, Haibin , Yang, Shanshan , Jiang, Jiaojiao , Li, Gaidi . An Improved Robust Sparse Convex Clustering . | TSINGHUA SCIENCE AND TECHNOLOGY , 2023 , 28 (6) , 989-998 .
FedRecovery: Differentially Private Machine Unlearning for Federated Learning Frameworks SCIE
Journal Article | 2023, 18, 4732-4746 | IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY

Abstract:

Over the past decades, the abundance of personal data has led to the rapid development of machine learning models and important advances in artificial intelligence (AI). However, alongside these achievements, there are increasing privacy threats and security risks that may cause significant losses for data providers. Recent legislation requires that private information about a user be removed from databases as well as from machine learning models upon certain deletion requests. While erasing data records from memory storage is straightforward, it is often challenging to remove the influence of particular data samples from a model that has already been trained. Machine unlearning is an emerging paradigm that aims to make machine learning models "forget" what they have learned about particular data. Nevertheless, the unlearning issue for federated learning has not been completely addressed due to its special working mode. First, existing solutions crucially rely on retraining-based model calibration, which is likely unavailable and can pose new privacy risks for federated learning frameworks. Second, today's efficient unlearning strategies are mainly designed for convex problems and are incapable of handling more complicated learning tasks like neural networks. To overcome these limitations, we take advantage of differential privacy and develop an efficient machine unlearning algorithm named FedRecovery. FedRecovery erases the impact of a client by removing a weighted sum of gradient residuals from the global model, and tailors the Gaussian noise to make the unlearned model and the retrained model statistically indistinguishable. Furthermore, the algorithm neither requires retraining-based fine-tuning nor needs the assumption of convexity. Theoretical analyses show a rigorous indistinguishability guarantee. Additionally, experimental results on real-world datasets demonstrate that FedRecovery is efficient and produces a model that performs similarly to the retrained one.
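To make the mechanism concrete, below is a hedged sketch of the unlearning step as the abstract describes it: subtract a weighted sum of the departing client's stored gradient residuals from the global model, then add Gaussian noise calibrated for indistinguishability. The weighting scheme, residual bookkeeping, and noise scale sigma are placeholder assumptions, not the paper's exact formulas.

```python
# Illustrative FedRecovery-style unlearning step (weights and sigma assumed).
import numpy as np

def fedrecovery_unlearn(global_model, client_residuals, weights, sigma, rng):
    """global_model: flat parameter vector; client_residuals: per-round
    gradient residuals of the departing client; weights: one scalar per round;
    sigma: Gaussian noise scale from the privacy analysis."""
    correction = sum(w * r for w, r in zip(weights, client_residuals))
    unlearned = global_model - correction
    # Noise masks any remaining gap to a model retrained without the client.
    return unlearned + rng.normal(0.0, sigma, size=unlearned.shape)

rng = np.random.default_rng(1)
theta = rng.standard_normal(10)
residuals = [0.01 * rng.standard_normal(10) for _ in range(5)]
weights = [1.0 / (t + 1) for t in range(5)]  # placeholder weighting scheme
print(fedrecovery_unlearn(theta, residuals, weights, sigma=0.01, rng=rng))
```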

Keywords:

federated learning; differential privacy; machine unlearning

Cite:


GB/T 7714 Zhang, Lefeng , Zhu, Tianqing , Zhang, Haibin et al. FedRecovery: Differentially Private Machine Unlearning for Federated Learning Frameworks [J]. | IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY , 2023 , 18 : 4732-4746 .
MLA Zhang, Lefeng et al. "FedRecovery: Differentially Private Machine Unlearning for Federated Learning Frameworks" . | IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY 18 (2023) : 4732-4746 .
APA Zhang, Lefeng , Zhu, Tianqing , Zhang, Haibin , Xiong, Ping , Zhou, Wanlei . FedRecovery: Differentially Private Machine Unlearning for Federated Learning Frameworks . | IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY , 2023 , 18 , 4732-4746 .
An Accelerated Regularized Chebyshev-Halley Method for Unconstrained Optimization SCIE
Journal Article | 2023 | ASIA-PACIFIC JOURNAL OF OPERATIONAL RESEARCH

Abstract:

In machine learning, most models can be transformed into unconstrained optimization problems, so solving unconstrained optimization for different objective functions is a perennially active topic. In this paper, we study a class of unconstrained optimization problems whose objective function has a Lipschitz continuous pth-order derivative. To handle such problems, we propose an accelerated regularized Chebyshev-Halley method based on the Accelerated Hybrid Proximal Extragradient (A-HPE) framework. We prove that the convergence complexity of the proposed method is O(ε^(-1/5)), which matches the lower iteration-complexity bound for third-order tensor methods. Numerical experiments on functions from machine learning demonstrate the promising performance of the proposed method.
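For intuition about the higher-order information such methods exploit, here is the classical (unaccelerated, unregularized) Halley update, a member of the Chebyshev-Halley family, applied to g = f' to seek a stationary point of a univariate f. The paper's A-HPE acceleration and regularization are not reproduced.

```python
# One Halley step toward a root of g (here g = f'), using g' and g''.
def halley_min_step(x, g, dg, d2g):
    gx, dgx, d2gx = g(x), dg(x), d2g(x)
    return x - 2.0 * gx * dgx / (2.0 * dgx**2 - gx * d2gx)

# Minimize f(x) = x^4, so g = 4x^3, g' = 12x^2, g'' = 24x.
x = 1.0
for _ in range(20):
    x = halley_min_step(x, lambda t: 4 * t**3, lambda t: 12 * t**2, lambda t: 24 * t)
print(x)  # approaches the minimizer x = 0
```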

Keywords:

tensor methods; machine learning; convergence complexity; unconstrained optimization

Cite:


GB/T 7714 Xiao, Jianyu , Zhang, Haibin , Gao, Huan . An Accelerated Regularized Chebyshev-Halley Method for Unconstrained Optimization [J]. | ASIA-PACIFIC JOURNAL OF OPERATIONAL RESEARCH , 2023 .
MLA Xiao, Jianyu et al. "An Accelerated Regularized Chebyshev-Halley Method for Unconstrained Optimization" . | ASIA-PACIFIC JOURNAL OF OPERATIONAL RESEARCH (2023) .
APA Xiao, Jianyu , Zhang, Haibin , Gao, Huan . An Accelerated Regularized Chebyshev-Halley Method for Unconstrained Optimization . | ASIA-PACIFIC JOURNAL OF OPERATIONAL RESEARCH , 2023 .
AN ACCELERATED GRADIENT METHOD FOR NONCONVEX SPARSE SUBSPACE CLUSTERING PROBLEM SCIE
Journal Article | 2022, 18 (2), 265-280 | PACIFIC JOURNAL OF OPTIMIZATION

Abstract:

The sparse subspace clustering problem is to group a set of data points into their underlying subspaces while simultaneously correcting the underlying noise. Recent literature has shown that the clustering task can be characterized as a block-diagonal-matrix regularized nonconvex minimization problem. However, this problem is not easy to solve because it contains a nonconvex bilinear function. The earliest method, block diagonal regularization (BDR), only solved a penalized model rather than the original problem itself. The more recent accelerated block coordinate gradient descent (ABCGD) algorithm can solve the original problem efficiently, but its convergence was not established. In this paper, we use an accelerated gradient method (AGM) and establish its convergence, in the sense of converging to a critical point, under a certain stepsize policy. We show that each subproblem enjoys a closed-form solution by taking full advantage of the constraints' structure, so the algorithm is easily implementable. Finally, we report numerical experiments on two real datasets. The results illustrate that the proposed AGM clearly outperforms BDR and ABCGD.
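For reference, a minimal Nesterov-style accelerated gradient sketch of the kind the proposed AGM builds on, shown on a smooth convex quadratic for concreteness; the paper's nonconvex bilinear objective, constraints, and closed-form subproblem solutions are not reproduced.

```python
# Accelerated gradient descent with FISTA-style momentum.
import numpy as np

def accelerated_gradient(grad, x0, L, iters=200):
    """grad: gradient oracle; L: Lipschitz constant of grad (stepsize 1/L)."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = y - grad(y) / L                        # gradient step at y
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

Q = np.diag([1.0, 10.0]); c = np.array([1.0, -2.0])
# minimize (1/2) x^T Q x - c^T x; gradient Q x - c, L = 10, solution Q^{-1} c
print(accelerated_gradient(lambda v: Q @ v - c, np.zeros(2), L=10.0))  # ~ [1.0, -0.2]
```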

Keywords:

nonconvex nonsmooth optimization; Hopkins 155 real datasets; sparse subspace clustering; accelerated gradient method; Extended Yale B database

Cite:


GB/T 7714 Li, Hongwu , Zhang, Haibin , Xiao, Yunhai . AN ACCELERATED GRADIENT METHOD FOR NONCONVEX SPARSE SUBSPACE CLUSTERING PROBLEM [J]. | PACIFIC JOURNAL OF OPTIMIZATION , 2022 , 18 (2) : 265-280 .
MLA Li, Hongwu et al. "AN ACCELERATED GRADIENT METHOD FOR NONCONVEX SPARSE SUBSPACE CLUSTERING PROBLEM" . | PACIFIC JOURNAL OF OPTIMIZATION 18 . 2 (2022) : 265-280 .
APA Li, Hongwu , Zhang, Haibin , Xiao, Yunhai . AN ACCELERATED GRADIENT METHOD FOR NONCONVEX SPARSE SUBSPACE CLUSTERING PROBLEM . | PACIFIC JOURNAL OF OPTIMIZATION , 2022 , 18 (2) , 265-280 .
AN ADAPTIVE ℓ1-ℓ2-TYPE MODEL WITH HIERARCHIES FOR SPARSE SIGNAL RECONSTRUCTION PROBLEM SCIE
Journal Article | 2022, 18 (4), 695-712 | PACIFIC JOURNAL OF OPTIMIZATION

Abstract:

This paper addresses an adaptive ℓ1-ℓ2 regularized model in the framework of hierarchical convex optimization for sparse signal reconstruction. This is realized as a bi-level convex optimization problem; with some prior information, the challenging bi-level model can also be turned into a single-level constrained optimization problem. The ℓ1-ℓ2-norm regularized least-squares sparse optimization problem is also called the elastic net, and numerous simulations and real-world data show that the elastic net often outperforms the Lasso. However, the elastic net is mostly suited to Gaussian noise. In this paper, we propose an adaptive and robust model for reconstructing sparse signals, say ℓp-ℓ1-ℓ2, where the ℓp-norm with p >= 1 measures the data fidelity and the ℓ1-ℓ2 term measures the sparsity. The model is robust and flexible in the sense that it can deal with different types of noise. To solve it, we employ an alternating direction method of multipliers (ADMM) based on introducing one or a pair of auxiliary variables. Numerical experiments demonstrate that both the proposed model and the algorithms outperform the Lasso model solved by ADMM on sparse signal reconstruction problems.
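One reason ADMM suits this model is that the ℓ1-ℓ2 (elastic-net) term has a closed-form proximal operator: soft-threshold, then rescale. A minimal sketch follows; this closed form is standard, and the paper's ℓp data-fidelity handling and hierarchical machinery are not reproduced.

```python
# Proximal operator of t*(lam1*||.||_1 + (lam2/2)*||.||_2^2): elementwise
# soft-thresholding followed by a uniform rescaling.
import numpy as np

def prox_elastic_net(v, t, lam1, lam2):
    shrunk = np.sign(v) * np.maximum(np.abs(v) - t * lam1, 0.0)
    return shrunk / (1.0 + t * lam2)

print(prox_elastic_net(np.array([3.0, -0.5, 1.2]), t=1.0, lam1=1.0, lam2=0.5))
# -> [ 1.33333333 -0.          0.13333333]
```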

Keywords:

alternating direction method of multipliers; convex optimization; ℓp-ℓ1-ℓ2 minimization; sparse signal reconstruction; hierarchical optimization

Cite:


GB/T 7714 Ding, Yanyun , Yue, Zhixiao , Zhang, Haibin . AN ADAPTIVE ℓ1-ℓ2-TYPE MODEL WITH HIERARCHIES FOR SPARSE SIGNAL RECONSTRUCTION PROBLEM [J]. | PACIFIC JOURNAL OF OPTIMIZATION , 2022 , 18 (4) : 695-712 .
MLA Ding, Yanyun et al. "AN ADAPTIVE ℓ1-ℓ2-TYPE MODEL WITH HIERARCHIES FOR SPARSE SIGNAL RECONSTRUCTION PROBLEM" . | PACIFIC JOURNAL OF OPTIMIZATION 18 . 4 (2022) : 695-712 .
APA Ding, Yanyun , Yue, Zhixiao , Zhang, Haibin . AN ADAPTIVE ℓ1-ℓ2-TYPE MODEL WITH HIERARCHIES FOR SPARSE SIGNAL RECONSTRUCTION PROBLEM . | PACIFIC JOURNAL OF OPTIMIZATION , 2022 , 18 (4) , 695-712 .
Online Non-monotone DR-Submodular Maximization: 1/4 Approximation Ratio and Sublinear Regret CPCI-S
Journal Article | 2022, 13595, 118-125 | COMPUTING AND COMBINATORICS, COCOON 2022

Abstract:

In an era of data explosion and uncertain information, online optimization has become an increasingly powerful framework, and online DR-submodular maximization is an important subclass because of its wide applications in machine learning and statistics and its significance for exploring general non-convex problems. In this paper, we focus on online non-monotone DR-submodular maximization over a general constraint set and propose a meta-Frank-Wolfe online algorithm with appropriately chosen parameters. Based on the Lyapunov function approach in [8] and the variance reduction technique in [16], we show that the proposed online algorithm attains sublinear regret against a 1/4 approximation ratio to the best fixed action in hindsight.
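As a concrete fragment, one Frank-Wolfe ascent step for continuous DR-submodular maximization over the box [0,1]^n looks as follows: the linear subproblem max <grad, v> is attained at a 0/1 vertex, and the iterate moves a small step toward it. The meta-algorithm's online gradient estimates, variance reduction, and regret accounting are omitted, and the box constraint is an illustrative assumption.

```python
# One Frank-Wolfe ascent step over the box [0,1]^n.
import numpy as np

def frank_wolfe_step(x, grad, step):
    v = (grad > 0).astype(float)  # vertex solving max_{v in [0,1]^n} <grad, v>
    return x + step * (v - x)

x = np.full(3, 0.5)
print(frank_wolfe_step(x, grad=np.array([1.0, -2.0, 0.3]), step=0.1))
# coordinates with positive gradient move up, the others move down
```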

Keywords:

Variance reduction; Regret; Approximation ratio; Online optimization; DR-submodularity

Cite:


GB/T 7714 Feng, Junkai , Yang, Ruiqi , Zhang, Haibin et al. Online Non-monotone DR-Submodular Maximization: 1/4 Approximation Ratio and Sublinear Regret [J]. | COMPUTING AND COMBINATORICS, COCOON 2022 , 2022 , 13595 : 118-125 .
MLA Feng, Junkai et al. "Online Non-monotone DR-Submodular Maximization: 1/4 Approximation Ratio and Sublinear Regret" . | COMPUTING AND COMBINATORICS, COCOON 2022 13595 (2022) : 118-125 .
APA Feng, Junkai , Yang, Ruiqi , Zhang, Haibin , Zhang, Zhenning . Online Non-monotone DR-Submodular Maximization: 1/4 Approximation Ratio and Sublinear Regret . | COMPUTING AND COMBINATORICS, COCOON 2022 , 2022 , 13595 , 118-125 .
Selected Papers from the 1st International Symposium on Thermal-Fluid Dynamics (ISTFD2019) SCIE
Journal Article | 2021, 43 (8-10), 655-657 | HEAT TRANSFER ENGINEERING

Cite:


GB/T 7714 Bai, Bofeng , Zhang, Haibin , Cheng, Lixin et al. Selected Papers from the 1st International Symposium on Thermal-Fluid Dynamics (ISTFD2019) [J]. | HEAT TRANSFER ENGINEERING , 2021 , 43 (8-10) : 655-657 .
MLA Bai, Bofeng et al. "Selected Papers from the 1st International Symposium on Thermal-Fluid Dynamics (ISTFD2019)" . | HEAT TRANSFER ENGINEERING 43 . 8-10 (2021) : 655-657 .
APA Bai, Bofeng , Zhang, Haibin , Cheng, Lixin , Ghajar, Afshin J. . Selected Papers from the 1st International Symposium on Thermal-Fluid Dynamics (ISTFD2019) . | HEAT TRANSFER ENGINEERING , 2021 , 43 (8-10) , 655-657 .
A Novel Adaptive Differential Privacy Algorithm for Empirical Risk Minimization SCIE
Journal Article | 2021, 38 (05) | ASIA-PACIFIC JOURNAL OF OPERATIONAL RESEARCH

Abstract:

Privacy-preserving empirical risk minimization models are crucial for the increasingly frequent setting of analyzing personal data, such as medical or financial records. Thanks to its rigorous mathematical definition, differential privacy has been widely used in privacy protection and has received much attention in recent years. Given the advantages of iterative algorithms for solving problems like empirical risk minimization, various works in the literature target differentially private iterative algorithms, especially adaptive ones. However, the final model parameters are imprecise because a large share of the privacy budget is spent on the step-size search. In this paper, we first propose a novel adaptive differential privacy algorithm that does not spend privacy budget on step-size determination. Then, through theoretical analyses, we prove that the proposed algorithm satisfies differential privacy and that its solution achieves sufficient accuracy as the number of steps grows. Furthermore, numerical analysis on real-world databases indicates that the proposed algorithm outperforms existing algorithms for model fitting in terms of accuracy.
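To fix ideas, below is a hedged sketch of one differentially private gradient step via the Gaussian mechanism on a clipped gradient. The paper's adaptive step-size rule, which avoids spending privacy budget on the search, is stubbed here as a fixed caller-supplied learning rate; clip and sigma are placeholder parameters.

```python
# One DP gradient step: clip the gradient to bound sensitivity, add Gaussian
# noise scaled to the clip bound, then take the descent step.
import numpy as np

def dp_gradient_step(theta, grad_fn, lr, clip, sigma, rng):
    g = grad_fn(theta)
    g = g / max(1.0, np.linalg.norm(g) / clip)                 # enforce ||g|| <= clip
    g_noisy = g + rng.normal(0.0, sigma * clip, size=g.shape)  # Gaussian mechanism
    return theta - lr * g_noisy

rng = np.random.default_rng(0)
theta = np.zeros(5)
grad_fn = lambda t: t - np.ones(5)  # gradient of (1/2)*||t - 1||^2
for _ in range(200):
    theta = dp_gradient_step(theta, grad_fn, lr=0.1, clip=1.0, sigma=0.1, rng=rng)
print(theta)  # near the all-ones minimizer, up to injected noise
```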

Keywords:

iteration algorithm; empirical risk minimization; differential privacy

Cite:


GB/T 7714 Zhang, Kaili , Zhang, Haibin , Zhao, Pengfei et al. A Novel Adaptive Differential Privacy Algorithm for Empirical Risk Minimization [J]. | ASIA-PACIFIC JOURNAL OF OPERATIONAL RESEARCH , 2021 , 38 (05) .
MLA Zhang, Kaili et al. "A Novel Adaptive Differential Privacy Algorithm for Empirical Risk Minimization" . | ASIA-PACIFIC JOURNAL OF OPERATIONAL RESEARCH 38 . 05 (2021) .
APA Zhang, Kaili , Zhang, Haibin , Zhao, Pengfei , Chen, Haibin . A Novel Adaptive Differential Privacy Algorithm for Empirical Risk Minimization . | ASIA-PACIFIC JOURNAL OF OPERATIONAL RESEARCH , 2021 , 38 (05) .
An inertial Douglas-Rachford splitting algorithm for nonconvex and nonsmooth problems SCIE
Journal Article | 2021, 35 (17) | CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE

Abstract:

In the fields of wireless communication and data processing, there is a wide variety of mathematical optimization problems, especially nonconvex and nonsmooth ones, and one of the biggest difficulties is solving them quickly. To this end, we focus on a nonconvex and nonsmooth minimization model. First, we establish an inertial Douglas-Rachford splitting (IDRS) algorithm, which incorporates inertial technology into the framework of the Douglas-Rachford splitting algorithm. Then, with the aid of the Kurdyka-Lojasiewicz property, we show that the iteration sequence generated by the proposed IDRS algorithm converges to a stationary point of the nonconvex nonsmooth optimization problem. Finally, a series of numerical experiments on signal recovery demonstrates the effectiveness of the proposed algorithm. The results indicate that the proposed IDRS algorithm outperforms a comparison algorithm.
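A minimal sketch of the inertial Douglas-Rachford iteration on the convex signal-recovery instance min (1/2)||Ax - b||^2 + lam*||x||_1 follows (the paper treats nonconvex nonsmooth problems; this convex instance only illustrates the update pattern). The inertial extrapolation is the single change from plain DRS, and gamma, alpha, and lam are illustrative assumptions.

```python
# Inertial Douglas-Rachford splitting: extrapolate, then two resolvent steps.
import numpy as np

def idrs(A, b, lam, gamma=1.0, alpha=0.3, iters=300):
    n = A.shape[1]
    # prox of gamma*f, f = (1/2)||Ax - b||^2, solves (I + gamma A^T A) x = v + gamma A^T b.
    M = np.linalg.cholesky(np.eye(n) + gamma * (A.T @ A))
    Atb = A.T @ b
    z, z_prev, y = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        z_hat = z + alpha * (z - z_prev)                                  # inertial extrapolation
        y = np.linalg.solve(M.T, np.linalg.solve(M, z_hat + gamma * Atb))  # prox of gamma*f
        r = 2 * y - z_hat
        w = np.sign(r) * np.maximum(np.abs(r) - gamma * lam, 0.0)          # prox of gamma*g (l1)
        z_prev, z = z, z_hat + w - y                                       # DRS update
    return y

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 80))
x_true = np.zeros(80); x_true[:4] = 1.0
b = A @ x_true
print(np.round(idrs(A, b, lam=0.1)[:6], 2))  # first four entries near 1, rest near 0
```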

Keywords:

nonconvex and nonsmooth optimization; Kurdyka-Lojasiewicz property; Douglas-Rachford splitting; inertial

Cite:


GB/T 7714 Feng, Junkai , Zhang, Haibin , Zhang, Kaili et al. An inertial Douglas-Rachford splitting algorithm for nonconvex and nonsmooth problems [J]. | CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE , 2021 , 35 (17) .
MLA Feng, Junkai et al. "An inertial Douglas-Rachford splitting algorithm for nonconvex and nonsmooth problems" . | CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE 35 . 17 (2021) .
APA Feng, Junkai , Zhang, Haibin , Zhang, Kaili , Zhao, Pengfei . An inertial Douglas-Rachford splitting algorithm for nonconvex and nonsmooth problems . | CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE , 2021 , 35 (17) .