Query:
Scholar name: Zhang Haibin (张海斌)
Abstract :
The privacy-protected algorithm (PPA) is pivotal in the realm of machine learning, especially for handling sensitive data types, such as medical and financial records. PPA enables two distinct operations: data publishing and data analysis, each capable of functioning independently. However, the field lacks a unified framework or an efficient algorithm to synergize these operations. This deficiency inspires our current research endeavor. In this paper, we introduce a novel dual-mode empirical risk minimization (D-ERM) model, specifically designed for integrated learning tasks. We also develop an alternating minimization differential privacy protection algorithm (AMDPPA) for implementing the D-ERM model. Our theoretical analysis confirms the differential privacy and accuracy of AMDPPA. We validate the algorithm's efficacy through numerical experiments using real-world datasets, demonstrating its ability to effectively balance privacy with learning efficiency.
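As a rough illustration of the mechanism this abstract describes — not the authors' AMDPPA, whose precise updates and privacy accounting are given in the paper — the following Python sketch alternates noisy gradient steps over two parameter blocks of a regularized ERM objective, with Gaussian noise calibrated by the classical Gaussian mechanism. The block split, loss, regularizer, and naive per-iteration budget split are all illustrative assumptions.

import numpy as np

def gaussian_sigma(sens, eps, delta):
    # Classical Gaussian-mechanism noise scale for l2-sensitivity `sens`
    # and budget (eps, delta); illustrative calibration only.
    return sens * np.sqrt(2.0 * np.log(1.25 / delta)) / eps

def dp_alternating_erm(X, y, eps=1.0, delta=1e-5, lr=0.1, iters=50, seed=0):
    """Alternate noisy gradient steps on two blocks (w, v) of a
    ridge-regularized least-squares objective. Illustrative sketch;
    the paper's D-ERM objective and AMDPPA updates differ."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, v = np.zeros(d), np.zeros(d)          # two assumed parameter blocks
    # Naive composition: split the budget evenly over all noisy steps.
    sigma = gaussian_sigma(1.0 / n, eps / (2 * iters), delta)
    for _ in range(iters):
        gw = X.T @ (X @ (w + v) - y) / n + 0.1 * w
        w -= lr * (gw + sigma * rng.standard_normal(d))   # noisy step, block 1
        gv = X.T @ (X @ (w + v) - y) / n + 0.1 * v
        v -= lr * (gv + sigma * rng.standard_normal(d))   # noisy step, block 2
    return w, v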
Keyword :
Alternating minimization algorithm; Dual-mode learning; Differential privacy protection; Dual-mode empirical risk minimization model
Cite:
GB/T 7714: Zhao, Pengfei, Zhang, Kaili, Zhang, Haibin, et al. Alternating minimization differential privacy protection algorithm for the novel dual-mode learning tasks model [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 259.
MLA: Zhao, Pengfei, et al. "Alternating minimization differential privacy protection algorithm for the novel dual-mode learning tasks model." EXPERT SYSTEMS WITH APPLICATIONS 259 (2024).
APA: Zhao, Pengfei, Zhang, Kaili, Zhang, Haibin, & Chen, Haibin. Alternating minimization differential privacy protection algorithm for the novel dual-mode learning tasks model. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 259.
Abstract :
Partial label learning (PLL) is a classification problem in which each training instance is ambiguously annotated with a set of candidate labels, among which only one is the ground-truth label. Existing PLL approaches frequently use the topological structure of the feature space to decide whether a candidate label is the ground-truth label of a training example. However, because of the redundant and noisy features that occur naturally in the feature space, these techniques often accumulate errors from error-prone labeling-confidence estimates propagated along the topological structure. In this paper, we propose a novel approach that addresses this challenge by Identifying Outlier Features (PL-IOF). The feature space is decomposed into class-prototype features and outlier features, where the former capture the discriminative features of each class prototype and the latter capture the outlier features in each training instance. A unified framework is proposed to simultaneously optimize class prototypes and outlier features and to estimate labeling confidences over partially labeled training instances. By recognizing outliers, this framework ensures the high quality of the extracted class prototypes. Experiments on both synthetic and real-world datasets demonstrate that PL-IOF outperforms the state of the art.
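To make concrete the kind of topology-based disambiguation the abstract says is error-prone under noisy features, here is a minimal generic baseline (not PL-IOF itself): candidate-label confidences estimated from a point's nearest neighbours, restricted to its own candidate set. All names and the choice of k are illustrative.

import numpy as np

def knn_label_confidence(X, candidates, k=5):
    """Generic topology-based PLL disambiguation baseline: estimate a
    confidence for each candidate label by averaging the candidate
    indicators of a point's k nearest neighbours. This is the kind of
    scheme the abstract argues is fragile under outlier features, not
    the proposed PL-IOF method.
    X: (n, d) features; candidates: (n, L) 0/1 candidate-label mask."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                 # exclude self-neighbours
    conf = np.zeros(candidates.shape, dtype=float)
    for i in range(n):
        nbrs = np.argsort(d2[i])[:k]
        votes = candidates[nbrs].mean(0) * candidates[i]  # keep own candidates only
        s = votes.sum()
        conf[i] = votes / s if s > 0 else candidates[i] / candidates[i].sum()
    return conf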
Keyword :
Outlier features; Disambiguation; Label prototype; Partial label learning
Cite:
GB/T 7714: Ma, Jinyao, Jiang, Jiaojiao, Bao, Wei, et al. Partial label learning via identifying outlier features [J]. KNOWLEDGE-BASED SYSTEMS, 2024, 301.
MLA: Ma, Jinyao, et al. "Partial label learning via identifying outlier features." KNOWLEDGE-BASED SYSTEMS 301 (2024).
APA: Ma, Jinyao, Jiang, Jiaojiao, Bao, Wei, & Zhang, Haibin. Partial label learning via identifying outlier features. KNOWLEDGE-BASED SYSTEMS, 2024, 301.
Abstract :
Convex clustering, which turns clustering into a convex optimization problem, has drawn wide attention. It overcomes the tendency of traditional clustering methods, such as K-means, Density-Based Spatial Clustering of Applications with Noise (DBSCAN), and hierarchical clustering, to fall into local optima. However, convex clustering is vulnerable to outlier features, as it uses the Frobenius norm both to measure the distance between data points and their corresponding cluster centers and to evaluate clusters. To accurately identify outlier features, this paper decomposes the data into a clustering-structure component and a normalized component that captures outlier features. Unlike existing convex clustering, which evaluates all features with the same exact measurement, the proposed model can overcome vast differences in the magnitudes of different features, so that outlier features can be efficiently identified and removed. To solve the proposed model, we design an efficient algorithm and prove its global convergence. Experiments on both synthetic and UCI datasets demonstrate that the proposed method outperforms competing convex clustering approaches.
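A minimal sketch of the kind of decomposition the abstract describes: the data matrix is split into a cluster-center component U and an outlier-feature component O, with a fusion penalty on pairwise center differences and a sparsity penalty on O. The exact weights, norms, and normalization used in the paper may differ.

import numpy as np

def robust_convex_clustering_obj(X, U, O, gamma=1.0, lam=1.0):
    """Objective of a robust convex-clustering model in the spirit of
    the decomposition above: X is approximated by cluster centers U
    plus an outlier-feature component O. The (unweighted) fusion term
    pulls centers together; the l1 term promotes sparse outliers."""
    fit = 0.5 * np.linalg.norm(X - U - O, 'fro') ** 2
    n = X.shape[0]
    fusion = sum(np.linalg.norm(U[i] - U[j])
                 for i in range(n) for j in range(i + 1, n))
    return fit + gamma * fusion + lam * np.abs(O).sum()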
Keyword :
outlier features; Newton's method; convex clustering; block coordinate descent
Cite:
GB/T 7714: Ma, Jinyao, Zhang, Haibin, Yang, Shanshan, et al. An Improved Robust Sparse Convex Clustering [J]. TSINGHUA SCIENCE AND TECHNOLOGY, 2023, 28(6): 989-998.
MLA: Ma, Jinyao, et al. "An Improved Robust Sparse Convex Clustering." TSINGHUA SCIENCE AND TECHNOLOGY 28.6 (2023): 989-998.
APA: Ma, Jinyao, Zhang, Haibin, Yang, Shanshan, Jiang, Jiaojiao, & Li, Gaidi. An Improved Robust Sparse Convex Clustering. TSINGHUA SCIENCE AND TECHNOLOGY, 2023, 28(6), 989-998.
Abstract :
Over the past decades, the abundance of personal data has led to the rapid development of machine learning models and important advances in artificial intelligence (AI). However, alongside all the achievements, there are increasing privacy threats and security risks that may cause significant losses for data providers. Recent legislation requires that private information about a user be removed from databases as well as from machine learning models upon certain deletion requests. While erasing data records from memory storage is straightforward, it is often challenging to remove the influence of particular data samples from a model that has already been trained. Machine unlearning is an emerging paradigm that aims to make machine learning models "forget" what they have learned about particular data. Nevertheless, the unlearning issue for federated learning has not been completely addressed due to its special working mode. First, existing solutions rely crucially on retraining-based model calibration, which is often unavailable and can pose new privacy risks for federated learning frameworks. Second, today's efficient unlearning strategies are mainly designed for convex problems and are incapable of handling more complicated learning tasks such as neural networks. To overcome these limitations, we take advantage of differential privacy and develop an efficient machine unlearning algorithm named FedRecovery. FedRecovery erases the impact of a client by removing a weighted sum of gradient residuals from the global model, and tailors the Gaussian noise to make the unlearned model and the retrained model statistically indistinguishable. Furthermore, the algorithm requires neither retraining-based fine-tuning nor a convexity assumption. Theoretical analyses establish a rigorous indistinguishability guarantee. Additionally, experimental results on real-world datasets demonstrate that FedRecovery is efficient and produces a model that performs similarly to the retrained one.
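Schematically, the unlearning step the abstract describes — remove a weighted sum of the target client's gradient residuals from the global model, then add tailored Gaussian noise — might look as follows. The weights and the noise calibration are derived in the paper; the values here are placeholders.

import numpy as np

def unlearn_client(w_global, residuals_i, weights, sigma, seed=0):
    """Schematic of the described unlearning step: subtract a weighted
    sum of client i's per-round gradient residuals from the global
    model, then add Gaussian noise so the unlearned and retrained
    models are statistically close. `weights` and `sigma` stand in for
    the paper's derived quantities.
    residuals_i: list of arrays, one per round; weights: same length."""
    rng = np.random.default_rng(seed)
    correction = sum(b * r for b, r in zip(weights, residuals_i))
    return w_global - correction + sigma * rng.standard_normal(w_global.shape)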
Keyword :
Federated learning; Differential privacy; Machine unlearning
Cite:
GB/T 7714: Zhang, Lefeng, Zhu, Tianqing, Zhang, Haibin, et al. FedRecovery: Differentially Private Machine Unlearning for Federated Learning Frameworks [J]. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18: 4732-4746.
MLA: Zhang, Lefeng, et al. "FedRecovery: Differentially Private Machine Unlearning for Federated Learning Frameworks." IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY 18 (2023): 4732-4746.
APA: Zhang, Lefeng, Zhu, Tianqing, Zhang, Haibin, Xiong, Ping, & Zhou, Wanlei. FedRecovery: Differentially Private Machine Unlearning for Federated Learning Frameworks. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18, 4732-4746.
Abstract :
In machine learning, most models can be transformed into unconstrained optimization problems, so solving unconstrained optimization problems for different objective functions is a perennial topic. In this paper, we study a class of unconstrained optimization problems whose objective function has a Lipschitz-continuous pth-order derivative. To handle such problems, we propose an accelerated regularized Chebyshev-Halley method based on the Accelerated Hybrid Proximal Extragradient (A-HPE) framework. We prove that the convergence complexity of the proposed method is O(ε^(-1/5)), which matches the lower iteration-complexity bound for third-order tensor methods. Numerical experiments on functions arising in machine learning demonstrate the promising performance of the proposed method.
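In symbols, the problem class and the complexity claim read roughly as follows (a LaTeX transcription of the abstract's statement; the exact constants and accuracy measure are in the paper):

\min_{x \in \mathbb{R}^n} f(x), \qquad \|\nabla^p f(x) - \nabla^p f(y)\| \le L_p \, \|x - y\| \quad \forall x, y,

and for p = 3 the method reaches f(x_k) - f(x^\ast) \le \varepsilon within O(\varepsilon^{-1/5}) iterations, matching the lower bound for third-order tensor methods.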
Keyword :
tensor methods; machine learning; convergence complexity; unconstrained optimization
Cite:
GB/T 7714: Xiao, Jianyu, Zhang, Haibin, Gao, Huan. An Accelerated Regularized Chebyshev-Halley Method for Unconstrained Optimization [J]. ASIA-PACIFIC JOURNAL OF OPERATIONAL RESEARCH, 2023, 40(04).
MLA: Xiao, Jianyu, et al. "An Accelerated Regularized Chebyshev-Halley Method for Unconstrained Optimization." ASIA-PACIFIC JOURNAL OF OPERATIONAL RESEARCH 40.04 (2023).
APA: Xiao, Jianyu, Zhang, Haibin, & Gao, Huan. An Accelerated Regularized Chebyshev-Halley Method for Unconstrained Optimization. ASIA-PACIFIC JOURNAL OF OPERATIONAL RESEARCH, 2023, 40(04).
Abstract :
The generalized alternating direction method of multipliers (ADMM) of Xiao et al. (Math Prog Comput 10:533-555, 2018) targets the two-block linearly constrained composite convex programming problem in which each block has the form "nonsmooth + quadratic". In the non-quadratic (but smooth) case, however, that method may fail unless the favorable "nonsmooth + smooth" structure is given up. This paper remedies this defect by using a majorization technique to approximate the augmented Lagrangian function, so that the corresponding subproblems can be decomposed into smaller problems and solved separately. Furthermore, the recent symmetric Gauss-Seidel (sGS) decomposition theorem guarantees the equivalence between the larger subproblem and these smaller ones. This paper focuses on convergence analysis: we prove that the sequence generated by the proposed method converges globally to a Karush-Kuhn-Tucker point of the considered problem. Finally, we report numerical experiments on simulated convex composite optimization problems, which illustrate that the proposed method is efficient.
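For orientation, here is a textbook two-block ADMM skeleton for min f(x) + g(z) subject to Ax + Bz = c, written with abstract subproblem oracles assumed as inputs. The paper's method additionally majorizes the augmented Lagrangian and applies the sGS decomposition inside each subproblem, none of which is shown.

import numpy as np

def admm_two_block(prox_f, prox_g, A, B, c, rho=1.0, iters=100):
    """Textbook two-block ADMM (scaled form) for
    min f(x) + g(z)  s.t.  Ax + Bz = c.
    prox_f(v, rho) must return argmin_x f(x) + (rho/2)||Ax - v||^2,
    and prox_g the analogue for g; both are assumed given."""
    x = np.zeros(A.shape[1]); z = np.zeros(B.shape[1]); u = np.zeros(len(c))
    for _ in range(iters):
        x = prox_f(c - B @ z - u, rho)      # x-subproblem
        z = prox_g(c - A @ x - u, rho)      # z-subproblem
        u = u + A @ x + B @ z - c           # dual (multiplier) update
    return x, z, u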
Keyword :
Composite convex programming; Alternating direction method of multipliers; Majorization; Proximal point term; Symmetric Gauss-Seidel iteration
Cite:
GB/T 7714: Li, Hongwu, Zhang, Haibin, Xiao, Yunhai, et al. Convergence analysis of generalized ADMM with majorization for linearly constrained composite convex optimization [J]. OPTIMIZATION LETTERS, 2023, 18(5): 1173-1200.
MLA: Li, Hongwu, et al. "Convergence analysis of generalized ADMM with majorization for linearly constrained composite convex optimization." OPTIMIZATION LETTERS 18.5 (2023): 1173-1200.
APA: Li, Hongwu, Zhang, Haibin, Xiao, Yunhai, & Li, Peili. Convergence analysis of generalized ADMM with majorization for linearly constrained composite convex optimization. OPTIMIZATION LETTERS, 2023, 18(5), 1173-1200.
Abstract :
In an era of data explosion and uncertain information, online optimization is becoming an increasingly powerful framework. Online DR-submodular maximization is an important subclass because of its wide applications in machine learning, statistics, and related fields, and its significance for exploring general non-convex problems. In this paper, we focus on online non-monotone DR-submodular maximization over a general constraint set and propose a meta-Frank-Wolfe online algorithm with appropriately chosen parameters. Based on the Lyapunov function approach in [8] and the variance reduction technique in [16], we show that the proposed online algorithm attains sublinear regret against a 1/4 approximation ratio to the best fixed action in hindsight.
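The building block that meta-Frank-Wolfe algorithms wrap is a Frank-Wolfe (continuous-greedy) ascent that repeatedly calls a linear-maximization oracle; a minimal offline sketch follows, with the oracle `lmo` and gradient estimator `grad` assumed as inputs. The online machinery (per-round gradient estimates, variance reduction, regret averaging) is not shown.

import numpy as np

def frank_wolfe_ascent(grad, lmo, x0, steps=50):
    """Continuous-greedy / Frank-Wolfe ascent for maximizing a
    DR-submodular F over a down-closed convex set K. grad(x) returns
    (an estimate of) the gradient; lmo(g) solves max_{v in K} <g, v>.
    Starting from x0 = 0, the iterate stays in K by convexity."""
    x = x0.copy()
    for _ in range(steps):
        v = lmo(grad(x))            # linear-optimization oracle over K
        x = x + (1.0 / steps) * v   # fixed step toward the chosen atom
    return x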
Keyword :
Variance reduction; Regret; Approximation ratio; Online optimization; DR-submodularity
Cite:
GB/T 7714: Feng, Junkai, Yang, Ruiqi, Zhang, Haibin, et al. Online Non-monotone DR-Submodular Maximization: 1/4 Approximation Ratio and Sublinear Regret [J]. COMPUTING AND COMBINATORICS, COCOON 2022, 2022, 13595: 118-125.
MLA: Feng, Junkai, et al. "Online Non-monotone DR-Submodular Maximization: 1/4 Approximation Ratio and Sublinear Regret." COMPUTING AND COMBINATORICS, COCOON 2022 13595 (2022): 118-125.
APA: Feng, Junkai, Yang, Ruiqi, Zhang, Haibin, & Zhang, Zhenning. Online Non-monotone DR-Submodular Maximization: 1/4 Approximation Ratio and Sublinear Regret. COMPUTING AND COMBINATORICS, COCOON 2022, 2022, 13595, 118-125.
Abstract :
The sparse subspace clustering problem is to group a set of data points into their underlying subspaces while simultaneously correcting the underlying noise. Recent literature has shown that the clustering task can be characterized as a block-diagonal-matrix regularized nonconvex minimization problem. However, this problem is not easy to solve because it contains a nonconvex bilinear function. The earliest method, block diagonal regularization (BDR), solved only a penalized model, not the original problem itself. A more recent algorithm, accelerated block coordinate gradient descent (ABCGD), can solve the original problem efficiently, but its convergence has not been established. In this paper, we use an accelerated gradient method (AGM) and establish its convergence, in the sense of converging to a critical point, under a certain stepsize policy. We show that each subproblem admits a closed-form solution by making full use of the constraints' structure, so the algorithm is easily implementable. Finally, we conduct numerical experiments on two real datasets. The numerical results illustrate that the proposed AGM clearly outperforms BDR and ABCGD.
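For reference, the acceleration skeleton underlying methods like the AGM above is the Nesterov-style iteration below: a generic sketch with constant stepsize 1/L. The paper's version handles a constrained, block-structured nonconvex objective with closed-form subproblems and a specific stepsize policy, none of which is reproduced here.

import numpy as np

def accelerated_gradient(grad, x0, L, iters=100):
    """Generic Nesterov-style accelerated gradient descent with
    constant stepsize 1/L; grad(y) returns the gradient at y.
    Only the acceleration skeleton, not the paper's AGM."""
    x = x0.copy(); y = x0.copy(); t = 1.0
    for _ in range(iters):
        x_new = y - grad(y) / L                        # gradient step at extrapolated point
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)    # momentum extrapolation
        x, t = x_new, t_new
    return x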
Keyword :
nonconvex nonsmooth optimization; Hopkins 155 real datasets; sparse subspace clustering; accelerated gradient method; Extended Yale B database
Cite:
GB/T 7714: Li, Hongwu, Zhang, Haibin, Xiao, Yunhai. An Accelerated Gradient Method for Nonconvex Sparse Subspace Clustering Problem [J]. PACIFIC JOURNAL OF OPTIMIZATION, 2022, 18(2): 265-280.
MLA: Li, Hongwu, et al. "An Accelerated Gradient Method for Nonconvex Sparse Subspace Clustering Problem." PACIFIC JOURNAL OF OPTIMIZATION 18.2 (2022): 265-280.
APA: Li, Hongwu, Zhang, Haibin, & Xiao, Yunhai. An Accelerated Gradient Method for Nonconvex Sparse Subspace Clustering Problem. PACIFIC JOURNAL OF OPTIMIZATION, 2022, 18(2), 265-280.
Abstract :
This paper addresses an adaptive ℓ1-ℓ2 regularized model in the framework of hierarchical convex optimization for sparse signal reconstruction. Realized as a bi-level convex optimization problem, the challenging bi-level model can also be turned into a single-level constrained optimization problem through some prior information. The ℓ1-ℓ2-norm regularized least-squares sparse optimization problem is also called the elastic net problem, and numerous simulations and real-world datasets show that the elastic net often outperforms the Lasso. However, the elastic net is suitable mainly for handling Gaussian noise. In this paper, we propose an adaptive and robust model for reconstructing sparse signals, denoted ℓp-ℓ1-ℓ2, where the ℓp-norm with p ≥ 1 measures the data fidelity and the ℓ1-ℓ2 term measures the sparsity. This model is robust and flexible in the sense of being able to handle different types of noise. To solve this model, we employ an alternating direction method of multipliers (ADMM) based on introducing one or a pair of auxiliary variables. Numerical experiments demonstrate that both the proposed model and the algorithms outperform the Lasso model solved by ADMM on sparse signal reconstruction problems.
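As a concrete special case, the p = 2 (Gaussian-noise) instance of the model reduces to the classical elastic net, which ADMM solves with one auxiliary variable via a ridge-type solve plus soft-thresholding. A minimal sketch follows; all parameter values are illustrative, and the ℓp fidelity (p ≠ 2) and hierarchical construction are not shown.

import numpy as np

def soft(v, t):
    # Soft-thresholding: proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def elastic_net_admm(A, b, lam1=0.1, lam2=0.1, rho=1.0, iters=200):
    """ADMM for min 0.5||Ax-b||_2^2 + lam1||x||_1 + 0.5*lam2||x||_2^2,
    i.e. the p = 2 (Gaussian-noise) special case of the lp-l1-l2 model
    above, using one auxiliary variable z = x. An lp fidelity would
    replace the x-update with another prox step."""
    m, n = A.shape
    AtA, Atb = A.T @ A, A.T @ b
    Q = np.linalg.inv(AtA + (lam2 + rho) * np.eye(n))  # cached x-update system
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(iters):
        x = Q @ (Atb + rho * (z - u))   # smooth block: ridge-type solve
        z = soft(x + u, lam1 / rho)     # nonsmooth block: l1 prox
        u = u + x - z                   # dual update
    return z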
Keyword :
alternating direction method of multipliers; convex optimization; ℓp-ℓ1-ℓ2 minimization; sparse signal reconstruction; hierarchical optimization
Cite:
GB/T 7714: Ding, Yanyun, Yue, Zhixiao, Zhang, Haibin. An Adaptive ℓ1-ℓ2-Type Model with Hierarchies for Sparse Signal Reconstruction Problem [J]. PACIFIC JOURNAL OF OPTIMIZATION, 2022, 18(4): 695-712.
MLA: Ding, Yanyun, et al. "An Adaptive ℓ1-ℓ2-Type Model with Hierarchies for Sparse Signal Reconstruction Problem." PACIFIC JOURNAL OF OPTIMIZATION 18.4 (2022): 695-712.
APA: Ding, Yanyun, Yue, Zhixiao, & Zhang, Haibin. An Adaptive ℓ1-ℓ2-Type Model with Hierarchies for Sparse Signal Reconstruction Problem. PACIFIC JOURNAL OF OPTIMIZATION, 2022, 18(4), 695-712.
Cite:
GB/T 7714: Bai, Bofeng, Zhang, Haibin, Cheng, Lixin, et al. Selected Papers from the 1st International Symposium on Thermal-Fluid Dynamics (ISTFD2019) [J]. HEAT TRANSFER ENGINEERING, 2021, 43(8-10): 655-657.
MLA: Bai, Bofeng, et al. "Selected Papers from the 1st International Symposium on Thermal-Fluid Dynamics (ISTFD2019)." HEAT TRANSFER ENGINEERING 43.8-10 (2021): 655-657.
APA: Bai, Bofeng, Zhang, Haibin, Cheng, Lixin, & Ghajar, Afshin J. Selected Papers from the 1st International Symposium on Thermal-Fluid Dynamics (ISTFD2019). HEAT TRANSFER ENGINEERING, 2021, 43(8-10), 655-657.