Xilie Xu
Title · Cited by · Year
Attacks which do not kill training make adversarial learning stronger
J Zhang, X Xu, B Han, G Niu, L Cui, M Sugiyama, M Kankanhalli
ICML 2020, 2020
Cited by 476 · 2020
An LLM can Fool Itself: A Prompt-Based Adversarial Attack
X Xu, K Kong, N Liu, L Cui, D Wang, J Zhang, M Kankanhalli
ICLR 2024, 2024
Cited by 59* · 2024
Decision Boundary-aware Data Augmentation for Adversarial Training
C Chen, J Zhang, X Xu, L Lyu, C Chen, T Hu, G Chen
IEEE Transactions on Dependable and Secure Computing, 2022
Cited by 25* · 2022
AutoLoRa: An Automated Robust Fine-Tuning Framework
X Xu, J Zhang, M Kankanhalli
ICLR 2024, 2024
Cited by 13* · 2024
Enhancing Adversarial Contrastive Learning via Adversarial Invariant Regularization
X Xu, J Zhang, F Liu, M Sugiyama, M Kankanhalli
NeurIPS 2023, 2023
Cited by 11 · 2023
NoiLin: Improving adversarial training and correcting stereotype of noisy labels
J Zhang, X Xu, B Han, T Liu, L Cui, G Niu, M Sugiyama
Transactions on Machine Learning Research, 2022
Cited by 11* · 2022
Efficient Adversarial Contrastive Learning via Robustness-Aware Coreset Selection
X Xu, J Zhang, F Liu, M Sugiyama, M Kankanhalli
NeurIPS 2023 (spotlight), 2023
Cited by 9 · 2023
Privacy-Preserving Low-Rank Adaptation for Latent Diffusion Models
Z Luo, X Xu, F Liu, YS Koh, D Wang, J Zhang
AAAI 2025, 2024
Cited by 3 · 2024
Adversarial Attack and Defense for Non-Parametric Two-Sample Tests
X Xu, J Zhang, F Liu, M Sugiyama, M Kankanhalli
ICML 2022, 2022
Cited by 2 · 2022
Technical Report for ICML 2024 TiFA Workshop MLLM Attack Challenge: Suffix Injection and Projected Gradient Descent Can Easily Fool An MLLM
Y Guo, Z Xu, X Xu, YK Wong, L Nie, M Kankanhalli
arXiv preprint arXiv:2412.15614, 2024
2024
Perplexity-aware Correction for Robust Alignment with Noisy Preferences
K Kong, X Xu, D Wang, J Zhang, M Kankanhalli
NeurIPS 2024, 2024
2024
Towards Robust Foundation Models: Adversarial Contrastive Learning
J Zhang, X Xu
The Third Blogpost Track at ICLR 2024, 2024
2024