| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Active prompting with chain-of-thought for large language models | S Diao, P Wang, Y Lin, T Zhang | NAACL 2024 | 163 | 2023 |
| Black-box prompt learning for pre-trained language models | S Diao, Z Huang, R Xu, X Li, Y Lin, X Zhou, T Zhang | TMLR | 87 | 2022 |
| Mitigating the alignment tax of RLHF | Y Lin*, H Lin*, W Xiong*, S Diao*, J Liu, J Zhang, R Pan, H Wang, W Hu, ... | EMNLP 2024, 580-606 | 84* | 2024 |
| Bayesian invariant risk minimization | Y Lin, H Dong, H Wang, T Zhang | CVPR 2022 (Oral) | 75 | 2022 |
| Sparse invariant risk minimization | X Zhou*, Y Lin*, W Zhang*, T Zhang | ICML 2022 | 73 | 2022 |
| ZIN: When and how to learn invariance without environment partition? | Y Lin, S Zhu, L Tan, P Cui | Advances in Neural Information Processing Systems 35, 24529-24542 | 71* | 2022 |
| Self-guided noise-free data generation for efficient zero-shot learning | J Gao, R Pi, L Yong, H Xu, J Ye, Z Wu, W Zhang, X Liang, Z Li, L Kong | ICLR 2023 | 65* | 2023 |
| R-Tuning: Teaching Large Language Models to Refuse Unknown Questions | H Zhang*, S Diao*, Y Lin*, YR Fung, Q Lian, X Wang, Y Chen, H Ji, ... | NAACL 2024 (Outstanding Paper Award) | 61* | 2023 |
| Model agnostic sample reweighting for out-of-distribution learning | X Zhou*, Y Lin*, R Pi*, W Zhang, R Xu, P Cui, T Zhang | ICML 2022 | 60* | 2022 |
| Cable sheath loss reduction strategy research based on the coupled line model | Y Lin, Z Xu | IEEE Transactions on Power Delivery 30 (5), 2303-2311 | 50 | 2015 |
| ID and OOD Performance Are Sometimes Inversely Correlated on Real-world Datasets | D Teney, Y Lin, SJ Oh, E Abbasnejad | NeurIPS 2023 (Spotlight) | 44 | 2022 |
| Arithmetic control of LLMs for diverse user preferences: Directional preference alignment with multi-objective rewards | H Wang*, Y Lin*, W Xiong*, R Yang, S Diao, S Qiu, H Zhao, T Zhang | NAACL 2024 | 39 | 2024 |
| Probabilistic bilevel coreset selection | X Zhou, R Pi, W Zhang, Y Lin, Z Chen, T Zhang | ICML 2022 | 34 | 2022 |
| Spurious feature diversification improves out-of-distribution generalization | Y Lin*, L Tan*, Y Hao*, H Wong, H Dong, W Zhang, Y Yang, T Zhang | ICLR 2024 | 22 | 2023 |
| A Holistic View of Label Noise Transition Matrix in Deep Learning and Beyond | Y Lin*, R Pi*, W Zhang, X Xia, J Gao, X Zhou, T Liu, B Han | ICLR 2023 (Spotlight) | 22* | 2022 |
| Particle-based variational inference with preconditioned functional gradient flow | H Dong, X Wang, Y Lin, T Zhang | ICLR 2023 | 17 | 2022 |
| Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs | R Yang, R Ding, Y Lin, H Zhang, T Zhang | arXiv preprint arXiv:2406.10216 | 16 | 2024 |
| Analysis of coupling effect on LCC-MCC hybrid HVDC from parallel AC lines in close proximity | Y Lin, Z Xu, L Xiao, Z Zhang, H Xiao | 2015 IEEE Power & Energy Society General Meeting, 1-5 | 16 | 2015 |
| Stable learning via sparse variable independence | H Yu, P Cui, Y He, Z Shen, Y Lin, R Xu, X Zhang | Proceedings of the AAAI Conference on Artificial Intelligence 37 (9), 10998 … | 12 | 2023 |
| Provably invariant learning without domain information | X Tan*, Y Lin*, S Zhu, C Qu, X Qiu, X Yinghui, P Cui, Y Qi | ICML 2023 | 10 | 2023 |