July 29 | Yang Feng: Learning from Similar Linear Representations: Adaptivity, Minimaxity, and Robustness

Posted: 2024-07-22

Title: Learning from Similar Linear Representations: Adaptivity, Minimaxity, and Robustness

Speaker: Prof. Yang Feng

Host: Prof. Yong Zhou (周勇)

Time: 2024-07-29, 14:00

Venue: Room A1514, Science Building, Putuo Campus

Organizer: School of Statistics


Speaker Bio:

      Yang Feng is a Professor of Biostatistics at New York University. He obtained his Ph.D. in Operations Research at Princeton University in 2010. Feng’s research interests encompass the theoretical and methodological aspects of machine learning, high-dimensional statistics, network models, and nonparametric statistics, leading to a wealth of practical applications. He has published more than 70 papers in statistical and machine learning journals. His research has been funded by multiple grants from the National Institutes of Health (NIH) and the National Science Foundation (NSF), notably the NSF CAREER Award. He is currently an Associate Editor for the Journal of the American Statistical Association (JASA), the Journal of Business & Economic Statistics (JBES), and the Annals of Applied Statistics (AoAS). His professional recognition includes being named a fellow of the American Statistical Association (ASA) and the Institute of Mathematical Statistics (IMS), as well as an elected member of the International Statistical Institute (ISI).


Abstract:

Representation multi-task learning (MTL) and transfer learning (TL) have achieved tremendous success in practice, yet the theoretical understanding of these methods remains limited. Most existing theoretical work focuses on the case where all tasks share the same representation and claims that MTL and TL almost always improve performance. However, as the number of tasks grows, assuming that all tasks share a single representation becomes unrealistic. This assumption also conflicts with empirical findings suggesting that a shared representation does not necessarily improve single-task or target-only learning performance. In this paper, we aim to understand how to learn from tasks with similar but not exactly the same linear representations, while dealing with outlier tasks. Assuming a known intrinsic dimension, we propose two algorithms that are adaptive to the similarity structure and robust to outlier tasks under both the MTL and TL settings. Our algorithms outperform single-task or target-only learning when the representations across tasks are sufficiently similar and the fraction of outlier tasks is small; moreover, they never perform worse than single-task or target-only learning, even when the representations are dissimilar. We provide information-theoretic lower bounds showing that our algorithms are nearly minimax optimal over a large regime. We also propose an algorithm that adapts to an unknown intrinsic dimension, and we conduct two simulation studies to verify the theoretical results.
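The abstract does not spell out the two algorithms, but the underlying model is standard in this literature: each task t has regression coefficients β_t = A_t θ_t, where the p × r representation matrices A_t are similar but not identical across tasks. The snippet below is a purely illustrative sketch, not the algorithms presented in the talk: it simulates such tasks and applies a common spectral baseline, estimating each β_t separately, extracting the top-r left singular subspace of the stacked estimates as a pooled representation, and refitting each task inside that subspace. All dimensions, noise levels, and variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: p features, intrinsic dimension r, T tasks, n samples per task.
p, r, T, n = 50, 3, 20, 80

# A shared column-orthonormal representation A (p x r), perturbed per task
# so the task representations are similar but not exactly the same.
A, _ = np.linalg.qr(rng.standard_normal((p, r)))
tasks = []
for t in range(T):
    At, _ = np.linalg.qr(A + 0.05 * rng.standard_normal((p, r)))
    theta = rng.standard_normal(r)
    beta = At @ theta                                  # true coefficients for task t
    X = rng.standard_normal((n, p))
    y = X @ beta + 0.5 * rng.standard_normal(n)
    tasks.append((X, y, beta))

# Step 1: independent per-task ridge estimates (ridge keeps this well-posed).
B_hat = np.column_stack([
    np.linalg.solve(X.T @ X + 0.1 * np.eye(p), X.T @ y) for X, y, _ in tasks
])

# Step 2: the top-r left singular vectors of the stacked estimates
# give a pooled estimate of the shared r-dimensional subspace.
U, _, _ = np.linalg.svd(B_hat, full_matrices=False)
A_hat = U[:, :r]

# Step 3: refit each task inside the learned subspace and compare errors.
mtl_err = single_err = 0.0
for (X, y, beta), b_single in zip(tasks, B_hat.T):
    Z = X @ A_hat                                      # n x r projected design
    theta_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)
    mtl_err += np.sum((A_hat @ theta_hat - beta) ** 2)
    single_err += np.sum((b_single - beta) ** 2)

print(f"avg sq. error, pooled representation: {mtl_err / T:.4f}")
print(f"avg sq. error, single-task ridge:     {single_err / T:.4f}")
```

In this toy setting the pooled refit typically beats the single-task estimates. The algorithms described in the talk go further in ways this baseline does not attempt: they adapt to the degree of similarity across tasks and remain robust when a fraction of the tasks are outliers.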


