Academic Report


Talk Title: Adversarial Machine Learning

Speaker: Fabio Roli, Professor, University of Cagliari, Italy


Host: 袁景

Time: 9:00 a.m., Friday, June 21

Venue: Lecture Hall II-206, Xinyuan Building (School of Mathematics and Statistics)

Speaker Bio: Fabio Roli is a Full Professor of Computer Engineering at the University of Cagliari, Italy, and Director of the Pattern Recognition and Applications laboratory (http://pralab.diee.unica.it/). He is a partner and the R&D manager of the company Pluribus One, which he co-founded (https://www.pluribus-one.it). He has been doing research on the design of pattern recognition and machine learning systems for thirty years. His current h-index is 60 according to Google Scholar (June 2019). He has been appointed Fellow of the IEEE and Fellow of the International Association for Pattern Recognition. He was a member of the NATO advisory panel for Information and Communications Security, NATO Science for Peace and Security (2008–2011).

Abstract: Machine-learning algorithms are widely used in cybersecurity applications, including spam filtering, malware detection, and biometric recognition. In these applications, the learning algorithm must face intelligent, adaptive attackers who can carefully manipulate data to purposely subvert the learning process. Because machine-learning algorithms were not originally designed under such premises, they have been shown to be vulnerable to well-crafted, sophisticated attacks, including test-time evasion attacks (also known as adversarial examples) and training-time poisoning attacks. This talk introduces the fundamentals of adversarial machine learning through a well-structured overview of techniques for assessing the vulnerability of machine-learning algorithms to adversarial attacks (at both training and test time), along with some of the most effective countermeasures proposed to date. Application examples include object recognition in images, biometric identity recognition, and spam and malware detection.
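To make the notion of a test-time evasion attack concrete, the following is a minimal sketch (not from the talk) of an FGSM-style perturbation against an illustrative linear classifier: for a model with score w·x + b, the worst-case perturbation within an L-infinity budget eps simply moves each feature by eps against the sign of its weight. All weights, the sample, and the budget below are invented for illustration.

```python
import numpy as np

def evade_linear(x, w, eps):
    """Shift x by eps per feature in the direction that lowers the score w.x + b.

    For a linear model the gradient of the score w.r.t. x is just w,
    so the optimal L-infinity-bounded evasion step is -eps * sign(w).
    """
    return x - eps * np.sign(w)

# Illustrative classifier and sample (score > 0 means "malicious").
w = np.array([0.8, -0.5, 0.3])
b = -0.1
x = np.array([1.0, 0.2, 0.5])

score_before = w @ x + b          # 0.75: classified as malicious
x_adv = evade_linear(x, w, eps=0.5)
score_after = w @ x_adv + b       # -0.05: now classified as benign

print(score_before, score_after)
```

The same idea underlies gradient-based attacks on nonlinear models, where the sign of the loss gradient at the input replaces sign(w).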
