TUTORIAL-1
TITLE: Adversarial Learning: Secure and Robust AI (*)
ABSTRACT: This tutorial covers approaches that prevent malicious interference and adversarial perturbations in the (continual) training and deployment of Deep Neural Networks (DNNs). Deep learning, including both offline learning and continual online reinforcement learning, relies on enormous training datasets whose acquisition and curation may be insecure. Thus, deep learning may be prone to backdoor (Trojan) or “error-generic” data-poisoning attacks. Given the training data, deep learning employs gradient-based optimization of a “loss” function to determine the enormous set of parameters of a given DNN model; it therefore does not directly determine the DNN’s very complex decision boundaries. Training datasets may be imbalanced or inadequate for highly reliable generalization performance, or the learning process may overfit the training data. Thus, DNNs may also be prone to non-malicious bias and adversarial perturbations. In this tutorial, we will describe methods to address such problems for discrete (including sequentially discrete) decision-making. Defenses can be crafted for before/during-training (data cleansing or correction), post-training (no access to the training data), and operational (test-time) scenarios. Some post-training defenses leverage a small clean dataset while others do not. Some defenses are analogous to fuzzing techniques in cyber security, while others seek to mitigate the superfluous functionality that enables certain attacks. In particular, we will explain defense approaches based on embedded/internal features, i.e., anomaly detectors in activation space (a simple sketch of this idea is given below); such approaches reflect DDDAS-based, system-cognizant modeling methods. Examples will be drawn from deep image classification and natural-language generative AI.
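To make the activation-space idea concrete, below is a minimal sketch of per-class anomaly scoring of a DNN's internal-layer activations via Mahalanobis distance. It is an illustrative assumption, not any specific defense from the tutorial: the PyTorch model, the chosen layer, and all function names are hypothetical.

# Illustrative sketch: score samples by how anomalous their internal
# (e.g., penultimate-layer) activations are relative to known-clean
# activations of one class. Assumes a trained PyTorch classifier;
# model, layer, and names are hypothetical, not the tutorial's methods.
import numpy as np
import torch

def get_activations(model, layer, inputs):
    # Capture the chosen layer's output with a forward hook.
    captured = []
    hook = layer.register_forward_hook(
        lambda mod, inp, out: captured.append(out.detach().flatten(1).cpu()))
    with torch.no_grad():
        model(inputs)
    hook.remove()
    return torch.cat(captured).numpy()

def fit_clean_gaussian(clean_acts):
    # Fit a regularized Gaussian to the clean activations of one class.
    mu = clean_acts.mean(axis=0)
    cov = np.cov(clean_acts, rowvar=False) + 1e-3 * np.eye(clean_acts.shape[1])
    return mu, np.linalg.inv(cov)

def anomaly_scores(acts, mu, cov_inv):
    # Squared Mahalanobis distance to the clean-class mean; unusually
    # large scores suggest poisoned or backdoor-triggered samples.
    d = acts - mu
    return np.einsum('ij,jk,ik->i', d, cov_inv, d)

In such a scheme, a defender would fit one Gaussian per class on a small clean dataset (when one is available, as in some of the post-training defenses above) and flag test samples whose score exceeds a high quantile of the clean-data scores.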
(*) The methodologies discussed are based on research supported in part by AFOSR DDDAS and NSF SBIR grants and conducted in collaboration with former and current Ph.D. students at Penn State.
INSTRUCTORS’ BIOS:
George Kesidis received his MS (1990 – neural networks, stochastic optimization) and PhD (1992 – performance evaluation, networking) in EECS from UC Berkeley. Following eight years as a professor of ECE at the University of Waterloo, he has been a professor of CSE and EE at the Pennsylvania State University since 2000. His research interests include both theoretical and applied problems in AI/ML, cyber security, cloud computing, and networking. In the past 25 years, his research in these areas has been supported by over a dozen NSF grants, several Cisco Systems gifts, and grants from AFOSR, DARPA, and ONR. Presently, he has NSF grants on edge-cloud support of virtual reality and on AI applications to manufacturing, and a Cisco gift for adversarial AI. In 2023, he co-authored a book on Adversarial Learning and Secure AI, published by Cambridge University Press. He also co-founded a start-up working on this topic, in addition to boutique AI/ML projects.
David Miller joined Penn State’s Electrical Engineering Department in 1995. He is an active researcher in machine learning, data compression, bioinformatics, source and channel coding, and statistical estimation. He publishes regularly on unsupervised clustering, supervised classification, semi-supervised/transductive learning, active learning, adversarial learning, feature selection, maximum-entropy statistical inference, and hidden Markov models, and on their applications to a variety of problem domains. His publications have appeared in venues including NIPS, IJCAI, IEEE T-PAMI, IEEE TNN-LS, IEEE Transactions on Signal Processing, and Neural Computation. Dr. Miller did seminal work on semi-supervised learning in 1996, the same year he received an NSF CAREER Award. He served on the IEEE Signal Processing Society Conference Board from 2019 to 2022 and is currently on the Management Board of IEEE Transactions on Artificial Intelligence. He was Chair of the IEEE Signal Processing Society’s Machine Learning for Signal Processing Technical Committee from 2007 to 2009, an Associate Editor for IEEE Transactions on Signal Processing from 2004 to 2007, and General Chair of the 2001 IEEE Workshop on Neural Networks for Signal Processing. Dr. Miller has been a PI or co-PI on grants from NSF, AFOSR, NIH, ONR, AFRL, and NASA. He is also a co-founder of the startup Anomalee, Inc., which received an NSF SBIR award in the area of adversarial learning.