11th ACM Workshop on Artificial Intelligence and Security
with the 25th ACM Conference on Computer and Communications Security (CCS)
October 19, 2018
Toronto, Canada


News / Upcoming Events

  • April 30, 2018: The call for papers is published!
  • May 10, 2018: We are pleased to announce that AISec will be a double workshop this year, so we will be able to publish roughly twice as many papers as in past years!
  • May 10, 2018: The submission page is open!

Overview

The 2018 ACM Workshop on Artificial Intelligence and Security will be co-located with CCS — the premier computer security conference. As the 11th workshop in the series, AISec 2018 calls for papers on topics related to both AI/learning and security/privacy.

Artificial Intelligence (AI), and Machine Learning (ML) in particular, provide useful analytic and decision-making techniques that an ever-growing community of practitioners is leveraging, including in applications with security-sensitive elements. However, while security researchers often utilize such techniques to address problems and AI/ML researchers develop techniques for big-data analytics applications, neither community devotes much attention to the other. Within security research, AI/ML components are often regarded as black-box solvers. Conversely, the learning community seldom considers the security and privacy implications of its algorithms when designing them. Although the two communities generally focus on different issues, interesting problems appear where the fields meet. Researchers working at this intersection have already raised many novel questions for both communities and created a new branch of research known as secure learning.

Interest within the AISec / Secure Learning community has grown markedly over the past few years, for several reasons. First, machine learning, data mining, and other artificial intelligence technologies play a key role in extracting knowledge, situational awareness, and security intelligence from Big Data. Second, companies such as Google, Amazon, and Splunk are increasingly exploring and deploying learning technologies to address Big Data problems for their customers. Finally, these trends increasingly expose companies and their customers/users to intelligent technologies. As a result, learning technologies are being explored by researchers both as potential solutions to security and privacy problems and as a potential source of new vulnerabilities that must be secured to prevent them from misbehaving or leaking information to an adversary. The AISec Workshop meets this need and serves as the sole long-running venue for this topic.

AISec serves as the primary meeting place for diverse researchers in security, privacy, AI, and machine learning, and as a venue for developing the fundamental theory and practical applications that support the use of machine learning for security and privacy. The workshop addresses this burgeoning community, whose members focus (among other topics) on learning in game-theoretic adversarial environments, privacy-preserving learning, and the use of sophisticated new learning algorithms in security.