The 1st International Symposium on AI Safety and Security

Call for Participation

In 2022, the Japan Society of Artificial Intelligence (JSAI) established a Special Interest Group on AI Safety and Security (SIG-SEC). SIG-SEC aims to foster a research community comprising experts in AI and Information Security to lead the interdisciplinary research field of “AI Safety and Security” or so-called “Trustworthy AI.”

In recent years, numerous products and services utilizing AI technology have permeated society, with AI-driven decision-making increasingly exerting influence over human lives and various industries. As AI-driven autonomous decision-making gradually displaces human involvement, the necessity to consider AI security as a design principle has grown significantly. This symposium aims to explore and advance research in the realm of AI safety and security, encompassing topics such as malfunction, attacks, defenses, tracking, and analysis, in pursuit of innovative ideas and solutions.

We are pleased to welcome Professor Adi Shamir to the 1st International Symposium on AI Safety and Security. We look forward to your participation!

Organizer: JSAI SIG-SEC (Japan Society of Artificial Intelligence, AI Safety and Security Special Interest Group)
Co-organizer: AWS (AI Security Workshop Committee)
Supported by: JDC (Japan Datacom), IISEC (Institute of Information Security)
Sponsored by: DNV Business Assurance Japan

Date: Jan 15th (Mon), 2024
Venue: Institute of Information Security (IISEC)
2-14-1 Tsuruyacho, Kanagawa-ku, Yokohama 221-0835, Japan
(5 min walk from JR Yokohama Station)
https://www.iisec.ac.jp/english/access/
Registration URL: https://www.ai-gakkai.or.jp/sig-system/sigusers/add/sec/int_sigsec2024

Program

13:30-13:35 Opening
13:35-13:45 Introduction of Heidelberg Laureate Forum
Yuko Ishida (Japan Datacom)
13:45-14:45 Invited Talk (Tentative)
The Dimpled Manifold Model of Adversarial Examples in Machine Learning
Adi Shamir, Odelia Melamed, Oriel BenShmuel
14:45-15:00 Break
15:00-15:50
Session 1
Robustness bounds on the successful adversarial examples: Theory and Practice
Hiroaki Maeshima, Akira Otsuka (Institute of Information Security)

Learning on Contextual Code Property Graph for Source Code Vulnerability Detection 
Muhammad Fakhrur Rozi, Seiichi Ozawa (Kobe University)
15:50-16:00 Break
16:00-16:50
Session 2
AI-generated text detection method using entropy with text frequency
Kaito Taguchi, Yujie Gu, Kouichi Sakurai (Kyushu University)

Re-visited Privacy-Preserving Machine Learning
Atsuko Miyaji, Tatuhiro Yamatsuki, Bingchang He, and Shintaro Yamashita (Osaka University)
16:50 Closing