Workshop on Cybersecurity, AI, and Education

Date: Monday, February 16, 2026 • 1:00-3:00 PM EST
Zoom: By Invitation

SFS Principal Investigator (PI) Meeting and Symposium

NSF Project (SaTC: EDU #2335666)

NSF Project Description

SaTC: EDU: AI for Cybersecurity Education via an LLM-enabled Security Knowledge Graph (#2335666).

Building a skilled cybersecurity workforce is paramount to the national interest in the modern digital age. Traditional cybersecurity education often struggles to keep pace with an evolving threat landscape and to support learners with diverse backgrounds.

This project addresses two challenges: (1) creating an interactive teaching methodology by simulating realistic cyber-attack scenarios for learners at different levels, and (2) designing personalized, comprehensive, and up-to-date learning plans with continuous feedback and evaluation of learners’ cognitive engagement.

The project is conducted by interdisciplinary research experts in AI, cybersecurity, and education, with two interdependent goals: (a) leveraging large language models (LLMs) to empower educators and equip students with the skills necessary to combat emerging threats; and (b) embedding AI into the education system to enable practical, metric-based evaluation and feedback that measure cognitive engagement and dynamically update learning goals.

Workshop Overview

This workshop, held as part of the SFS Principal Investigator (PI) Meeting and Symposium, convenes researchers, educators, and practitioners at the intersection of cybersecurity, artificial intelligence, and education to reimagine how cybersecurity education is designed, delivered, and evaluated in the era of Generative AI (GAI).

As AI technologies increasingly shape both cyber threats and defenses, there is a pressing need to rethink educational paradigms that prepare the next generation of cybersecurity professionals. This workshop provides a forum for discussing how AI-driven approaches can enhance learning effectiveness while maintaining rigor, trustworthiness, and security awareness.

The workshop is structured around three tightly connected themes that reflect emerging research directions and practical challenges: knowledge-aware AI for education; trust, safety, and security awareness; and agentic and adaptive AI for learning. Through invited talks and presentations, participants will explore how these themes can jointly advance scalable, personalized, and resilient cybersecurity education.

Workshop Themes

The workshop is organized around three interconnected themes that collectively address both the opportunities and challenges of integrating Generative AI into cybersecurity education:

  • Knowledge-aware AI for education: This theme focuses on grounding AI-powered educational tools in structured and validated knowledge sources, such as knowledge graphs and retrieval-augmented generation (RAG) frameworks. Emphasis is placed on ensuring factual correctness, transparency, and pedagogical alignment, while mitigating hallucinations and misinformation in AI-assisted learning environments.
  • Trust, safety, and security awareness: This theme examines the reliability and responsible use of AI in educational settings, addressing issues such as robustness, bias, adversarial manipulation, data privacy, and ethical considerations. Discussions will highlight the safeguards, evaluation methodologies, and best practices required to foster trust in AI systems while cultivating security awareness among learners.
  • Agentic and adaptive AI for learning: This theme explores how agentic and adaptive AI systems, including multi-agent architectures, can support personalized instruction, interactive laboratories, continuous feedback, and learner engagement. Topics include adaptive learning pathways, intelligent tutoring systems, and AI-driven assessment mechanisms that dynamically respond to learners' needs and progress.

Program

Time: 1:00 PM - 3:00 PM (EST)
Format: Welcome & opening remarks + 1 keynote + 4 invited talks + closing

Speakers:
  • Wei Ding, University of Massachusetts Boston
  • Giri Narasimhan, Florida International University
  • Jundong Li, University of Virginia
  • Raha Moraffah, Worcester Polytechnic Institute
  • Chengshuai Zhao, Arizona State University
Time (EST) | Duration | Speaker                    | Session
1:00 pm    | 05 min   | Huan Liu & Garima Agrawal  | Welcome & Opening Remarks
1:05 pm    | 35 min   | Wei Ding                   | Keynote   [Slides]
1:40 pm    | 20 min   | Giri Narasimhan            | Invited Talk
2:00 pm    | 20 min   | Jundong Li                 | Invited Talk   [Slides]
2:20 pm    | 20 min   | Raha Moraffah              | Invited Talk
2:40 pm    | 15 min   | Chengshuai Zhao            | Invited Talk   [Slides]
2:55 pm    | 05 min   | Garima Agrawal & Huan Liu  | Reflections & Closing

Invited Speakers

Wei Ding

Executive Director, Paul English Applied AI Institute & Distinguished Professor of Computer Science, UMass Boston

Bio: Wei Ding is the Executive Director of the Paul English Applied AI Institute and a Distinguished Professor of Computer Science at UMass Boston. She previously served as a Program Director at the National Science Foundation (2019–2023) and received the NSF Director’s Award in 2022.

Keynote: In this keynote, Professor Ding will share practical lessons from building the Paul English Applied AI Institute at UMass Boston and leading the Youth STEAM Initiative, where she mentors middle and high school students to apply AI and STEAM skills in service of community needs. She will highlight what has worked to translate AI into education impact at scale, including program design, partnership models, and outcomes that can be replicated across institutions.

Giri Narasimhan

University Distinguished Professor, Knight Foundation School of Computing and Information Sciences, Florida International University

Bio: Giri Narasimhan is a University Distinguished Professor in the Knight Foundation School of Computing and Information Sciences at FIU, where he heads the Bioinformatics Research Group (BioRG) and the Algorithms for Machine Learning and Data Analytics (AMaDAys) research group. His research interests include Artificial Intelligence (AI), Machine Learning (ML), Data Science, and Algorithms. His work is interdisciplinary and has applied AI and ML techniques to the fields of Environmental Science, Bioinformatics, Public Policy, and Engineering. His research has been funded by the NSF, NIH, NIJ, DoD, FDOH, USDA, DoE, DARPA, NPS, and private industry. His recent publications on the applications of AI/ML for improving computer caches, predicting the strength of three-dimensional biomolecular interactions, and for mitigating floods in coastal river systems have gained recognition. Details at: https://users.cs.fiu.edu/~giri/.

Talk abstract: AI systems are increasingly used to support productivity and decision-making while reducing human error. But can they support structured learning? If so, how? We already have an array of AI tools at our disposal to assist the learning process. These tools do wonders in the hands of an accomplished learner. But how can we help novice learners? What do existing AI tools lack? What are the essential ingredients of a good learning environment that can be simulated with AI? In this talk, we discuss Euleris, a novel AI-powered learning platform, along with some implications for cybersecurity learners.

Jundong Li

Associate Professor, University of Virginia (ECE & CS)

Bio: Jundong Li is an Associate Professor at the University of Virginia with joint appointments in the Department of Electrical and Computer Engineering and the Department of Computer Science. His research spans data mining, machine learning, and artificial intelligence, with a particular emphasis on graph machine learning and trustworthy, safe machine learning. His work has received multiple best paper awards and early career research honors.

Talk abstract: Modern machine learning models can absorb sensitive, copyrighted, or harmful information from their training data, raising concerns when such data must be removed after deployment. Machine unlearning aims to eliminate the influence of specific data without retraining models from scratch. This talk focuses on trustworthy machine unlearning and two fundamental challenges: certification, which guarantees that unlearning is sufficiently accurate, and verification, which enables independent assessment of whether unlearning has truly occurred. I introduce certified unlearning approaches that bound the gap between unlearned and retrained models, and show that existing verification techniques can be circumvented by adversarial unlearning strategies, revealing critical gaps in current unlearning guarantees.

Raha Moraffah

Assistant Professor of Computer Science, Worcester Polytechnic Institute

Bio: Dr. Raha Moraffah is an Assistant Professor in the Department of Computer Science at Worcester Polytechnic Institute (WPI). Prior to joining WPI, she earned her Ph.D. in computer science from Arizona State University. Raha's research spans machine learning, data mining, artificial intelligence, and causal inference, with a specific focus on developing trustworthy and responsible machine learning and generative AI algorithms. She is also interested in integrating a causality perspective into machine learning, paving the way for the advancement of causal machine learning. Her scholarly contributions have been featured in prestigious academic venues such as KDD, ICDM, WSDM, ECML, AAAI, EMNLP, EACL, and IJCNLP. Her work has earned several awards, including the Outstanding Paper Award at IJCNLP 2023, the 2024 ASU Engineering Dean's Dissertation Award, ASU’s nominee for the 2024 CGS/ProQuest Distinguished Dissertation Awards, and the SCAI Doctoral Fellowship for Excellent Research (the highest distinction).

Talk abstract: Multi-agent Retrieval-Augmented Generation (RAG) systems are an emerging opportunity for AI-assisted cybersecurity education, enabling role-specialized agents to retrieve, integrate, and explain information across structured security knowledge graphs and other heterogeneous sources. However, many existing multi-agent designs remain predominantly reactive and correlation-driven, limiting their ability to reason about the effects of pedagogical and security interventions. This talk presents a vision for causal multi-agent RAG systems in cybersecurity education, highlighting how causal planning can support “why” explanations, counterfactual reasoning, and principled decision-making in adaptive cybersecurity learning environments.

Chengshuai Zhao

Ph.D. Student, Arizona State University

Bio: Chengshuai Zhao is a third-year Ph.D. student in Computer Science at Arizona State University (ASU). He works as a Graduate Research Associate in the Data Mining and Machine Learning Lab (DMML), advised by Prof. Huan Liu. His research spans data mining, AI for science, representation learning, and large language models, with the goal of building systems that are more generalizable, transparent, safe, and capable of uncovering knowledge at the frontiers of human understanding. His work has received the Best Student Paper award at IEEE CogMI and the Best Demo award at CIKM. He is a member of IEEE, ACM, SIAM, and AAAS. He also serves as a program committee member and reviewer for leading conferences, including NeurIPS, ICML, SIGKDD, WSDM, ACL, and AAAI.

Talk abstract: Artificial Intelligence (AI), particularly large language models (LLMs), offers significant potential to transform education through adaptive, data-driven learning environments. This talk presents an interdisciplinary effort to integrate AI into cybersecurity education by developing an interactive learning system that combines LLM-based question answering with agent-based simulation exercises for hands-on, problem-based learning. Beyond intelligent content support, the approach embeds AI into a metric-based evaluation framework that monitors student progress, analyzes engagement and skill development, and assesses learning effectiveness. Continuous assessment is coupled with personalized feedback to dynamically adapt learning pathways to diverse student backgrounds and proficiency levels. By positioning AI as both an instructional and evaluative partner, this work highlights a broader vision for AI-enhanced education that supports personalized learning, informed pedagogy, and the development of a skilled and adaptable workforce.

Organizers

  • Garima Agrawal, Arizona State University
  • Huan Liu, Arizona State University
  • Chengshuai Zhao, Arizona State University

Contact

For questions about the workshop program, please contact the organizers at czhao93 [at] asu.edu.