Purdue University · Doctor of Technology Student
Building Explainable AI Systems for Cyber Threat Intelligence and Security Analytics
I am a cybersecurity researcher and doctoral student in the Doctor of Technology program at Purdue University. My academic background includes a degree in Computer Engineering Technology, a Master of Science in Information Technology Management with an emphasis in Information Assurance, and a dual Master of Business Administration. This academic foundation is complemented by more than twenty years of professional experience in threat intelligence, security operations, and enterprise cyber defense.
My research focuses on the intersection of artificial intelligence, knowledge representation, and cyber threat intelligence. In particular, I investigate the use of neuro-symbolic AI and fuzzy logic inference to make AI-generated threat intelligence more reliable, trustworthy, and explainable. I also explore how AI-assisted ontology engineering can enhance the ability of security systems to interpret and reason about complex threat data.
Developing structured approaches to threat actor identification and intelligence lifecycle automation using AI-driven pipelines.
Designing knowledge representations that capture the semantics of cyber threats, enabling structured reasoning over intelligence data.
Building AI systems that produce transparent, interpretable outputs so human analysts can trust and verify machine-generated intelligence.
Integrating symbolic reasoning with neural architectures to achieve robust, uncertainty-aware AI that bridges the gap between data-driven and logic-based methods.
Investigating vulnerabilities in machine learning models, including data poisoning, prompt injection, and supply chain attacks on AI systems.
"Enhancing Cyber Threat Intelligence through Explainable AI: A Design Science Approach Using Knowledge Graphs, Ontologies, and Large Language Models"
My doctoral research addresses a critical challenge in modern cyber threat intelligence (CTI): transforming large volumes of unstructured threat data into intelligence that is transparent, explainable, and operationally actionable. Security analysts routinely rely on AI-assisted detection systems, yet many current approaches function as opaque models that provide limited insight into how conclusions are reached. This lack of interpretability creates barriers to trust, validation, and effective decision-making in cybersecurity operations.
Using a Design Science Research methodology, this work investigates how knowledge graphs, cybersecurity ontologies, large language models, neuro-symbolic reasoning, and fuzzy logic inference can be integrated to construct an explainable cyber threat intelligence framework. The proposed system incorporates structured representations derived from standards such as MITRE ATT&CK and STIX to contextualize alerts, threat reports, and related security artifacts within a machine-interpretable knowledge structure. By embedding explainable AI directly into the intelligence generation process, the framework enables analysts to trace the reasoning behind AI-assisted conclusions and better understand the relationships between observed indicators, adversary behavior, and potential threat activity.
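To make the idea of a machine-interpretable knowledge structure concrete, the sketch below uses the open-source stix2 Python library to link an indicator extracted from an alert to a MITRE ATT&CK technique, forming a small graph that can be traversed or reasoned over. The technique, domain, and object choices here are illustrative assumptions, not the framework's actual pipeline.

```python
# Minimal sketch of contextualizing an observed indicator against
# MITRE ATT&CK via STIX 2.1 objects. Illustrative only; the actual
# framework's pipeline and object choices may differ.
from stix2 import AttackPattern, Bundle, Indicator, Relationship

# An ATT&CK technique represented as a STIX Attack Pattern
# (T1566 "Phishing" is used here purely as an example).
phishing = AttackPattern(
    name="Phishing",
    external_references=[{
        "source_name": "mitre-attack",
        "external_id": "T1566",
    }],
)

# An indicator extracted from an alert or threat report
# (the domain is a made-up placeholder).
indicator = Indicator(
    name="Suspicious credential-harvesting domain",
    pattern="[domain-name:value = 'login-example-portal.test']",
    pattern_type="stix",
)

# The explicit, traceable link between evidence and adversary behavior.
link = Relationship(indicator, "indicates", phishing)

# Bundled together, the objects form a small machine-interpretable graph
# that a reasoner, or an analyst, can traverse.
bundle = Bundle(objects=[phishing, indicator, link])
print(bundle.serialize(pretty=True))
```

Because the connection between evidence and adversary behavior is an explicit relationship object rather than an opaque score, an analyst can trace exactly why an indicator was associated with a technique.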
This research is conducted under the advisement of Dr. Julia Rayz, Professor and Associate Department Head in the Department of Computer and Information Technology at Purdue University and a CERIAS Fellow specializing in natural language understanding, knowledge representation, and fuzzy logic.
Publications and conference presentations will be listed here as they are completed.
An explainable cyber threat intelligence framework that integrates LLMs, knowledge graphs, and ontologies for automated threat analysis and attribution.
Developing reasoning systems that align CTI ontologies (STIX, ATT&CK, D3FEND) for cross-framework threat intelligence correlation and analysis.
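As a minimal sketch of what such an alignment layer could look like, the example below uses rdflib to connect equivalent concepts from two frameworks with skos:exactMatch links, so a correlation query over one framework's terms can reach the other's. The namespaces and the single mapping are placeholders, not an authoritative crosswalk.

```python
# Minimal sketch of cross-framework alignment with rdflib. Namespaces
# and the single mapping below are illustrative placeholders, not
# official URIs or an authoritative STIX/ATT&CK crosswalk.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS, SKOS

STIX = Namespace("https://example.org/stix#")      # assumed namespace
ATTACK = Namespace("https://example.org/attack#")  # assumed namespace

g = Graph()
g.add((STIX["attack-pattern--0001"], RDFS.label, Literal("Phishing")))
g.add((ATTACK["T1566"], RDFS.label, Literal("Phishing (ATT&CK T1566)")))
# The alignment triple: both identifiers denote the same technique.
g.add((STIX["attack-pattern--0001"], SKOS.exactMatch, ATTACK["T1566"]))

# Correlation across frameworks becomes a simple graph traversal.
for s, o in g.subject_objects(SKOS.exactMatch):
    print(f"{s} aligns with {o}")
```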
Building systematic evaluation frameworks for assessing LLM performance on cybersecurity-specific tasks, including threat report summarization and indicator extraction.
Exploring hybrid AI architectures that combine neural models with symbolic reasoning to improve explainability and reliability in cybersecurity analysis. This research integrates knowledge graphs and cybersecurity ontologies to support structured, evidence-based threat intelligence aligned with frameworks such as MITRE ATT&CK.
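The toy example below sketches one such hybrid pattern: a neural detector's confidence score is accepted only when symbolic evidence from a small knowledge base corroborates it, and every verdict carries a human-readable justification. The names, thresholds, and evidence table are illustrative assumptions, not the project's actual design.

```python
# Minimal sketch of a neuro-symbolic decision step: a (hypothetical)
# neural detector supplies a confidence score, and symbolic rules over
# a tiny knowledge base decide whether the evidence supports an
# ATT&CK-aligned conclusion. All names and thresholds are illustrative.
from dataclasses import dataclass

# Toy symbolic knowledge base: which observable types are consistent
# with which ATT&CK techniques.
TECHNIQUE_EVIDENCE = {
    "T1566 Phishing": {"suspicious_domain", "credential_form"},
    "T1059 Command and Scripting Interpreter": {"encoded_powershell"},
}

@dataclass
class Finding:
    technique: str
    neural_score: float    # from the neural detector
    observables: set[str]  # symbols extracted from the alert

def explainable_verdict(f: Finding, threshold: float = 0.7) -> str:
    """Accept a neural detection only when symbolic evidence corroborates
    it, and return a human-readable justification either way."""
    required = TECHNIQUE_EVIDENCE.get(f.technique, set())
    matched = required & f.observables
    if f.neural_score >= threshold and matched:
        return (f"ACCEPT {f.technique}: score {f.neural_score:.2f} "
                f"corroborated by symbolic evidence {sorted(matched)}")
    if f.neural_score >= threshold:
        return (f"FLAG {f.technique}: high score {f.neural_score:.2f} "
                f"but no corroborating evidence; route to analyst")
    return f"REJECT {f.technique}: score {f.neural_score:.2f} below threshold"

print(explainable_verdict(Finding(
    technique="T1566 Phishing",
    neural_score=0.86,
    observables={"suspicious_domain", "credential_form"},
)))
```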
Developing fuzzy inference systems to model uncertainty and partial evidence in cyber threat intelligence. This work applies interpretable fuzzy rule systems to represent analyst knowledge and reason about ambiguous signals, behavioral indicators, and incomplete intelligence reports.
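The miniature example below conveys the flavor of this approach: a zero-order Sugeno-style inference step (used here for brevity in place of a full Mamdani pipeline) that maps two partial-evidence signals onto an auditable severity score. The membership functions, rules, and variable names are illustrative assumptions.

```python
# Minimal sketch of fuzzy inference over partial CTI evidence.
# Zero-order Sugeno style is chosen for brevity; a Mamdani system would
# defuzzify over output membership functions instead.

def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def infer_severity(indicator_conf: float, anomaly: float) -> float:
    """Map two [0, 1] evidence signals to a severity score in [0, 1]."""
    # Fuzzify each input into "low" and "high" linguistic terms.
    conf_low = tri(indicator_conf, -0.5, 0.0, 0.6)
    conf_high = tri(indicator_conf, 0.4, 1.0, 1.5)
    anom_low = tri(anomaly, -0.5, 0.0, 0.6)
    anom_high = tri(anomaly, 0.4, 1.0, 1.5)

    # Interpretable rules an analyst can read and audit; each rule's
    # firing strength is the min of its antecedents (fuzzy AND).
    rules = [
        (min(conf_high, anom_high), 0.9),  # both strong   -> severe
        (min(conf_high, anom_low),  0.6),  # IOC only      -> moderate
        (min(conf_low,  anom_high), 0.5),  # behavior only -> moderate
        (min(conf_low,  anom_low),  0.1),  # both weak     -> benign
    ]

    # Weighted average of rule outputs (Sugeno defuzzification).
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.0

# Ambiguous signals: a mediocre indicator plus a strong behavioral
# anomaly still yields an elevated, explainable severity (about 0.60).
print(f"severity = {infer_severity(0.45, 0.85):.2f}")
```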
Interested in collaboration, speaking opportunities, or discussing research? Reach out below.