Purdue University · Doctor of Technology Student

Dennis Mercer

Building Explainable AI Systems for Cyber Threat
Intelligence and Security Analytics

01

About

I am a cybersecurity researcher and doctoral student in the Doctor of Technology program at Purdue University. My academic background includes a degree in Computer Engineering Technology, a Master of Science in Information Technology Management with an emphasis in Information Assurance, and a dual Master of Business Administration. This academic foundation is complemented by more than twenty years of professional experience in threat intelligence, security operations, and enterprise cyber defense.

My research focuses on the intersection of artificial intelligence, knowledge representation, and cyber threat intelligence. In particular, I investigate the use of neuro-symbolic AI and fuzzy logic inference to make AI-generated threat intelligence more reliable, trustworthy, and explainable. I also explore how AI-assisted ontology engineering can enhance the ability of security systems to interpret and reason about complex threat data.

02

Research Interests

03

Dissertation Focus

"Enhancing Cyber Threat Intelligence through Explainable AI: A Design Science Approach Using Knowledge Graphs, Ontologies, and Large Language Models"

My doctoral research addresses a critical challenge in modern cyber threat intelligence (CTI): transforming large volumes of unstructured threat data into intelligence that is transparent, explainable, and operationally actionable. Security analysts routinely rely on AI-assisted detection systems, yet many current approaches function as opaque models that provide limited insight into how conclusions are reached. This lack of interpretability creates barriers to trust, validation, and effective decision-making in cybersecurity operations.

Using a Design Science Research methodology, this work investigates how knowledge graphs, cybersecurity ontologies, large language models, neuro-symbolic reasoning, and fuzzy logic inference can be integrated to construct an explainable cyber threat intelligence framework. The proposed system incorporates structured representations derived from standards such as MITRE ATT&CK and STIX to contextualize alerts, threat reports, and related security artifacts within a machine-interpretable knowledge structure. By embedding explainable AI directly into the intelligence generation process, the framework enables analysts to trace the reasoning behind AI-assisted conclusions and better understand the relationships between observed indicators, adversary behavior, and potential threat activity.
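The idea of tracing AI-assisted conclusions through a machine-interpretable knowledge structure can be illustrated with a minimal sketch. All entity names, relations, and identifiers below are illustrative stand-ins, not artifacts of the actual framework; the ATT&CK technique and tactic IDs are used only as examples of the kind of structured context the framework draws on.

```python
# Minimal sketch: a knowledge graph as (subject, relation, object) triples,
# with a traversal that recovers the reasoning chain linking an alert to an
# ATT&CK tactic. All node names are illustrative.
TRIPLES = [
    ("alert:1042", "exhibits", "indicator:rundll32-abuse"),
    ("indicator:rundll32-abuse", "maps-to", "attack:T1218.011"),  # System Binary Proxy Execution: Rundll32
    ("attack:T1218.011", "part-of-tactic", "attack:TA0005"),      # Defense Evasion
]

def explain(start: str, goal_prefix: str) -> list[tuple[str, str, str]]:
    """Follow relations from `start` and return the chain of triples that
    links it to the first node whose id begins with `goal_prefix`."""
    chain, node = [], start
    while not node.startswith(goal_prefix):
        step = next((t for t in TRIPLES if t[0] == node), None)
        if step is None:
            return []  # no path found: nothing to explain
        chain.append(step)
        node = step[2]
    return chain

for s, r, o in explain("alert:1042", "attack:TA"):
    print(f"{s} --{r}--> {o}")
```

The returned chain is itself the explanation: each hop is a verifiable assertion an analyst can inspect, rather than an opaque score.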

This research is conducted under the advisement of Dr. Julia Rayz, Professor and Associate Department Head in the Department of Computer and Information Technology at Purdue University and a CERIAS Fellow specializing in natural language understanding, knowledge representation, and fuzzy logic.

04

Publications & Presentations

Publications and conference presentations will be listed here as they are completed.

05

Projects

Framework

AI-Driven CTI Framework (XCTIF)

An explainable cyber threat intelligence framework that integrates LLMs, knowledge graphs, and ontologies for automated threat analysis and attribution.

  • Python
  • Knowledge Graphs
  • LLMs
  • MITRE ATT&CK
Research

Ontology-Based Reasoning Models

Developing reasoning systems that align CTI ontologies (STIX, ATT&CK, D3FEND) for cross-framework threat intelligence correlation and analysis.

  • Ontology Design
  • STIX/TAXII
  • Semantic Web
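The cross-framework correlation this project targets can be sketched as a two-stage lookup: resolve a STIX object to an ATT&CK technique, then resolve the technique to candidate defensive measures. The mappings below are hypothetical placeholders (the STIX UUID and the D3FEND-style identifier are invented for illustration); real alignments come from the published ontologies.

```python
# Hypothetical mini-alignment across CTI frameworks. T1566 (Phishing) is a
# real ATT&CK technique; the STIX id and defense id are placeholders.
STIX_TO_ATTACK = {
    "attack-pattern--0000-example-uuid": "T1566",
}
ATTACK_TO_DEFENSES = {
    "T1566": ["D3-EXAMPLE"],  # placeholder, not a real D3FEND identifier
}

def correlate(stix_id: str) -> dict:
    """Resolve a STIX attack-pattern through ATT&CK to candidate defenses."""
    technique = STIX_TO_ATTACK.get(stix_id)
    return {
        "stix": stix_id,
        "attack_technique": technique,
        "defense_candidates": ATTACK_TO_DEFENSES.get(technique, []),
    }
```

In the actual research the dictionaries would be replaced by ontology-backed queries, but the correlation pattern is the same: each answer carries the intermediate identifiers, so the mapping is auditable.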
Evaluation

LLM Evaluation Pipelines

Building systematic evaluation frameworks for assessing LLM performance on cybersecurity-specific tasks, including threat report summarization and indicator extraction.

  • NLP
  • Benchmarking
  • Python
  • Prompt Engineering
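One core piece of such a pipeline is scoring extracted indicators against a gold-standard set. A minimal version of that scorer is sketched below; the indicator values are fabricated test data, and a full pipeline would add normalization (e.g., defanged IOCs) before comparison.

```python
# Minimal sketch: precision/recall/F1 for indicator extraction against a
# gold-standard set. Indicator values are fabricated examples.
def score_extraction(predicted: set[str], gold: set[str]) -> dict:
    """Score a model's extracted indicators against the gold set."""
    tp = len(predicted & gold)  # true positives: indicators in both sets
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    denom = precision + recall
    f1 = (2 * precision * recall / denom) if denom else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

gold = {"198.51.100.7", "evil.example.com", "bad.example.net"}
predicted = {"198.51.100.7", "evil.example.com", "203.0.113.5"}
print(score_extraction(predicted, gold))
```

Reporting precision and recall separately matters in this domain: a model that over-extracts floods analysts with false indicators even when recall looks strong.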
Research

Neuro-Symbolic Threat Reasoning

Exploring hybrid AI architectures that combine neural models with symbolic reasoning to improve explainability and reliability in cybersecurity analysis. This research integrates knowledge graphs and cybersecurity ontologies to support structured, evidence-based threat intelligence aligned with frameworks such as MITRE ATT&CK.

  • Neuro-Symbolic AI
  • Knowledge Graphs
  • Cybersecurity Ontologies
  • MITRE ATT&CK
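The hybrid pattern can be illustrated with a toy example: a stand-in "neural" scorer produces a confidence, and a symbolic lookup against ATT&CK-aligned knowledge gates the decision and supplies the explanation. The process-to-technique table and the scorer are invented for illustration; only the ATT&CK technique ID is real.

```python
# Toy neuro-symbolic gate: flag an event only when the neural signal and the
# symbolic knowledge base agree, and return the matched technique as the
# explanation. The scorer and lookup table are illustrative stand-ins.
def neural_score(event: dict) -> float:
    """Stand-in for a learned model: returns a confidence in [0, 1]."""
    return 0.9 if "rundll32" in event["process"] else 0.1

KNOWN_TECHNIQUE_PROCS = {"rundll32": "T1218.011"}  # symbolic knowledge

def classify(event: dict) -> dict:
    score = neural_score(event)
    technique = next(
        (t for proc, t in KNOWN_TECHNIQUE_PROCS.items()
         if proc in event["process"]),
        None,
    )
    return {
        "flagged": score >= 0.5 and technique is not None,
        "confidence": score,
        "explanation": technique,  # None when no symbolic match exists
    }
```

The point of the gate is reliability: a high neural score alone cannot trigger an alert without symbolic corroboration, and every flagged event arrives with a technique identifier rather than a bare probability.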
Research

Fuzzy Logic Threat Inference

Developing fuzzy inference systems to model uncertainty and partial evidence in cyber threat intelligence. This work applies interpretable fuzzy rule systems to represent analyst knowledge and reason about ambiguous signals, behavioral indicators, and incomplete intelligence reports.

  • Fuzzy Logic
  • Fuzzy Inference Systems
  • Explainable AI
  • Threat Intelligence
06

Contact

Interested in collaboration, speaking opportunities, or discussing research? Reach out below.