Purdue University · Doctor of Technology Researcher
Dennis Mercer
Designing explainable, ontology-aligned AI systems that transform unstructured cybersecurity data into decision-ready threat intelligence.
About
I am a cybersecurity researcher, doctoral student in Purdue University’s Doctor of Technology program, and senior industry practitioner with more than two decades of experience in threat intelligence, security operations, and enterprise cyber defense. My work sits at the intersection of artificial intelligence, cybersecurity, and knowledge representation, with a particular focus on building systems that make machine-generated threat intelligence more transparent, interpretable, and operationally useful.
My research investigates how explainable AI, knowledge graphs, cybersecurity ontologies, neuro-symbolic reasoning, and fuzzy logic can be combined to transform unstructured alerts, reports, and threat data into structured, evidence-based intelligence. The broader goal is to improve analyst trust, support defensible attribution and validation, and enable AI systems that do not merely generate outputs, but provide traceable reasoning that security teams can inspect and use.
Research Areas
- Explainable Cyber Threat Intelligence
Designing AI systems that convert unstructured cybersecurity data into transparent, evidence-based, and operationally actionable threat intelligence.
- Knowledge Graphs and Cybersecurity Ontologies
Building ontology-aligned knowledge structures that represent adversaries, indicators, tactics, techniques, and relationships across threat intelligence sources and standards.
- Neuro-Symbolic Security Reasoning
Integrating data-driven AI with symbolic and semantic reasoning to improve the interpretability, consistency, and analytical rigor of cybersecurity decision support systems.
- Fuzzy Logic and Uncertainty Modeling
Applying fuzzy inference and uncertainty-aware reasoning to ambiguous, incomplete, and weakly structured cyber evidence to better reflect real analytic conditions.
- Adversarial and Trustworthy AI for Security
Investigating the resilience, robustness, and trustworthiness of AI systems used in cybersecurity, including risks such as model manipulation, prompt abuse, and adversarial interference.
Dissertation Focus
“Designing an Explainable Cyber Threat Intelligence Framework Using Knowledge Graphs, Ontologies, and AI-Driven Reasoning”
My doctoral research addresses a central limitation in contemporary cyber threat intelligence workflows: the difficulty of transforming large volumes of unstructured alerts, reports, and security artifacts into intelligence that is transparent, explainable, and analytically defensible. Although AI systems are increasingly used to support cybersecurity operations, many current approaches remain difficult to interpret, making it challenging for analysts to understand why a conclusion was reached, what evidence supports it, and how much confidence should be placed in the result.
This research investigates how knowledge graphs, cybersecurity ontologies, large language models, neuro-symbolic reasoning, and fuzzy logic can be integrated into an explainable cyber threat intelligence framework. The goal is to create a system that does more than classify or summarize. It should organize heterogeneous threat data into machine-interpretable structures, support traceable reasoning, align outputs with operational frameworks such as MITRE ATT&CK and STIX, and provide analysts with decision-ready intelligence that can be inspected, validated, and acted upon.
Using a Design Science Research methodology, the work focuses on the design and evaluation of an AI-enabled framework that improves contextualization, evidence tracing, attribution support, and analytic trust in cyber threat intelligence production. This research is conducted under the advisement of Dr. Julia Rayz, Professor and Associate Department Head in the Department of Computer and Information Technology at Purdue University, whose expertise includes natural language understanding, knowledge representation, and fuzzy logic.
Publications & Presentations
This section will include peer-reviewed publications, white papers, invited talks, and conference presentations as current doctoral and applied research projects mature into formal outputs.
Projects
X-CTIF: Explainable Cyber Threat Intelligence Framework
A research framework for transforming unstructured cybersecurity data into explainable, ontology-aligned, and decision-ready threat intelligence through the integration of AI, knowledge graphs, and structured reasoning.
- Python
- Knowledge Graphs
- LLMs
- MITRE ATT&CK
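The core structuring step in X-CTIF can be illustrated with a minimal sketch: turning one unstructured alert into ontology-style (subject, predicate, object) triples with MITRE ATT&CK technique hints. The function name, predicates, and the keyword-to-technique lookup below are illustrative assumptions, not the framework's actual implementation.

```python
import re

# Illustrative keyword-to-technique lookup (not exhaustive).
TECHNIQUE_HINTS = {
    "powershell": "T1059.001",  # Command and Scripting Interpreter: PowerShell
    "phishing": "T1566",        # Phishing
}

IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def alert_to_triples(alert_id: str, text: str) -> list[tuple[str, str, str]]:
    """Map one unstructured alert to (subject, predicate, object) triples."""
    triples = []
    lowered = text.lower()
    for ip in IPV4.findall(text):
        triples.append((alert_id, "hasIndicator", ip))
    for keyword, technique in TECHNIQUE_HINTS.items():
        if keyword in lowered:
            triples.append((alert_id, "mapsToTechnique", technique))
    return triples

triples = alert_to_triples(
    "alert-001",
    "Phishing email delivered a PowerShell stager beaconing to 203.0.113.7",
)
for t in triples:
    print(t)
```

Because each triple carries the originating alert identifier, every node in the resulting graph remains traceable back to its source evidence, which is the property the framework emphasizes.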
Ontology-Aligned Threat Reasoning
A semantic reasoning initiative focused on aligning cyber threat intelligence ontologies and operational frameworks to improve interoperability, evidence mapping, and cross-source analytic consistency.
- Ontology Design
- STIX/TAXII
- Semantic Web
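Alignment with operational standards like STIX can be sketched by serializing an extracted indicator as a minimal STIX 2.1-style Indicator object. The fields below follow the STIX 2.1 Indicator SDO shape, but a production system would use a dedicated library such as `stix2` rather than hand-built dictionaries; the fixed `valid_from` value is a placeholder assumption.

```python
import json
import uuid

def make_indicator(ip: str) -> dict:
    """Build a minimal STIX 2.1-style Indicator object for an IPv4 address."""
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",
        "pattern": f"[ipv4-addr:value = '{ip}']",
        "pattern_type": "stix",
        "valid_from": "2025-01-01T00:00:00Z",  # placeholder timestamp
    }

indicator = make_indicator("203.0.113.7")
print(json.dumps(indicator, indent=2))
```

Emitting standard-conformant objects is what allows intelligence produced by one tool to be exchanged over TAXII and consumed by other platforms without bespoke translation.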
Cybersecurity LLM Evaluation Pipelines
A systematic evaluation effort for measuring how large language models perform on cybersecurity tasks such as extraction, summarization, contextualization, and analytic reasoning under operational constraints.
- NLP
- Benchmarking
- Python
- Prompt Engineering
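The scoring core of such an evaluation pipeline can be sketched as set-based precision/recall/F1 over extracted entities against a gold annotation. The IOC values in the usage example are documentation-reserved placeholders, not real observables.

```python
def score_extraction(predicted: set[str], gold: set[str]) -> dict[str, float]:
    """Precision, recall, and F1 for one extraction task instance."""
    tp = len(predicted & gold)  # true positives: items in both sets
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

gold = {"203.0.113.7", "evil.example.com", "T1566"}
predicted = {"203.0.113.7", "T1566", "198.51.100.9"}
print(score_extraction(predicted, gold))
```

Aggregating these per-instance scores across a benchmark corpus is what makes model comparisons under fixed operational constraints reproducible.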
Neuro-Symbolic Threat Analysis
A hybrid reasoning project exploring how symbolic representations, ontologies, and machine learning can be combined to support explainable and defensible threat analysis in complex environments.
- Neuro-Symbolic AI
- Knowledge Graphs
- Cybersecurity Ontologies
- MITRE ATT&CK
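The symbolic half of a neuro-symbolic pipeline can be sketched as forward-chaining rules applied to facts emitted by a learned extractor. The fact labels and rules below are toy assumptions chosen for illustration; the point is that every derived conclusion is traceable to an explicit rule and its premises.

```python
# Rules fire when all premises are present; conclusions become new facts.
RULES = [
    ({"uses:T1059.001", "beacons_external"}, "suspected_c2"),
    ({"suspected_c2", "reads_credentials"}, "likely_intrusion"),
]

def infer(facts: set[str]) -> set[str]:
    """Forward-chain RULES to a fixpoint; return input facts plus conclusions."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

facts = {"uses:T1059.001", "beacons_external", "reads_credentials"}
print(sorted(infer(facts) - facts))  # → ['likely_intrusion', 'suspected_c2']
```

Unlike a purely statistical classifier, this layer yields a defensible derivation chain: an analyst can ask which rule produced "likely_intrusion" and inspect its premises.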
Fuzzy Inference for Threat Intelligence
A research effort focused on representing ambiguity, partial evidence, and analytic uncertainty in cyber threat intelligence through interpretable fuzzy rule-based systems.
- Fuzzy Logic
- Fuzzy Inference Systems
- Explainable AI
- Threat Intelligence
Contact
I welcome opportunities for research collaboration, speaking engagements, professional dialogue, and interdisciplinary work at the intersection of artificial intelligence and cybersecurity.