Ghidra: Is Newer Always Better?

Examines the impact of Ghidra’s 39 releases and 13,000 commits on code similarity analysis and on metrics such as analysis time and function detection, finding that newer versions do not always produce better results for every use case.
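
A comparison like this rests on repeatable headless analysis of the same binaries under different Ghidra installs. The sketch below is illustrative only, not code from the paper: it times Ghidra’s headless analyzer from two installations and pulls a function count out of a small post-script. The install paths, sample binary, and script name are hypothetical placeholders.

```python
# Sketch: time auto-analysis and count detected functions for one binary
# under two Ghidra installs. Paths and names below are hypothetical.
import re
import subprocess
import tempfile
import time
from pathlib import Path

# Jython post-script run inside Ghidra after auto-analysis; it prints a
# marker line that the driver greps out of the headless output.
POST_SCRIPT = """\
fm = currentProgram.getFunctionManager()
print("FUNC_COUNT=%d" % fm.getFunctionCount())
"""

def analyze(ghidra_home, binary):
    """Run analyzeHeadless from one Ghidra install; return (seconds, functions)."""
    script_dir = Path(tempfile.mkdtemp())
    (script_dir / "CountFunctions.py").write_text(POST_SCRIPT)
    project_dir = tempfile.mkdtemp()
    start = time.monotonic()
    out = subprocess.run(
        [str(Path(ghidra_home) / "support" / "analyzeHeadless"),
         project_dir, "compare",            # project location and name
         "-import", str(binary),
         "-scriptPath", str(script_dir),
         "-postScript", "CountFunctions.py",
         "-deleteProject"],
        capture_output=True, text=True, check=True)
    elapsed = time.monotonic() - start
    match = re.search(r"FUNC_COUNT=(\d+)", out.stdout + out.stderr)
    return elapsed, int(match.group(1)) if match else None

if __name__ == "__main__":
    for home in ["/opt/ghidra_10.4", "/opt/ghidra_11.3"]:      # hypothetical installs
        secs, funcs = analyze(home, "/samples/libexample.so")  # hypothetical binary
        print(f"{home}: {secs:.1f}s, {funcs} functions detected")
```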

Automated Discovery for Emulytics

Describes automated methods and tools for discovering information systems through network and host analysis to create high-fidelity emulation models, demonstrated on SCinet with 5 routers and 10,000 endpoints.

Quantifying Uncertainty in Emulations: LDRD Report

Sandia LDRD report summarizing a three-year project to quantify behavioral (not performance) differences between emulations and real-world systems by running representative workloads on both and comparing collected metrics.
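
To make the methodology concrete, the sketch below shows one way to compare a metric collected from the same workload on a physical system and its emulation; the statistical test here is an illustrative choice, not necessarily the one used in the report, and the sample data is synthetic.

```python
# Sketch: given one behavioral metric sampled on both the physical system
# and the emulation, test whether the two distributions differ.
import numpy as np
from scipy import stats

def compare_metric(physical: np.ndarray, emulated: np.ndarray, alpha: float = 0.05):
    """Two-sample Kolmogorov-Smirnov test on one collected metric."""
    stat, p_value = stats.ks_2samp(physical, emulated)
    return {
        "ks_statistic": stat,
        "p_value": p_value,
        "distributions_differ": p_value < alpha,  # reject "same behavior" at alpha
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical workload metric, e.g. request completion time in seconds.
    physical = rng.normal(loc=1.00, scale=0.10, size=5000)
    emulated = rng.normal(loc=1.03, scale=0.12, size=5000)  # slight behavioral drift
    print(compare_metric(physical, emulated))
```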

Lessons learned from 10k experiments to compare virtual and physical testbeds

Documents lessons learned from running over 10,000 experiments and processing half a petabyte of data to quantify behavioral (not just performance) differences between virtual and physical testbeds for cyber security research.

Virtually the same: Comparing physical and virtual testbeds

Comparative analysis quantifying behavioral differences between physical and virtual testbeds used in cyber security research, assessing the fidelity of virtualized environments for experimentation.

Attacking DBSCAN for Fun and Profit

Demonstrates how adversaries can subvert DBSCAN clustering by injecting bridge points to merge arbitrary clusters, degrading system performance, and proposes machine learning-based remediation using outlier detection.
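
The bridge-point idea is easy to reproduce in miniature. The sketch below uses illustrative parameters, not the paper’s experimental setup: two well-separated clusters that scikit-learn’s DBSCAN keeps apart are merged into one once an attacker injects a thin chain of points, each dense enough to be a core point, spanning the gap.

```python
# Sketch: inject a chain of bridge points between two well-separated
# clusters so DBSCAN's density-reachability links them into one cluster.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(42)
cluster_a = rng.normal(loc=(0.0, 0.0), scale=0.3, size=(200, 2))
cluster_b = rng.normal(loc=(10.0, 0.0), scale=0.3, size=(200, 2))
clean = np.vstack([cluster_a, cluster_b])

eps, min_samples = 0.5, 5

# Attacker-controlled bridge: points spaced well within eps along the gap,
# with each x position padded so every bridge point is a core point.
xs = np.arange(0.5, 10.0, 0.3)
bridge = np.vstack([np.column_stack([xs + dx, np.full_like(xs, dy)])
                    for dx, dy in [(0.0, 0.0), (0.05, 0.05), (-0.05, -0.05),
                                   (0.05, -0.05), (-0.05, 0.05)]])
poisoned = np.vstack([clean, bridge])

for name, data in [("clean", clean), ("poisoned", poisoned)]:
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(data)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    print(f"{name}: {n_clusters} clusters")  # expected: clean=2, poisoned=1
```

With the bridge in place, density-reachability chains from one cluster to the other, which is the merge behavior the paper describes and then counters with outlier detection.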