Abstract: Malware is an ever-evolving threat that can have drastic effects on both individuals and institutions. Neural network malware classification systems are key tools for combating these threats, but they are vulnerable to adversarial machine learning attacks. Existing defenses often rely on enhancing the training process; we propose focusing instead on the verification process used to evaluate improvements to training. To that end, we present a case study evaluating a novel verification domain intended to provide tangible safeguards against adversaries. We describe malware classification and two common types of malware datasets (feature and image datasets), demonstrate the certified robustness accuracy of malware classifiers using the Neural Network Verification (NNV) and Neural Network Enumeration (nnenum) tools, and outline the challenges and future considerations necessary to improve and refine the verification of malware classification.

Bio: Preston Robinette works in Dr. Taylor T. Johnson's Verification and Validation for Intelligent and Trustworthy Autonomy Laboratory (VeriVITAL). She is interested in research related to generative AI, information hiding, reinforcement learning, machine learning applications, and cybersecurity. Preston is a recipient of the National Defense Science and Engineering Graduate Fellowship Award and the 2023 Vanderbilt University ABS Scholarship Award. She earned a BS in Physics from Presbyterian College and has completed internships at ORNL, NASA, AFRL, NSA, Apple, and Google (summer 2024).

Join on Teams:
Meeting ID: 265 581 486 687
Passcode: dddYCG
1 575-323-9652, ID: 196825268
Host: ALDSC - Mary Ann Novak