Cybersecurity, Master of Science (MS)
| Course | Credits |
| --- | --- |
| RESEARCH METHODS AND COLLOQUIUM | 4 |
| ADVANCED COMPUTER AND INFORMATION SECURITY and ADVANCED COMPUTER AND INFORMATION SECURITY LAB | 4 |
| ADVANCED NETWORK SECURITY and ADVANCED NETWORK SECURITY LAB | 4 |
| DIRECTED STUDY | 4 |
| PROJECT MANAGEMENT FOR CYBERSECURITY | |
| MACHINE LEARNING and MACHINE LEARNING LAB | |
| DEEP LEARNING and DEEP LEARNING LAB | |
| AI FUNDAMENTALS and AI FUNDAMENTALS LAB | |
| AI METHODS AND VALIDATION and AI METHODS AND VALIDATION LAB | |
| SECURING AI and SECURING AI LAB | |
| THESIS or RESEARCH REPORT | 12 |
| Total Credits | 48 |
Students who earn an MS in Cybersecurity from EWU should be able to:
General MS Cybersecurity Concentration
- demonstrate cybersecurity principles in the securing of networks and software systems;
- possess an advanced understanding of core cybersecurity knowledge;
- use advanced cybersecurity skills in the securing of networks and the development of software systems.
Secure AI Concentration
- demonstrate cybersecurity principles in the securing of networks and software systems;
- possess an advanced understanding of core cybersecurity knowledge;
- use advanced cybersecurity skills in the securing of networks and the development of software systems;
- Understand the Legal and Ethical Implications: navigate the legal and ethical considerations specific to AI security, including privacy, accountability, and the impact of AI on society;
- Understand AI and Machine Learning Fundamentals: grasp the core principles of AI, including machine learning, neural networks, and data science, which underpin AI systems;
- Assess AI System Vulnerabilities: Identify and analyze potential vulnerabilities in AI models, including data poisoning, adversarial attacks, and model inversion;
- Secure AI Pipelines: Implement security measures throughout the AI development pipeline, from data collection and preprocessing to model training, deployment, and maintenance;
- Defend Against Adversarial Attacks: Develop strategies to protect AI models from adversarial attacks, which involve malicious inputs crafted to deceive the AI system;
- Ensure Data Integrity: Protect the integrity and confidentiality of the data used in AI systems, ensuring that the data is free from tampering and bias;
- Mitigate Bias and Fairness Issues: Identify and mitigate bias in AI systems to ensure fair and ethical outcomes, especially in high-stakes applications;
- Design Robust AI Systems: Create AI systems resilient to attacks, errors, and other disruptions, ensuring reliable and secure performance in various environments;
- Monitor AI System Performance: Monitor AI systems to detect and respond to security threats, performance degradation, and unexpected behaviors.
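To make the "adversarial attacks" outcome above concrete: an adversarial example is an input perturbed just enough to flip a model's prediction. The sketch below is a minimal, hypothetical illustration in the style of the fast gradient sign method (FGSM), using a toy logistic-regression model with made-up weights rather than any course material; the model, weights, and step size are all assumptions chosen for demonstration.

```python
import numpy as np

# Hypothetical toy model: logistic regression with fixed, made-up weights.
w = np.array([2.0, -3.0, 1.0])
b = 0.5

def predict(x):
    """Probability that the model assigns input x to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A benign input the model classifies confidently as class 1.
x = np.array([1.0, -1.0, 0.5])

# FGSM-style perturbation: step each feature in the direction that most
# decreases the class-1 score. For logistic regression the gradient of the
# score with respect to x is w, so we step along -sign(w).
epsilon = 1.5
x_adv = x - epsilon * np.sign(w)

print(predict(x))      # confidently class 1
print(predict(x_adv))  # perturbed input is pushed toward class 0
```

The perturbation budget `epsilon` controls how far each feature may move; defenses such as adversarial training work by exposing the model to exactly this kind of perturbed input during training.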