EMERGING DIALOGUES IN ASSESSMENT

Advancing a Cybersecurity Course: Successes Through AI Technology and Collaborative Projects

 

March 25, 2026

  • Alex Yu, B.S., Server Administrator & Computer Science Adjunct Professor, Walsh University
  • Amy J. Heston, Ph.D., Professor of Inorganic Chemistry, Walsh University

Abstract

The participants in a cybersecurity course were encouraged to use generative AI to expand upon the topics in their textbook and labs with examples, procedures, and even skits. AI also assisted students and the instructor in troubleshooting computer problems that affected the machines in the lab. During class, students were guided in ethical AI use and given opportunities to fact-check the content; AI technology thus became a valuable asset, speeding up research and content generation.

 

Introduction

This project highlighted the first inclusion of Artificial Intelligence (AI) in Walsh University’s course on Advanced Cybersecurity: CS 387. This work showcased its effectiveness in clarifying concepts, enhancing programming, brainstorming ideas, and troubleshooting technical issues. Caveats with AI were also noted. Overall, the assessment of this new pedagogical approach, with special emphasis on teamwork, revealed increased student achievement in cybersecurity and aligned with best practices and advancements for STEM assessment practices (Redd et al., 2024).

Clarifying Concepts with AI

Some cybersecurity and networking concepts were easier for students to grasp once AI was used to elucidate the subjects. AI also supplemented textbooks with additional examples on a topic of study. For example, when the instructor referenced one example of a database inference technique, AI was called upon to create more concrete examples to provide greater depth in learning.

Skits provided effective learning experiences in this course. Although an instructor could explain the process of having one’s networked computer obtain an IP address from a server using the Dynamic Host Configuration Protocol (DHCP) with diagrams and protocol analyzer output to show the digital conversations on the network, the explanation could be dry or intimidating to students who have not taken a networking course. As an alternative, AI introduced the subject material by creating a skit in which computers were represented by human actors who interacted with each other in a relatable manner. Students acted as machines on a local area network and talked to each other in plain English to obtain network information, which was then used to reach Internet destinations. The short performance primed the audience for the technical follow-up of the topic.
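The conversation the actors performed mirrors DHCP’s four-step “DORA” handshake (Discover, Offer, Request, Acknowledge). A minimal sketch of that exchange, with hypothetical names and an illustrative address pool not taken from the course lab, might look like this:

```python
# Toy simulation of the DHCP "DORA" handshake the skit acted out.
# The class names, MAC address, and address pool are illustrative only.

class DhcpServer:
    def __init__(self, pool):
        self.pool = list(pool)   # addresses available for lease
        self.leases = {}         # client MAC -> leased IP

    def offer(self, mac):
        """Respond to a DISCOVER broadcast with an OFFER."""
        return self.pool[0] if self.pool else None

    def acknowledge(self, mac, ip):
        """Confirm a REQUEST by recording the lease (the ACK step)."""
        if ip in self.pool:
            self.pool.remove(ip)
            self.leases[mac] = ip
            return True
        return False

def obtain_address(client_mac, server):
    """Client side of the exchange: DISCOVER -> OFFER -> REQUEST -> ACK."""
    offered = server.offer(client_mac)                       # DISCOVER / OFFER
    if offered and server.acknowledge(client_mac, offered):  # REQUEST / ACK
        return offered
    return None

server = DhcpServer(["192.168.1.100", "192.168.1.101"])
ip = obtain_address("aa:bb:cc:dd:ee:ff", server)
print(ip)  # the first address leased from the pool
```

Each function call corresponds to one line of dialogue in the skit: the client asks for an address, the server offers one, and the lease is confirmed.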

A second skit illustrated a Man-in-the-Middle (MitM) attack in which the ARP caches of a target computer and the router’s gateway were maliciously altered to send communications to the MitM. An actor portrayed a user in a coffee shop who navigated to a banking web site and fell victim to a MitM attack when a malicious hacker entered the scene and was inserted between the user and the Internet gateway, thereby allowing the attacker to observe network traffic between the endpoints. The skit was enhanced by having the actors hold up signs showing their ARP caches before and after the insertion. Tennis balls represented packets of information that traversed the network; tossing these balls illustrated the change in network traffic patterns as the takeover occurred. To add an element of humor, the villainous character recited AI-generated poetry to describe his devious intentions.
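The sign-flipping moment in the skit can be sketched in a few lines of code. In this toy model (all IP and MAC values are made up for illustration), each ARP cache is a dictionary, and an unsolicited spoofed reply simply overwrites the cached mapping so traffic in both directions flows through the attacker:

```python
# Toy model of the ARP cache poisoning dramatized in the skit.
# All addresses below are fabricated for illustration.

GATEWAY_IP, GATEWAY_MAC = "10.0.0.1", "aa:aa:aa:aa:aa:aa"
VICTIM_IP,  VICTIM_MAC  = "10.0.0.7", "bb:bb:bb:bb:bb:bb"
ATTACKER_MAC            = "ee:ee:ee:ee:ee:ee"

victim_arp  = {GATEWAY_IP: GATEWAY_MAC}   # victim's view of the gateway
gateway_arp = {VICTIM_IP: VICTIM_MAC}     # gateway's view of the victim

def spoof(cache, ip, mac):
    """A forged ARP reply overwrites the cached IP-to-MAC mapping."""
    cache[ip] = mac

def next_hop(cache, dest_ip):
    """The MAC address a host will actually send frames to for dest_ip."""
    return cache[dest_ip]

# The attacker poisons both caches -- the "before and after" signs.
spoof(victim_arp, GATEWAY_IP, ATTACKER_MAC)
spoof(gateway_arp, VICTIM_IP, ATTACKER_MAC)

# Traffic in both directions now goes to the attacker first.
print(next_hop(victim_arp, GATEWAY_IP))   # the attacker's MAC
print(next_hop(gateway_arp, VICTIM_IP))   # the attacker's MAC
```

The two `spoof` calls correspond to the moment the actors swapped their signs; the tennis balls then travel to the attacker instead of directly between the endpoints.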

Enhancing Programming with AI

Students who encountered unfamiliar computer languages or commands obtained sample code to show proper syntax and usage in scripts and programs. The lab material included commands entered on the Windows command line, in PowerShell, and in the Linux Bash and Z (zsh) shells. Code in languages such as Python, PHP, Java, Assembly, and SQL was also included, but may not have been recognized by inexperienced students. AI was used to annotate lines of code to explain the purpose of each command, translate from an unfamiliar language to one that the student knew, and provide links to more resources that included tutorials and robust explanations.
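As a hypothetical illustration of the kind of translation and annotation students requested (this example is ours, not one from the course), a Bash one-liner such as `grep -c "Failed password" auth.log` could be re-expressed in Python with each step explained in comments:

```python
# Hypothetical example of AI-style translation and annotation:
# the Bash one-liner  grep -c "Failed password" auth.log
# rewritten in a language the student already knows.

def count_failed_logins(log_text):
    """Count log lines mentioning a failed SSH password attempt."""
    count = 0
    for line in log_text.splitlines():  # grep reads the file line by line
        if "Failed password" in line:   # the pattern grep searches for
            count += 1                  # the -c flag counts matching lines
    return count

# Fabricated sample log entries for demonstration.
sample_log = (
    "Jan 10 sshd[1]: Failed password for root\n"
    "Jan 10 sshd[2]: Accepted password for alice\n"
    "Jan 11 sshd[3]: Failed password for invalid user bob\n"
)
print(count_failed_logins(sample_log))  # 2
```

Annotations of this sort let a student map each part of an unfamiliar command onto a construct they already understand.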

Brainstorming with AI

Students in CS 387 were strongly encouraged to work on team projects, and AI was utilized to suggest ideas for consideration. In an online discussion, a student indicated curiosity about Bluetooth vulnerabilities but lacked experience in the subject. As the course did not already have a lab on this topic, the student tasked AI with generating a list of equipment and a procedure to follow in order to effectively demonstrate how to conduct Bluejacking and Bluesnarfing, two Bluetooth-related cyberattacks (Browning & Kessler, 2009; Patel et al., 2021). The benefits of this experience were that the members of the student team quickly learned the material and the instructor incorporated the procedure into a future lab. Other improvements to the course included modifying selected questions on the midterm and final exams; AI was given the task of referencing specific chapters in the textbook to provide a base set of questions for the instructor’s review.

Troubleshooting with AI

When computers in the lab did not behave as expected, AI provided guidance and explanations. Students were working on a version of Oracle VirtualBox that caused their PCs to crash, resulting in the infamous Blue Screen of Death (BSoD). The instructor noted the error code and other technical details, then posed the question to Google’s Gemini. The AI-generated answer confirmed a solution within seconds, whereas exploring various methods through a search engine would have taken the instructor considerably longer. After following the advice, the issues on the student lab machines were rectified and the BSoDs no longer appeared.

Caveats with AI

As wonderful as generative AI can be, when using it or any technology, it is important to identify and understand its drawbacks in order to use it effectively and avoid pitfalls. Considering the instructor’s experience, it was critical to make sure students used AI as a tool to generate ideas and provide help, without relying on it excessively to complete their assignments. Striking the right balance gave students a launching pad to overcome technical obstacles and expand creative horizons without sacrificing the academic rigor demanded by the instructor.

The selection of generative AI tools had an impact on the output that the students obtained. For example, when used to generate skits, Gemini sometimes seemed not to comprehend certain directions, such as having the Man-in-the-Middle speak in limericks rather than other poetic forms, or formatting stage directions in a particular style. A trial version of ChatGPT, however, completed the task with very little re-prompting or intervention. Each AI tool has its own strengths and weaknesses; when one tool does not yield the desired results, a different one (or a different version) may compensate.

Several instances of incorrect responses, or hallucinations, were found during the class activities and lesson preparation. Misleading or false information may be relatively easy to detect on websites, blogs, and social media, but when such sources are used to train Large Language Models (LLMs), the resulting misinformation can be harder to identify. Students should be taught to scrutinize generated output and carefully weigh the responses. Assessment of this learning experience for the pilot study indicated that students were able to identify multiple mistakes in the responses to their queries. The ability to identify and document AI errors in this way is an important career readiness skill.

In addition, Gemini generated exam questions that were outside the range of specified chapters. When asked to correct this, the AI tool simply stated that the questions were based on the specified chapters in the latest version of the textbook, giving no indication of having made an error. It is for this reason that AI tools remind users that they can make mistakes and that results should be fact-checked. Evaluating AI hallucinations became an important part of the cybersecurity learning activities.

Assessment of Collaborative Assignments

Considering the institution’s strategic goal to incorporate more high-impact practices (HIPs) this academic year, the instructor focused on student success in collaborative assignments, one of several HIPs for teamwork in higher education (Kuh, 2008). Incorporating collaborative approaches strengthened the learning experience and increased student achievement, providing a significant and enriched learning experience in alignment with the core focus of HIPs.

Assessment of this new pedagogical approach indicated that most of the students used AI tools for their assignments and class projects at least half of the time. Almost all the students used ChatGPT, with just a few having used multiple tools such as Gemini, Claude, and Poe. Students used AI to troubleshoot, find procedures, brainstorm, code, and answer questions. Results also indicated that most students caught AI giving incorrect information, and most preferred to use both AI tools and the instructor as resources.

The exploration, incorporation, and implementation of AI technology into the computer science curriculum therefore provided many benefits for undergraduate students. Students gained new perspectives when utilizing AI to clarify concepts and apply them to computer science, and AI tools proved successful in exploring alternative approaches to programming. The overall assessment of student performance indicated that students met the benchmark for brainstorming with AI to generate initial ideas and for applying AI to solve technical problems. Drawbacks were also observed, such as the potential for student overreliance on AI. Overall, the assessment of this new pedagogical approach, with special emphasis on teamwork, revealed increased student achievement in cybersecurity topics and their applications to real-life situations.

Future Directions for Continuous Improvement

The findings above revealed new ideas to enhance skills related to the application of AI. In upcoming iterations of this course, students will 1) utilize AI as a source of academic assistance by generating study guides and quizzes to assess comprehension; 2) compare various LLMs to gauge their effectiveness in the context of cybersecurity; 3) set up their own private environments to analyze potentially unsanitized server logs and other sensitive files to detect patterns and anomalies in a secure manner; 4) write programs that interact with LLMs via Application Programming Interfaces (APIs) to provide appropriate commands to conduct penetration testing; and 5) continue to be challenged to identify AI-generated inaccuracies.

Conclusion

This first inclusion of AI in this cybersecurity course showcased its effectiveness in clarifying concepts, enhancing programming, brainstorming ideas, and troubleshooting technical problems. Although there were some minor issues, the overall assessment revealed that the learning experience was enhanced by the incorporation of AI. These efforts aligned well with the strategic goals of faculty-driven improvements to program offerings and the implementation of HIPs in the computer science curriculum. Moreover, these initiatives supported the institutional mission to provide relevant and relatable learning opportunities that enhance students’ career readiness in computer science and leadership within the profession.

 

References

Browning, D., & Kessler, G. C. (2009). Bluetooth hacking: A case study. Journal of Digital Forensics, Security and Law, 4.

Kuh, G. D. (2008). High-impact educational practices: What they are, who has access to them, and why they matter. AAC&U.

Patel, N., Wimmer, H., & Rebman, C. M., Jr. (2021). Investigating Bluetooth vulnerabilities to defend from attacks. In Proceedings of the 5th International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT) (pp. 549–554). IEEE.

Redd, K., Estrada, M., Nembhard, H. B., & Ngai, C. (2024). Eight indicators for measuring equitable student success in STEM. Change: The Magazine of Higher Learning, 56(3), 4–14.