Episodes

  • Using Role-Playing Scenarios to Identify Bias in LLMs
    Sep 16 2024

    Harmful biases in large language models (LLMs) make AI less trustworthy and secure. Auditing for biases can help identify potential solutions and develop better guardrails to make AI safer. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Katie Robinson and Violet Turri, researchers in the SEI’s AI Division, discuss their recent work using role-playing game scenarios to identify biases in LLMs.

    45 mins
  • Best Practices and Lessons Learned in Standing Up an AISIRT
    Sep 9 2024

    In the wake of widespread adoption of artificial intelligence (AI) in critical infrastructure, education, government, and national security entities, adversaries are working to disrupt these systems and attack AI-enabled assets. With nearly four decades in vulnerability management, the Carnegie Mellon University Software Engineering Institute (SEI) recognized a need to create an entity that would identify and research AI vulnerabilities and develop mitigation strategies to protect national assets against traditional cybersecurity, adversarial machine learning, and joint cyber-AI attacks. In this SEI podcast, Lauren McIlvenny, director of threat analysis in the SEI’s CERT Division, discusses best practices and lessons learned in standing up an AI Security Incident Response Team (AISIRT).

    38 mins
  • 3 API Security Risks (and How to Protect Against Them)
    Aug 22 2024

    The exposed and public nature of application programming interfaces (APIs) comes with risks, including an increased network attack surface. Zero trust principles can help mitigate these risks and make APIs more secure. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), McKinley Sconiers-Hasan, a solutions engineer in the SEI’s CERT Division, discusses three API risks and how to address them through the lens of zero trust.

    19 mins
  • Evaluating Large Language Models for Cybersecurity Tasks: Challenges and Best Practices
    Jul 25 2024

    How can we effectively use large language models (LLMs) for cybersecurity tasks? In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Jeff Gennari and Sam Perl discuss applications for LLMs in cybersecurity, potential challenges, and recommendations for evaluating LLMs.

    43 mins
  • Capability-based Planning for Early-Stage Software Development
    Jul 18 2024

    Capability-Based Planning (CBP) is a framework that takes an all-encompassing view of existing capabilities and future needs to strategically decide what is needed and how to achieve it effectively. Both business and government acquisition domains use CBP, whether to achieve financial success or to design a well-balanced defense system, and its definition understandably varies across these domains. In this SEI podcast, Anandi Hira, a data scientist, and William R. Nichols, an initiative lead for Software Engineering Measurement and Analysis, introduce CBP and its application in software acquisition.

    34 mins
  • Safeguarding Against Recent Vulnerabilities Related to Rust
    Jul 1 2024

    What can the recently discovered vulnerabilities related to Rust tell us about the security of the language? In this podcast from the Carnegie Mellon University Software Engineering Institute, David Svoboda discusses two vulnerabilities, their sources, and how to mitigate them.

    26 mins
  • Developing a Global Network of Computer Security Incident Response Teams (CSIRTs)
    Jun 21 2024

    Cybersecurity risks aren’t just a national concern. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), the CERT division’s Tracy Bills, senior cybersecurity operations researcher and team lead, and James Lord, security operations technical manager, discuss the SEI’s work developing Computer Security Incident Response Teams (CSIRTs) across the globe.

    31 mins
  • Automated Repair of Static Analysis Alerts
    May 31 2024

    Developers know that static analysis helps make code more secure. However, static analysis tools often produce a large number of false positives, hindering their usefulness. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), David Svoboda, a software security engineer in the SEI’s CERT Division, discusses Redemption, a new open source tool from the SEI that automatically repairs common errors in C/C++ code flagged by static analysis alerts, making code safer and static analysis less overwhelming.

    27 mins