Sponsored by AI-RMF® LLC
Welcome to "Security-of-AI"
Where AI-Governance and AI-Security Converge
Advancing Safe and Responsible AI through Awareness and Education

As artificial intelligence systems become increasingly powerful and pervasive, organizations face a critical challenge: ensuring these systems are both secure and responsibly governed. Traditionally, AI governance and AI security have evolved as parallel disciplines: governance focuses on ethical frameworks, accountability, and regulatory compliance, while security addresses technical vulnerabilities, adversarial attacks, and data protection. This separation, however, is becoming untenable.
The emerging field of "Security-of-AI" recognizes that governance and security are fundamentally intertwined. A governance framework without robust security measures cannot enforce its policies; security controls without governance context may miss critical risks or impede legitimate use.
This convergence demands a holistic approach where security controls become governance mechanisms, and governance requirements drive security architecture. Organizations must develop integrated frameworks that address everything from supply chain integrity and model provenance to access controls and bias mitigation within a unified security-governance paradigm. Only through this convergence can we build AI systems that are simultaneously secure, trustworthy, and aligned with organizational and societal values.
AWS & SANS - THE AI SECURITY WAR
AWS and SANS are sounding the alarm: the AI race has moved from innovation to security. This video breaks down the new AWS x SANS signals, covering AI security risks—prompt injection, model manipulation, data leakage, supply chain threats—and why governance, the NIST AI-RMF, and an AI Bill of Materials matter for CISOs. Learn how AI can both amplify risk and strengthen defenses via automation, AI agents, and proactive monitoring. Strategic implications for critical infrastructure, finance, and national security are explored in a tight 2:50 briefing for security leaders. Please like, subscribe and share the video.
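The AI Bill of Materials mentioned above can be pictured as a structured inventory of everything a model depends on: base models, datasets, libraries, and checksums for provenance. The sketch below is purely illustrative; the field names are our own and do not come from any formal AIBOM standard.

```python
# Illustrative sketch of an AI Bill of Materials (AIBOM) record.
# All field names are hypothetical, not taken from a formal standard.

aibom = {
    "model": {"name": "fraud-detector", "version": "2.1.0"},
    "base_model": {"name": "distilbert-base-uncased", "source": "public hub"},
    "datasets": [
        {"name": "transactions-2024", "license": "internal", "pii_reviewed": True},
    ],
    "dependencies": ["torch", "transformers"],
    "checksums": {"weights": "sha256:..."},  # provenance for supply chain checks
}

def unreviewed_pii_datasets(bom):
    """Return names of datasets whose PII review has not been completed."""
    return [d["name"] for d in bom["datasets"] if not d.get("pii_reviewed")]

# An empty result means every listed dataset has passed PII review.
print(unreviewed_pii_datasets(aibom))
```

Even a record this small lets a CISO ask concrete supply-chain questions: which datasets lack review, which base model the weights descend from, and whether the shipped checksums match.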
THE BLACK BOX PRESIDENT
A short investigative AI thriller about MADISON, the Presidential Decision Intelligence System reshaping power behind the scenes. In this 5-minute episode of Security of AI, Bobby walks through how a brilliant, opaque advisor became indistinguishable from presidential judgment—raising urgent questions about accountability, AI security, cognitive offloading, AI governance, trust and democratic authority. Watch cinematic footage, tense narration, and expert-style analysis exploring opacity, attribution, and the security implications when advice becomes control. Please like, subscribe and share.
AGENTIC AI - THE SECURITY SHIFT
Agentic AI isn’t just smarter — it acts. In this urgent Security of AI briefing, Bobby explains why agentic AI represents a security shift: autonomy, persistent memory, tool and API calls, and multi-agent delegation become new attack surfaces. Learn about real threats — goal hijacking, memory poisoning, privilege escalation, prompt injection — and why governance frameworks (NIST, MITRE ATLAS, OWASP) must evolve. Practical defenses: minimal footprint, zero standing privilege, explicit confirmation gates, agent identity/provenance, and immutable audit trails. A must-watch for AI security leaders, risk managers, and governance teams preparing for the agentic era. Please like, subscribe and share.
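One of the defenses listed above, explicit confirmation gates, can be sketched in a few lines: before an agent executes a high-impact tool call, a human must approve it. All function and action names here are illustrative and do not come from any specific agent framework.

```python
# Minimal sketch of an explicit confirmation gate for agentic tool calls.
# Action names and the approval hook are illustrative assumptions.

HIGH_IMPACT = {"delete_file", "send_payment", "modify_permissions"}

def run_tool(action, args, approve=input):
    """Execute a tool call, pausing for human approval on high-impact actions."""
    if action in HIGH_IMPACT:
        answer = approve(f"Agent requests '{action}' with {args}. Allow? [y/N] ")
        if answer.strip().lower() != "y":
            return {"status": "blocked", "action": action}
    return {"status": "executed", "action": action}

# A denied high-impact request is blocked; a routine read needs no gate.
print(run_tool("send_payment", {"amount": 100}, approve=lambda _: "n"))
print(run_tool("read_file", {"path": "report.txt"}))
```

The design choice to default-deny (anything other than an explicit "y" blocks the call) mirrors the zero-standing-privilege principle named in the briefing: the agent holds no pre-approved right to act.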

Our Security of AI methodology combines the structured governance framework of the NIST AI Risk Management Framework (AI-RMF) with comprehensive, actionable security techniques across the entire AI lifecycle.
The NIST AI-RMF provides our governance foundation through its four core functions—Govern, Map, Measure, and Manage—ensuring that AI systems are developed and deployed with clear accountability, risk awareness, and stakeholder alignment. This framework guides our strategic decision-making, establishes organizational policies, and maintains oversight throughout the AI lifecycle. Our security implementation builds on eight foundational, integrated AI-security practice areas:
1. Establish AI Governance Framework
2. Conduct Risk Assessment
3. Secure Data and Models
4. Discover and Manage AI Assets
5. Implement Adversarial Defense and Security Controls
6. Monitor Continuously and Ensure Compliance
7. Establish Incident Response Protocols
8. Foster Innovation and Continuous Improvement
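The eight practice areas above lend themselves to a simple coverage checklist an organization can track over time. The sketch below is a minimal illustration; the scoring scheme is our own and is not part of the NIST AI-RMF.

```python
# Illustrative coverage tracker for the eight practice areas listed above.
# The boolean scoring scheme is our own assumption, not part of the AI-RMF.

PRACTICE_AREAS = [
    "Establish AI Governance Framework",
    "Conduct Risk Assessment",
    "Secure Data and Models",
    "Discover and Manage AI Assets",
    "Implement Adversarial Defense and Security Controls",
    "Monitor Continuously and Ensure Compliance",
    "Establish Incident Response Protocols",
    "Foster Innovation and Continuous Improvement",
]

def coverage(status):
    """Return the fraction of practice areas marked as implemented."""
    done = sum(status.get(area, False) for area in PRACTICE_AREAS)
    return done / len(PRACTICE_AREAS)

status = {area: False for area in PRACTICE_AREAS}
status["Conduct Risk Assessment"] = True
print(f"{coverage(status):.0%}")  # prints the share of areas implemented
```

In practice, each area would carry richer evidence than a boolean (owners, controls, audit findings), but even a yes/no view surfaces which parts of the lifecycle remain unaddressed.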

The National Institute of Standards and Technology (NIST) is a U.S. federal agency that develops standards, guidelines, and tools to ensure the reliability and security of technology, including artificial intelligence (AI). NIST's mission spans a wide array of fields from cybersecurity to physical sciences and engineering, aiming to promote innovation and industrial competitiveness.
In the realm of artificial intelligence, NIST introduced the AI Risk Management Framework (AI-RMF) to guide organizations in managing the risks associated with AI systems. The AI-RMF is designed to be a flexible and voluntary framework that helps stakeholders across various sectors understand, assess, and address the risks AI technologies can pose. This includes considerations for the ethical, technical, and societal implications of AI deployment. The framework emphasizes the importance of trustworthy AI, which means AI systems that are responsible, equitable, traceable, reliable, and governable while also being transparent, explainable, and accountable.

AI security involves a series of steps and strategies aimed at protecting AI systems from vulnerabilities, ensuring they operate reliably and are free from manipulation. The main steps are the eight practice areas outlined in our methodology above.

Overview
Our AI-Threat Landscape section serves as a critical resource for understanding the ever-evolving threats in the realm of artificial intelligence. As AI technologies integrate more deeply into various sectors, the potential for sophisticated threats grows. This section provides a comprehensive analysis of the current and emerging threats specific to AI systems, aiming to equip stakeholders with the knowledge required to identify, assess, and mitigate these risks effectively. Learn about:
- Identifying key threats
- Threat mitigation strategies

AI-Red Team Testing (AI-RTT) is a proactive approach to identifying vulnerabilities, harms, and risks so that Responsible AI can be developed and deployed with confidence. The goal is to release safe and secure AI systems by simulating adversarial behaviors and stress-testing models under various conditions. This process ensures that AI systems are robust, secure, and aligned with organizational goals and ethical standards.
Here, we integrate AI-Red Team Testing with the principles and guidelines of the NIST AI-Risk Management Framework (AI-RMF) to deliver a structured and comprehensive Independent Verification and Validation (IV&V) of AI systems.
In this section, we will delve into the specific information, tools, and techniques of AI-Red Team Testing.
We are a learning organization. There's much to see here, but we think there's still much to learn, so take your time, look around, and learn or contribute. We hope you enjoy our site and take a moment to drop us a line or subscribe.
Bobby K. Jenkins, Patuxent River, MD 20670
www.linkedin.com/in/bobby-jenkins-navair-492267239
Mon | By Appointment
Tue | By Appointment
Wed | By Appointment
Thu | By Appointment
Fri | By Appointment
Sat | Closed
Sun | Closed