NIST AI Released to Check the Security Capabilities of Other Models

With AI becoming a larger part of our society, its risks of misinformation and security breaches only grow. That is why the US Department of Commerce's National Institute of Standards and Technology (NIST) recently unveiled Dioptra, also referred to as NIST AI. With this new tool, NIST has begun studying different AI systems and identifying where they are most vulnerable.

The tool examines other AI systems and determines the types of attacks that can be mounted against them. By giving developers this knowledge, NIST hopes to help them create safeguards and plans that prevent anyone from exploiting these flaws.

Purpose of NIST AI


Dioptra is currently available as a free download and aims to help developers quantify the performance of their AI systems. This gives them concrete data about where a model performs well and where it falls short, so weaknesses can be addressed before they lead to failures.

The tool shows how often a system can fail and what might cause it to crash. Its release is part of the AI executive order issued by President Joe Biden last year, which directed NIST to develop Dioptra and use it to help test different AI models.

“Testing the effects of adversarial attacks on machine learning models is one of the goals of Dioptra, a new software package aimed at helping AI developers and customers determine how well their AI software stands up to a variety of adversarial attacks.”

-NIST Official Statement
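To make that idea concrete, below is a minimal, hypothetical sketch of the kind of measurement Dioptra automates: comparing a toy classifier's accuracy on clean inputs against inputs perturbed by an FGSM-style adversarial attack. This is not Dioptra's actual API; the model, data, and epsilon value are illustrative assumptions.

```python
# Hypothetical sketch: measure how a toy model's accuracy degrades under an
# FGSM-style adversarial attack. Not Dioptra's API -- just the underlying idea.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian clusters in 2D.
X = np.vstack([rng.normal(-1.0, 1.0, (200, 2)), rng.normal(1.0, 1.0, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

# Train a simple logistic regression classifier by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    w -= 0.1 * (X.T @ (p - y)) / len(y)     # gradient step on weights
    b -= 0.1 * np.mean(p - y)               # gradient step on bias

def accuracy(inputs: np.ndarray) -> float:
    """Fraction of inputs the trained model classifies correctly."""
    return float(np.mean(((inputs @ w + b) > 0).astype(int) == y))

# FGSM-style attack: nudge each input by epsilon in the direction (sign of
# the input gradient) that increases the model's loss.
epsilon = 0.5
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
X_adv = X + epsilon * np.sign(np.outer(p - y, w))

print(f"clean accuracy:       {accuracy(X):.2%}")
print(f"adversarial accuracy: {accuracy(X_adv):.2%}")
```

The gap between the two printed numbers is exactly the kind of concrete, quantified robustness data the NIST statement describes.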

What Else Is In Store for the NIST AI?

Aside from the tool itself, NIST has also released documents that outline AI safety and standards based on the requirements of President Biden's executive order. This includes an initial draft of guidelines for developers on what should be included in new foundation models, titled Managing Misuse Risk for Dual-Use Foundation Models.

These guidelines also list voluntary practices that developers can adopt while designing their models. The practices are meant to reduce the risk of misuse, increase public safety, and prevent security threats, and they take the form of seven approaches for mitigating risk and ensuring transparency.

“Together, these practices can help prevent models from enabling harm through activities like developing biological weapons, carrying out offensive cyber operations, and generating child sexual abuse material and nonconsensual intimate imagery.” 

-NIST Official Statement

Companion Documents for NIST AI


Aside from NIST AI, the other releases include guidance documents that serve as companion resources for the AI Risk Management Framework (AI RMF) and the Secure Software Development Framework (SSDF). All of these will also help developers manage the risks of AI systems.

The first of these is called the AI RMF Generative AI Profile, which lists 12 risks of generative AI that developers and organizations need to watch for. The second is a list of roughly 200 actions that NIST recommends for controlling those risks.

One of the main risks NIST cites is a lower barrier to entry for cybersecurity attacks. With AI able to generate disreputable content and lacking any fact-checking mechanism, it is easier for bad actors to produce hate speech, harmful content, or propaganda. Even when content is created in good faith, the tendency of AI models to hallucinate remains an ever-present risk.

The next document is titled Secure Software Development Practices for Generative AI and Dual-Use Foundation Models. It is meant to be used alongside the SSDF and focuses on coding practices involving AI, raising the risk that coding tools might be compromised by poor or malicious training data. Either problem makes the resulting AI system less reliable.
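As one illustration of the kind of safeguard such practices call for, the sketch below verifies a training dataset's hash against a known-good digest before it enters the pipeline, so tampered data is caught early. The file name and expected digest are placeholders for illustration, not artifacts from the NIST documents.

```python
# Hypothetical sketch of a supply-chain check for training data: refuse to
# train unless the dataset's SHA-256 digest matches an approved value.
import hashlib
from pathlib import Path

# Placeholder digest: in practice, record the real digest when the dataset
# is reviewed and approved.
EXPECTED_SHA256 = "0" * 64

def verify_dataset(path: str, expected: str) -> bool:
    """Return True only if the file's SHA-256 digest matches the approved one."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest == expected

if __name__ == "__main__":
    # "training_data.csv" is a hypothetical file name for this example.
    if verify_dataset("training_data.csv", EXPECTED_SHA256):
        print("dataset integrity verified; safe to train")
    else:
        print("dataset may be compromised; halt the training pipeline")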

The US has already begun cooperating with other countries to mitigate these risks, recently agreeing to work with China, the EU, India, and more. This culminated in the Bletchley Declaration, under which these countries will form a common set of standards to govern the evolution of AI. NIST AI is just the first step toward maintaining security.

How Will BPOs Be Affected By These Developments?

Although NIST AI is aimed primarily at AI developers, in reality all businesses will need to pay attention, including providers of BPO outsourcing services. With AI playing a larger role in these businesses, they must also understand the risks if they want to ensure both financial and ethical success.

Many BPO global services are used to create content and code, two of the areas NIST's report highlights as potentially compromised. If these organizations are serious about providing the best service possible, they must learn to manage these risks, and the report offers a comprehensive guide.

Lastly, even though these rules are meant for organizations in the US, the report may serve as a template for future international AI regulations. It is in many BPO IT providers' best interest to understand the rules now and adhere to them so that they can continue providing seamless services.