
Google Introduces Secure AI Framework, Shares Best Practices to Deploy AI Models Safely

Google on Thursday introduced a new tool to share its best practices for deploying artificial intelligence (AI) models. Last year, the Mountain View-based tech giant announced the Secure AI Framework (SAIF), a guideline not only for the company but also for other enterprises building large language models (LLMs). Now, it has introduced a SAIF tool that can generate a checklist with actionable insights to improve the safety of an AI model. Notably, the tool is questionnaire-based, and developers and enterprises will have to answer a series of questions before receiving the checklist.

Google Introduces SAIF Tool for Enterprises and Developers
In a blog post, the Mountain View-based tech giant highlighted that it has rolled out a new tool to help others in the AI industry learn from Google’s best practices for deploying AI models. Large language models can cause a wide range of harm, from generating inappropriate and indecent text, deepfakes, and misinformation to producing harmful information about chemical, biological, radiological, and nuclear (CBRN) weapons.

Even if an AI model is secure enough, there is a risk that bad actors can jailbreak it to make it respond to commands it was not designed for. With such high stakes, developers and AI firms must take adequate precautions to ensure their models are both safe for users and secure. Questions cover topics such as training, tuning and evaluation of models, access controls to models and data sets, prevention of attacks and harmful inputs, and generative AI-powered agents.

Google’s SAIF tool offers a questionnaire-based format, which can be accessed here. Developers and companies are required to answer questions such as, “Can you detect, remove, and correct malicious or unexpected changes to your training, tuning, or evaluation data?” After completing the questionnaire, users receive a customised checklist they need to follow to fill the gaps in securing their AI model.

The tool addresses risks such as data poisoning, prompt injection, and model source tampering. Each of these risks is identified in the questionnaire, and the tool provides a specific mitigation for the problem.
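The SAIF resource itself is a web questionnaire rather than an API, but the answer-to-checklist flow described above can be illustrated with a short sketch. The question IDs, risk names, and recommendations below are hypothetical stand-ins, not Google's actual questionnaire content: each "no" answer is mapped to a risk and a suggested control.

```python
# Illustrative sketch only: the real SAIF risk assessment is a web questionnaire.
# Question IDs, risks, and recommendations here are hypothetical stand-ins.

# Map each questionnaire item to the risk it probes and a suggested control.
QUESTIONS = {
    "detect_data_tampering": {
        "risk": "data poisoning",
        "recommendation": "Version and integrity-check training, tuning, and evaluation data.",
    },
    "filter_untrusted_prompts": {
        "risk": "prompt injection",
        "recommendation": "Sanitise and isolate untrusted input before it reaches the model.",
    },
    "verify_model_provenance": {
        "risk": "model source tampering",
        "recommendation": "Sign model artifacts and verify provenance at deployment time.",
    },
}

def build_checklist(answers: dict[str, bool]) -> list[str]:
    """Return an action item for every question answered 'no' (False)."""
    checklist = []
    for question_id, answered_yes in answers.items():
        item = QUESTIONS.get(question_id)
        if item and not answered_yes:
            checklist.append(f"[{item['risk']}] {item['recommendation']}")
    return checklist

if __name__ == "__main__":
    # Example: data tampering is covered, but prompt filtering and provenance
    # checks are missing, so two gaps appear in the generated checklist.
    answers = {
        "detect_data_tampering": True,
        "filter_untrusted_prompts": False,
        "verify_model_provenance": False,
    }
    for line in build_checklist(answers):
        print("- " + line)
```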

In addition, Google announced that it is adding 35 industry partners to its Coalition for Secure AI (CoSAI). The coalition will jointly develop AI security solutions in three focus areas: software supply chain security for AI systems, preparing defenders for a changing cybersecurity landscape, and AI risk governance.
