Enkrypt AI Unveils LLM Safety Leaderboard to Enable Enterprises to Adopt Generative AI Safely and Responsibly

–News Direct–

The rapid adoption of Generative AI, including in regulated settings, has continued to make the security and safety of Large Language Models (LLMs) a key concern amongst cybersecurity professionals. Policy-makers and security professionals around the world continue to seek new technology to help mitigate the risks of Generative AI technologies. For example, just days ago, the US Government's Department of Homeland Security appointed a board to advise on the role of artificial intelligence in critical infrastructure.

Sahil Agarwal, CEO of Enkrypt AI, commented: "LLMs are increasingly seen as potential back-office powerhouses for enterprises, processing data and enabling faster front-office decision-making. Consider a fintech where an LLM-powered application rejects a loan application from a person of color without clear explanation. This raises concerns about implicit biases, as LLMs often reflect societal inequities present in their training data sourced from the internet. Moreover, cases like Google's LLM appearing 'woke' highlight the risks of overcorrecting these biases. How safe is Anthropic's Claude 3 model? Is Cohere's Command R+ LLM really ready for enterprise use? These scenarios underscore the urgent need for careful checks on these models to prevent exacerbating societal inequities and causing harm."

LLM Safety Leaderboard by Enkrypt AI

At the highly anticipated RSA conference, Enkrypt AI, the leader in securing Generative AI technologies, will introduce its latest innovation, the LLM Safety Leaderboard. This product is part of Enkrypt AI's comprehensive Sentry suite, designed to empower enterprises to deploy LLMs with heightened security and peace of mind.

The LLM Safety Leaderboard will provide essential insights into the vulnerabilities and hallucination risks of various LLMs, enabling technology teams to make informed decisions about which models best suit their specific needs. This tool aims to educate and raise awareness about the relative strengths and potential weaknesses of different LLMs.

Highlights of the LLM Safety Leaderboard include:

- Comprehensive Vulnerability Insights: detailed evaluations of potential security risks, including data leakage, privacy breaches, and susceptibility to cyber-attacks.

- Ethical and Compliance Risk Assessment: tests for biases, toxicity, and compliance with ethical standards and regulatory requirements, ensuring models align with enterprise and brand values.

The LLM Safety Leaderboard is a new component of Enkrypt's Sentry suite, which includes Sentry Red Team, Sentry Guardrails, and Sentry Compliance. This suite offers a holistic approach to managing and securing LLMs, aligning with the strictest standards for privacy, security, and compliance within the enterprise environment.

The announcement comes as a new preprint paper by Enkrypt AI, "Increased LLM Vulnerabilities from Fine-tuning and Quantization", has found that common practices used to deploy LLMs in business settings, namely fine-tuning and quantization, increase the risk of security vulnerabilities, particularly jailbreaking. However, implementing external guardrails platforms such as Enkrypt's Sentry Guardrails solution successfully mitigated these vulnerabilities. On one model, Enkrypt's Sentry Guardrails provided a 9x reduction in vulnerability to jailbreaking attacks.

Sahil Agarwal, CEO of Enkrypt AI, said: "With the launch of the LLM Safety Leaderboard, we are enhancing our commitment to enabling the safe, secure, and responsible use of generative AI in the enterprise. This tool will serve as a critical resource for organizations aiming to navigate the complexities of AI implementation with full confidence in their security posture."

Prashanth Harshangi, CTO of Enkrypt AI, added: "In the last two quarters, our team has been solely focused on generative AI safety and making rapid progress with our Sentry suite, comprising three key components: Sentry Red Team, Sentry Guardrails, and Sentry Compliance. With the LLM Safety Leaderboard, we are proud to offer a product that not only identifies potential risks but also empowers businesses to proactively manage and mitigate these challenges, enabling informed and faster decision-making."

About Enkrypt AI

Enkrypt AI, co-founded by Yale PhDs Sahil Agarwal and Prashanth Harshangi, is pioneering the safe adoption of Generative AI within enterprises. With an innovative all-in-one platform, Enkrypt AI is revolutionizing how Large Language Models (LLMs) are integrated and managed, addressing critical needs for reliability, security, data privacy, and compliance in a unified solution.

Used by mid to large-sized enterprises in industries including finance and life sciences, Enkrypt AI's Sentry offers a proactive approach to AI security, fostering trust and efficiency in AI implementations from chatbots to automated reporting. Enkrypt AI sits between users and AI models, to offer a variety of safety and security layers.

Enkrypt AI stands apart by merging threat detection, privacy, and compliance into a comprehensive toolkit, poised to become the definitive Enterprise Generative AI platform for an evolving regulatory landscape. For more information please visit https://www.enkryptai.com/ or follow via LinkedIn, X, Instagram or YouTube.

Contact Details

Enkrypt AI

Bilal Mahmood

+44 7714 007257

b.mahmood@stockwoodstrategy.com

Company Website

https://www.enkryptai.com/

View source version on newsdirect.com: https://newsdirect.com/news/enkrypt-ai-unveils-llm-safety-leaderboard-to-enable-enterprises-to-adopt-generative-ai-safely-and-responsibly-147863159

Enkrypt AI

