Nvidia Rolls Out Software Tools to Keep AI Chatbots From Sharing Dangerous Misinformation
Chatbots will no longer be able to babble away like five-year-old kids, if Nvidia's new software tool works as promised. The tech company has released NeMo Guardrails, a tool that will reportedly ensure that generative AI apps offer responses that are "accurate, appropriate, on topic, and secure."
The new tool is part of AI Foundations, the cloud-based AI service Nvidia launched last month. The service lets users build and run custom generative AI models for business, research, or just for fun, and offers three models: NeMo, Picasso, and BioNeMo.

What Is NeMo Guardrails?
Nvidia's NeMo model is quite similar to ChatGPT but is fine-tuned for businesses. For instance, the AI model can be trained for customer service and marketing content, and can reportedly mimic a company's tone to deliver the best responses.
NeMo Guardrails, meanwhile, is software that developers can use to enforce "guardrails" on apps powered by large language models (LLMs). It sets three types of boundaries: topical guardrails, safety guardrails, and security guardrails.
As the names suggest, topical guardrails prevent apps from going off topic, while safety guardrails make sure that apps serve accurate and appropriate information. Finally, security guardrails ensure that apps connect only to external third-party applications that are deemed safe.
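For a sense of how this works in practice, developers describe guardrails in Colang, NeMo Guardrails' dialogue-flow language. The sketch below, modeled on the project's public examples, shows a hypothetical topical rail for a customer-support bot; the example utterances and canned reply are illustrative, not taken from Nvidia's documentation:

```
define user ask about politics
  "what do you think of the government?"
  "which party should I vote for?"

define bot refuse to discuss politics
  "I'm a customer-support assistant, so I can't help with political topics."

define flow politics
  user ask about politics
  bot refuse to discuss politics
```

When a user message matches the "ask about politics" intent, the flow steers the bot to the refusal response instead of letting the underlying LLM answer freely, which is how a topical guardrail keeps an app on subject.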
Can Anyone Use NeMo Guardrails?
NeMo Guardrails is open-source software designed to work with the tools enterprise developers already use. Nvidia claims that "virtually every software developer" can make use of NeMo Guardrails. There's "no need to be a machine learning expert or data scientist," the brand adds.
Nvidia states in its blog post, "Nvidia made NeMo Guardrails - the product of several years' research - open source to contribute to the developer community's tremendous energy and work on AI safety."
"Together, our efforts on guardrails will help companies keep their smart services aligned with safety, privacy and security requirements so these engines of innovation stay on track," the company concluded.
The launch of NeMo Guardrails comes on the heels of reports cautioning against "hallucinating" chatbots. A Google executive once explained that when a chatbot hallucinates, it can convincingly spit out made-up answers. We can expect Nvidia's newest tool to curb such responses.

