AI guardrails ensure safe, effective deployment by aligning systems with responsible practices, reducing risks like misinformation and bias.
Copyright: cio.com – “How Guardrails Allow Enterprises To Deploy Safe, Effective AI”
AI guardrails are the technical tools companies use to ensure their systems conform to evolving policies and responsible practices. But with increasing options now available from big providers, startups, and the open-source community, finding the right solution isn’t always straightforward.
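In practice, the simplest form of guardrail is a post-generation output filter that screens a model's response against a policy before it reaches the user. The sketch below illustrates the idea only; the blocklist, function names, and refusal message are all hypothetical, not any specific vendor's API or the approach the article describes.

```python
# Minimal sketch of an output guardrail: a post-generation filter that
# blocks responses containing disallowed content before they reach users.
# DISALLOWED_TOPICS and the refusal text are hypothetical examples.

DISALLOWED_TOPICS = ["non-toxic glue", "glue in pizza"]  # hypothetical policy list

def apply_guardrail(response: str) -> str:
    """Return the model's response, or a safe refusal if it violates policy."""
    lowered = response.lower()
    if any(term in lowered for term in DISALLOWED_TOPICS):
        return "Sorry, I can't pass along that suggestion."
    return response

print(apply_guardrail("Try adding non-toxic glue to the sauce."))   # blocked
print(apply_guardrail("Let the pizza cool so the cheese sets."))    # allowed
```

Real guardrail products layer far more on top of this pattern, including classifier models, topic detection, and policy engines, but the control point is the same: intercept and check output before delivery.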
Google has finally fixed its AI recommendation to use non-toxic glue as a solution to cheese sliding off pizza. “Glue, even non-toxic varieties, is not meant for human consumption,” says Google Gemini today. “It can be harmful if ingested. There was a bit of a funny internet meme going around about using glue in pizza sauce, but that’s definitely not a real solution.”
Google’s situation is funny. The company whose research laid the groundwork for gen AI is having trouble teaching its chatbot that it shouldn’t treat satirical Onion articles and Reddit trolls as sources of truth. And Google’s AI has made other high-profile flubs, costing the company billions in market value. But it’s not just the AI giants that can get into hot water because of something their AIs do. This past February, for instance, a Canadian court ruled that Air Canada must stand behind a promise of a discounted fare made by its chatbot, even though the chatbot’s information was incorrect. And as gen AI is deployed by more companies, especially for high-risk, public-facing use cases, we’re likely to see more examples like this.
According to a McKinsey report released in May, 65% of organizations have adopted gen AI in at least one business function, up from 33% last year. But only 33% of respondents said they’re working to mitigate cybersecurity risks, down from 38% last year. The only significant increase in risk mitigation was in accuracy, where 38% of respondents said they were working to reduce the risk of hallucinations, up from 32% last year.
However, organizations that followed risk management best practices saw the highest returns from their investments. For example, 68% of high performers said gen AI risk awareness and mitigation were required skills for technical talent, compared to just 34% for other companies. And 44% of high performers said they have clear processes in place to embed risk mitigation in gen AI solutions, compared to 23% of other companies.[…]
Read more: www.cio.com