Google’s new challenge aims to advance the field of machine unlearning, which involves getting generative AI to forget specific data — a capability that matters greatly for both AI ethics and AI law.


“Google Announces Machine Unlearning Challenge Which Will Help In Getting Generative AI To Forget What Decidedly Needs To Be Forgotten, Vital Says AI Ethics And AI Law”


Have you recently forgotten where you put your car keys? Or maybe you went so far as to forget where you parked your car, nearly losing it in a massive maze of a jam-packed parking lot. We all have plenty of frenetic thoughts on our minds these days. Occasionally having a bout of forgetfulness seems like a sure byproduct of our hectic lives.

It might seem altogether disconcerting that we do at times manage to forget things. This brings to mind the famous saying that those who forget the past are doomed to repeat it. But there are surprisingly some upsides to forgetting. Being able to forget exceedingly foul and dour memories that would otherwise haunt you could be seen as a kind of mercy in forgetfulness.

I’ve got a vexing question for you.

Can you force your mind to forget something?

This is an age-old question. Some would insist that the very effort to purposefully try to forget something will merely reinforce it. The more you dwell on the matter, the more ingrained and mentally powerful it will become. A counter viewpoint is that you would be better off allowing the memory to simply fade into oblivion. Just do not think about it and, hopefully, it will ultimately go out of sight and out of mind.



Let’s switch hats and think about the topic of AI and the role of being able to forget something. In today’s column, I will be examining the arduous task of trying to force generative AI into forgetting particular facets that seem to be encoded into the generative AI structure. This is a very hard problem to solve. Indeed, I will describe a recent announcement by Google regarding an interesting challenge that all comers are welcome to try and take on, ostensibly involving being able to find new or novel ways to make AI forget what we might want it to forget.

Let’s go ahead and jump into this thorny and vital topic.

I dare say that most people probably don’t even know this is a notably looming concern. The inability to ensure that AI can forget particular facets is going to become increasingly significant to society all told. All those generative AI apps are fusing together a morass of all sorts of data, sometimes including false or otherwise unsavory data, and we will need some way to excise the badness. A plethora of ethical and legal ramifications arise.

The Tough Time Of Getting Generative AI To Forget

Suppose you are using a generative AI app such as the widely and wildly successful ChatGPT by AI maker OpenAI or others such as Bard (Google), Claude (Anthropic), etc. While using generative AI, you come across a response or output that seems to reveal a person’s private info such as their social security number, their driver’s license identification, and other crucial information. This doesn’t seem right. Generative AI is going around revealing the intimate private data of a particular individual.

Not good.

Your likely assumption is that a particular detail of that nature could readily be deleted from the generative AI. Most people envision that generative AI is akin to a large database of collected facts and figures. All you would seem to need to do is search for and find the offending piece of data, delete it, and the verboten content would be forever erased from the generative AI. Seemingly easy-peasy.[…]
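To see why there is no single record to delete, consider a toy sketch (this is merely an illustrative analogy in Python, not how a large language model actually works): even in a trivially simple word-pair model, each training document's contribution is blended into shared statistics, so "exact" unlearning amounts to retraining without the offending document rather than deleting a row.

```python
from collections import Counter

def train(docs):
    # Count word bigrams across all documents; every document's
    # contribution is blended into the same shared counts.
    counts = Counter()
    for doc in docs:
        words = doc.split()
        counts.update(zip(words, words[1:]))
    return counts

docs = [
    "alice lives at 12 oak street",   # hypothetical "private" fact
    "bob likes oak furniture",
]
model = train(docs)

# The fact about alice is not a standalone deletable record: the pair
# ("oak", "street") lives alongside ("oak", "furniture") in shared state.
# Exact unlearning here means retraining without the offending document.
unlearned = train([d for d in docs if "alice" not in d])
```

Even in this tiny example, removing one document's influence required rebuilding the whole model; in a generative AI system with billions of intertwined parameters, that brute-force retraining is prohibitively expensive, which is precisely why efficient machine unlearning is such a hard open problem.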
