AI and machine learning took several steps forward in 2020, from the first beta of GPT-3 and stricter regulation of AI technologies to conversations around algorithmic bias and strides in AI-assisted development and testing.

Copyright by www.sdtimes.com

GPT-3 is a neural network-based language model created by OpenAI. It entered its first private beta in June this year, and OpenAI reported a long waitlist of prospective testers. Among the first to test the beta were Algolia, Quizlet, Reddit, and researchers at the Middlebury Institute. 

GPT-3 has been described as “the most capable language model created to date.” It is trained on massive datasets, including Common Crawl, a huge library of books, and all of Wikipedia. 
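At its core, a language model assigns probabilities to the next token given the preceding context. The toy bigram sketch below (an illustration only; GPT-3 itself is a transformer trained on hundreds of billions of tokens, and this corpus is invented for the example) shows the basic idea:

```python
from collections import Counter, defaultdict

# Toy corpus; GPT-3 trains a transformer on hundreds of billions of tokens,
# but the underlying task is the same: predict the next token from context.
corpus = "the model predicts the next word given the previous words".split()

# Count bigram occurrences: counts[w1][w2] = number of times w2 followed w1
counts = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    counts[w1][w2] += 1

def next_word_probs(word):
    """Conditional distribution P(next word | word) from bigram counts."""
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

print(next_word_probs("the"))
```

Scaling this predict-the-next-token objective from counting word pairs to a 175-billion-parameter neural network is what gives GPT-3 its fluency.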

In September, Microsoft announced that it had teamed up with OpenAI to exclusively license GPT-3. “Our mission at Microsoft is to empower every person and every organization on the planet to achieve more, so we want to make sure that this AI platform is available to everyone – researchers, entrepreneurs, hobbyists, businesses – to empower their ambitions to create something new and interesting,” Kevin Scott, executive vice president and chief technology officer for Microsoft, wrote in a blog post.

The ethics of AI and its potential biases also drew more attention this year, with the Black Lives Matter movement amplifying an issue the industry has discussed for the past few years. Anaconda’s 2020 State of Data Science report found that the social impact of bias in data and models was the top issue to address in AI and machine learning, with 27% of respondents citing it as their top concern.  

In April, Washington state passed facial recognition legislation mandating upfront testing, transparency, and accountability. The law permits government agencies to deploy facial recognition software only if they make an API available for testing of “accuracy and unfair performance differences across distinct subpopulations.” […]
