
2020’s Biggest Stories in AI

2020 provided a glimpse of just how much AI is beginning to penetrate everyday life. It seems likely that in the next few years we’ll regularly (and unknowingly) see AI-generated text in our social media feeds, advertisements, and news outlets. The implications of AI being used in the real world raise important questions about the ethical use of AI as well.

Copyright by www.insidebigdata.com

So as we look forward to 2021, it is worth taking a moment to look back at the biggest stories in AI over the past year.

GPT-3: AI-Generated Text

Perhaps the biggest splash of 2020 was made by OpenAI’s GPT-3 model. GPT-3 (Generative Pretrained Transformer 3) is an AI capable of understanding and generating text. The abilities of this model are impressive — early users have coaxed it to answer trivia questions, create fiction and poetry, and generate simple webpages from written instructions. Perhaps most impressively, human readers struggle to distinguish between short articles written by GPT-3 and those written by humans.

Although GPT-3 is not yet approaching the technological singularity, this model and others like it will prove incredibly useful in the coming years. Companies and individuals can request access to the model’s outputs through an API (currently in private beta testing). Microsoft now holds an exclusive license to GPT-3, and other groups are working to produce similar results. I expect we’ll soon see a proliferation of new capabilities built on AIs that understand language.
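Access during the beta works through ordinary HTTP calls. As a minimal sketch of what assembling such a completion request might look like — the endpoint path, engine name, and parameter names below are illustrative assumptions, not a verified spec:

```python
import json

# Hypothetical endpoint for a hosted language-model completion API
# (modeled loosely on OpenAI's GPT-3 beta; path and engine name assumed).
API_URL = "https://api.openai.com/v1/engines/davinci/completions"

def build_completion_request(prompt, max_tokens=64, temperature=0.7):
    """Assemble the JSON body for a text-completion call."""
    return json.dumps({
        "prompt": prompt,             # text the model should continue
        "max_tokens": max_tokens,     # cap on the length of the output
        "temperature": temperature,   # higher values = more varied output
    })

body = build_completion_request("Q: Who wrote Hamlet?\nA:")
print(body)
```

In practice this body would be POSTed to the API with an authorization token issued to approved beta users; the point is that the interface is a plain request/response exchange, not a model you run yourself.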

AlphaFold: Protein Folding

Outside of Natural Language Processing, 2020 also saw important progress in biotechnology. Starting early in the year, we saw the rapid and timely development of mRNA vaccines. Throughout the year, clinical trials proved these to be highly effective. As the year came to a close came another bombshell — DeepMind’s AlphaFold appears to be a giant step forward, this time in the area of protein folding.

This fall, the latest version of AlphaFold competed against other state-of-the-art methods in CASP (the Critical Assessment of protein Structure Prediction), a biennial protein-folding prediction contest. In this contest, algorithms were tasked with converting amino acid sequences into protein structures and were judged on the fraction of amino-acid positions the model predicts correctly within a certain margin. In the most challenging Free-Modeling category, AlphaFold was able to predict the structure of unseen proteins with a median score of 88.1. The next closest predictor in this year’s contest scored 32.4. This is an astonishing leap forward.
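To make that scoring idea concrete, here is a toy sketch of the kind of metric described above: the fraction of predicted residue positions that land within a fixed distance of the true structure. The actual CASP metric (GDT) averages over several distance cutoffs after optimal superposition; the single 4 Å cutoff and the coordinates below are simplifying assumptions for illustration.

```python
import math

def gdt_like_score(predicted, actual, cutoff=4.0):
    """Percentage of residue positions predicted within `cutoff` angstroms
    of their true location. A simplified stand-in for CASP's GDT metric,
    which averages this fraction over multiple cutoffs."""
    assert len(predicted) == len(actual)
    within = sum(
        1 for p, a in zip(predicted, actual)
        if math.dist(p, a) <= cutoff
    )
    return 100.0 * within / len(actual)

# Toy example: 4 residues, 3 of them predicted within the 4 angstrom cutoff
actual    = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 0)]
predicted = [(0, 0, 1), (1, 0, 2), (2, 0, 3), (3, 0, 9)]
print(gdt_like_score(predicted, actual))  # 75.0
```

On a scale like this, AlphaFold’s median of 88.1 versus the runner-up’s 32.4 means it placed the large majority of atoms where they belong, while competing methods got only a third of the way there.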

Going forward, scientists can use models like AlphaFold to accelerate their research on disease and genetics. Perhaps at the end of 2021, we’ll be celebrating the technology that work like this enabled.

Democratizing Deep Learning

As highlighted above, deep learning — the primary method underlying many state-of-the-art AIs — is proving useful in domains as disparate as biology and natural language. Efforts to make deep learning more accessible to domain experts and practitioners are accelerating the adoption of AI in many fields.

Anyone with an internet connection can now generate a realistic but completely fake photograph of a human face. Similar technology has already been used to create more realistic — and more difficult to detect — fake social media accounts in disinformation campaigns, including some leading up to the 2020 U.S. election. And OpenAI is planning to make the capabilities of GPT-3 available to vetted users through a comparatively easy-to-use API. There is genuine concern that as deep-learning-enabled technology becomes more accessible, it also becomes easier to weaponize.

But pairing AIs with human domain experts can also be leveraged for good. Domain experts can steer the AIs towards impactful, solvable problems and diagnose when the AIs are biased or have reached incorrect conclusions. The AIs provide the ability to rapidly process enormous volumes of data (sometimes with higher accuracy than humans), making analyses cheaper and faster, and unlocking insights that might otherwise be out of reach. User-friendly tools, APIs, and libraries facilitate the adoption of deep learning, especially in fields that can leverage already well-established techniques such as image classification.

Ethics

One of the interesting consequences of AI models and systems becoming more readily accessible has been the resulting shift of priorities in the field of AI Ethics. […]

read more: www.insidebigdata.com
