In the late nineteenth century, Georg Cantor, the founder of set theory, demonstrated that not all infinite sets are created equal. In particular, the set of integers is ‘smaller’ than the set of all real numbers, also known as the continuum. Cantor also suggested that there cannot be sets of intermediate size, that is, sets larger than the integers but smaller than the continuum.

According to the continuum hypothesis, no set of distinct objects has a size larger than that of the integers but smaller than that of the real numbers. This statement can be neither proved nor refuted using the standard axioms of mathematics.
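In symbols, writing \aleph_0 for the size of the integers and 2^{\aleph_0} for the size of the continuum, the hypothesis states:

    \neg \exists S : \ \aleph_0 < |S| < 2^{\aleph_0}

which is equivalent to the cardinal equation 2^{\aleph_0} = \aleph_1.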

What Is The Continuum Hypothesis

In the 1960s, US mathematician Paul Cohen showed that the continuum hypothesis cannot be proved either true or false starting from the standard axioms (the statements taken to be true) of the theory of sets, which is commonly taken as the foundation for all of mathematics.

In a paper explaining this undecidability surrounding learnability, Ben-David and his colleagues demonstrate that machine learning shares the same fate as mathematics. They describe simple scenarios in which learnability can be neither proved nor refuted using the standard axioms of mathematics. The proof rests on the fact that the continuum hypothesis itself can be neither proved nor refuted. The main idea is to prove an equivalence between learnability and compression.
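As a rough illustration of what compression means here (a toy sketch of my own, not the paper's construction), a sample labelled by a 1-D threshold function can be compressed to at most one example and then reconstructed without changing any label:

import math

def compress(labelled_sample):
    # Keep only the smallest positively labelled example (or nothing if there are no positives).
    positives = [x for x, y in labelled_sample if y == 1]
    return [min(positives)] if positives else []

def reconstruct(kept):
    # Rebuild a predictor from the kept examples: a threshold placed at the kept point.
    t = kept[0] if kept else math.inf    # nothing kept -> predict 0 everywhere
    return lambda x: 1 if x >= t else 0

sample = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1)]    # consistently labelled by some threshold
predictor = reconstruct(compress(sample))
assert all(predictor(x) == y for x, y in sample)     # reconstruction agrees on every example

The ability to shrink a sample to a few examples and still recover a consistent predictor is, roughly, what the paper ties to learnability.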

Can ML Models Learn Everything?

A good machine learning model makes predictions from a database of random examples. The basic goal is to perform as well, or nearly as well, as the best predictor in a family of functions, such as neural networks or decision trees. For a given model and function family, if this goal can be achieved under some reasonable constraints, the family is said to be learnable in the model.
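To make this concrete, here is a minimal sketch in Python, assuming a toy family of 1-D threshold predictors; the names threshold_family, sample and erm_learner are illustrative, not taken from the paper. The learner sees random labelled examples and returns the family member that makes the fewest mistakes on them:

import random

def threshold_family(thresholds):
    # One predictor per threshold t: predict 1 exactly when x >= t.
    return [(t, (lambda x, t=t: 1 if x >= t else 0)) for t in thresholds]

def sample(n, true_t=0.5):
    # Draw n random examples labelled by an unknown "true" threshold.
    xs = [random.random() for _ in range(n)]
    return [(x, 1 if x >= true_t else 0) for x in xs]

def erm_learner(family, data):
    # Empirical risk minimisation: return the member with the fewest mistakes on the data.
    def mistakes(f):
        return sum(1 for x, y in data if f(x) != y)
    return min(family, key=lambda item: mistakes(item[1]))

t_hat, f_hat = erm_learner(threshold_family([i / 20 for i in range(21)]), sample(200))
print("learned threshold:", t_hat)    # should land near the hidden value 0.5

If a strategy like this is guaranteed to come close to the best member of the family given enough examples, the family is learnable in that model.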

Machine-learning theorists are typically able to transform questions about the learnability of a particular function family into problems that involve analysing various notions of dimension that measure some aspect of the family’s complexity. For example, the appropriate notion for analysing PAC learning is known as the Vapnik–Chervonenkis (VC) dimension, and, in general, results relating learnability to complexity are sometimes referred to as Occam’s-razor theorems.
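The VC dimension itself can be illustrated with a small brute-force check, again using the toy threshold class from the sketch above (my own example, not the paper's): a point set is "shattered" if every possible labelling of it can be realised by some member of the family, and the VC dimension is the size of the largest shattered set.

from itertools import combinations

def threshold(t):
    return lambda x: 1 if x >= t else 0

CANDIDATE_THRESHOLDS = [i / 100 for i in range(101)]    # dense grid of thresholds on [0, 1]

def shattered(points):
    # Shattered means every possible 0/1 labelling of the points is realised by some threshold.
    labellings = {tuple(threshold(t)(x) for x in points) for t in CANDIDATE_THRESHOLDS}
    return len(labellings) == 2 ** len(points)

def vc_dimension(domain, max_d=4):
    # Largest d such that some d-point subset of the domain is shattered.
    best = 0
    for d in range(1, max_d + 1):
        if any(shattered(subset) for subset in combinations(domain, d)):
            best = d
    return best

print(vc_dimension([0.1, 0.3, 0.5, 0.7, 0.9]))    # thresholds shatter single points but no pair, so this prints 1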

Starting With An EMX

In this paper, the researchers introduce a learning model called estimating the maximum (EMX), and go on to discover a family of functions whose learnability in EMX is unprovable in standard mathematics.[…]
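As a rough sketch of the flavour of EMX, assuming a toy family of intervals [0, b) (the paper's hard instance instead involves finite subsets of the unit interval, which cannot be listed like this), the learner must pick, from samples of an unknown distribution alone, the family member that captures the most probability mass:

import random

def empirical_mass(member, samples):
    # Fraction of the sample that falls inside the given set.
    return sum(1 for x in samples if member(x)) / len(samples)

def emx_learner(family, samples):
    # Return the label of the family member that captures the most empirical probability mass.
    return max(family, key=lambda item: empirical_mass(item[1], samples))[0]

# Toy family of sets, each given as (label, membership test): the intervals [0, b).
family = [(b, (lambda x, b=b: 0 <= x < b)) for b in (0.2, 0.4, 0.6, 0.8, 1.0)]

samples = [random.random() for _ in range(500)]    # unknown distribution, here uniform on [0, 1)
print("largest empirical mass:", emx_learner(family, samples))    # the interval [0, 1.0) wins

Whether such a strategy can always succeed for the family studied in the paper turns out to depend on the continuum hypothesis, which is why its learnability is unprovable.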

