Computers have outsmarted us in number crunching for decades, but will they also outsmart us in creativity? Will they become the “better scientists”, or will there always remain a difference between “pure prediction” and “real understanding”?


Philosophy is often about “big concepts”: knowledge, understanding, autonomy, transparency, intelligence, and creativity. All of these concepts are at stake in current research in data science and artificial intelligence. It seems inescapable that we lose some of our own autonomy once our cars start driving autonomously and our houses become smarter and smarter. Computers have outsmarted us in number crunching for decades, but will they also outsmart us in creativity? Will they become the “better scientists”, or will there always remain a difference between “pure prediction” and “real understanding”? Is predictive success acceptable even if it comes with a loss of transparency? After all, transparency is something we worry about not only in science but in all kinds of political and societal contexts. At the same time, privacy and data protection are major themes in public discourse. Consider tracking apps, for instance: do we really want to become transparent citizens and consumers, X-rayed, as it were, by a machine learning algorithm that no one actually understands?

Coordinating concepts and experiences
These are only a few of the numerous well-known questions that people are currently concerned about in relation to AI and data science. What might be the role of philosophy in this context? According to A. N. Whitehead, the purpose of doing philosophy is “to coordinate the current expressions of human experience” (Whitehead 1933: 286). Of course, the use of concepts such as knowledge, transparency, creativity, and autonomy stretches across various contexts of experience, from everyday business to global politics and science. Philosophy is thus about taking these different contexts seriously and gaining a better understanding of “big concepts” by putting their different uses and connotations into a meaningful structure. Such a structure, however, cannot be fixed once and for all. This is what Whitehead’s phrase “current expressions of human experience” emphasizes. We do not experience things the same way (and maybe not even the same things) as people did a hundred or a thousand years ago. In order to distinguish more pervasive issues from what might be merely temporary particularities, it is therefore helpful to look at history and at how the current situation came about. What, for instance, is really new about contemporary data science as compared to the data-based (observational) research of the past? Are “understanding” or “creativity” apt terms for characterizing and distinguishing a system pre-trained on human games, like AlphaGo, from a fully self-trained system like AlphaZero? Or should the use of these terms be restricted to actions carried out solely by human beings, and if so, why?

Tackling these questions is likely to reveal shifts in the meanings and implications of those “big concepts”. It will also foster a critical and historical awareness which, in turn, reduces the danger of becoming obsessed, panicked, or paralyzed by contingent current developments. Indeed, such an awareness may even provide a kind of toolbox, increasing one’s ability to cope with present and future obstacles (cf. Sieroka et al. 2018, also on the relation to the notion of responsibility).

Some examples
Let us assume transparency is indeed of fundamental value. But so is saving human lives. What, then, about using an “intransparent” emergency care app on my smartphone? If my concern is whether I am having a heart attack and what to do next, I am not much worried about the transparency of the underlying biomedical theorizing. Moreover, there might not be that much “scientific transparency” in my physician’s judgment about whether I had a heart attack either. Why, then, should I mistrust “Doctor App” on my smartphone, whose diagnosis is based on data from, say, fifty million heart attacks, when my physician has seen maybe fifty cases altogether? However, one might argue that this should not stop us from aiming at transparency in the long run. Even if “Doctor App” is currently more successful in its predictions, its effectiveness may not be sustainable or resilient if there is no understanding of the underlying physiological processes. That is, there is a danger of “Doctor App” fitting too closely to a particular set of data (what machine learning calls overfitting), which would hinder its adaptability to new data and to a wider range of applications. […]
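To make that last worry concrete, here is a minimal, self-contained sketch (not from the original post; all data and numbers are invented for illustration) of how a model flexible enough to memorize its training data can look more successful on the cases it has seen, yet generalize worse to new ones:

```python
# A toy illustration of overfitting: fitting polynomials of different degree
# to a few noisy observations of a simple linear trend. The flexible model
# matches the training data almost perfectly but does worse on unseen data.
import numpy as np

rng = np.random.default_rng(0)

def truth(x):
    return 0.5 * x  # the (hypothetical) underlying regularity

x_train = rng.uniform(0, 10, 12)
y_train = truth(x_train) + rng.normal(0, 0.5, 12)
x_test = rng.uniform(0, 10, 100)
y_test = truth(x_test) + rng.normal(0, 0.5, 100)

for degree in (1, 11):  # a simple model vs. one flexible enough to memorize
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.1f}")
```

The degree-11 fit drives its error on the seen cases toward zero while its error on new cases explodes; the worry about “Doctor App” is the epistemic analogue of this behavior.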

Read more: blogs.ethz.ch