Critical questions on the importance of proprietary models versus market traction, the impact of bias in VC on AI, and the essential role of diverse representation in AI development. AI experts investigate the implications of AI on cybercrime, key future industries, required changes in education, AI regulation, health tech, and the environmental footprint of AI. All in our follow-up Q&A of the “Generative AI: A New Frontier for VC Investments” virtual conference.
“Generative AI: A New Frontier for VC Investments” Q&A with Heinrich Zetlmayer, Assaf Araki and Bo Percival
As the world continues to witness the transformative potential of Generative AI, and venture capitalists progressively invest in this emerging technology, your questions and concerns have never been more important.
In this follow-up article, we delve deeper into the thought-provoking questions gathered from our global audience during the recent “Generative AI: A New Frontier for VC Investments” conference. We’re thrilled to present you with a curated selection of responses from our distinguished panellists: Heinrich Zetlmayer, Assaf Araki, and Bo Percival.
Let’s keep the conversation going. Read on to explore these comprehensive responses from our experts, and get a front-row seat to the evolving narrative of Generative AI in the VC world.
For the conference details, agenda, speaker line-up, and handouts CLICK HERE.
For the conference recording CLICK HERE.
Q: Zeev Abrams: Although Heinrich mentioned that having proprietary models (and IP) is important, considering the rate of change in the AI landscape (faster than any other field I’ve ever been in!), how important is that “barrier” for a startup, compared with getting market traction and a strong potential market?
[Heinrich Zetlmayer]: If you go too “thin” as a startup in the market, you might get quick initial traction, but the challenge will be holding on to it beyond the first 12 months. Investors will also challenge it. So you need a good strategy for expanding from those initial gains.
[Assaf Araki]: For the last 20 years, data scientists have focused on open-source models; after the breakthrough in deep learning a decade ago, the reliance on open source only increased, and today all the main DL libraries are open source. The AI community is collaborative, and open source is a core value (data scientists are from Mars and developers are from Venus: both CS, but different cultures). Even if a company has a breakthrough in a proprietary model, it will only last a short time before another innovation overtakes it. The core IP in AI should be at the product level: bringing together an ensemble of models to create a stable product that optimizes predictions for the business KPI (rather than for the highest model accuracy). Startups should focus on combining innovation across different algorithms and integrating it smoothly into their application. The company’s IP is making the solution robust, so that it doesn’t hallucinate or become unstable in production, while delivering real business value.
Q: Eleanor Wright: Will bias in venture capital lead to bias in AI?
[Heinrich Zetlmayer]: There are more than 10,000 venture capital firms. The question of bias in AI is an important one, but I think it goes beyond VCs.
[Bo Percival]: I think this is a great question and one that anyone in VC spaces needs to consider carefully. We already see this happening, as a larger proportion of AI-based funding has automatically been funnelled into typical tech ecosystems (e.g. the Bay Area and Silicon Valley). At the Venture Fund we advocate for and act on making atypical investments for exactly this reason. It has been said that AI technology is an extension of the values of those who create it. To this point, we need to ensure there is diverse representation of the values we don’t traditionally see represented in frontier tech spaces. This may include emerging markets, diverse co-founders, and investments in early-stage companies in atypical domains.
Q: Antonio Sainz: The real creators of AI/ML are a few players; for VCs, the way forward is more about finding use-case generators that use all the tools that already exist. Is this your point of view?
[Heinrich Zetlmayer]: Yes and no. I know of very deep AI/ML startups that are small or mid-sized, excel in their niche, and are attractive targets for VCs as well. But AI is a general-purpose technology, so theoretically you can use it anywhere, and therefore you need to select an area of focus as a VC or investor. For us, analyzing AI applications in industry verticals has allowed us to structure the complexity and surface attractive investment possibilities.
[Assaf Araki]: I agree that a small number of players create the AI algorithms, but the creation of AI products is endless. AI is another way of writing software, but the algorithm is not the entire product. To build a financial-services product, you must know how to write software and have business acumen; that does not change with AI, and you still need business acumen. Context is essential to building a good product: building models without context can take you a long way, but adding context is the last mile, and without it the product is incomplete.
[Bo Percival]: Particularly in the work of UNICEF, we believe strongly in AI work that addresses real problems facing less represented populations. We are passionate about problems, and where there are tools created by the few, we want to ensure they are both accessible to and inclusive of the many (or, in many cases, the minority). The challenge we face is that those ‘few’ often represent a specific subset of the population who may not have or share the experiences of the many. For this reason, we believe VCs need to reflect carefully on how existing tools may include or exclude underrepresented populations and further amplify the very problems we are aiming to address.
Q: Danny Mwala: How will AI assist in the proliferation of cybercrime and criminals, some of whom are state-supported actors, bearing in mind that there is a high deficit of cybersecurity professionals around the globe?
[Heinrich Zetlmayer]: All technologies are quickly used by criminals as well, but we have seen in the crypto and cybersecurity industries that startups fighting cybercrime emerge quickly and add more and more automation, which in turn somewhat alleviates the lack of skilled staff. AI and ML will certainly contribute to the automation of fraud detection. Law enforcement agencies are also a typical sponsor or first client of startups in the tech-crime detection and prevention space.
[Bo Percival]: I am not an expert in cybercrime, and there are people better positioned to answer this question. That being said, my experience tells me that not only do we have a capacity gap globally, but we also have particularly concerning capacity gaps in specific geographies and domains. The risk here is that if this capacity is not filled in an intentionally inclusive and equitable way, then vulnerable populations stand to be exploited at significantly higher rates than other groups.
Q: Kuzey Çalışkan: What will be the key industries in the future?
[Heinrich Zetlmayer]: I think we live in the “digital decade”, which makes IT, combined with ever greater progress in the medical/health sector, probably the most important sectors.
[Bo Percival]: Great question, I wish I had an answer to this one. To be honest, due to the rapid pace of change, I don’t think we truly know all the industries today that will be key for tomorrow. My only hope is that whatever they are, they are inclusive, equitable, and accessible for people everywhere, and not just restricted to the Western, Educated, Industrialised, Rich and Democratic countries at the cost of others.
Q: Arvind Punj: Can anyone comment on the change needed in the education system because of LLMs impacting learning?
[Heinrich Zetlmayer]: I cannot give a complete answer here, but curricula need to include more and more information technology, because everybody needs to be skilled at working with it responsibly.
[Assaf Araki]: AI should be mandatory for undergraduates in CS and engineering; we see some early adopters, but in general this is still a grad-school topic.
[Bo Percival]: Education is a key component of UNICEF’s work, and discussions are already taking place on how we can leverage LLMs and similar technology for greater positive potential in education. Because the broader conversation on this topic is so nascent, I think it’s too early to say in which direction this change is taking us. What we hope at UNICEF is that we are able to harness these and future technologies to increase access to, and the effectiveness of, education, while not losing important human factors that should not be lost. If these technologies increase critical thinking, creativity, and other similar skills, that is fantastic. However, we also envision a world in which every child has access to education, and in which the growth and development of children is not viewed through a lens where technology is seen as a panacea or a replacement for education methodologies, nor where learning is reduced solely to a road toward computational thinking.
Q: Margaret Glover-Campbell: When it comes to regulation, how can we ensure multiple points of view are taken into consideration? How do we safeguard against views that are too restrictive or too liberal?
[Heinrich Zetlmayer]: Regulations are made by lawmakers in each country or region. It is important that the various viewpoints are voiced publicly, with enough public discussion that lawmakers take notice.
[Bo Percival]: Regulation is a critical part of moving the ethical and responsible adoption of AI forward around the globe. Therefore, it will be important for representation to be both diverse and equitable, and for regulators to ensure that opportunities are provided for different voices to shape the development of regulation, regardless of colour, culture, or creed. Aspiring to a single vision for all will likely lead to an outcome that benefits the majority and further reinforces existing inequities in society. What is defined as ‘too’ far one way or another often depends on where on that spectrum an observer places themselves. Ensuring representation from all ends of the spectrum, including those on the furthest margins, may not safeguard against going ‘too’ far in one direction or the other, but at the very least it should ensure that no one is excluded in ways that have been all ‘too’ common in the past.
This being said, as the Secretary-General mentioned just last week, the UN should play a critical role in facilitating these kinds of discussions, ensuring neutrality in how these conversations are held and pushing toward a freer, more open, inclusive, and secure digital future for us all.
Q: Nanjun Li: Do you think the world will become more divided as generative AI deepens the division of labour?
[Heinrich Zetlmayer]: AI and automation will definitely bring changes to the labour market, as they are large productivity enhancers. On the positive side, they will alleviate some labour-market shortages, but much more work has to be done to understand the societal impacts.
[Bo Percival]: I think it’s hard to predict one way or the other, and as we know from previous experience, what divides us rarely comes down to a specific issue alone, let alone a specific technology. While technology does have both the capacity and the potential to amplify division, it also has the potential to unite.
Q: mi nova: What is your view of TAM for Conversational Voice AI, as a front end to Web3 solutions?
[Heinrich Zetlmayer]: I don’t have a number for that but expect it to be the main interface in the mobile space.
Q: Boris Bend: Thinking beyond the atypical: how do you see the world changing once true AGI is achieved, and when do you personally expect that this may happen? (Quite a few experts expect this could happen much faster than most people believe, due to the current exponential progress of AI research.)
[Bo Percival]: As someone with a background in cognitive psychology, I think the idea of ‘AGI’ is still somewhat contested. While there have been some notable reports of us being close, I think there are still more questions than answers on this topic. It would be remiss of me to take a position or make a prediction on when I personally think this could happen, as I believe there are still many questions about what ‘true’ would mean in this case. To claim ‘true’ AGI, we would first have to accept an assumption based on a traditional and somewhat outdated definition of ‘intelligence’. Even the tests we use these days are contentious in terms of what they measure. We would also need to be able to say with certainty whether or not the imitation of human intelligence constitutes ‘true’ intelligence. While this is an interesting thought experiment, I think that to understand AGI we would first need to truly understand the ‘I’, and I believe that understanding is moving much more slowly than the current progress of technology.
Q: Any thoughts on health tech like Apple Vision for patient treatments? There is definitely demand for this from the sector, but until now there has not been much change. (Nurses have a heavy workload in this area, with the exception of robotic surgery and MRI scans.)
[Heinrich Zetlmayer]: Health tech is a very large area for AI, and there are many startups; AR/VR and similar technologies are additional important innovations. Before anything reaches the patient, hurdles have to be overcome in each national health system, each of which has its own setup between doctors/health providers, health insurers, and regulators. That slows things down.
[Bo Percival]: Health is another critically important topic area for UNICEF. In fact, this year the Venture Fund will be releasing a public call for applications related to health, both mental and physical, for which I am sure the AI applications will be considerable. The challenge with health, however, is the difference between technology that is engaging and technology that is efficacious. As we all know, technology can be designed to engage users and keep them connected; what the technology sector struggles with is the time it takes to evaluate a health intervention and understand its efficacy in the medical sense. I believe this creates an area of high risk in the health sector. In a 2019 survey of the roughly 60,000 health-related apps in the Apple and Google Play stores, only around 3.5% had any empirical evidence to support their efficacy.
If we want to leverage AI effectively, we should identify the problems in healthcare most suited to the value AI brings and apply it there. Unfortunately, however, we often start with the solution and try to find a problem. It is most concerning to me that this conversation and the breakthroughs in this area are led by tech companies and not necessarily by academic or health institutions. The risk we run is that we develop solutions for shareholders, and that the most needed solutions may not be the most profitable ones.
Q: Eleanor Wright: How protectable is AI IP?
[Heinrich Zetlmayer]: I am not a lawyer, so others can answer that better from the legal side. The legal protection of IP is often overestimated; more often the question is: do I have unique data sets, unique employees, unique know-how, unique models, and unique market access that in combination give me a sustainable competitive advantage? Many base AI components are, and will be, available as open source or from providers.
[Assaf Araki]: See my reply to Zeev Abrams above. Protecting a proprietary model is largely irrelevant because of the pace of innovation and the enormous effort invested in global research.
[Bo Percival]: I’m not sure I’m the best one to comment on this. However, a core part of the UNICEF Venture Fund’s thesis is that we invest intentionally in open-source solutions, including open data, open models, and open products. For us, it is paramount that contributions to organisations like ours result in investments that deliver equitable returns. It is for this reason that we would actually like less AI to be protected, so that it is transparent and can be leveraged to make the world better for children.
Q: Antonija Hinckel Osojnik: Do you consider the environmental footprint of AI? This is now part of regulatory negotiations.
[Heinrich Zetlmayer]: We expect a sharp decline in training and running costs for AI, and there will be much technological development, so it is difficult for us to judge the impact. Currently I think the best lever is at the level of data centers and compute farms, making sure they are optimized.
[Bo Percival]: Yes, indeed, we can’t be working for future generations unless we are working for a better climate. In addition to the climate-conscious practices that we have embedded into the organisation, we also invest in reviews and evaluations of our Fund and our portfolios to ensure that we are not only climate aware but are also taking climate action.
Q: Marufa Bhuiyan: Based on data and investment, which country is the AI capital of the world?
[Heinrich Zetlmayer]: Silicon Valley, due to its combination of large tech firms and universities, certainly is a center, but the landscape is rapidly evolving and we will have startups all over the global map. Geography will not be a good criterion for finding great companies to invest in.
[Assaf Araki]: There are different ways to measure it: one is publications at the main AI conferences such as NeurIPS, the second is papers on arXiv, and the third is the location of the leading AI companies. Ultimately, it is like asking what the capital of CS is around the world. There are many competence centers in North America, Europe, China, and Israel.
[Bo Percival]: Unfortunately, I don’t have the data to support an informed response to this. What I would say is that we should strive to ensure that no specific country or city attains ‘dominance’ over the field of AI. If we have learnt anything from history, it is that access to these technologies is key for more equitable development. As a result, we advocate strongly that these technologies, and the capacity to develop them, should be accessible to people not just based on where they live but based on what enables better livelihoods and development. We would hate to repeat the errors of times past, and we want to strive for a digitally decolonised future.
About the Authors:
Heinrich Zetlmayer is the founder, CEO, and managing partner of BVV. His journey with BVV started at the launch of Lykke Corporation, the global trading platform based on blockchain technology. Heinrich saw a need for investment in blockchain activity in the market, and with Lykke and his partners he set out to create BVV. Heinrich brings unique experience as a former Vice President at IBM and Co-CEO of ESL, and is an active board member at Lykke and Skaylink.
Assaf Araki is an Investment Director at Intel Capital, based in Israel; he joined Intel Capital in 2018. In his role, Assaf focuses on investing in data, analytics, and machine learning platforms and applications worldwide. He has been involved in several investments, including Anyscale, Opaque, OtterTune, Ponder, and Verta. Before Intel Capital, Assaf was an engineering lead on the Intel AI team, leading multiple machine learning projects to reduce cost, increase revenue, accelerate processes, and improve products.
Bo Percival is a ‘geek for good’ working at the intersection of technology, economics, innovation, and social justice, using his diverse qualifications in psychology, design, economics, marketing, and interpreting to promote positive development. Currently, he is serving as a Senior Advisor for UNICEF’s Office of Innovation Ventures team, applying his extensive experience in open innovation across various fields in over 25 countries worldwide.