In SwissCognitive’s recent online conference on the creative potential of AI, experts from across the globe shared insights into the nuances of AI’s transformative capabilities. Audience members from every corner of the world posed intriguing questions, from the ethics of AI’s influence on the public to the technicalities of its advancement. While the speakers couldn’t address every question live, we’ve gathered their in-depth responses here. Read on to explore the intersections of AI, ethics, generative potential, trust, and more.

 

“Beyond Efficiency: AI’s Creative Potential” Q&A with Valeria Sadovykh, Monique Morrow, Camila Manera, George Boretos, Jeff Winter, and Arek Skuza.


 

Q: [VoY TeC]: What is the role of the general public, and what power do they have to influence businesses toward AI for Good? And at the same time, how can the public tolerate totally unethical business activities, such as those of the tobacco industry?

A: [Valeria Sadovykh]: The role of the general public cannot be overstated; each individual’s actions, including those involving AI for Good initiatives, have a significant magnifying effect. The general public’s role in shaping AI for Good businesses is multifaceted, and they can impact these enterprises in several ways:

Demand & Supply: The public’s consumption choices exert a significant influence on businesses, with ethical considerations playing a pivotal role in shaping purchasing decisions. Public demand for ethical AI solutions, products, and investments not only stimulates supply but also encourages the production of AI solutions with a strong commitment to social responsibility.

Education: Both individual and public education initiatives serve to heighten awareness and guide ethical consumer choices, fostering a culture of responsible AI consumption.

Advocacy and Awareness: Public awareness and advocacy exert substantial pressure on businesses, compelling them to prioritize transparency and ethical conduct.



Policy and Regulation: Public concerns and demands serve as catalysts for governments to establish AI regulations that emphasize ethical practices, as exemplified by the Hollywood AI strike.

However, tolerance for unethical practices varies due to factors like awareness, addiction, economic interests, and industry lobbying. Achieving ethics in all industries demands public awareness, policy reforms, and responsible consumer choices.

The “public” is made up of many individuals, each making their own choices. This means that most people either tolerate the tobacco industry or find it hard to regulate because of its significant economic impact. The tobacco industry promotes personal choice and individual ethical standards. Whether it’s tobacco, alcohol, petroleum, cosmetics, or textile companies, each manages ethics differently. Ultimately, it’s up to us to make responsible choices.

Q: [Vince Serignese]: Can Generative AI play a major role in managing the use of performance-enhancing drugs in sports and amateur athletics?

A: [Camila Manera]: Generative AI can play a major role in managing the use of performance-enhancing drugs (PEDs) in sports and amateur athletics. Here are some of the ways that generative AI can be used:

Identifying new PEDs: Generative AI can be used to identify new PEDs by analyzing large datasets of biological data. This can help anti-doping organizations to stay ahead of the curve and detect new PEDs before they are widely used.

Detecting PED use: Generative AI can be used to develop new methods for detecting PED use. For example, AI can be used to analyze blood and urine samples to identify biomarkers associated with PED use, as sketched after this list.

Preventing PED use: Generative AI can be used to develop educational programs and interventions to prevent athletes from using PEDs. For example, AI can be used to create personalized educational content for athletes that is tailored to their individual risk factors for PED use.
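For illustration only, here is a minimal sketch of the kind of anomaly detection that could underpin the detection point above. The biomarker features, values, and use of an Isolation Forest are assumptions made for the example; they do not describe any real anti-doping system.

```python
# Hypothetical sketch: flagging unusual biomarker profiles with an Isolation Forest.
# Feature names and data are invented for illustration; real anti-doping pipelines
# are far more involved (longitudinal athlete passports, lab validation, etc.).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "clean" samples: [hemoglobin (g/dL), reticulocyte %, T/E ratio]
clean = rng.normal(loc=[14.5, 1.0, 1.0], scale=[1.0, 0.3, 0.3], size=(500, 3))

# A few synthetic outliers mimicking abnormal profiles
suspect = np.array([[19.5, 0.2, 4.0], [18.8, 0.3, 3.5]])

model = IsolationForest(contamination=0.01, random_state=0).fit(clean)

# -1 marks samples the model considers anomalous; these would be escalated for
# confirmatory lab testing, not treated as proof of doping.
print(model.predict(suspect))
```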

Q: [AI Enthusiast Epping]: If these technologies are 1,000x more powerful 18 months from now, how do you prepare yourself? After that, they will be a billion times more powerful, and so on. So what now? Are we pushing ahead, or stopping (if that is even possible)? And what about the data? It is NOT only about the algorithms…

A: [Jeff Winter]: Navigating the likely exponential growth of advanced technologies like AI requires us to look beyond mere technological capabilities. As we stare at an unknown future, embracing a culture of continuous learning becomes paramount. Yet, it’s not just about data, algorithms, and technological advancements; it’s about the narratives we craft with them. We must intertwine ethics with innovation, understanding that with great power comes profound responsibility. Instead of focusing on the speed of our ascent, we should be focusing on the direction we’re heading.

Q: [AI Enthusiast Epping]: Generative AI is nothing more than quickly gathering information that already exists. If we agree on that, what is new?

A: [George Boretos]: This is indeed usually the case, and we need to be clear about our expectations. Generative AI is a quick way to receive a prompt, make sense of it (not literally understand it, of course), and retrieve and present relevant information. In this sense, Gen AI does not create anything new but offers a quick and easy way to identify information relevant to what you are saying or asking.

However, there are a couple of exceptions. For instance, you get something new when you ask a tool like Midjourney to create a new image based on your prompt. Perhaps it’s not a work of art or completely innovative, but this is something new to some extent.

Also, the response we receive from the Generative AI tool may indeed be new to us and possibly to many people, although it is not new to the world. For instance, when such a tool checks your email and suggests a rewrite, this may be new to you (and possibly extremely helpful), although the tool may have based its response on several similar responses from other people.

So, yes, Generative AI, in most cases, does not generate something genuinely new (as opposed, for instance, to Predictive AI, which produces forecasts, i.e., information that we cannot possibly possess today), but on the other hand:

  • It’s great at understanding prompts (much better than previous-generation chatbots).
  • Because of that, it’s more user-friendly than ever, so everyone can use it; this has helped AI penetrate the mainstream market (as opposed to the smaller early-stage market of an “elite” of tech enthusiasts and visionaries).
  • It does offer new information to each user, albeit much of it is based on existing knowledge that the user is not aware of.
  • In some cases, it does generate new content, as in the case of image creation.

There is a fine line between what Generative AI can and cannot do, and I think one of the significant challenges in the coming months and years will be understanding these limits and acting accordingly.

Q: [István Alföldi]: Can you please provide some more information about “ChatGPT and Healthcare”?

A: [Arek Skuza]: Data Analysis for Medical Research: Advanced algorithms can sift through large volumes of research data to find meaningful patterns or correlations. This is particularly useful in drug discovery, genetic research, tailored therapies and targeted treatments, and epidemiological studies.

Medical Imaging: Data-intensive tasks such as analyzing MRI scans, X-rays, and other medical images can benefit from machine learning algorithms designed to detect abnormalities or specific conditions.

Predictive Analytics for Patient Outcomes: Machine learning models can be trained to predict patient outcomes based on various data points, including medical history, recent tests, and more. This can help doctors make more informed decisions or suggest alternative ways of approaching the same challenge (e.g., physiotherapy).

Telemedicine: Text-based AI models could offer supplementary support to healthcare providers in telemedicine applications, handling routine queries and gathering basic patient information for review by medical professionals. Algorithms can also summarize conversations or analyze sentiment to flag at-risk patients, including those showing signs of suicidal ideation.

Health Monitoring: Machine learning algorithms can analyze data from wearable devices to flag anomalies in real time, potentially serving as an early warning system for medical issues, as sketched below.
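As a toy illustration of the health-monitoring use case, the sketch below flags heart-rate readings that deviate sharply from a rolling baseline. The window size, threshold, and simulated data are assumptions chosen for brevity; a real system would rely on clinically validated rules and clinician review.

```python
# Hypothetical sketch: flagging anomalous heart-rate readings from a wearable
# using a rolling z-score. Thresholds and window size are illustrative only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
hr = pd.Series(rng.normal(72, 4, size=600))  # simulated resting heart rate (bpm)
hr.iloc[300:305] = 150                       # inject a short abnormal episode

window = 60
baseline = hr.rolling(window, min_periods=window).mean()
spread = hr.rolling(window, min_periods=window).std()
z = (hr - baseline) / spread

# Readings far above the rolling baseline get flagged for review;
# they are an early-warning signal, not a diagnosis.
alerts = hr[z > 4]
print(alerts.head())
```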

Q: [Whitney Rix Victory]: You talk about building trust in AI and Generative AI. Do you see a multi-layer trust management structure and review of the data harvest as effective in creating accuracy and integrity, and thereby trust? What are your thoughts on the intervals for review and delivery of revisions of the data harvest and the “clean-up” process?

A: [Arek Skuza]: Trust is a critical factor when it comes to the adoption and effective utilization of AI and Generative AI technologies, especially in fields that require high levels of accuracy and integrity, such as healthcare, finance, and governance.

Multi-Layer Trust Management Structure
A multi-layer trust management structure could indeed be an effective way to ensure the accuracy and integrity of the data and the resulting AI models. This structure could comprise various checks and balances, including algorithmic audits, human oversight, and real-time monitoring systems.

Role of Data Governance
Data governance plays a pivotal role in this structure. Profiling data at the early stages of an AI project can provide valuable insights into the quality and suitability of the data for the specific tasks at hand. Having data quality and bias as regular topics on management board agendas demonstrates a proactive approach to ensuring that the algorithms are trained on reliable and representative data sets. This allows for the prioritization of data issues and ensures that they are addressed in a timely manner.
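To make this concrete, below is a minimal sketch of what early-stage data profiling might look like. The column names and checks are illustrative assumptions, not a prescribed governance toolkit; real programmes would add lineage, access controls, and bias audits.

```python
# Hypothetical sketch: a minimal data-profiling report to surface quality issues
# before model training. Column names and checks are illustrative assumptions.
import pandas as pd

def profile(df: pd.DataFrame, label_col: str) -> dict:
    """Return a small dictionary of data-quality indicators."""
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_per_column": df.isna().sum().to_dict(),
        # Class balance hints at sampling bias in the labels used for training.
        "label_balance": df[label_col].value_counts(normalize=True).round(3).to_dict(),
    }

if __name__ == "__main__":
    df = pd.DataFrame({
        "age": [34, 51, None, 34, 29],
        "outcome": ["ok", "risk", "ok", "ok", "ok"],
    })
    print(profile(df, label_col="outcome"))
```

A report like this, produced at regular intervals, is one simple way to keep data quality and bias visible on management agendas.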

Review Intervals & Revisions
The intervals for review and delivery of revisions would depend on the specific application, the velocity of data changes, and the risk profile of inaccuracies. In healthcare, for instance, frequent reviews may be necessary given the high stakes involved. In other less-sensitive applications, the intervals may be longer.

The ‘Never 100% Clean Data’ Reality
It’s true that data is never 100% clean or free from flaws. Whether it’s misspellings, duplicate records, or biased sampling, imperfections in the data can affect the performance and trustworthiness of an AI system. Acknowledging these flaws is the first step toward addressing them. This could involve developing partnerships to improve data quality or implementing advanced algorithms to clean the data more effectively.
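As a small, concrete example of such a clean-up step, the sketch below collapses near-duplicate records whose names differ only by misspellings. The similarity threshold and use of Python’s difflib are assumptions chosen for brevity.

```python
# Hypothetical sketch: collapsing near-duplicate records caused by misspellings,
# using simple string similarity. Threshold and matching logic are illustrative.
from difflib import SequenceMatcher

records = ["Acme Corporation", "ACME Corp.", "Acme Corporatoin", "Globex Inc."]

def similar(a: str, b: str, threshold: float = 0.85) -> bool:
    """Treat two strings as the same entity if they are sufficiently similar."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

canonical: list[str] = []
for rec in records:
    if not any(similar(rec, kept) for kept in canonical):
        canonical.append(rec)

print(canonical)
# -> ['Acme Corporation', 'ACME Corp.', 'Globex Inc.']
# The misspelled "Acme Corporatoin" is collapsed, but the abbreviated "ACME Corp."
# slips through, illustrating why data is never 100% clean after a single pass.
```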

Business Understanding & Partnerships
Business stakeholders should be educated about the limitations and uncertainties of the AI system. Understanding these flaws allows for more informed decisions and may lead to partnerships aimed at improving data quality. Whether it’s collaborating with external data providers or leveraging expertise from academia, partnerships can offer valuable avenues for improving data quality and, by extension, the reliability of AI systems.

Q: [VoY TeC]: What is the role of the general public, and what power do they have to influence businesses toward AI for Good? And at the same time, how can the public tolerate totally unethical business activities, such as those of the tobacco industry?

A: [Arek Skuza]: The general public plays a crucial role in steering businesses towards ethical AI practices, including initiatives under “AI for Good.” Public opinion can push for transparency, fairness, and accountability, aligning with OECD principles that call for AI systems to be challengeable. However, there’s a paradox where the public sometimes tolerates unethical industries, like tobacco. This inconsistency could be due to various factors like historical norms or lack of awareness. Closing the feedback loop is vital for ethical AI. Currently, most AI systems don’t allow users to correct or challenge outputs, which needs to change to improve trust and accuracy. The public, as end-users, can advocate for this change, thereby influencing businesses to incorporate more ethical and transparent AI practices.

Q: [Arvind Punj]: Could someone share some thoughts on the development of ChatGPT and trust?

A: [Monique Morrow]: Ethics and privacy will be key in this narrative as businesses adopt and implement generative AI. The implication, and the opportunity, is to create a data governance framework that is transparent to all. This is a balancing act with companies’ revenue and profitability, and in most cases it will require that companies indicate from the outset their use of Generative AI and for what purpose. The Future of Work in this era will be highly dynamic, with implications for new skills. It does not have to be a dystopian world, but we must all be aware.

 


 

About the Contributors


George Boretos, founder of @FutureUP, is an AI pioneer with an impressive trajectory in Price Optimization. Having successfully raised $9m in funding and collaborated with global Fortune 500 clients, his leadership in the AI domain is unquestionable. Recognized as a Top 100 Thought Leader in AI by Thinkers360, George’s contributions span the creation of AI applications, predictive models, and noteworthy publications in eminent international outlets such as Elsevier journals and Foresight. His latest venture, FutureUP, embodies his 25+ years of expertise, merging Pricing and AI to empower enterprises in attaining their sales and profitability aspirations.


Monique Morrow, with 25+ years in global technology, is renowned for driving innovation in emerging tech. Holding various advisory roles, she’s a Venture Partner at Sparklabs Accelerator and Director at Hedera Hashgraph. An expert in cybersecurity and blockchain, she’s worked with industry giants like Cisco and chairs significant IEEE groups. Monique’s accolades include being named among Forbes’ top 50 women in tech. She holds an MSc in Digital Currency and Blockchain and is currently a Doctoral Student in Cyberpsychology.


A recognized leader in emerging technology, Dr. Valeria Sadovykh’s endeavors span strategic IT developments, digital transformation, and mass customization across various industries globally. Her passion for socially responsive AI and decision intelligence is evident in her transformative projects enhancing operational efficiencies, sustainability, and customer experiences. Having lived and worked on multiple continents, Valeria brings a global perspective that, combined with her technical expertise and leadership, has earned her numerous accolades, including recognition as an ‘Individual with Extraordinary Ability’ by the US Government.


Based in Dallas-Fort Worth, Arek Skuza is an adept professional specializing in data monetization, AI, productivity, and data analytics. As an AI consultant, Arek guides global giants (e.g., Shell Energy and L’Oreal) in AI integration, data monetization, and process enhancement. Arek strategizes AI adoption, offering insights into data-driven decisions, automation, and efficiency for brands like IKEA, P&G, Bayer, Modoma, and Arcade. Arek educates through workshops, courses, and training, simplifying AI for learners of diverse levels and preparing them for an AI-empowered world. With 120+ talks delivered, Arek inspires at conferences, blending motivational skills with AI expertise. Arek shares his insights at www.arekskuza.com.


With over 18 years of experience working for different industrial automation product and solution providers, Jeff has a unique ability to simplify and communicate complex concepts to a wide range of audiences, educating and inspiring people from the shop floor up to the executive boardroom. As part of his experience, Jeff is also very active in the Industry 4.0 community. He is currently part of the International Board of Directors for MESA (Manufacturing Enterprise Solutions Association), the leader of the Smart Manufacturing & IIoT Division of ISA (International Society of Automation), a U.S. registered expert for IEC (International Electrotechnical Commission) as a member of TC 65, and a member of the Smart Manufacturing Advisory Board for Purdue University.


Camila Manera has over 10 years of experience in decision-making and strategy for large organizations worldwide, mixing diverse disciplines such as design, creativity, user experience, data science, and AI. Her unique approach to intertwining creativity with technology is what makes her a pioneer in the field of AI.