In the evolving landscape of AI technology, the discourse on regulation has gained momentum. This article explores the delicate equilibrium between fostering innovation and implementing regulatory measures. It underscores the need for cautious progress and empirical understanding amidst the backdrop of sensationalized narratives.
SwissCognitive GuestBlogger: Dashel Myers – “AI Regulation: A Threat to Innovation and Progress”
In recent years, the idea of an AI-initiated doomsday, a robot apocalypse of some sort, has snowballed in popularity. Many intellectual groups have agreed upon a “safe” approach to AI development, suggesting that companies pause advancement for a six-month period. In the spirit of discussion, I add another contribution to the dialogue, making a holistic argument against the establishment of regulatory bodies.
Firstly, the concept of “safe” AI development deserves nuance. While there have been conversations around adopting a more conservative approach to innovation, there remains a lack of consensus on what defines this “safe” methodology. For all intents and purposes, we will assume that AI “safety can be broadly defined as the endeavor to ensure that AI is deployed in ways that do not harm humanity.” Perhaps the notion that AI innovation should prioritize safety is valid, but it raises the question of whether innovation has ever been devoid of risk. A poignant example that highlights this risk-benefit dichotomy is the automobile. Despite causing 1.3 million deaths per year, cars remain widely purchased and manufactured. If we stipulate that these lives would have been saved had those people walked instead, it presents a compelling case against cars. One might counter that adhering to agreed-upon speed limits could significantly reduce the number of deaths; in such cases, the responsibility for the crash lies with the driver rather than the vehicle. The AI sector faces similar challenges. Increasingly, bad actors have used AI to create deepfakes employed in scam calls, smear campaigns, and other schemes. While this is only one of the hundreds of capabilities of new AI software, the fault for deepfakes and various other forms of malicious activity lies squarely with the user.
Foreseeably, the government should not be so prescriptive, enabling the private sector to develop better systems. When considering potential regulation, it is challenging to determine a reasonable scope. One mandate might involve constraining model training to a designated parameter count, say, not surpassing the reported 1.7 trillion parameters of GPT-4. Such a rule would assume a direct relationship between the number of parameters and the power of the system. Where this becomes problematic is that parameter count is not the only factor influencing capability, and as the field innovates, companies will optimize for more efficient methods. Consequently, the regulation that seemed salient one week may be archaic the next, requiring constant and onerous revision. If regulations are indeed subject to ongoing revision, what criteria should define the point at which well-intentioned involvement starts to stifle progress? If governments do not embrace a hands-off approach, it is critical that they establish a dynamic regulatory framework that keeps pace with advancements, an impossible task at such an early stage of an industry.
Moreover, enacting premature regulations risks locking in the current AI leaders, limiting potential competition and establishing a moat of market dominance. As James Broughel reminds us, “regulation should be based on evidence of harm, rather than on the mere possibility of harm. Beyond speculation about how robots will take over the world or computers will turn the earth into a giant paperclip, we do not have much hard evidence that unaligned AI poses a significant risk to society.” This idea has become ever more important as billionaire technocrats push a superstitious notion of a rogue AI. While some suggestions, namely from OpenAI CEO Sam Altman, seem to be made in good faith, many suspect they are offered as a ploy to reduce competition from startups and the open-source community. In an open letter addressing AI alignment, the Future of Life Institute proposed a collaborative effort between AI labs and independent experts. Their suggestion involved establishing a comprehensive set of safety protocols, meticulously examined and supervised by independent specialists. While the notion holds merit, it regrettably received less attention than the letter’s other motion: a six-month suspension of AI research. This secondary recommendation is both unfeasible and counterproductive, particularly in light of the ongoing technological race between the US and China.
Calls for regulation are predicated on the idea that private sector entities cannot form their own AI oversight bodies. Yet the private sector should first be given the opportunity to do so; if those efforts prove fruitless, government intervention remains an option. Introducing new federal agencies will likely create new stakeholders, such as bureaucrats, academics, and corporations, who might leverage their influence to sway public policy toward their own agendas. Typically, the public and private sectors identify issues at roughly the same time, and in such instances compelling motivations, driven by reputational and financial concerns, often spur private sector actors to act.
In light of these considerations, I maintain that until we gain a clearer understanding of the dominant players in the AI landscape, regulatory measures will do more harm than good. Nevertheless, proceeding with cautious deliberation into uncharted domains remains paramount. This caution should be grounded in empirical evidence rather than sensationalized pop culture trends fueled by Hollywood narratives, conspiracy theories, and innate biases.
About the Author:
Dashel Myers is the ambassador for Wolfram Research, where he delves into AI and computational mathematics. His entrepreneurial focus lies in leveraging AI for cybersecurity, with plans to establish a dedicated company in the field.