Will regulating AI threaten Web3 innovation?

ChatGPT, OpenAI’s renowned AI application, came under the spotlight almost a year ago, and along with all the fascination and awe it inspired, it also awakened a sense of impending peril. Concerned and highly public voices began appealing to authorities to regulate artificial intelligence, and so far both the US and the EU have acquiesced. Although neither of these bills has been approved yet, it is worth asking whether their provisions could indeed deliver the longed-for AI safety, or whether they would merely end up harming smaller-scale innovation like that currently taking place in the Web3 space.

Yet another AI boom

The development of artificial intelligence has been marked by cycles ever since the term was coined in the 1950s: AI winters of quiet academic research have always been followed by AI booms in which funding and business ventures proliferated. Over the last ten years, deep learning, a technique built on multi-layered neural networks, produced several such booms. Fueled by Big Data and cheaper computation, it delivered enormous progress in image recognition, protein folding prediction, and natural language processing, to name just a few areas.

The thing is that each and every time such an AI summer occurs, the general public gets puzzled, perplexed even, over how intelligent machines have become. And inevitably, science-fiction tales of robot uprisings and AI-powered apocalypse begin to circulate. Interestingly enough, AI tools and applications are always imagined as evil and bent on wiping out their enemies – motivations that are, in fact, entirely human.

This time around is no different. Large language models (LLMs) like GPT have showcased how good AI systems are at understanding, synthesizing, and articulating information – so impressively good that they may start stealing people’s jobs, taking over their vital resources, and even threatening their safety. These fears are further exacerbated by the fact that ChatGPT set a record for the fastest-growing user base, so millions of people got to experience first-hand what it is capable of. However, all these concerns rest on the assumption that AI is no longer governed by people but has gained a consciousness of its own – an opinion that mixes science with science fiction.

Who is perpetuating scares about AI and why?

Over the last year, sinister scenarios about the risks AI could pose to humanity have shaped the public narrative, while very few opinions highlighting its benefits made the headlines. Unsurprisingly, it was precisely the doomsayers who triggered the current attempts to regulate the sector, so it is worth taking a look at who they are.

First, it was Elon Musk, who argued that artificial intelligence is the most destructive force in history. Back in March 2023, he called for a six-month pause on AI development, only to launch his own AI venture, xAI, a few months later. It turns out he was predominantly worried about being late to the AI race and wanted to buy some time to catch up.

Then it was Sam Altman, OpenAI’s CEO, who warned that AI could kill us all. In May 2023, he addressed US lawmakers with a plea to set up AI regulations to mitigate the multitude of risks associated with the technology. Together with other researchers and entrepreneurs, he ranked the risk posed by AI alongside nuclear weapons and pandemics. Yet, while inflating the challenges AI may pose, Sam Altman attracted millions of dollars in funding to tackle those same challenges with his Web3 project, Worldcoin.

Additionally, it is becoming obvious that Big Tech figures have not only provoked the current zeal for regulating AI all over the world, but are also practically tailoring the future legislation. Both Elon Musk and Sam Altman attended the AI Safety Summit recently organized by the UK government, along with representatives of Google DeepMind, Anthropic (in which Google is to invest USD 2B), Microsoft, and Meta. Do you notice any small, open-source startups on that list? No, me neither.

In fact, this is exactly what Andrew Ng, a Stanford University professor and leading AI researcher, tried to warn us about. He cautioned that Big Tech is lying about some AI risks with the aim of imposing stricter regulation on open-source companies and thus shutting down competition. Ng added that if startups were required to obtain AI licenses, that could crush innovation altogether. Finally, I will leave you with one more sobering Andrew Ng quote:

“In recent months, I sought out people concerned about the risk that AI might cause human extinction. I wanted to find out how they thought it could happen. They worried about things like a bad actor using AI to create a bioweapon or an AI system inadvertently driving humans to extinction, just as humans have driven other species to extinction through a lack of awareness that our actions could have that effect. 

When I try to evaluate how realistic these arguments are, I find them frustratingly vague and nonspecific. They boil down to “it could happen.” Trying to prove it couldn’t is akin to proving a negative. I can’t prove that AI won’t drive humans to extinction any more than I can prove that radio waves emitted from Earth won’t lead space aliens to find us and wipe us out.”

Regulations in action

For months now, the European Union has been gearing up to adopt the AI Act – a landmark, “world-leading” law to regulate artificial intelligence. It aims at “boosting AI while ensuring EU citizens’ safety”. In practice, however, it is expected to put an extra bureaucratic burden on companies, especially those that build powerful AI models. And even though the bill hit a deadlock last week and it is unclear whether it will pass at all, it already constitutes an existential threat to the European startup ecosystem.

French generative AI startup Mistral has stated that the AI Act could mean the end for the company if adopted. Meanwhile, a Europe-wide survey showed that half of the startups questioned expect the AI Act to slow down innovation in Europe, while 16% are considering relocating outside the EU. A staggering 73% of surveyed VCs anticipate that the law will significantly reduce Europe’s competitiveness in the field of AI. Founders also worry that the act’s blanket rules will be applied indiscriminately across different areas of AI and will outright ban some applications.

European AI entrepreneurs have been pondering a move to the US to save their businesses, but it is not any safer there. The POTUS recently issued an executive order establishing six new standards for AI safety and security. The order envisions tools for ensuring AI’s safety, security, and trustworthiness, as well as standards and best practices for detecting AI-generated content, aimed at preventing AI-enabled fraud and deception.

The order also requires companies to share safety test results and “critical information” with the government. Imagine the bureaucratic delay! Slow down, innovators, you need the green light from that particular government official in order to go out and change the world!

Regulated AI meets Web3

Another notable rule from Biden’s order concerns the disclosure of significant acquisitions of computing resources for AI work. That measure treats computational power as a potential risk requiring oversight and puts an entire category of Web3 and AI startups under scrutiny – and in danger.

Decentralized computation marketplaces employ blockchain technology to harness idle computational power and put it to work training machine learning and other AI models. They offer a viable alternative amid the current GPU shortage, while also disrupting the excessive centralization of power rampant in the AI space and introducing more transparency into AI development. Gensyn is one of the decentralized computation startups threatened by AI regulations. Its Head of Operations, Jeff Amico, declared that only “the large incumbents” would be able to comply with the new, unnecessarily stringent obligations laid down in the US.
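To make the idea more concrete, here is a minimal, purely illustrative TypeScript sketch of how such a marketplace might represent a training job and match it to an idle GPU provider. Every interface and name below is a hypothetical assumption made for illustration – it is not Gensyn’s actual protocol or API.

```typescript
// Hypothetical sketch of a decentralized compute marketplace's matching step.
// None of these types correspond to any real project's interfaces.

interface TrainingJob {
  jobId: string;
  modelSpecUri: string;        // pointer to model architecture + dataset manifest
  requiredGpuHours: number;    // compute budget the requester wants to buy
  maxPricePerGpuHour: number;  // ceiling price in the marketplace's payment token
}

interface ProviderBid {
  providerId: string;
  jobId: string;
  pricePerGpuHour: number;
  availableGpuHours: number;
}

// Pick the cheapest bid that can actually cover the requested compute.
function selectProvider(job: TrainingJob, bids: ProviderBid[]): ProviderBid | undefined {
  return bids
    .filter(b => b.jobId === job.jobId
      && b.pricePerGpuHour <= job.maxPricePerGpuHour
      && b.availableGpuHours >= job.requiredGpuHours)
    .sort((a, b) => a.pricePerGpuHour - b.pricePerGpuHour)[0];
}

// Example usage
const job: TrainingJob = {
  jobId: "job-42",
  modelSpecUri: "ipfs://<model-manifest>",
  requiredGpuHours: 500,
  maxPricePerGpuHour: 1.2,
};

const winner = selectProvider(job, [
  { providerId: "node-a", jobId: "job-42", pricePerGpuHour: 1.1, availableGpuHours: 800 },
  { providerId: "node-b", jobId: "job-42", pricePerGpuHour: 0.9, availableGpuHours: 300 },
]);

console.log(winner?.providerId); // "node-a" – node-b is cheaper but lacks the capacity
```

The sketch simply shows that matching buyers of compute to sellers of idle GPUs is an open-market problem; a disclosure rule on “significant computing resource acquisitions” would wrap exactly this kind of activity in paperwork.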

And that is just one use case of the synergy between blockchain and artificial intelligence that may be affected by the new standards. There are hundreds of startups utilizing AI, and each of them will have to take on additional time and cost burdens to stay compliant. Projects whose architecture leverages AI capabilities, like The Graph, SingularityNET, or Ocean Protocol, to name just a few, may be particularly affected.

Moreover, the crypto industry has already been hit repeatedly by regulatory uncertainty in the US, and these new AI rules may be the final nail in the coffin of American blockchain innovation. Meanwhile, some entrepreneurs fear that the AI industry may face the same fate, since the proposed provisions are too vague to give startups any peace of mind.

How to break the impasse? 

If guaranteeing security and privacy is what governments are after, then the cryptocurrency industry has a lot of knowledge, expertise, and tools to share, especially with the ascent of privacy-preserving zero-knowledge-proof solutions. Not to mention that blockchain technology can be a powerful instrument against disinformation, as its innate immutability could help detect deep fakes and prove data’s authenticity. Throughout its decade-long history, the crypto space has focused on distributing control and democratizing access to products and services. So, since blockchain leaders are, in fact, the ones who can actually propose viable solutions to AI-induced problems, shouldn’t they get a seat at the negotiation table?
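As a rough illustration of that authenticity point, here is a minimal sketch, assuming a Node/TypeScript environment and a ledger simulated with an in-memory map, of how anchoring a content hash at publication time lets anyone later verify that a file has not been altered. It is not any specific project’s implementation.

```typescript
// Illustrative hash-anchoring for content authenticity. The "chain" is faked
// with an in-memory map; in practice the digest would be written to an
// immutable ledger when the original media is published.

import { createHash } from "node:crypto";

// Simulated immutable ledger: digest -> anchoring timestamp.
const ledger = new Map<string, number>();

function digestOf(content: string | Buffer): string {
  return createHash("sha256").update(content).digest("hex");
}

// Publisher anchors the fingerprint of the original media at release time.
function anchor(content: string | Buffer): string {
  const d = digestOf(content);
  if (!ledger.has(d)) ledger.set(d, Date.now());
  return d;
}

// Anyone can later check whether a file matches what was originally anchored.
// A deep-faked or edited copy produces a different digest and fails the check.
function verify(content: string | Buffer): boolean {
  return ledger.has(digestOf(content));
}

// Example usage
const original = "official press briefing video bytes…";
anchor(original);

console.log(verify(original));                 // true  – matches the anchored record
console.log(verify(original + " [edited]"));   // false – any alteration changes the hash
```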

Albena Kostova-Nikolova

Albena Kostova-Nikolova is a seasoned blockchain professional with over five years of experience in crypto marketing. She recently launched Web3 and AI - web3plusai.xyz, a blog and weekly newsletter exploring the intersection of blockchain and artificial intelligence.
