[Header image: DALL-E 2 illustration generated from the prompt “vaporwave robot carrying a briefcase full of cryptocurrency in a dark alley”]

Crypto will soon see ChatGPT


    If you’re unconvinced by the power of artificial intelligence, check the illustration above. I generated it in a matter of seconds by thumbing the prompt “vaporwave robot carrying a briefcase full of cryptocurrency in a dark alley” into DALL-E 2, a free tool from OpenAI, the AI company co-founded by Elon Musk.

    OpenAI’s latest AI model is even more powerful. It’s an uncannily realistic chatbot called ChatGPT that can produce reams of insightful text on almost anything you can throw at it. And unlike other language generation models, it can remember what you’ve told it, allowing for conversations that construct a convincing impression of a mind at work.

    Impressively, the chatbot can also turn human prompts into lines of code: At my command, ChatGPT wrote a smart contract in Solidity, Ethereum’s programming language, that turned the DALL-E image I had generated into an NFT.
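
    The article doesn’t reproduce the contract ChatGPT produced, but a minimal ERC-721 mint of the kind described might look something like the sketch below. The contract name, token symbol, and use of OpenZeppelin’s ERC721URIStorage are illustrative assumptions, not details from the chatbot’s actual output.

        // SPDX-License-Identifier: MIT
        pragma solidity ^0.8.0;

        // Illustrative sketch only; names and structure are assumptions,
        // not the contract ChatGPT actually wrote.
        import "@openzeppelin/contracts/token/ERC721/extensions/ERC721URIStorage.sol";

        contract VaporwaveRobot is ERC721URIStorage {
            uint256 private _nextTokenId;

            constructor() ERC721("Vaporwave Robot", "VWR") {}

            // Mint a single token whose metadata URI points at the generated image
            // (e.g. an IPFS link to the DALL-E output).
            function mint(address to, string memory uri) external returns (uint256) {
                uint256 tokenId = _nextTokenId++;
                _safeMint(to, tokenId);
                _setTokenURI(tokenId, uri);
                return tokenId;
            }
        }

    A toy contract like this is well within the chatbot’s reach; anything more involved is where, as the auditors quoted below found, its output starts to need human review.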

    Although ChatGPT is just a free research preview, it has already supercharged the tech world’s imagination, hitting a million users just five days after it launched late last month. By comparison, it took GitHub’s AI coding assistant, Copilot, about six months to cross the same threshold.

    The prospect of outsourcing mental busywork to an AI assistant has also attracted the crypto crowd. The space makes ample room for people operating far beyond their abilities, which makes an ultra-confident chatbot both exciting and dangerous. While innovative developers can use the technology to enhance their coding or cross language barriers, ChatGPT makes it easier than ever to produce malicious code or spin up a honeypot plausible enough to fool investors.

    Some crypto professionals are already making good use of the tech. Hugh Brooks, head of security for the smart contract auditing firm CertiK, said the chatbot wasn’t half bad at finding bugs in code and had become invaluable for summarizing complicated code and dense academic articles.

    And Stephen Tong, founder of a small blockchain security company called Zellic, said his company’s already using it for sales and customer support. “It makes everyone on those teams more efficient,” he said, allowing cosplaying tech bros to provide a “super-buttoned-up, professional experience” without breaking a sweat.

    Also close to the front of the pack of crypto-utopians is Tomiwa Ademidun, a young Canadian software engineer who used ChatGPT to code a cryptocurrency wallet from scratch, then had it generate a detailed guide, complete with diagrams, that teaches people how to use it.

    “It’s honestly very impressive,” he said. ChatGPT taught Ademidun complex cryptography concepts with the avuncular charm of a friendly high school computer science teacher, and then generated what he described as near-flawless code. And when Ademidun caught a mistake, the chatbot politely apologized, then corrected itself. That prompted a small career crisis for the young software engineer: After ChatGPT, “What do you still need me for?”

    Quite a lot, it turns out. The technology is far from perfect, and frequently spews hot garbage with supreme confidence when confronted with impossible questions. Stuck on a desert island with no arms or legs? “Use your arms to crawl or scoot,” then “create a makeshift wheelchair,” advised the chatbot. Need help delivering Chinese food to a spaceship on its way to Mars? “Many space agencies offer food delivery services to astronauts,” it asserted.

    Programmers, too, must be smart enough to wade through ChatGPT’s unshakeable belief in its own gobbledygook. When Outlier Ventures’ head of engineering, Lorenzo Sicilia, experimented with the technology, he found it not ready for more advanced smart contract work. “As soon as you try it, you discover all the small details that don’t work,” he said.

    Limited to training data that cuts off in 2021, ChatGPT generated code that threw errors when pasted into the latest virtual machines. And as a blustering conversational AI, it lacks the ability to formally verify its own code.

    While some crypto developers have found in ChatGPT a tireless debugging assistant, others are already trying to exploit the technology for quick cash. Daniel Von Fange, a stablecoin engineer, rejected a submission for a lucrative “bug bounty” earlier this month that he believes was generated by ChatGPT.

    “It had taken things from my reply with simulation code (written in one programming language), mixed it with testing code (in another language), and invented a third problem as bogus as the other two,” he explained to Fortune. “It’s like someone with all the swagger and sponsor-covered Nomex of a NASCAR driver, but who can’t find the steering wheel in a pickup truck,” he told cybersecurity blog The Daily Swig. 

    Artificial intelligence that can write convincingly about nonsense is also perfect for generating phishing campaigns that direct people to GPT-created malware, or coordinated harassment campaigns from annoyingly lifelike Twitter bots. It can also be used by opportunists looking for a fresh round of gullible investors.

    Just as harmful is so-called educational material that may bear no relation to the truth. Similarly, investors unable to read code could lose money to botched AI-generated trading bots whose inner workings they have no way of checking.

    And yet, despite the risks, Ademidun resides on the optimistic side of technological determinism. Like any tool, he said, ChatGPT can be used for good or ill—the more important point is that it could be very powerful, especially once OpenAI feeds it with more data, including real-time information.

    Indeed, if ChatGPT had succeeded in its attempt to find a bug in Von Fange’s code, the engineer said he would have happily paid out $250,000. The chatbot is proof that the train of progress has already left the station. “The question is, are you going to hop on or just watch it move past you?” said Ademidun.

    Outside of crypto, people are certainly putting it to work in everyday life. One consultant confided that he lobbed cursory recommendations about a factory he had visited into a prompt, then sent the AI-generated report to his unsuspecting boss, who made only minor changes before sending it to the client. A dazed lawyer at a top London law firm told me he used it to generate an infinite supply of bedtime stories for his children. 

    Yet in its current, limited incarnation, ChatGPT might better be understood as a fascinating science experiment. Sam Altman, a co-founder and the CEO of OpenAI, said on Twitter that the technology has created “a misleading impression of greatness.” It’s just a “preview of progress,” he said. “It’s a mistake to be relying on it for anything important right now.”
