
Impact of ChatGPT on Web3, Web2 and online security – a viewpoint from Naoris


ChatGPT and Web3

The dialogue-based AI chatbot ChatGPT, capable of understanding natural human language, has taken the world by storm. With over 1 million registered users in just 5 days, it became the fastest-growing tech platform ever. Given a text prompt, ChatGPT generates impressively detailed, human-like written text and thoughtful prose. ChatGPT can also write code. The Web3 community was impressed, curious and shocked by ChatGPT’s abilities.

Now ChatGPT can write, scan and hack smart contracts. Where do we go next?

The ChatGPT AI code writer is a game changer for Web3, and it can cut both ways:

  • Near-instant security audits of smart contract code to find vulnerabilities and exploits (both in deployed contracts and before deployment).
  • Conversely, bad actors can direct AI to find vulnerabilities to exploit in smart contract code (thousands of existing smart contracts could suddenly find themselves exposed).
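As a deliberately naive illustration of the defensive side, a pre-deployment scan for a few well-known Solidity anti-patterns can be sketched in a handful of lines. An LLM-based audit would reason far more deeply about contract semantics, but the workflow is similar: feed in source, get back flagged findings. The patterns and the sample contract below are illustrative assumptions, not a real audit tool.

```python
import re

# Naive text-level checks for a few well-known Solidity anti-patterns.
# A real (or AI-assisted) audit reasons about semantics, not just text.
CHECKS = {
    "tx.origin auth": r"tx\.origin",
    "low-level call": r"\.call\{?value",
    "timestamp dependence": r"block\.timestamp|\bnow\b",
    "delegatecall": r"\.delegatecall",
}

def scan_contract(source: str) -> list[str]:
    """Return a list of human-readable findings for `source`."""
    findings = []
    for name, pattern in CHECKS.items():
        for i, line in enumerate(source.splitlines(), start=1):
            if re.search(pattern, line):
                findings.append(f"line {i}: possible {name}: {line.strip()}")
    return findings

# Hypothetical vulnerable snippet, for demonstration only.
SAMPLE = """
contract Wallet {
    function withdraw(uint amount) public {
        require(tx.origin == owner);
        msg.sender.call{value: amount}("");
    }
}
"""

for finding in scan_contract(SAMPLE):
    print(finding)
```

A pattern scan like this is cheap enough to run on every commit; the promise of AI auditing is catching the deeper logic flaws that no regular expression can see.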

The Naoris Protocol POV:

In the long term, ChatGPT will be a net positive for the future of Web3 security.

In the short term, AI will surface vulnerabilities that need to be addressed, so there could be a spike in breaches.

AI will highlight where humans need to improve.

For Web3 devs & development (pre-deployment)

Demand for Web3 developers and auditors will fall. The future may look like this:

  • Devs will use AI to instruct, write and generate code
  • Devs will read and critique the AI’s output, learning its patterns while looking for weak spots
  • Auditors will have to understand errors, mistakes and code patterns
  • Auditors will need to know the limitations of AI
  • AI will work in tandem with dev teams to strengthen future code and systems
  • AI will become part of the development-to-production pipeline
  • It will be survival of the fittest for devs and auditors
  • Only those best at instructing and evaluating AI will survive
  • Dev teams will shrink once AI is on the team

For Web3 security (post-deployment)

  • Swarm AI will be used to scan the status of smart contracts in near real time
  • Code will be monitored for anomalies, code injections and hacks
  • Attackers will shift to finding bugs and errors in the AI itself, rather than in the code
  • This will hugely improve Web3 smart contract security (over $3 billion hacked in 2022 to date)
  • This will also improve CISOs’ and IT teams’ ability to monitor in real time
  • Security budgets will decrease, and cybersecurity teams will shrink
  • Only those able to work with and interpret AI will be in demand
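The near-real-time monitoring idea above can be sketched as a watcher that hashes deployed bytecode and alerts on any drift from the audited baseline. On a real chain, contract code only changes through upgrade proxies or similar patterns, so in practice a watcher would track the proxy’s implementation address and storage; everything below (the fake chain, the address, the bytecode) is a stand-in for illustration.

```python
import hashlib

# Stand-in for a chain RPC endpoint: address -> currently deployed bytecode.
fake_chain = {"0xABC": b"\x60\x80\x60\x40"}

def code_hash(address: str) -> str:
    """Hash the bytecode currently visible at `address`."""
    return hashlib.sha256(fake_chain[address]).hexdigest()

# Record the audited, known-good hash at deployment time.
baseline = {"0xABC": code_hash("0xABC")}

def check(address: str) -> bool:
    """Return True if the on-chain code still matches the audited baseline."""
    return code_hash(address) == baseline[address]

assert check("0xABC")        # untouched: matches the baseline

# Simulate a malicious change (e.g. a proxy implementation swap).
fake_chain["0xABC"] = b"\xde\xad\xbe\xef"
assert not check("0xABC")    # drift detected -> raise an alert
```

The interesting part is not the hashing but the response: a swarm of such watchers could flag drift within a block or two, rather than weeks after funds are drained.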

Conclusion

AI is not a human being, so it will miss basic preconceptions, knowledge and subtleties that only humans see. It is a tool that will reduce vulnerabilities coded in error by humans, and it will greatly improve the quality of smart contract code. But we can never completely trust its output.

ChatGPT, Web2 and Enterprise

Last week saw the release of ChatGPT, the dialogue-based AI chatbot that can understand natural human language. ChatGPT generates impressively detailed, human-like written text and thoughtful prose when given a text prompt. Moreover, ChatGPT can write and hack code, which is a major issue from an infosec point of view. The AI can analyse code and find the answer in seconds; see this tested example: https://twitter.com/gf_256/status/1598104835848798208

  • Is the genie out of the bottle capable of threatening traditional infosec and the enterprise?
  • Can centralised AI pose a risk to the world?
  • What if it were programmed with biases that could turn the AI’s output evil?
  • Remember Tay, the Microsoft AI chatbot that became a racist misogynist?
  • Will AI help hackers in phishing attacks, e.g. by shaping the language used in social engineering, making them more powerful than they already are?
  • Will adding safeguards turn out to be self-defeating?

The Naoris Protocol POV:

Artificial intelligence that writes and hacks code could create trouble for enterprises, systems and networks. Current cybersecurity is already failing: hacks have risen exponentially across every sector in recent years, with 2022 reporting a 50% rise compared to 2021.

ChatGPT can be used positively within an enterprise’s security and development workflow, raising defence capabilities above current security standards. However, bad actors can widen the attack surface by working smarter and much quicker, instructing AI to find exploits in well-established code and systems. Heavily regulated enterprises, in the FSI space for example, would not be able to react or recover in time because of how current cybersecurity and regulation are configured.

For example, the average breach detection time as measured by IBM (IBM’s 2020 Cost of a Data Breach report) is up to 280 days. Using AI in an enterprise defence-in-depth posture, breach detection time could be reduced to less than one second, which is a game changer.

The arrival of AI platforms like ChatGPT requires enterprises to up their game: they will need to implement AI services within their security QA workflows before launching any new code or programmes.
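Wiring such a check into a security QA workflow can be as simple as a release gate that blocks the pipeline whenever the scan reports findings. The scanner call below is a placeholder for whatever AI-assisted audit service an enterprise adopts; the function names and the artifact path are assumptions for the sketch.

```python
import sys

def run_ai_scan(artifact: str) -> list[str]:
    """Placeholder for an AI-assisted audit; returns a list of findings.
    In a real pipeline this would submit `artifact` to the audit service."""
    return []

def release_gate(findings: list[str]) -> int:
    """Map scan findings to a CI exit code: 0 allows the release, 1 blocks it."""
    for f in findings:
        print(f"BLOCKED: {f}", file=sys.stderr)
    return 1 if findings else 0

# A clean scan passes the gate; any finding blocks the pipeline.
exit_code = release_gate(run_ai_scan("build/contracts.json"))
print("release allowed" if exit_code == 0 else "release blocked")
```

Keeping the gate decoupled from the scanner means the AI service can be swapped out as better tools appear, without touching the pipeline logic.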

Conclusion

Once the genie is out of the bottle, the side that does not use the latest technology is going to be in a losing position. So, if there’s offensive AI out there, enterprises will require the best defensive AI to fight back. It’s simply an arms race to see who’s got the best tool.

