
ChatGPT Impact on Web3, Web2 and Online Security


Source: news.google.com

ChatGPT and Web3

The dialogue-based AI chatbot ChatGPT, capable of understanding natural human language, has taken the world by storm. With over 1 million registered users in just 5 days, it became the fastest-growing technology platform ever. Given a text prompt, ChatGPT generates impressively detailed, human-like written text and thoughtful prose. ChatGPT can also write code. The Web3 community was impressed, curious and shocked by its capabilities.

ChatGPT can now write, scan and hack smart contracts. Where do we go from here?

ChatGPT's AI code writing is a game changer for Web3, and it cuts both ways:

  • Near-instantaneous security audits of smart contract code to find vulnerabilities and exploits (existing and pre-deployment).
  • On the other hand, bad actors can direct the AI to find exploitable vulnerabilities in smart contract code (thousands of existing smart contracts could suddenly be exposed).
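A toy illustration of the first path: the kind of pattern check an AI-assisted audit might automate before deployment. This is a hypothetical sketch, not ChatGPT itself; it just greps Solidity source for two well-known red flags (`tx.origin` authentication and raw low-level `call`s), and the rule names are invented for the example.

```python
import re

# Hypothetical pre-deployment scanner: flags two classic Solidity
# vulnerability patterns that an AI-assisted audit would be expected
# to catch (rule names are illustrative, not a real tool's).
RULES = {
    "tx-origin-auth": re.compile(r"\btx\.origin\b"),            # phishable auth check
    "unchecked-low-level-call": re.compile(r"\.call\{?.*\}?\("),  # raw call, return value unchecked
}

def audit_contract(source: str) -> list[str]:
    """Return the names of the rules this contract source trips."""
    return [name for name, pattern in RULES.items() if pattern.search(source)]

vulnerable = """
contract Wallet {
    function withdraw() public {
        require(tx.origin == owner);         // red flag: tx.origin auth
        msg.sender.call{value: 1 ether}(""); // red flag: unchecked call
    }
}
"""

print(audit_contract(vulnerable))  # both rules fire on this sample
```

A real AI auditor would of course reason about control flow and state, not just text patterns, but the workflow — feed source in, get a list of findings out — is the same shape.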

The Naoris Protocol's point of view:

In the long term, ChatGPT will be positive for the future of Web3 security.

In the short term, AI will surface vulnerabilities that need to be addressed, which could mean an increase in breaches.

AI will highlight where humans need to improve.

For developers and Web3 development (before deployment)

Demand for Web3 developers and auditors will shrink. The future could look like this:

  • Developers will use AI to instruct, write and generate code
  • Developers will read and critique the output of the AI, learning patterns while looking for weak spots.
  • Auditors will need to understand bugs, vulnerabilities, and code patterns.
  • Auditors will need to know the limitations of AI
  • AI will work in conjunction with development teams to strengthen future code and systems
  • AI will become part of the development-to-production process
  • It will be survival of the fittest for developers and auditors.
  • Only the best, who work with, train, and test the AI, will survive
  • Development teams will be reduced in number with AI on the team

For Web3 security (after deployment)

  • Swarm AI will be used to scan the status of smart contracts in near real time
  • The code will be monitored for anomalies, code injections, and hacks.
  • The attack surface shifts to finding flaws in the AI itself, rather than in the code.
  • This will greatly improve the security of Web3 smart contracts ($3 billion hacked in 2022 to date)
  • This will also improve the ability of CISOs and IT teams to monitor in real time
  • Security budgets will decrease and cybersecurity teams will be reduced in number
  • Only those capable of working with and interpreting the AI will be in demand.
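The first two bullets above can be sketched as a baseline-and-compare loop: record a fingerprint of the deployed contract's state and alert on any divergence. A minimal hypothetical sketch, assuming a `fetch_state` callback that stands in for a real blockchain node query:

```python
import hashlib

def fingerprint(state: bytes) -> str:
    """Hash of observed contract state (bytecode, storage slots, etc.)."""
    return hashlib.sha256(state).hexdigest()

def monitor(baseline: str, fetch_state) -> list[str]:
    """Compare the latest observed state against the trusted baseline.

    `fetch_state` is a hypothetical callback that would query a node;
    here it simply returns bytes. Returns a list of alerts (empty = clean).
    """
    current = fingerprint(fetch_state())
    if current != baseline:
        return [f"anomaly: state hash changed to {current[:12]}..."]
    return []

# Simulated run: an injected change to the contract state trips the alert.
baseline = fingerprint(b"original bytecode")
print(monitor(baseline, lambda: b"original bytecode"))  # clean
print(monitor(baseline, lambda: b"tampered bytecode"))  # alert raised
```

A swarm of such monitors, each watching a contract in near real time, is one plausible reading of the "Swarm AI" idea above; actual anomaly detection would go beyond hash comparison to behavioral analysis of transactions.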

Conclusion

AI is not human, so it will lack the preconceived ideas, knowledge, and basic subtleties that only humans bring. It is a tool that will reduce the vulnerabilities humans code by mistake, and it will greatly improve the quality of smart contract code. But we can never fully trust its output.

ChatGPT / Web2 and Enterprise

ChatGPT, the dialogue-based AI chatbot that can understand natural human language, was launched last week. Given a text prompt, ChatGPT generates impressively detailed, human-like written text and thoughtful prose. ChatGPT can also write and hack code, which is a major problem from an information security point of view: the AI can analyze code and find an answer in seconds. Tested example: https://twitter.com/gf_256/status/1598104835848798208

  • Is the genie out of the bottle, capable of threatening traditional information security and the enterprise?
  • Can centralized AI pose a risk to the world?
  • What if it were programmed with biases that could turn the AI's output evil?
  • Remember Microsoft's Tay chatbot, which became a racist misogynist?
  • Will AI help hackers with phishing attacks, for example by shaping the language around social engineering, making them more powerful than they already are?
  • Will it be counterproductive to add safeguards?

The Naoris Protocol's point of view:

Artificial intelligence that writes and hacks code could create problems for businesses, systems, and networks. Today's cybersecurity is already failing, with exponential increases in attacks across all sectors in recent years, including a 50% increase in 2022 compared to 2021.

ChatGPT can be used positively within an enterprise development and security workflow, raising defensive capabilities above current security standards. However, bad actors can widen the attack vector by working smarter and much faster, telling the AI to find vulnerabilities in well-established code and systems. Heavily regulated sectors such as financial services (FSI), for example, might not be able to react or recover in time, given how cybersecurity and current regulation are set up.

For example, the current average time to identify a breach, as measured by IBM (IBM 2020 Data Security Report), is up to 280 days. Using AI in enterprise defense in depth could reduce breach detection time to less than one second, which is a game changer.
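As a quick back-of-envelope check on the scale of that claimed improvement (assuming the IBM figure of 280 refers to days):

```python
# Back-of-envelope: IBM's reported average of 280 days to identify a
# breach, versus a sub-second AI-assisted detection target.
days_to_detect = 280
seconds_to_detect = days_to_detect * 24 * 60 * 60
print(seconds_to_detect)  # 24,192,000 seconds — over seven orders of
                          # magnitude slower than a 1-second detection
```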

The advent of AI platforms like ChatGPT requires companies to up their game. They will need to implement and use AI services within their security QA workflow processes before releasing any new code/programs.
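One way such an AI step could slot into a security QA workflow is as a release gate: AI-generated findings feed a check that blocks deployment when anything sufficiently severe is present. A hypothetical sketch (the finding format and severity scale are invented for the example):

```python
# Hypothetical release gate for a security QA workflow: AI-generated
# findings are checked before any new code/program is released.
SEVERITY = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def release_allowed(findings: list[dict], block_at: str = "high") -> bool:
    """Allow release only if no finding reaches the blocking severity."""
    threshold = SEVERITY[block_at]
    return all(SEVERITY[f["severity"]] < threshold for f in findings)

findings = [
    {"id": "F1", "severity": "low"},
    {"id": "F2", "severity": "critical"},  # e.g. an AI-flagged injection risk
]
print(release_allowed(findings))       # blocked by the critical finding
print(release_allowed(findings[:1]))   # low-severity findings alone pass
```

The gate itself is trivial; the point is that the AI scan runs inside the pipeline, before release, rather than after an incident.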

Conclusion

Once the genie is out of the bottle, whichever side doesn't use the latest technology is in a losing position. So if there is an offensive AI, companies will need the best defensive AI to fight back. It becomes an arms race to see who has the best tools.

