Surprisingly, only a minority of organizations review AI-generated code for security.
Although many are already using or piloting these coding assistants, few can say where or how the generated code ends up in their projects, which leaves considerable room for error.
The report found that a large share of AI-written code fails at basic input validation; in cross-site scripting cases, 86% of samples were affected. Languages like Java and Python are hit especially hard.
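The report itself is not quoted here with code samples, but the kind of input-validation failure it describes can be sketched in Python. The function names and HTML template below are illustrative, not taken from the report:

```python
import html

def render_greeting_unsafe(name: str) -> str:
    # Vulnerable pattern often seen in generated code: user input is
    # interpolated into HTML unescaped, so a payload such as
    # <script>...</script> would execute in the browser.
    return f"<p>Hello, {name}!</p>"

def render_greeting_safe(name: str) -> str:
    # Escaping HTML metacharacters before interpolation neutralizes the payload.
    return f"<p>Hello, {html.escape(name)}!</p>"

payload = "<script>alert(1)</script>"
print(render_greeting_unsafe(payload))  # script tag survives intact
print(render_greeting_safe(payload))    # rendered as inert text
```

In a real web application this escaping is usually handled by the framework's templating layer; the point is that generated code frequently bypasses it by building markup with raw string interpolation.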
In addition, misconfigured AI agents are often granted overly broad permissions, widening the attack surface.
Experts say regular human review and stronger security checks are essential if these tools are to be used safely.