A recent Black Duck report analyzed the state of software security, and the data shows that a wave of artificial intelligence (AI) adoption is radically shifting how software goes from ideation to deployment. Over 90% of survey respondents said they are using AI in some capacity in their software development process, underscoring how crucial it is for organizations to apply proper security measures throughout the entire development lifecycle. Yet 67% of respondents said they are concerned about securing AI-generated code.

Respondents across the technology, cybersecurity, fintech, education, banking/financial, healthcare, media, insurance, transportation and utilities sectors reported similarly high adoption, underscoring the importance of having seamless security mechanisms in place. Even in the nonprofit sector, which is traditionally slower to adopt new technology due to constrained resources, at least half of the organizations surveyed reported using AI. Unsurprisingly, the larger the organization, the more likely it is to have adopted some facet of AI in its software development.

A large majority (85%) of survey respondents noted that they have at least some measures in place to address the challenges posed by AI-generated code, such as potential IP, copyright and license issues that an AI tool may introduce into proprietary software. However, less than a quarter (24%) are "very confident" in their policies and processes for testing this code.

More than half of respondents (61%) said that security testing moderately or severely slows down development. Half of those who feel this way also said that most projects are still added to security testing manually.

Eighty-two percent of organizations use between 6 and 20 different security testing tools. This makes it challenging to effectively integrate and correlate results across platforms and pipelines, and difficult to distinguish genuine issues from false positives.

Download the report