Endor Labs, creator of the Code Governance Platform, has published a research report titled "State of Dependency Management 2023." The report explores emerging trends and risks in the use of open-source software (OSS) in software development and the importance of accounting for those risks in security strategies. With modern software development embracing distributed architectures, microservices, and third-party components, the report highlights the surging popularity of ChatGPT's API, the limitations of existing large language model (LLM)-based AI platforms in accurately classifying malware risk, and the absence of security-sensitive API calls in nearly half of all applications.
The "State of Dependency Management 2023" report was compiled by Endor Labs' research team, Station 9. Comprising experts from diverse industries worldwide, the team aims to explore the complexities of supply chain security and provide guidelines for selecting, securing, and maintaining OSS.
Henrik Plate, lead security researcher at Endor Labs Station 9, emphasizes the need to monitor the risks associated with the rapid expansion of AI technologies. While these advancements offer remarkable capabilities, they can also cause significant harm when software packages contain malware or carry other risks. Early adopters of appropriate security protocols stand to benefit the most from these capabilities.
Key findings from the report indicate that current LLM technologies remain unreliable for malware detection at scale, accurately classifying malware risk in only a small percentage of cases. The report urges organizations to prioritize analysis of security-sensitive API usage through open-source dependencies, since 45% of applications make no calls to such APIs in their own code base; organizations therefore tend to underestimate risk when they fail to consider dependencies. The report also reveals that a large portion of imported code in applications goes unused, so vulnerability remediation becomes more efficient when organizations gain insight into which code is actually reachable.
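As a minimal sketch of the kind of analysis the report recommends, the Python snippet below statically scans a piece of application code for calls to a small, hypothetical list of security-sensitive APIs. The API list and the sample code are illustrative assumptions; a real analysis (such as the call-graph reachability work the report describes) would be far more thorough and would span the full dependency tree.

```python
import ast

# Hypothetical list of security-sensitive calls, for illustration only.
SENSITIVE_CALLS = {"eval", "exec", "os.system", "subprocess.run", "pickle.loads"}

def qualified_name(node):
    """Best-effort dotted name for a call target, e.g. 'os.system'."""
    if isinstance(node, ast.Name):
        return node.id
    if isinstance(node, ast.Attribute):
        base = qualified_name(node.value)
        return f"{base}.{node.attr}" if base else node.attr
    return None

def sensitive_calls(source: str) -> set:
    """Return the security-sensitive APIs actually called in `source`."""
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            name = qualified_name(node.func)
            if name in SENSITIVE_CALLS:
                found.add(name)
    return found

app_code = """
import os, json

def load(path):
    return json.load(open(path))   # not security-sensitive

def run(cmd):
    os.system(cmd)                 # security-sensitive
"""
print(sensitive_calls(app_code))   # {'os.system'}
```

An application whose code (and reachable dependency code) triggers no such findings would fall into the 45% the report identifies, which changes how urgently its dependency vulnerabilities need to be remediated.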
The report sheds light on the rising popularity of ChatGPT's API, which has been integrated into numerous packages across different domains. However, this popularity, combined with a lack of historical data on those packages, poses potential security risks, necessitating caution when selecting them.
The research report also emphasizes the security implications of LLM applications, including their ability to create and conceal malware, which poses a challenge even for defensive LLM applications. To address these concerns, organizations are advised to go beyond standard Software Bills of Materials (SBOMs) and develop a deeper understanding of how components are used within their applications in order to identify exploitable vulnerabilities.
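To illustrate why an SBOM alone falls short, the sketch below parses a minimal CycloneDX-style SBOM fragment (illustrative data, not a real application) and compares the declared components against a hypothetical set of imports observed in the application's code. The SBOM inventories what is present; only usage analysis reveals which components are actually exercised.

```python
import json

# Minimal CycloneDX-style SBOM fragment (illustrative data).
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "requests", "version": "2.31.0"},
    {"type": "library", "name": "pyyaml",   "version": "6.0.1"},
    {"type": "library", "name": "paramiko", "version": "3.3.1"}
  ]
}
"""

# Names the application was actually observed importing; in practice
# this would come from static or runtime analysis, not a hard-coded set.
imported = {"requests", "pyyaml"}

sbom = json.loads(sbom_json)
declared = {c["name"] for c in sbom["components"]}

# Declared in the SBOM but never imported: the inventory alone cannot
# say whether these components (and their vulnerabilities) are reachable.
unused = declared - imported
print(sorted(unused))  # ['paramiko']
```

In this toy example a vulnerability in `paramiko` would show up in any SBOM-based scan, yet the usage data suggests it may never be exercised, which is exactly the distinction the report argues organizations need.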