The CVE refers to a Time-of-check Time-of-use (TOCTOU) Race Condition vulnerability that occurs during JSP (JavaServer Pages) compilation in Apache Tomcat. Under certain conditions, this flaw can lead to Remote Code Execution (RCE) on systems with case-insensitive file systems (e.g., Windows).
An attacker could exploit this by quickly uploading a malicious JSP file with a different case after Tomcat checks for the file but before it compiles the JSP. If the default servlet is configured to allow write operations (which is not the default setup), this can lead to the compilation and execution of the malicious JSP., and results in a remote code execution.
It is essential to understand that two conditions must be true for your system to be vulnerable to that CVE:
**UPDATE**
CVE-2024-56337, was released on Dec 20, 2024, to inform that the mitigation for CVE-2024-50379 was incomplete. The fix remains the same, but the Apache team made it clear that users running Tomcat on a case-insensitive file system with the default servlet write enabled (readonly initialisation parameter set to the non-default value of false) may need additional configuration to fully mitigate CVE-2024-50379 depending on which version of Java they are using with Tomcat:
Moreover, versions 11.0.3, 10.1.35 and 9.0.99 onwards will include checks that sun.io.useCanonCaches is set appropriately before allowing the default servlet to be write enabled on a case insensitive file system.
The sun.io.useCanonCaches property is a JVM (Java Virtual Machine) setting that controls how the JVM standardizes file paths.
CVE-2024-50379 highlights the importance of timely updates and careful configuration management when deploying servers. For any organization running Apache Tomcat, particularly in environments where case-insensitive file systems are in use, this vulnerability demands immediate attention to secure your infrastructure from potential exploits. Always keep your systems patched, and your configurations secure to mitigate risks like these.
]]>Launched as an internal project by Spotify in 2016, Backstage was released under the Apache 2.0 open source license in 2020 to help other growing engineering teams deal with similar challenges. Backstage aims to provide a consistent developer experience and centralize tools, documentation, and services within a single platform.
What started as a way to help new developers onboard faster is now a fully fleshed out developer portal that standardizes how teams interact with their internal services, APIs, and resources. Backstage includes features for service catalogs, continuous delivery, observability, and plugin integrations—all customizable to fit specific workflows.
For application security teams, Backstage offers wide views and controls across the development process and with the Mend.io plugin, deep insights into application risks overall or by project, too.
Switching across multiple tools and projects isn’t just annoying, it can also lead to potential security blindspots and delayed response times. Likewise, missing opportunities to address vulnerabilities early in the development process can lead to costly rework, delays in releases, and vulnerabilities in production.
We want to save you from that trouble and consolidate security information from SCA, SAST, and Container scans into a single view within Backstage, providing you with a comprehensive overview of projects’ security.
We built the Mend.io plugin for Backstage to help you:
Keeping applications secure is a tough job and Mend.io is here to assist you wherever you’re doing it.
Installing the Mend.io plugin for Backstage is simple. All you need is a simple script to install the back-end plugin plus a Mend.io API token. Information on both and everything else you need to get started can be found on the plugin installation page.
]]>The incident focused on versions 1.95.6 and 1.95.7 of the @solana/web3.js library, which were compromised through what seems to have been a phishing attack on the credentials for publishing npm packages. Here is how it worked:
addToQueue
. This function was designed to capture and exfiltrate private keys used for signing transactions and accessing wallets. The attacker used what looks like CloudFlare headers to stay less suspicious in the network logs.Keypair.fromSecretKey
and Keypair.fromSeed
, effectively hijacking these operations to steal keys.Figure 1. addToQueue
backdoor introduced on version 1.95.6
Figure 2. Mert Mumtaz’s post
Remediating Suggestions:
We have been tracking this issue as MSC-2024-17462 and MSC-2024-17463 since it started, so our customers using this library will get an alert on the two compromised versions.
Moreover, today, the Solana team issued a CVE profile to address this issue.
It’s the third supply chain security attack on a very popular open-source library after the Lottie player and the polyfill attacks that we have encountered in the last six months. All those incidents concatenate with the unforgettable XZ incident we had at the start of the year, and of course, all of the North Korean attacks on developers, together with other so-called “regular attacks” we see daily on the main open-source registries.
As we approach the new year, it’s time to stop and think about our supply chain security. As far as I’m concerned, companies are taking vulnerabilities more seriously than malicious packages, despite the fact that having a malicious package in their code means they are immediately compromised. Now is the time to stop closing our eyes to supply chain incidents and invent more resources to secure our supply chain and all our outsourced operations.
The @solana/web3.js incident reminds us of the complexities and risks associated with supply chain security. While the immediate financial impact was contained, the long-term lesson is clear: the supply chain security space requires constant vigilance from individual developers and the entire community.
]]>Modern applications make use of thousands of third-party components, mostly but not entirely open source software (OSS), and it’s critical to keep track of them. That’s why, in theory, an SBOM is great. Requiring an SBOM seemed like such a good idea that both government agencies and customers started insisting that everyone have them. You get an SBOM, you get an SBOM, everyone gets an SBOM!
However, once everyone started getting SBOMs, a new problem arose: what do we do with these things, anyway? While an SBOM is important, when you just get a list of all the components that are in your code, it’s incomplete information that isn’t actionable.
In order to make SBOMs really useful, security pros needed something else—something to contextualize the mass of findings in an SBOM to help them understand what was really concerning and what was just a false alarm. As Frank Costanza once said, there has to be another way.
Thankfully, there is another way– VEX (Vulnerability Exploitability eXchange). VEX is a framework for communicating the exploitability of known vulnerabilities in the context of where and how they are used.
SBOMs enriched with VEX data make it easier for organizations to prioritize risk management by providing actionable insights into the exploitability of vulnerabilities. This allows your business to allocate resources effectively and focus on addressing the most critical risks.
A VEX producer can designate vulnerabilities as:
With VEX, instead of just having the SBOM data of software dependencies, you also have information about the specific vulnerabilities within the code that you’re using and whether they actually need to be addressed. It saves you the time and dev hours tracking down false positives and prioritizing the biggest risks first, overall providing a lot more value to an SBOM.
Notes about what steps have been taken or need to be taken can also be included in the VEX.
Additionally, documents in VEX format are machine readable (either CycloneDX or SPDX), allowing integration into asset management tools. This enables greater automation of risk management, once again saving time and ultimately, money.
VEX also includes actionable data about how severe a vulnerability is, whether mitigations exist, and if patches are necessary. The information provided by VEX allows security teams to determine how risky a vendor’s software truly is, which is impossible to know from an SBOM alone.
Having a method to consistently describe and share vulnerability data between organizations addresses some of the biggest headaches for security engineers. If the VEX data indicates that a vulnerability is non-exploitable because end users don’t have access to the affected function, it saves you the time of both tracking down that information and mitigating something that doesn’t need to be addressed imminently.
Say you scan your systems and discover 40 vulnerabilities within your software supply chain. Without VEX, you might spend days addressing all of them. However, with VEX data, you see that 10 of the vulnerabilities are non-exploitable, 20 of them are low severity and can be remediated later, while 10 are critical and exploitable. Now you’re able to prioritize the most severe vulnerabilities, schedule later remediations for the lower risk ones, and ignore the non-exploitable ones. Think of the time (and headaches) you’ve saved thanks to VEX.
Adopting SBOMs is an essential step toward modern software security, but without the right context, they remain incomplete. VEX is the key to transforming SBOMs into actionable, insightful tools. By providing crucial exploitability data, VEX allows organizations to allocate resources efficiently, reduce false positives, and focus on addressing real threats—saving time, money, and ensuring faster, more secure software deployments.
At Mend.io, we’re excited to offer SBOM exports enriched with VEX data, empowering our customers to turn their SBOMs into effective risk management tools. Interested in making your SBOMs more actionable? Schedule a demo today.
]]>However, not every vendor approaches solving this challenge the same way. The Forrester Wave Software Composition Analysis Q4 2024, which evaluates 10 SCA vendors against 25 criteria, helps developers, engineers, and application security professionals better understand the leading solutions on the market so they can identify a tool that best fits their priorities.
According to the report, SCA customers should look for software that “assists developers in remediating vulnerabilities and keeping libraries current, provides visibility into software supply chain risk, and prevents software supply chain attacks.”
We’re honored to be recognized as a Strong Performer in the Forrester Software Composition Analysis (SCA) Q4 2024 report. Our top scores in over seven key criteria underscore our mission to help teams move from reactive to proactive application security.
At Mend.io, we’ve always believed that gaining visibility into your open source components and securing the risk that comes with them shouldn’t be a laborious, expensive hindrance to development.
We designed Mend SCA to go beyond simple detection and shallow coverage. It provides rich prioritization context and guidance, automated remediation, and elastic scalability empowering our customers to proactively secure their open source components and software supply chain.
We’re thrilled to see our top-scoring criteria reflect our ethos and approach to empowering security teams to shift from reactive to proactive security. We received the highest scores across the following criteria:
Let’s explore how our approach aligns with our top scores.
Mend SCA received the top scores in:
With a profusion of vulnerabilities to manage and a shrinking amount of resources, AppSec teams need their SCA tools to go beyond simply identifying vulnerabilities. Mend SCA makes this possible with extensive coverage across 200+ programming languages (for both security vulnerability and compliance/licensing analysis), 30+ package managers, and coverage for containers (Docker containers, Kubernetes, several registries), and Linux OS.
When Mend SCA scans your code, it not only inventories and analyzes your direct and transitive dependencies for vulnerabilities but also surfaces essential risk context, including reachability, exploitability, malicious package insights, and license and compliance issues. This gives you the insight you need to understand the risk likelihood and impact and prioritize and remediate risks appropriately.
“Mend.io pioneered reachability”
Mend SCA received the top scores in:
Developers, engineers, and AppSec teams must cut through the noise and understand, “What is a critical risk to me? What do I need to address right now? What is the best path to fix?”
Fusing risk-specific context (like application architecture, fix availability, open source health information such as library age, and CVSS 3 and CVSS 4 severity scores) with likelihood factors (like our best-in-class reachability analysis, malicious package detection, public exploit availability and maturity, EPSS scoring, production information such as whether an image is deployed to production), and impact factors (like customer defined labels or policies, compliance standards, SLAs), Mend SCA prioritizes your most critical risks and provides the best path to remediate.
Unique to Mend SCA, each SCA finding includes the sink-to-source trace in code, package health data (like package age, adoption rate, data gathered on failure rates of builds between versions, and merge confidence ratings), risk reduction impact statistics, and the optimal upgrade path for your vulnerable package – the newest, most stable, least vulnerable library version that provides the most significant risk reduction.
Automated workflows and auto-remediation options for newly discovered vulnerabilities make it easier than ever for our customers to remediate at scale, all without breaking the build.
“Autoremediation for newly discovered vulnerabilities is a strength.”
Mend SCA received the top scores:
The application security risk landscape is expanding and transforming at an insane rate. Add AI, ML, and LLMs into the mix, and it feels like we’ve unleashed Pandora’s box. While risk may expand exponentially… unfortunately, most budgets do not.
To remain secure and compliant, you need to be able to optimize and scale your AppSec programs with ease, including expanding and deepening security coverage. The Mend AppSec platform offers customers everything needed to build proactive application security through one solution at one price, meeting your evolving needs and budget constraints.
“Mend.io’s new pricing strategy is a strength: It offers one price for all products and services, including SCA, dependency updates, SAST, container security, and AI security, and it reflects the vision that customers need a holistic view of the application stack.”
The Forrester Wave states, “Mend.io is a great fit for enterprises that need an all-in-one solution for security, license, operational risk, and supporting services.”
But we’re not done! As noted in the report, we’re in the midst of reshaping and transforming the Mend AppSec Platform so our customers have a unified, holistic view of their AppSec risk.
A holistic approach allows findings to be correlated across the entire application attack surface. It enhances workflows and policies, integrates insights from additional tools, and ultimately enables our customers to proactively and significantly improve their AppSec posture.
Read the full Forrester Wave
: Software Composition Analysis, Q4 2024 report to learn more about what to look for in a software composition analysis vendor and for additional information on Mend.io’s Strong Performer ranking.
The increasing reliance on open-source software coupled with the accelerated pace of software development has created a growing need for support of deprecated packages. The significant majority of open-source software packages are not actively maintained, meaning vulnerabilities are not patched, thereby leaving systems open to attack. Malicious actors often target deprecated open-source packages for this very reason.
In addition to increased vulnerability risks, deprecated packages can become incompatible with modern systems or libraries. This leads to performance issues, making it more difficult to extend your application’s life.
Using deprecated packages also increases your technical debt. The longer you put off replacing updated code, the more complicated it becomes to resolve the issues it incurs.
This is why we are excited to announce an exclusive partnership between Mend.io and HeroDevs. HeroDevs NES (Never-Ending Support) keeps deprecated packages maintained, saving you the cost and hassle of migration while also keeping your software versions secure and compliant.
Mend.io helps developers keep their applications secure by identifying outdated and vulnerable open-source packages and providing recommendations for updating to newer, safer versions. However, sometimes those updates don’t exist because the package is no longer supported.
While developers are capable of fixing issues with deprecated packages they use, it is a risky, costly, and time-consuming task. That’s where HeroDevs comes in. They provide continued support for deprecated packages, ensuring there’s always a safe, updated version available.
By combining the power of the Mend AppSec Platform with HeroDevs NES, our joint customers achieve:
With the power of both the Mend AppSec Platform and HeroDevs NES, you can rest easy that your software supply chain will be well-protected from vulnerabilities, malicious packages, and performance issues that stem from deprecated packages.
]]>RAG adds some extra steps to typical use of a large language model (LLM) so that instead of working off just the prompt and its training data, the LLM has additional, usually more up-to-date, data “fresh in mind”.
It’s easy to see how huge this can be for business; being able to reference current company data without having to actually train an AI model on it has many, many useful applications.
RAG requires orchestration of two models, an embedder and a generator. A typical RAG system starts with a user query and a corpus of data such as company PDFs or Word documents.
Here’s how a typical architecture works:
During a pre-processing stage, the corpus is processed by an AI model called an embedder which transforms the documents into vectors of semantic meaning instead of plain words. Technically speaking, this stage is optional, but it makes things a lot faster if the documents are pre-processed and accessed from a vector database, rather than processed at runtime.
When a user query comes in, the prompt is also fed to the embedder, for the same reason.
Next, the embedded user query is used by a retrieval system to pull relevant pieces of text from the pre-embedded corpus. The retrieval system returns with a ranked set of relevant vectors.
The embedded user query and relevant documents are fed into a generative AI model, specifically a pre-trained large language model (LLM), which then combines the user query and retrieved documents to form a relevant and coherent output.
The two biggest risks associated with RAG systems are poisoned databases and the leakage of sensitive data or personally identifiable information (PII). We’ve already seen instances where malicious actors manipulate databases by inserting harmful data. Attackers can skew the system’s outputs by making their data disproportionately influential, effectively controlling the AI’s responses, which poses a serious security threat.
When implementing RAG, it’s essential to ask key questions: What models are you using for embedding and generation, and where are you storing your data?
Choosing the right models is crucial because different models handle security, accuracy, and privacy differently. Ensuring that these models are fine-tuned for security and privacy concerns or that services are blocking malicious behavior is key, as poorly selected models and third-party services can introduce vulnerabilities.
If you’re using a vector database like Pinecone or LlamaIndex, you must ensure that your data storage complies with security and privacy regulations, especially if you’re working with sensitive data. These databases store the map between the embedding and text, and ensuring that they are properly encrypted and access-controlled is vital to prevent unauthorized manipulation. Developers often choose platforms like OpenSearch, a low-code vector database solution, because it offers easier management of these security aspects, with built-in monitoring, access control, and logging to help avoid data poisoning and leakage.
In addition to model selection and secure data storage, all AI systems operate with a system prompt—a hidden instruction set that initializes every task or conversation. Adjusting this system prompt can help mitigate security issues, such as preventing the model from generating harmful or sensitive content. However, while strengthening the system prompt can help reduce certain risks, it’s not a comprehensive solution. A strong system prompt serves as the first line of defense, but addressing AI vulnerabilities requires a broader approach, including fine-tuning the models for safety, ensuring data compliance, and implementing real-time monitoring, code sanitizers, and guardrails.
In summary, securing a RAG system involves more than just selecting the right models and storage solutions. It requires robust encryption, data governance policies, and continuous oversight to protect against data poisoning, information leakage, and other evolving security threats.
Protecting AI systems, including RAG systems, requires a multi-layered approach that combines proactive testing, security mechanisms, and safeguards to prevent vulnerabilities from being exploited.
One effective strategy is to red-team your model. Red-teaming RAG systems involves simulated attacks to identify weaknesses in your AI system, such as prompt injection or data poisoning, before they can be exploited in real-world scenarios.
To protect RAG systems, there are several key approaches to consider:
In AI, firewalls act as monitoring layers that evaluate both input and output. They can use heuristic techniques to detect suspicious activity, such as attempts to inject harmful prompts or commands. For example, if a user tries to manipulate the AI to ignore its initial instructions (via prompt injection) and generate unintended or harmful output, the firewall can flag this as a potential attack. While firewalls provide an extra layer of security, they aren’t foolproof and may miss more sophisticated attacks that don’t match known patterns.
Guardrails are predefined rules or constraints that limit the behavior and output of AI systems. These can be customized based on the use case, ensuring the AI follows certain safety and ethical standards.
NVIDIA NeMo Guardrails offers several types of guardrails:
NVIDIA Garak, another tool from NVIDIA, is an open-source red-teaming tool for testing vulnerabilities in large language models (LLMs). Garak helps identify common vulnerabilities, such as prompt injection or toxic content generation. It learns and adapts over time, improving its detection abilities with each use. Promptfoo is another tool that might be used.
RAG systems can also incorporate self-checking mechanisms to verify the accuracy of generated content and prevent hallucinations—instances where the AI produces false information. Integrating fact-checking features can reduce the risk of presenting incorrect or harmful responses to users.
A shift-left approach focuses on integrating security practices early in the development process. For RAG systems, this means ensuring that the data used for training and fine-tuning is free of bias, sensitive information, or inaccuracies from the start. Additionally, many RAG vulnerabilities may be in the code itself, so it’s worth scanning the code and organizing for fixes to take place before the production stage. By addressing these issues early, you minimize the risk of the system inadvertently sharing PII or being manipulated by malicious input.
As AI systems like RAG become more advanced, it’s critical to implement these protective measures to guard against an increasing array of security threats. Combining firewalls, guardrails, fact-checking, early security practices, and robust monitoring tools creates a comprehensive defense against potential vulnerabilities.
]]>While keeping dependencies up to date is important, immediately moving to a new version can introduce risks, including the potential for application instability or breakage due to unforeseen regressions in dependent software. Finding a balance between quality and security can seem like a Sisyphean task, as an ever-growing number of updates are required, especially if you must spend time searching for crucial information about each dependency update.
Package health refers to the overall security and reliability of a particular version of a software package (or library), including:
Let’s break each of those down a little further.
Nearly all packages have some known vulnerabilities, but some are higher risk than others. If there are known exploits available, that vulnerability immediately becomes higher risk than vulnerabilities with only theoretical risks. Developers should know what new risks they’re subjecting applications to when updating dependencies.
It’s useful to know how many CVEs a particular version of a dependency has as well as how severe they are. If an update removes a few medium and low CVEs but introduces a new critical CVE, it may not be worth it.
The age of a package is critical for knowing how trustworthy it is. If a package is more than a year old, it likely contains some vulnerabilities that have been addressed in newer versions. While the newest package is not necessarily the best, if a package version is too old, it is probably riskier to use.
By monitoring the number of users who are actively using a specific version of a package, developers can assess the overall popularity and reach of that version. If the latest version has a low adoption rate, it might indicate that other developers have tried it and rolled it back after it caused issues.
This might be the most critical metric related to a package’s health. Knowing the percentage of users who have successfully updated from your current version to a specific later version empowers you to make informed decisions about the ease or difficulty of updating without breaking your build. For example, if a version boasts a 90% success rate among adopters, you can be confident that your update will likely go smoothly as well.
Developers rely on the responsiveness of library maintainers to address vulnerabilities and ensure the ongoing security of their projects. An active, well-maintained package provides peace of mind, while an abandoned or deprecated one raises a red flag, signaling potential risks and the need for alternative solutions.
Comprehensive package health information allows you to make informed decisions, reducing the risk for negative side effects in your applications—namely, creating new vulnerability risks or breaking an application’s usability. The more you know about a package, the better decisions you can make about it.
Just like a doctor needs a complete patient history to prescribe the right medication, developers need comprehensive information about a software package before updating it. Think of it this way: a new drug might promise to cure a disease, but what about the side effects? If a patient is already vulnerable, is it worth the risk when older, proven medications exist?
The same applies to software. While a later version might seem best, it could introduce bugs or break compatibility with existing code, disrupting users. It’s a delicate balance between innovation and stability—just like a doctor weighing the best treatment options for their patient.
Comprehensive package health information helps you make informed decisions, minimizing the risk of unintended consequences. Ultimately, the more you know about a package, the better equipped you are to make the right call.
Balancing security with application stability is crucial. Mend.io offers valuable package health data, sourced from its widely-used dependency update tool Mend Renovate. This data helps developers make informed decisions about updating packages and mitigating vulnerabilities, enabling them to strike the balance between security and stability.
To update or not to update? That is the question—and one we can help you answer.
]]>A long cultural history of sci-fi movies and books featuring all-powerful artificial intelligences that do not always have the best interests of humans at heart has scared many people away from using real-life AI technology in any form. Many at Mend.io believe this is a bad idea; workers and companies that shy away from utilizing AI miss out on the impressive capabilities AI tools provide and may be left behind. You may have already heard of large language models like ChatGPT, Claude, and Microsoft Copilot, as well as AI text-to-image programs like DALL-E and Stable Diffusion, and we think you should use them, but with a few security points in mind.
While AI might be more difficult to understand than other, traditional technologies, Maria Korlotian, Director of Development, points out that AI is merely a new tool and “not some mystical force beyond our control.”
According to Maria, “Using AI isn’t fundamentally different from using any other everyday technology – it’s just more advanced. Think of it like a supercharged calculator. We don’t fear calculators because we understand their purpose and limitations. AI is similar, just with a broader scope of applications.”
“Right now, we’re in the early stages of widespread AI adoption, and that naturally causes fear and misunderstanding. But this is normal for any revolutionary technology. The key is to approach AI with curiosity, not fear. It’s hard to use a tool efficiently if you don’t know what to expect from it. We need to understand AI’s capabilities and limitations. As we become more familiar with AI, much of the current uncertainty will fade. It’s just another tool in our technological toolkit – incredibly powerful, yes, but still a tool designed to augment human capabilities, not replace them.”
Others at Mend.io backed Maria on the values of AI. Bar-El Tayouri, Head of Mend AI, described AI as “much less predictable than traditional technology. Unlike regular code, which can be read and understood, an AI model consists of complex weights that make it difficult to predict its reactions and behavior. As a result, AI has much greater power and flexibility but requires careful regulation and guardrails to control its outputs.” Again, while AI might seem frightening to new users, the power and capabilities it brings to the table make it worth learning how to use.
Rhys Arkins, VP Product Management, described AI as being similar to “a new, brilliant colleague with unlimited time to help you be successful, while at other times being the colleague most at risk of completely misunderstanding you.”
Yael Barnoy, General Counsel at Mend.io, agreed and called AI “revolutionary” because the uses for it “are unlimited and no prior knowledge is required to produce excellent content. Even children can speak to an AI model and write their own books complete with pictures by using currently available AI programs. Also, many AI programs are free and available to the general public.”
We asked our expert teammates what advice they would give to friends and family about using AI safely and compiled this list.
AI is an exciting new tool that will help us all reach new heights. As long as you keep these basic cybersecurity principles in mind, you will be able to achieve great things and maintain your safety while using AI.
Share this blog with friends and family who you think could use some tips on using AI safely.
]]>Many teams think of DAST as almost an afterthought, just the dessert you may or may not order just before the software is released. But modern DAST solutions offer powerful insights necessary for a well-balanced security posture.
Before we get into suggestions on why and how to run DAST scans more frequently, let’s talk about why many organizations have limited their DAST scans. Typically, these reasons include:
Despite these challenges, there are ways to integrate DAST more seamlessly into the development pipeline that allows teams to run more frequently and catch issues earlier.
Running DAST scans once a quarter or only before a major release can create blind spots where vulnerabilities are introduced but remain undetected for extended periods. Here’s why running DAST scans more often makes sense:
See how Mend.io and Invicti extend your AppSec coverage from code to runtime.
To make DAST scans a more regular part of your development cycle, it’s essential to address the time, resource, and manual intervention barriers. We recommend the following tactics:
One of the most effective ways to run DAST scans more frequently is to integrate them into your CI/CD pipeline. By automating the process, you eliminate the need for manual scans, allowing DAST to run automatically whenever code is committed or deployed. Use incremental scanning to focus on only the recently changed parts of code, saving time and resources.
Traditional on-premises DAST tools can be resource-intensive, but modern cloud-based DAST tools offer scalability and flexibility. By leveraging these improved solutions, teams can offload the heavy lifting away from local resources.
DAST can be integrated with other testing processes to run in parallel and reduce bottlenecks. For instance, while functional tests run, a DAST scan can simultaneously check for vulnerabilities. When DAST is integrated with other types of testing, teams can get a holistic view of both the functionality and security of their application in one go.
Security is a shared responsibility. By working closely with developers, security teams can ensure that security is considered from the earliest stages of development. Frequent DAST scans help security teams provide more on-time feedback, empowering developers to write code that’s less likely to introduce new security issues in the first place.
Static Application Security Testing (SAST) offers a chance to find insecure coding early, before it goes anywhere near production, but some things still fall through the cracks. DAST helps teams discover vulnerabilities that make it into the build.
Utilizing both SAST and DAST frequently helps security teams stay on top of vulnerabilities and provide developers with crucial and on-time feedback about the security of their code.
With that in mind, Mend.io has partnered with Invicti to provide comprehensive solutions and pair Invicti’s DAST and API Security domains with Mend’s SAST, SCA, and Container Security solutions to give customers full code coverage and continuous security. One login grants access to everything you need from vulnerability scanning, analysis, and tracking. It’s like having a master key to the entire AppSec kingdom.
]]>