Systemic Bugs in Third Party Developed Code

  Luke Rogerson · software development, training, report writing, client communication, SDLC

It’s been a few weeks since we launched our report writing training course, “The Art of Report Writing”, and since then we’ve had some great conversations with folks about their challenges around reporting and client communication. One conversation that stood out concerned a client using a third-party developer for several products. As a testing team assessed each product over the course of a year, it became apparent that not only were the vulnerabilities similar from product to product, but many instances of the same vulnerability were also present, indicating a systemic issue with both the code and the development team’s practices. Worse still, remediation was almost always point-fixing - fixing the instances highlighted by the testing team rather than addressing the root cause of the issue. Additionally, the client was pressuring the testing team to ensure that all instances of a given vulnerability type were discovered during an assessment – something that isn’t always possible or feasible.

This scenario is obviously painful for all parties involved. Clients don’t want to see the same issues appearing again and again; testers may feel like the development team are not learning from their mistakes and may get more pressure from clients when the same issues keep appearing (“why didn’t you find every instance?!”); and the developers might be remediating with a whack-a-mole approach without dealing with the root cause of the problem, for reasons that are not easy to disclose to the other parties…

Obviously, we can only speculate on the reasons behind the pressure from the client and the lack of proper remediation by the supplier. However, let’s discuss the options each party may have in this scenario for pushing security in the right direction.

Actions as a Third-Party Testing Vendor

As a tester you have a duty to inform your client of the risks you have discovered and the steps required to remediate them. When a significant vulnerability class is systemic throughout an application (i.e., it appears many times), it is important to articulate the significance of this. For example, if SQL injection is discovered throughout a large application, it may not be feasible within the testing time to discover every instance – this could mean there are instances remaining that an attacker with more time could discover. Additionally, given that this vulnerability leads to the compromise of data held by the application, you should look to explain the impact of this to the client. For example:

  • What could someone do with any data or access exposed via the vulnerability? What could be the attacker’s next steps?
  • Are there usernames and passwords that could be extracted? And what could that mean for these users? (e.g. replaying usernames and recovered passwords against other websites)
  • What about the potential reputational damage to the client?
  • What compliance implications could there be? (e.g. GDPR)

Such discussion within the finding, as well as in the executive summary of the report, needs to highlight the importance of the discovery and why it should be investigated, not just at a point-fix level, but at a broader level across the entire application – remember that the likelihood of additional instances existing should drive up the risk level.

In terms of recommendations, it is common to describe in general terms how to remediate a discovered issue, potentially with some examples. However, when many instances of the same vulnerability class are present throughout an entire application, this indicates there may be a deeper problem that needs to be reviewed. Along with suggesting that the development team check for additional instances of the vulnerability, it is crucial to highlight that, given the abundance of the vulnerability, a review at the implementation level should be conducted. This should look to confirm what changes could be made to prevent the vulnerability from appearing again. In combination with this advice, here are some other options to consider:

  • If you know what underlying coding framework is involved, can you give a more tailored recommendation?
    • If you have access to code, can you recommend an approach that might fix all the issues at once, or propose a more secure approach for future implementation? For example, if the vulnerability was SQL injection, using a well-established, secure ORM for database interaction would be preferable to raw SQL queries built from user input (see the sketch after this list).
  • Can you suggest preventative actions the development team could take?
    • Static code analysis?
    • Secure coding guideline improvements?
    • Code linting within the development team’s IDE?
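
As a concrete illustration of the SQL injection point above, here is a minimal sketch, assuming a Python codebase and using the standard library’s sqlite3 module; the table and function names are purely illustrative rather than taken from any real engagement. The point-fix is to sanitise one query; the root-cause fix is to move all database access to bound parameters (or an ORM that uses them by default).

```python
# Minimal sketch: point-fix vs root-cause fix for SQL injection, using
# Python's built-in sqlite3 module. Table and function names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_vulnerable(username: str):
    # Vulnerable pattern: user input concatenated straight into the query.
    # An input such as "' OR '1'='1" changes the meaning of the query.
    query = f"SELECT email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_parameterised(username: str):
    # Root-cause fix: bound parameters keep data separate from SQL syntax.
    # A well-established ORM (e.g. SQLAlchemy) gives this separation by default.
    return conn.execute(
        "SELECT email FROM users WHERE username = ?", (username,)
    ).fetchall()

print(find_user_vulnerable("' OR '1'='1"))     # returns every row
print(find_user_parameterised("' OR '1'='1"))  # returns nothing
```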

If this approach to reporting has already been taken and you are once again faced with the same vulnerabilities in a new feature or product, it might be time to have an open and candid discussion with the client. How you approach this may depend on your relationship with them, but if you are spotting trends across multiple tests and products, it may be that the development team is not performing effective remediation or learning from the findings in your reports. If you take this route, ensure you are prepared with some statistics across the assessments performed, as well as some reasoning on why it isn’t always possible for you to find every instance of a vulnerability (time limitations and lack of code access may be contributing factors, although honestly it can be hard to bring this up without sounding like you’re making excuses, so tread carefully). If you can demonstrate the same vulnerability class being present in new products or features, this should be a key discussion point - it shows that lessons are not being learned. However, if you’ve articulated the concern with all the gravity you can muster and the client does not seem to care, that might be all you can do. It’s best to try though - you’re giving them value whether they want to pay attention or not. Like suppliers (which we’ll come to later in this post), clients may also want to take the path of least resistance to get their products out the door, and if this means settling for quick fixes to speed up delivery, they may want the supplier to take that option. We can only highlight the risk – it is theirs to manage.
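
On the point about arriving with statistics: a short sketch along the lines below can pull the trend data together, assuming findings have been exported to a CSV with hypothetical product and vulnerability_class columns (adjust to whatever your reporting platform actually produces).

```python
# Minimal sketch: tally vulnerability classes across assessments to show which
# ones look systemic. The CSV layout here is a hypothetical export format.
import csv
from collections import Counter

classes_per_product: dict[str, Counter] = {}

with open("findings_export.csv", newline="", encoding="utf-8") as handle:
    for row in csv.DictReader(handle):
        counter = classes_per_product.setdefault(row["product"], Counter())
        counter[row["vulnerability_class"]] += 1

# Classes seen in more than one product are strong candidates for the
# "this looks systemic" conversation with the client.
products_per_class = Counter()
for counter in classes_per_product.values():
    products_per_class.update(counter.keys())

for vuln_class, product_count in products_per_class.most_common():
    if product_count > 1:
        print(f"{vuln_class}: present in {product_count} products")
```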

Actions as a Client

It can be very painful to see the same issues reported again and again by a testing team. It can certainly seem like instances are being missed if new ones appear in future assessments; however, if the instances are numerous and start appearing again when new functionality is developed, it might be time to think about different approaches you could take rather than relying on endless pentesting.

It is well known that the cost of fixing a vulnerability increases the closer to release it is discovered. If your supplier is repeating mistakes and potentially not performing basic secure development practices, they may be adding significant development cost to your product or service. These costs are not always obvious: asking a testing vendor to perform remediation verification (also known as re-testing) to see whether a supplier has fixed issues that have appeared for the third time has a clear price tag attached. The cost of the delays this introduces to releasing revenue-generating features or products, however, may be far less visible.

Development teams with a well-oiled pipeline and instilled security practices will often end up with fewer vulnerabilities by the time third-party pentesting comes around, and fewer still if good threat modelling and internal review have taken place before the third-party vendor is engaged.

So, what can be done? I guess it depends. Development teams are often stretched, and if they are working on other things for you already, they are going to have to prioritise. That said, if systemic critical findings are appearing consistently, there are deeper problems that need to be reviewed. There are some things which can be done to investigate the cause of repeat issues (which may or may not work depending on contractual agreements and your relationship with your supplier):

  • If not done already, see if it is possible for the supplier to provide your testing team access to the code to perform a code review. Going into this with the objective of finding out why certain vulnerability classes keep being found is a good goal, alongside a general behind-the-scenes review. Open-box assessments are more valuable because you may discover vulnerabilities that are difficult or impossible to identify in a closed-box assessment, and they can be more cost-effective at discovering certain vulnerability classes.
  • Discuss with your supplier what their development pipeline and practices are. Consider bringing in a trusted third party to sit on calls or review what is sent to you. Understanding any gaps here may shed some light on what is contributing to the problem. Equally, if you get responses that are a little rose-tinted, that may also be cause for concern. Some initial questions might be:
    • Do they perform threat modelling? Can the resulting threat models be shared?
    • Do they perform static code analysis? If so, what do they use, and when is it run? (see the sketch after this list for the kind of answer you might hope to hear)
    • Do they perform peer review on code changes? What’s the process?
    • Do they have any internal secure coding standards, and can these be shared?
    • Do they perform any of their own internal security testing, either manually or with dynamic testing tools?
  • It might also be a good idea to simply ask your supplier if they understood what needed to be done to remedy the findings and ensure they do not appear again. If not, there may be some improvements your pentest provider needs to implement in their reports, or perhaps in the communications between parties.
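
For the static analysis question in particular, it can help to know roughly what a good answer looks like. The sketch below is one plausible shape for it, assuming a Python codebase: a small CI gate that runs Bandit (an open-source static analysis tool for Python code) and fails the build on high-severity findings. The src path and the severity threshold are illustrative assumptions, not a prescription.

```python
# Minimal sketch of a CI gate: run Bandit over the source tree and fail the
# build if any high-severity findings are reported. Assumes Bandit is installed
# and the code lives under "src" (both assumptions for illustration).
import json
import subprocess
import sys

def run_bandit(path: str = "src") -> int:
    completed = subprocess.run(
        ["bandit", "-r", path, "-f", "json"],
        capture_output=True,
        text=True,
    )
    report = json.loads(completed.stdout or "{}")
    high = [
        result
        for result in report.get("results", [])
        if result.get("issue_severity") == "HIGH"
    ]
    for result in high:
        print(f"{result['filename']}:{result['line_number']}: {result['issue_text']}")
    return 1 if high else 0

if __name__ == "__main__":
    sys.exit(run_bandit())
```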

I don’t suggest the above to back the supplier into a corner, but to discover the source of the problem and give you enough information to decide where you want your supplier to spend their time. That said, such discussions may be eye-opening enough that more drastic measures need to be taken. As a client being provided code by a third-party supplier, you place your trust in them to deliver working and secure code. If you find that the latter is not getting the focus it should, and that this is leading to significant security risk, you may need to evaluate which options are necessary to ensure it is remedied.

Actions as a Supplier

Many development teams are significantly pressured to get code out the door quickly. It’s no secret that security has often taken a back seat for development teams for many years, resulting in vulnerabilities being present by the time a third-party pentest is required or simply performed to check the state of the solution.

If you’re a supplier being asked to remediate the same issues repeatedly, it might be time to ask why that might be. In the same way a client might ask you about your practices you can also ask yourself the same questions:

  • Could we be creating threat models to ensure that vulnerability classes are handled correctly? (especially for new features that could be affected by commonly reported vulnerability classes)
  • Can we perform static code analysis automatically to ensure vulnerabilities are prevented from reaching production code?
    • Can we implement specific rules to match patterns that have been discovered by third-party pentesting teams, both to find the remaining instances post-assessment and to catch them during development if they reappear? (see the sketch after this list)
  • Should we improve our peer review process with consideration of certain vulnerability classes that have appeared during pentests?
  • Could we improve our internal secure coding standards and give clear recommendations that might prevent common insecure coding patterns from appearing?
  • Is there any internal testing (manual or dynamic) that could be performed to check for vulnerabilities during development in our test environments?
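
On the point above about rules matching patterns a pentest has already flagged, here is a minimal sketch of what such a rule could look like, assuming a Python codebase: a small AST-based check that flags execute() calls whose query is assembled with f-strings or string expressions instead of bound parameters. The class and function names are illustrative, and in practice a rules engine such as Semgrep would let you express the same pattern without writing the tree walker yourself.

```python
# Minimal sketch of a project-specific static analysis rule: flag
# cursor.execute()-style calls whose first argument is built dynamically
# (f-string, concatenation or % formatting) rather than using placeholders.
import ast
import sys

class DynamicSQLVisitor(ast.NodeVisitor):
    """Collects calls such as cursor.execute(f"... {user_input} ...")."""

    def __init__(self) -> None:
        self.findings: list[tuple[int, str]] = []

    def visit_Call(self, node: ast.Call) -> None:
        is_execute = (
            isinstance(node.func, ast.Attribute)
            and node.func.attr in {"execute", "executemany"}
        )
        if is_execute and node.args:
            first = node.args[0]
            # JoinedStr is an f-string; BinOp covers '+' concatenation and
            # '%' formatting, both of which suggest dynamically built SQL.
            if isinstance(first, (ast.JoinedStr, ast.BinOp)):
                self.findings.append((node.lineno, ast.unparse(node)[:80]))
        self.generic_visit(node)

def check_file(path: str) -> list[tuple[int, str]]:
    with open(path, encoding="utf-8") as handle:
        tree = ast.parse(handle.read(), filename=path)
    visitor = DynamicSQLVisitor()
    visitor.visit(tree)
    return visitor.findings

if __name__ == "__main__":
    exit_code = 0
    for source_path in sys.argv[1:]:
        for lineno, snippet in check_file(source_path):
            print(f"{source_path}:{lineno}: possible dynamic SQL: {snippet}")
            exit_code = 1
    sys.exit(exit_code)
```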

It might be that the root-cause remediation for a systemic vulnerability isn’t straightforward, and there may be some reluctance to share this with your client. I’ve certainly seen cases where an in-house framework has been constructed in a way that resulted in widespread vulnerabilities, or where a team is stuck on an end-of-life web framework that does things one way while the latest version does them very differently. How you handle these types of problems is up to you, but it is important to weigh the effort spent maintaining the status quo against the work required to remediate properly.

Finally, I think it’s fair to highlight that not all developers will have the same security awareness and experience. It is sometimes forgotten that pentesters are experts in cyber security and the development teams they are testing are likely not. If you feel like you need additional help to remediate vulnerabilities, see if you can reach out to the vendor that performed the assessments. Additionally, providing them access to the code may allow them to make useful suggestions for remediating findings that might not be possible without seeing what’s under the hood of your application.

Summary

When a client is not developing the code for their products and services, remediation can be very challenging for all parties involved in a pentest assessment. This is especially true of scenarios where the same issues are being found again and again and the client is pressured to release a product, the pentest vendor is pressured to find all instances of the vulnerabilities, and the supplier is pressured to quickly fix everything.

In these cases, it’s important to have open and honest communication between a client and their testing vendor - if there’s a systemic issue within one product and it’s re-appearing, or appearing in other products developed by the same supplier, this may represent a larger problem that needs fixing. Clients should look to open a dialogue with their supplier, not necessarily to scold, but to push forward the security of their platform. Suppliers should also take the opportunity to self-assess in these situations - are you performing remediation the correct way, or are you simply applying a quick fix to a much larger problem? Do you need help from the testing vendor to perform effective remediation? What processes and procedures could be put in place to improve the situation? It may also be the case that the supplier was not provided with enough information to remediate the issue correctly, either because of limited information passed on by the client or pentest vendor, or because the report lacked sufficient detail. By working together to address these challenges, all parties can contribute to a more secure and reliable product.

If you’re interested in having a chat about any of the above, do reach out to us at [email protected]

To find out more about how real-life scenarios such as these can be tackled with effective report writing and client communication, check out our report writing training course on the Zero-Point Security training website here!

Luke has over ten years of experience in cyber security, specialising in technical due diligence for mergers and acquisitions. His work includes leading teams through complex projects as well as direct involvement in code reviews, web application assessments, and threat modelling.
