ThreadFix

Applied ThreadFix: Automated Vulnerability Exception Reporting


One of the most valuable things about ThreadFix is that it centralizes the results of all your testing, assurance, and remediation activities, so you no longer have separate silos of data. That matters from a reporting standpoint: if you need to, you can drill down into specific parts of your program or the results of different tools or testing activities, but by default you can look across all of them to understand your program as a whole. Combined with ThreadFix’s extensive APIs, this lets you automate a lot of analysis and reporting that you would otherwise have to do through laborious manual tasks. No one ever feels like they have too many people on their security team or that their security analysts don’t have enough to do, so ThreadFix helps to tip the scales back in your favor.

Governance, Risk, and Compliance (GRC) is a critical function, and in many organizations it lives a lot closer to actual decision-making than application security teams do. Reporting requirements for GRC are therefore pretty important. That doesn’t mean, however, that you want security analysts spending their time pulling together reports that meet whatever criteria your GRC group has determined it needs.

In one ThreadFix deployment we saw a scenario where, every week, the application security vulnerability management team needed to submit a report of all the vulnerabilities that met a set of criteria for vulnerabilities requiring exceptions because they were out of line with the organization’s remediation policies. For the sake of this blog post, let’s say those criteria were: for all applications subject to PCI compliance that are public facing and where the testing was done in production, the exception report had to contain all Critical or High vulnerabilities that were more than 30 days old.
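To make that concrete, you could capture those policy criteria as a handful of parameters, something along these lines. The tag names here are purely illustrative and would need to match the tags actually configured in your own ThreadFix instance:

 # Hypothetical parameters capturing the exception criteria above
 tags = 'PCI,Public-Facing,Production'   # application tags in scope
 severities = 'Critical,High'            # severities requiring an exception
 aging = 30                              # only vulnerabilities more than 30 days old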

One way to address this without ThreadFix would be to:

  • Review your SAST platform to find all the vulnerabilities that match the criteria and copy/paste them into a spreadsheet
  • Review your DAST platform to find all the vulnerabilities that match the criteria and copy/paste them into a spreadsheet
  • Review the results coming from your 3rd party pen test team to find all the vulnerabilities that match the criteria and copy/paste them into a spreadsheet
  • Oh wait – did you de-dupe those vulnerabilities to find any overlap? Never mind – hopefully no one will notice if there are duplicates in there (a sketch of what proper de-duplication involves follows this list)
  • Email your Excel spreadsheet down the line and congratulate yourself on a job … done
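That de-duplication step is easy to get wrong by hand. As a rough sketch of what it involves, assuming each tool’s export has already been normalized into a CSV with columns like Application, Vulnerability Type, and Location (the file and column names here are hypothetical), it amounts to something like:

 import csv

 seen = set()
 deduped = []
 for filename in ['sast_export.csv', 'dast_export.csv', 'pentest_export.csv']:
     with open(filename, newline='') as f:
         for row in csv.DictReader(f):
             # Treat findings as duplicates when application, vulnerability
             # type, and location all match across tools
             key = (row['Application'], row['Vulnerability Type'], row['Location'])
             if key not in seen:
                 seen.add(key)
                 deduped.append(row)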

With ThreadFix you have a couple of options. An easier way that still requires some manual work would be to:

  • Use the “Vulnerability Search” report to get a specific view into the current state of vulnerabilities in your ThreadFix system
  • Use the Tagging filters to only view vulnerabilities from applications subject to PCI compliance, that are public-facing, where the testing was done in production
  • Use the Vulnerability Detail filters to only look at vulnerabilities with Critical or High severity
  • Use the Aging filters to only view vulnerabilities that are more than 30 days old
  • From there, export the results as a CSV file
  • To save time for the following week, save the filter and re-run it as necessary

That’s a lot better than trying to track this data across multiple systems using different formats, where you might end up with duplicated data. But wait, there’s more: you can also run the same type of filtering via the ThreadFix API. In this case you can:

  • Set up a simple script that will query your ThreadFix instance to pull that same list of vulnerabilities
  • Take the JSON returned from the API calls and create a CSV file
  • Email that on down the line to the GRC team (a sketch of this step follows this list)
  • For even more automation, run this script via a cron job
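As a rough sketch of those last two steps, here is one way the emailing and scheduling might look. Everything in it is a placeholder, including the addresses, SMTP host, file name, and crontab paths, so treat it as a starting point rather than a finished integration:

 import smtplib
 from email.message import EmailMessage

 # Placeholder addresses, host, and file name -- adjust for your environment
 msg = EmailMessage()
 msg['Subject'] = 'Weekly vulnerability exception report'
 msg['From'] = 'appsec@example.com'
 msg['To'] = 'grc@example.com'
 msg.set_content('Attached is this week\'s exception report from ThreadFix.')

 with open('exception_report.csv', 'rb') as f:
     msg.add_attachment(f.read(), maintype='text', subtype='csv',
                        filename='exception_report.csv')

 with smtplib.SMTP('smtp.example.com') as smtp:
     smtp.send_message(msg)

 # To run the whole pipeline weekly, a crontab entry along these lines would do:
 # 0 6 * * 1 /usr/bin/python3 /opt/scripts/generate_exception_report.py ... && \
 #           /usr/bin/python3 /opt/scripts/email_exception_report.py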

What are you going to do with all your newfound free time?

To see an example of how this might work, you can install this ThreadFix Python API client (shortly this should be in PyPI to update/replace some older ThreadFix Python packages in there) and then run this example script:

#!/usr/bin/python3

import csv
import sys
from ThreadFixProApi import threadfixpro


def make_tags_list(tf_con, tags):
    # Look up each tag name and collect the corresponding ThreadFix tag IDs
    ret_val = []

    my_tags = tags.split(',')
    for tag_name in my_tags:
        tag_id = tf_con.TagsAPI.get_tag_by_name(tag_name)
        if tag_id.success:
            ret_val.append(tag_id.data[0]['id'])

    return ret_val


def make_severities_list(severities):
    # Map severity names to the numeric IDs ThreadFix uses for generic severities
    ret_val = []

    severity_list = severities.split(',')
    for severity in severity_list:
        if severity == 'Critical':
            ret_val.append(5)
        elif severity == 'High':
            ret_val.append(4)
        elif severity == 'Medium':
            ret_val.append(3)
        elif severity == 'Low':
            ret_val.append(2)
        elif severity == 'Info':
            ret_val.append(1)
        else:
            print('Got unknown severity: ' + severity)

    return ret_val


def create_vuln_location(vulnerability):
    # Build a human-readable location string from the file path, path, and parameter
    file_path = vulnerability['calculatedFilePath']
    path = vulnerability['path']
    parameter = vulnerability['parameter']

    return 'File Path: ' + str(file_path) + ' Path: ' + str(path) + ' Parameter: ' + str(parameter)


def create_vuln_record(vulnerability):
    # Turn a vulnerability returned by the API into a row for the CSV report
    vuln_id = vulnerability['id']
    application = vulnerability['app']['name']
    generic_vulnerability = vulnerability['genericVulnerability']
    if generic_vulnerability is not None:
        vuln_type = generic_vulnerability['name']
    else:
        vuln_type = 'Unspecified'
    severity = vulnerability['genericSeverity']['name']
    vuln_location = create_vuln_location(vulnerability)

    return [vuln_id, application, vuln_type, severity, vuln_location]


if len(sys.argv) < 7:
    print('Usage: generate_exception_report.py <tf_server> <api_key> <outfile> <tags> <severities> <aging>')
    sys.exit(2)

tf_server = sys.argv[1]
api_key = sys.argv[2]
outfile = sys.argv[3]
tags = sys.argv[4]
severities = sys.argv[5]
aging = sys.argv[6]

print('Using tf_server: ' + tf_server)
print('Using api_key: ' + api_key)
print('Using outfile: ' + outfile)
print('Using tags: ' + tags)
print('Using severities: ' + severities)
print('Using aging: ' + aging)

# Get our connection to the ThreadFix server
tfp = threadfixpro.ThreadFixProAPI(tf_server, api_key, verify_ssl=False)

# Get our output CSV file and write the header
csvoutfile = open(outfile, 'w', newline='')
csvout = csv.writer(csvoutfile)
header_list = ['Vulnerability ID', 'Application', 'Vulnerability Type', 'Severity', 'Location']
csvout.writerow(header_list)

# Look up the tags and severities from the command line
tags_list = make_tags_list(tfp, tags)
severities_list = make_severities_list(severities)

# Page through the search results 100 vulnerabilities at a time
page = 0
num_vulns_in_batch = -1

while num_vulns_in_batch != 0:
    vulnerabilities = tfp.VulnerabilitiesAPI.vulnerability_search(
        generic_severities=severities_list, tags=tags_list, days_old=aging,
        number_vulnerabilities=100, page=page, show_open=True, show_not_false_positive=True)
    if vulnerabilities.success:
        num_vulns_in_batch = len(vulnerabilities.data)
        print('Found ' + str(num_vulns_in_batch) + ' vulnerabilities')
        for vulnerability in vulnerabilities.data:
            print(str(vulnerability))
            output_line_list = create_vuln_record(vulnerability)
            csvout.writerow(output_line_list)
    else:
        # Stop paging on an API error rather than looping forever
        print("ERROR: {}".format(vulnerabilities.message))
        break
    page = page + 1

csvoutfile.close()



You can also find a copy of this script in the ThreadFix example GitHub repository in the automated_exception_reporting/ subdirectory.
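As a usage example, an invocation matching the criteria described earlier might look like the following, where the server URL, API key variable, and tag names are placeholders for your own environment:

 python3 generate_exception_report.py https://threadfix.example.com $TF_API_KEY exception_report.csv "PCI,Public-Facing,Production" "Critical,High" 30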

API automation was exactly how the ThreadFix users I mentioned before solved this problem, and it freed up an afternoon every week for one of their analysts going forward. Over time those savings add up and help you make sure that your analysts are spending their time on the most valuable activities possible – rather than rote data munging and reporting.

This is just one example of how centralizing data in ThreadFix makes reporting easier and more flexible, and of how the ThreadFix API gives you the power to script data management and reporting operations that you may currently be handling manually. Contact us for help automating more of your application security and vulnerability management program.