SATEC First Draft


If you are looking for the latest version of SATEC, please visit http://projects.webappsec.org/w/page/60671848/SATEC%20Second%20Draft

 

 

First Draft For SATEC 1.0 - Work In Progress (please don't link to this page)

 

4. Product Signatures Update

Any static analysis tool comes bundled with rule packs for the technologies it supports. These rule packs contain the product signatures and rules that the static code analyzer follows during code analysis. When choosing a static analysis tool, one should consider the following aspects of how the vendor handles product signature updates.

4.1 Frequency of signature update:

4.1.1 What is the frequency of signature updates provided by the vendor?

Does the product support signature updates both in real time and on a scheduled basis? How many times per year does the vendor release new product signature updates?

4.1.2 Does the product have a notification feature that alerts the user to availability of new product signatures?

4.1.3 Does the product allow the user to turn automated updates on or off?

The product should allow the user to choose between automated updates and manually synchronizing new updates when desired.

4.1.4 Does the user have the option to roll back an updated signature? If so, how many previous versions are retained? (Is this really important enough to call out?)

4.1.5 Can an administrator monitor signature updates to see or collect their status?

4.1.6 Can an administrator download signature updates and then distribute them to the user community in a corporate environment, rather than having users download them directly from the vendor's web site or support site?

4.1.7 Can signature files be copied from one system to another via USB drive or CD for standalone systems or servers that are not connected to the Internet?

 

4.2 Relevance of signature to evolving threats:

New threats and zero-day vulnerabilities related to the technologies served by static analysis tools surface regularly. One should gather information on how the tool's product signatures maintain their relevance. Factors to consider include the following:

4.2.1 As technologies supported by the tool progress and new versions are released, new security vulnerabilities and trends emerge. Are the product signatures being kept up to date along with relevant documentation?

4.2.2 Are the product signatures updated quickly whenever a new threat or zero-day vulnerability emerges?

4.2.3 What research is being done to keep on top of new threats and attack vectors?

 

4.3 User signature feedback

Users might find issues with the signatures provided by the vendor, or might want to create a signature for a previously unknown or custom threat.

4.3.1 Do users have a feedback mechanism to tell the vendor that a provided signature is flawed or has issues?

4.3.2 Can the user submit a signature for a newly identified threat as part of this mechanism?

 

 

5. Reporting Capabilities

The ability to communicate clear and actionable analysis results to project stakeholders is as important as the analysis process itself. That is why reporting functionality is often considered a critical factor when choosing a static analysis solution.


Depending on the nature of the SAST offering, reports can usually be viewed in several ways (e.g., locally within the IDE or a standalone client, or centrally over a Web interface).

Organizations should consider the typical usage model and the user types of the SAST tool before making a decision.

 

5.1 Support for Role-based Reports: 

 

Security reports are often considered confidential, and access to them should be restricted based on user role. Organizations should consider the following criteria:

 

- Assuming reports are accessed over a Web interface: Does the product offer role-based access control (RBAC) to reports?

- Does the product offer user permissions management? If so, does it support integration with popular enterprise directory services such as LDAP?

 

- Does the tool support different levels of reports? For example, can managers view aggregated reports of their employees? Can users be limited to viewing only their own scan results?

- Assuming locally generated reports (standalone or IDE): What authentication model governs report generation? Are users authenticated when accessing the product?

 

 

5.2 Finding-level reporting information: 

- What granularity is available when generating a report?

- Can results of a scan of a new version of the application be incrementally merged with the results of the previous scan?
- Can scans and reports be generated incrementally, or does each configuration change require a re-analysis of the application?
- Can reports be customized before and after generation?
- Does the tool provide actionable results that developers will find easy to remediate? What capabilities exist to ensure that each finding is communicated clearly to developers (source and sink information, graphical data-flow representation, file name and line numbers, secure coding guidelines and recommendations, automatic fix capability, etc.)?

 

- Does the tool offer a way to dismiss or hide noisy or false positive results? Can this capability persist across scans? Can it persist across users in an organization?

 

- Can reports be generated based on common industry standards such as the OWASP Top 10, the WASC Threat Classification, or the SANS Top 25? Can organizations map security issues to internal policies and generate reports based on such policies?
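As an illustration, a reporting layer that supports such standards typically maintains a mapping from engine rule identifiers to standard categories. A minimal Java sketch, assuming CWE-style rule IDs and OWASP Top 10 (2010) category names; the class and mapping table are hypothetical, not any vendor's API:

```java
import java.util.Map;

// Illustrative sketch of mapping rule IDs (here, CWE identifiers) onto
// OWASP Top 10 (2010) categories for standards-based reporting.
public class StandardsMapper {
    static final Map<String, String> CWE_TO_OWASP = Map.of(
        "CWE-89",  "A1 - Injection",
        "CWE-79",  "A2 - Cross-Site Scripting (XSS)",
        "CWE-352", "A5 - Cross-Site Request Forgery (CSRF)",
        "CWE-601", "A10 - Unvalidated Redirects and Forwards");

    static String owaspCategory(String cweId) {
        // Unmapped rules fall into a catch-all bucket rather than being dropped.
        return CWE_TO_OWASP.getOrDefault(cweId, "Uncategorized");
    }

    public static void main(String[] args) {
        System.out.println(owaspCategory("CWE-89")); // A1 - Injection
    }
}
```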

 

 
5.3 Support for different report formats:

- What report formats does the tool support (XML, HTML, PDF, Word, CSV, Excel, etc.)?
- Can reports be exported in a format compatible with other relevant tools, for example popular defect tracking systems, SIEM products, or other SAST tools?
 

 

5.4 Support for template based reports: 
- Does the tool offer the creation of template-based reports?
- Can reports be created in different languages?
- Can users insert their own custom header and footer?
- Can users control the look and feel of a report?
- Can users define organization-specific issue descriptions, remediation advice, code samples, etc.?

- Does the tool offer a way to completely control the report using a popular and common templating capability such as MS Word templates and smart tags?

 

5.5 DAST/SAST Correlation Reports 

The ability to correlate SAST and DAST results offers several benefits for organizations: for example, the ability to prioritize issues that carry both DAST and SAST information, and to provide developers with more actionable evidence, such as an actual HTTP exploit request that proves an issue is real and allows it to be reproduced when needed.

 

- Does the tool support results correlation with DAST products? Which products does the tool correlate issues with?

- What programming languages are supported for results correlation?

- Can the tool handle results correlation for applications built using popular programming frameworks such as Struts, Spring MVC, and JSF?

 

 

6. Triage and Remediation Support

6.1 Findings Data
This criterion covers the information provided around a finding (explanation of the vulnerability, recommendations, accuracy level) and its relevance to the actual finding.

 

6.1.1. Findings - Meta Information
This section represents the meta-information contained within a set of findings and/or an individual finding.

 

6.1.1.1.  Does the findings data separate out findings from the rules used to discover the finding?
Findings may be used in a variety of contexts, even external to the tool that created them, so separation from the rules is essential. The rules used to create findings may also evolve over time.

 

6.1.1.2. Does the findings data separate out findings from the rules used to classify the finding?

How a particular finding is classified depends heavily not just on what an initial scan shows, but on what an auditor deems to be the most significant sinks; on information-protection policies around authentication, encryption, and audit; and on the nature of the application itself (e.g., Internet facing, contains financial information, contains personally identifiable information).

 

6.1.1.4. Does the findings file contain metadata allowing a user to determine the relevancy of a particular finding?
A finding should allow an assessor to prioritize its importance relative to other findings.

 

6.1.2 Findings – Language Support
Findings should ideally represent the richness of the target language under assessment. This allows for higher-quality remediation in a timely manner if appropriately aligned.

 

6.1.2.1. Does the finding have the ability to point to a particular line of code if it is a compiled application (e.g., Java)?
The ability to point a developer to a particular line of code aids in productivity.

 

6.1.2.2. Does the finding have the ability to point to a particular object if the language does not use line numbers?
Some languages do not use line numbers. These include languages such as Smalltalk and object-oriented approaches such as executable UML.

 

6.1.3. Findings – Externalization and File Support
The ability for a finding to be externalized so that it can be consumed by third-party tools can provide additional value in many settings. Minimally, this is useful when you want to exchange findings with business partners and third-party organizations.

 

6.1.3.1. Is there a standalone utility that allows you to publish an external findings file to a repository?
Standalone utilities are best since they can be invoked via scheduled batch scripts without requiring user intervention. This is useful when you want to import an external finding into your repository.

 

6.1.3.2. Is there a standalone utility that allows you to create a findings file from a repository?
Standalone utilities are best since they can be invoked via scheduled batch scripts without requiring user intervention. This is useful when you want to export an external finding from your repository.

 

6.1.3.3. Do the vendors provide published documentation outlining the file format for each of the mentioned schemas?
Sometimes a third-party tool may use a format that is not known to other tools. In that scenario, you may need to develop custom conversion routines to enable support.

 

6.1.4. Findings – Environment
Understanding the context of a particular set of findings aids in understanding the path to remediation.

 

6.1.4.1. Does a finding indicate when the scan was run, along with metadata about the target (e.g., build, version, lines of code)?
It is vital to understand when a particular set of findings was created. Otherwise, developers may spend time attempting to remediate based on obsolete information.

 

6.1.4.2. In scenarios where a scan is prematurely terminated (e.g., out of memory), do the findings indicate the level of completeness and what remainder needs to be re-scanned?
There are many scenarios in which an incomplete set of findings could be created, for example when running a scan of a very large code base on a 32-bit machine.

 

6.1.4.3. When a finding is created, is it contextually aware of the particular database implementation, or does it flag the issue with only general information?
Consider the scenario of using a JDBC connection to LDAP. In this scenario, if the database implementation is known, the tool should not create false positives related to SQL injection.

 

6.1.5. Findings – Remediation
With a set of findings, a developer will ultimately need to remediate their applications. The tool should aid in productivity by providing proper contextual remediation guidance.

 

6.1.5.1. Please rate the detail of supplemental information provided for each finding. For example, does it provide a generic example of input validation? Does it occur in the target language? Does it go deep enough to specify exact code changes on a line-by-line basis that a developer would need to change?
The more detail provided for a specific finding, the easier it is for an otherwise unskilled developer to remediate it.

 

6.1.5.2 Are the remediation recommendations for input validation based solely on looking for "if" logic, or can the tool also make recommendations around regular expressions?
It is vital that proper input validation occurs and that shallow attempts to simply make findings go away are avoided.
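To make the distinction concrete, here is a minimal Java sketch contrasting a shallow "if" check with whitelist validation via a regular expression; the method names, the field being validated, and the pattern are illustrative assumptions:

```java
import java.util.regex.Pattern;

// Shallow blacklist check vs. whitelist validation with a regular expression.
public class InputValidation {
    // Shallow: rejects only one known-bad character; everything else passes.
    static boolean shallowCheck(String userId) {
        return userId != null && !userId.contains("'");
    }

    // Whitelist: accepts only 3-12 alphanumeric characters, rejects the rest.
    static final Pattern USER_ID = Pattern.compile("^[A-Za-z0-9]{3,12}$");
    static boolean whitelistCheck(String userId) {
        return userId != null && USER_ID.matcher(userId).matches();
    }

    public static void main(String[] args) {
        String payload = "alice; DROP TABLE users--";
        System.out.println(shallowCheck(payload));   // true  - attack slips through
        System.out.println(whitelistCheck(payload)); // false - attack rejected
    }
}
```

A tool whose advice stops at the shallow form may make a finding disappear without making the code safe.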

 

6.1.5.3. Does a finding have awareness of depth of validation?
Consider a scenario where a method receives a java.lang.Object. Is an instanceof check sufficient? If, after casting, the value is a StringBuffer, can the tool check whether it is empty? If it is a String, can the tool check whether it matches a regular expression, and so on?
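The progression of validation depth described above can be sketched in Java; the validation target (a five-digit ZIP code) and all names are illustrative assumptions:

```java
import java.util.regex.Pattern;

// Each level adds depth: type, safe cast, content presence, then format.
public class DepthCheck {
    static final Pattern ZIP = Pattern.compile("^\\d{5}$");

    static boolean deepValidate(Object input) {
        if (!(input instanceof String)) return false; // level 1: type check
        String s = (String) input;                    // level 2: safe cast
        if (s.isEmpty()) return false;                // level 3: content present
        return ZIP.matcher(s).matches();              // level 4: format check
    }

    public static void main(String[] args) {
        System.out.println(deepValidate("90210"));           // true
        System.out.println(deepValidate(""));                // false - empty
        System.out.println(deepValidate(new StringBuffer())); // false - wrong type
    }
}
```

A finding aware of validation depth can distinguish code that stops at level 1 from code that reaches level 4.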

 

6.1.5.4. Is there a way to tag a finding as irrelevant in a given configuration?
For example, SQL injection may not be relevant in a NoSQL implementation. Findings related to authentication may not be relevant if you are using a web access management product[AW1] .

 

6.1.5.5. For a given finding, does it provide a Flow graph that visualizes going from source to sink?
The ability of the tool to indicate the flow allows the developer to understand how an issue can be exploited, and provides opportunities to remediate not just at the source level but in multiple places throughout the application.
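The kind of path such a flow graph visualizes can be sketched as follows; the method names are illustrative, and no database call is actually made:

```java
// Tainted input enters at a source, travels through a propagation step,
// and reaches a SQL sink - the path a source-to-sink flow graph draws.
public class TaintFlow {
    static String source() {                 // SOURCE: untrusted input arrives here
        return "1 OR 1=1";
    }

    static String propagate(String id) {     // PROPAGATION: taint carried along
        return id.trim();
    }

    static String sink(String id) {          // SINK: tainted data lands in SQL text
        return "SELECT * FROM accounts WHERE id = " + id;
    }

    public static void main(String[] args) {
        String query = sink(propagate(source()));
        System.out.println(query); // SELECT * FROM accounts WHERE id = 1 OR 1=1
    }
}
```

A flow graph over these three frames shows the developer that sanitizing at either the source or the propagation step would break the path.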

 

6.1.6. Findings – Classification
The ability to roll up findings allows you to find patterns that otherwise may not be visible.

 

6.1.6.1. For a given finding, can it map the weakness to the Common Weakness Enumeration?
Different organizations may align to different taxonomies.

 

6.1.6.2. For a given finding, can it map the weakness to the OWASP Top Ten?
Different organizations may align to different taxonomies.

 

6.1.6.3 For a given finding, can it map the weakness to user/company-defined hierarchies?
Different organizations may align to different taxonomies. This is especially useful if you are classifying findings not just in a security sense, but also in terms of quality, disaster recovery and other considerations.

 

6.1.6.4. Do the findings include predicate analysis to prevent false positives?
Over time, developers may grow tired of chasing down findings that prove to be false positives. The ability to indicate how likely a finding is to be real aids its credibility amongst the development community.

 

6.1.6.5. Does the tool indicate a confidence level for a particular finding?
It is important that a finding not merely be marked as a Boolean true or false positive, but be placed along a spectrum of possibilities.

 

6.1.6.6. How are findings grouped together in reports? Does it treat each finding as a variant of the same logical issue?
Having the ability to understand a particular finding in a non-redundant way reduces the clutter in understanding what needs to be remediated.

 

6.1.6.7 Do users manage individual findings, or are there higher-level constructs such as issues, where once all variants are fixed, all findings are also considered fixed?
This reduces the time required in order to mark a finding as remediated.

 

6.1.6.8. Does it have the ability to rollup findings based on language-specific constructs?
For example, grouping all findings that belong to a given Java package helps target remediation to certain component owners, or alternatively allows rollup when the component/library is developed by a third party.
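A minimal Java sketch of such a rollup, assuming a simplified finding shape (fully qualified class name plus rule ID); the names are hypothetical:

```java
import java.util.*;
import java.util.stream.Collectors;

// Roll up findings by Java package so counts can be assigned to owners.
public class PackageRollup {
    record Finding(String className, String ruleId) {
        String pkg() {
            int i = className.lastIndexOf('.');
            return i < 0 ? "(default)" : className.substring(0, i);
        }
    }

    static Map<String, Long> rollup(List<Finding> findings) {
        return findings.stream()
                .collect(Collectors.groupingBy(Finding::pkg, Collectors.counting()));
    }

    public static void main(String[] args) {
        List<Finding> f = List.of(
            new Finding("com.acme.web.LoginServlet", "XSS"),
            new Finding("com.acme.web.SearchServlet", "SQLi"),
            new Finding("org.thirdparty.util.Parser", "PathTraversal"));
        System.out.println(rollup(f)); // e.g. {com.acme.web=2, org.thirdparty.util=1}
    }
}
```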

 

6.1.7. Findings – Management
The management of findings and their associated lifecycle may require integrating and reconciling against other tools and approaches in the enterprise.

 

6.1.7.1 An enterprise may also want to perform manual code reviews while retaining a holistic way of managing findings. Does the tool provide a utility that allows entry of manual findings?
An enterprise may sometimes use components where source code is unavailable or where the language is not supported by the tool. This does not mean that an enterprise would not desire a consistent approach to managing findings.

 

6.1.8 Findings – Third Party Support
The ability to have findings consumed by third-party tools can provide additional insights.

 

6.1.8.1. Are there adapters/connectors that allow third-party visualization tools such as OWASP O2 to inspect findings?
O2 is an open source visualization tool for findings. It has the ability to identify patterns that aren’t always visible via traditional reporting means.

 

6.1.8.2. Are there adapters/connectors that allow for integration with Governance, Risk and Compliance (GRC) tools?
Many enterprises may want to manage all findings from both applications and infrastructure within a GRC tool such as EMC’s Archer or Relational Security RSAM.

 

6.1.9 Findings – Advanced Scenarios
Sometimes, there is a need to look for findings that are not related to the traditional view of web application security.

 

6.1.9.1. Do the findings provide customized guidance for validation of web services, such as ways to validate XML, JSON, etc.?
Validating web services has many similarities to validating a web application, but also some distinctions. For example, a web application request does not have a schema, whereas an incoming web services request can be validated against one.
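For instance, schema-based validation of an incoming XML payload can be done with the standard JAXP validation API; the schema below (restricting a quantity element to an integer between 1 and 99) is a hypothetical example:

```java
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import java.io.StringReader;

// Validate an XML payload against an XSD - the kind of remediation guidance
// a finding for web-service input validation might point to.
public class SchemaCheck {
    static final String XSD =
        "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>" +
        "  <xs:element name='quantity'>" +
        "    <xs:simpleType>" +
        "      <xs:restriction base='xs:integer'>" +
        "        <xs:minInclusive value='1'/>" +
        "        <xs:maxInclusive value='99'/>" +
        "      </xs:restriction>" +
        "    </xs:simpleType>" +
        "  </xs:element>" +
        "</xs:schema>";

    /** Returns true if the XML document conforms to the schema. */
    static boolean isValid(String xml) {
        try {
            SchemaFactory sf =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = sf.newSchema(new StreamSource(new StringReader(XSD)));
            Validator v = schema.newValidator();
            v.validate(new StreamSource(new StringReader(xml)));
            return true;
        } catch (Exception e) { // SAXException for non-conforming input
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isValid("<quantity>5</quantity>"));    // true
        System.out.println(isValid("<quantity>drop</quantity>")); // false
    }
}
```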

 

6.1.9.2. When the transport is a message queue, does the tool understand validation around destination queues, message size, and transaction semantics?
This is useful in identifying MQ injection attacks. In many enterprises, MQ is used to isolate the mainframe from the web, and such attacks could be used to cause mainframe abends.

 

6.1.9.3. Does the findings file indicate methods that may not be thread-safe?
Attacks aren’t always about stealing some form of data. Sometimes it is feasible to find ways to cause denial of service on enterprise applications. Thread safety is one weakness that can be exploited in this regard.

 

6.1.9.4. Does the findings file indicate methods that may be thread-confined? For example, Swing APIs must be invoked on the event-dispatch thread.
Applications that use Swing or even Java applets have their own security considerations.

 

6.1.9.5. Can the tool detect race conditions at compile time, for example by looking for shared data fields that undergo separate read and write operations without synchronization?
Race conditions can be used for denial-of-service attacks. They are sometimes discovered when an application returns a stack trace to the user, indicating a possible exploit.
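The pattern such a check looks for can be sketched in Java; the class and counter names are illustrative:

```java
// Unsynchronized read-modify-write (the pattern a race-condition check
// should flag) shown next to a synchronized fix.
public class RaceDemo {
    static int unsafeCount = 0;               // shared field, no synchronization
    static int safeCount = 0;
    static final Object LOCK = new Object();

    static int run() throws InterruptedException {
        unsafeCount = 0;
        safeCount = 0;
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeCount++;                // separate read and write: a data race
                synchronized (LOCK) {
                    safeCount++;              // guarded read-modify-write: no race
                }
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        return safeCount;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());            // always 200000
        // unsafeCount may be less than 200000 due to lost updates
    }
}
```

A static analyzer can flag the unguarded increment without running the code, since the field is shared and the read and write are not atomic.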

 

6.1.9.6 Do the findings indicate methods or variables that do not conform to user-defined naming conventions?
There is value in having consistent naming conventions, not just for stylistic purposes but also as an indicator of how a method should be invoked. This is useful for advanced remediation, especially when visualizing the call stack.

 

6.2 Ability to Merge Assessments
As a traditional enterprise application undergoes maintenance activities, the need arises to manage the assessments conducted over its lifetime.

6.2.1. Does the tool provide the ability to merge two assessments?
This functionality may be used in a variety of contexts. For example, you may want to scan open source projects separately from in-house applications; remediating them is a different process, yet you still want a holistic view of the application's security posture.

 

6.2.2. When merging assessments, does it retain the ability to correlate back to a particular set of findings?
Even though assessments may be merged, it is sometimes useful to still retain the lineage. It may also be useful to understand findings that originate from static analysis that are merged with findings from dynamic analysis.

 

6.2.3. When findings are merged, does it union together findings (remove duplications)?
Otherwise, an assessment would contain duplicate findings and throw off many metrics.
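A union merge can be sketched in Java, assuming findings are keyed by rule ID, file, and line; real tools use richer identity (e.g., data-flow fingerprints), so the finding shape here is an assumption:

```java
import java.util.*;

// Union merge of two assessments: duplicates (by record equality) collapse.
public class MergeDemo {
    record Finding(String ruleId, String file, int line) { }

    static List<Finding> union(List<Finding> a, List<Finding> b) {
        LinkedHashSet<Finding> merged = new LinkedHashSet<>(a); // dedupes
        merged.addAll(b);
        return new ArrayList<>(merged);
    }

    public static void main(String[] args) {
        List<Finding> scan1 = List.of(new Finding("SQLi", "Dao.java", 42),
                                      new Finding("XSS", "View.jsp", 7));
        List<Finding> scan2 = List.of(new Finding("SQLi", "Dao.java", 42), // dup
                                      new Finding("PathTraversal", "Io.java", 19));
        System.out.println(union(scan1, scan2).size()); // 3, not 4
    }
}
```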

 

6.3 Ability to Diff Assessments

6.3.1. Does the tool quickly visualize, at a high level, whether an application is getting more or less secure over a series of releases?
The notion of trending is important to many compliance initiatives as well as management-level reporting.

6.3.2. Can we easily drill into which modules changed to cause the biggest difference in results (positive or negative) between scans?
In terms of impact analysis, you may need not only to group findings at higher levels but also to trend those groups over time. This is useful whenever you use enterprise common components or libraries from the open source community.

 

6.4 Remediation Advice Customization
Whether the tool supports customizing the remediation advice presented for each vulnerability.

6.4.1. If an enterprise wants to include its own remediation advice, how does that advice survive version releases?
Sometimes you may need the ability to provide more prescriptive guidance. One example is a custom finding for hard-coded IP addresses, which may matter for disaster recovery, cloud, and similar concerns.

6.4.2. Can advice be provided in multiple languages (e.g., Spanish, Portuguese)?
While much of IT software development occurs in English, it is more efficient to provide remediation advice in the native language of developers in a particular country.

6.4.3. Does the advice only live within online help or can it travel with findings reports that may be sent to third parties?
In scenarios where an enterprise outsources software development to third parties in other countries, the advice may need to be attached to reports so that remediation can be enabled.

6.4.4. Is the remediation advice only language-specific (e.g., Java), or can it provide additional guidance using other constructs?
For example, if you are scanning a web-service, you can validate XML by having a richer schema. Likewise, the advice in remediating a compiled language may be different than how to remediate using a scripting language independent of language similarity.


 [AW1]This one seems very similar to 6.1.4.3. I think it should be consolidated.