Showing posts with label OWASP.

Sunday, May 21, 2017

Toolsmith #125: ZAPR - OWASP ZAP API R Interface

It is my sincere hope that when I say OWASP Zed Attack Proxy (ZAP), you say "Hell, yeah!" rather than "What's that?". This publication has been a longtime supporter, and so many brilliant contributors and practitioners have lent to OWASP ZAP's growth, in addition to @psiinon's extraordinary project leadership. OWASP ZAP has finished first or second in the last four years of @ToolsWatch best-tool surveys for a damned good reason. OWASP ZAP usage has been well documented and presented over the years, and the wiki gives you tons to consider as you explore OWASP ZAP user scenarios.
One of the scenarios I've sought to explore recently is use of the OWASP ZAP API. The OWASP ZAP API is also well documented, with more than enough detail to get you started, but consider a few use-case scenarios.
First, there is a functional, clean OWASP ZAP API UI that gives you a viewer's perspective as you contemplate programmatic opportunities. OWASP ZAP API interaction is URL based, and you can both access views and invoke actions. Explore any component and you'll immediately find related views or actions. Drilling into core via http://localhost:8067/UI/core/ (I run OWASP ZAP on port 8067; your install will likely differ) gives me a ton to choose from.
You'll need your API key in order to build queries. You can find yours via Tools | Options | API | API Key. As an example, drill into numberOfAlerts (baseurl), which gets the number of alerts, optionally filtering by URL. You'll then be presented with the query builder, where you can enter your key, define specific parameters, and choose your preferred output format, including JSON, HTML, and XML.
Sure, you'll receive results in your browser, and this query will provide answers in HTML tables, but these aren't necessarily optimal for programmatic data consumption and manipulation. That said, you learn the most important part of this lesson: a fully populated OWASP ZAP API GET URL: http://localhost:8067/HTML/core/view/numberOfAlerts/?zapapiformat=HTML&apikey=2v3tebdgojtcq3503kuoq2lq5g&formMethod=GET&baseurl=.
This request returns the alert count in an HTML table. Very straightforward and easy to modify per your preferences, but HTML results aren't very machine friendly. Want JSON results instead? Just swap out HTML for JSON in the URL, or choose JSON in the builder. I'll tell you that I prefer working with JSON when I use the OWASP ZAP API via the likes of R. It's certainly the cleanest, machine-consumable option, though others may argue with me in favor of XML.
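Every call follows the same URL grammar: output format, component, view or action name, then query parameters including the API key. The article's code is R, but the pattern is language neutral; here is a small Python sketch of that URL construction (the key shown is the throwaway one from my examples, and yours will differ):

```python
from urllib.parse import urlencode

def zap_api_url(component, view, fmt="JSON", apikey="2v3tebdgojtcq3503kuoq2lq5g",
                host="localhost", port=8067, **params):
    """Build an OWASP ZAP API GET URL: /<format>/<component>/view/<view>/?<query>."""
    query = urlencode({"zapapiformat": fmt, "apikey": apikey,
                       "formMethod": "GET", **params})
    return f"http://{host}:{port}/{fmt}/{component}/view/{view}/?{query}"

# Same call as above, but asking for JSON instead of HTML
url = zap_api_url("core", "numberOfAlerts", baseurl="")
```

Fetching that URL with any HTTP client then returns the alert count as JSON.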
Allow me to provide an example with which you can experiment, one I'll likely continue to develop against, as it's kind of cool for active reporting on OWASP ZAP scans in flight, or on results when the session is complete. Note: all my code, crappy as it may be, is available for you on GitHub. This is really v0.1 stuff, so contribute and debug as you see fit. It's also important to note that OWASP ZAP needs to be running, either with an active scanning session or a stored session you saved earlier. I scanned my website, holisticinfosec.org, and saved the session for regular use as I wrote this. You can even see reference to the saved session by location below.
R users are likely aware of Shiny, a web application framework for R, and its dashboard capabilities. I also discovered that rCharts are designed to work interactively and beautifully within Shiny.
R includes packages that make parsing from JSON rather straightforward, as I learned from Zev Ross. RJSONIO makes it as easy as fromJSON("http://localhost:8067/JSON/core/view/alerts/?zapapiformat=JSON&apikey=2v3tebdgojtcq3503kuoq2lq5g&formMethod=GET&baseurl=&start=&count=")
to pull data from the OWASP ZAP API. We use the fromJSON function, which "reads content in JSON format and de-serializes it into R objects"; here, the ZAP API URL is that content.
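For comparison outside of R, the same de-serialization in Python looks like this. The JSON below is a hand-made stand-in for live alerts output, with field names following ZAP's alert attributes (risk, alert, cweid, wascid, pluginId):

```python
import json

# Hand-made stand-in for the JSON the ZAP alerts view returns.
raw = """{"alerts": [
  {"risk": "Medium", "alert": "X-Frame-Options Header Not Set",
   "cweid": "16", "wascid": "15", "pluginId": "10020"},
  {"risk": "High", "alert": "SQL Injection",
   "cweid": "89", "wascid": "19", "pluginId": "40018"}
]}"""

alerts = json.loads(raw)["alerts"]      # de-serialize into native objects
risks = [a["risk"] for a in alerts]     # pull out one attribute per alert
```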
I further parsed alert data using Zev's grabInfo function and organized the results into a data frame (ZapDataDF). I then sorted the alert content from ZapDataDF into objects useful for reporting and visualization. Within each alert object are values such as the risk level, the alert message, the CWE ID, the WASC ID, and the Plugin ID, each of which is defined into a parameter useful to R.
I then combined all those results into another data frame I called reportDF, the results of which are seen in the figure below.
reportDF results
Now we've got some content we can pivot on.
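The pivots themselves are simple grouped counts; in R these fall out of the data frame, and the same idea in Python (with invented records standing in for live scan data) is just a Counter per column:

```python
from collections import Counter

# Illustrative alert records; live data would come from the ZAP alerts view.
alerts = [
    {"risk": "Medium", "cweid": "16"},
    {"risk": "Medium", "cweid": "16"},
    {"risk": "Low",    "cweid": "200"},
    {"risk": "High",   "cweid": "89"},
]

by_risk = Counter(a["risk"] for a in alerts)   # summary counts, reportDF-style
by_cwe = Counter(a["cweid"] for a in alerts)
```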
First, let's summarize the findings and present them in their resplendent glory via ZAPR: OWASP ZAP API R Interface.
Code first, truly simple stuff it is:
Summary overview API calls

You can see that we're simply using RJSONIO's fromJSON to make specific ZAP API calls. The results are quite tidy, as seen below.
ZAPR Overview
One of my favorite features in Shiny is the renderDataTable function. When utilized in a Shiny dashboard, it makes filtering results a breeze, and thus is utilized as the first real feature in ZAPR. The code is tedious; review or play with it on GitHub, but the results should speak for themselves. I filtered the view by CWE ID 89, which in this case is a bit of a false positive: I have a very flat website, no database, thank you very much. Nonetheless, it's good to have an example of what would definitely be a high-risk finding.
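Under the hood, that data-table filter is just a predicate over the parsed alert records. A hedged Python equivalent of the CWE ID 89 filter (the sample records are invented stand-ins, not my scan results):

```python
# Illustrative parsed ZAP alerts; a real list would come from the API.
alerts = [
    {"alert": "SQL Injection", "risk": "High", "cweid": "89"},
    {"alert": "X-Content-Type-Options Missing", "risk": "Low", "cweid": "16"},
]

# The equivalent of typing "89" into the CWE ID column filter
cwe_89 = [a for a in alerts if a["cweid"] == "89"]
```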


Alert filtering

Alert filtering is nice, and I'll add more results capabilities as I develop this further, but visualizations are important too. This is where rCharts really come to bear in Shiny, as they are interactive. I've used the simplest examples, but you'll get the point. First, a few wee lines of R as seen below.
Chart code
The results are much more satisfying to look at, and allow interactivity. Ramnath Vaidyanathan has done really nice work here. First, OWASP ZAP alerts pulled via the API are counted by risk in a bar chart.
Alert counts

Mousing over Medium, we can see that there were 17 Medium-risk results from my OWASP ZAP scan of holisticinfosec.org.
Our second visualization is the CWE ID results by count, in an oft-disdained but interactive pie chart (yes, I have some work to do on layout).


CWE IDs by count

As we learned earlier, I only had one CWE ID 89 hit during the session, and the visualization supports what we saw in the data table.
The possibilities are endless to pull data from the OWASP ZAP API and incorporate the results into any number of applications or report scenarios. I have a feeling there is a rich opportunity here with PowerBI, which I intend to explore. All the code is here, along with the OWASP ZAP session I refer to, so you can play with it for yourself. You'll need OWASP ZAP, R, and RStudio to make it all work together; let me know if you have questions or suggestions.
Cheers, until next time.

Tuesday, July 01, 2014

toolsmith: ThreadFix - You Found It, Now Fix It



 

Prerequisites
ThreadFix is self-contained and as such runs on Windows, Mac, and Linux systems
JEE based, Java 7 needed

Introduction
As an incident responder, penetration tester, and web application security assessor I have long participated in vulnerability findings and reporting. What wasn't always a big part of my job duties was seeing the issues remediated, particularly on the process side. Sure, some time later, we'd retest the reported issue to ensure that it had been fixed properly, but none of the process in between was in scope for me. Now, as part of a threat intelligence and engineering team, I've been enabled to take a much more active role in remediation, often even providing direct solutions to discovered problems. I'm reminded of the London Underground (Tube) warning, an apt analogy for information security gap analysis (that space between find and fix): Mind the gap!


But with new responsibilities come new challenges. How best to organize all those discovered issues to see them through to repaired nirvana? As is often the case, I keep an eye on some of my favorite tool sources, and NJ Ouchn's Toolswatch came through as it often does. There I discovered ThreadFix, developed by Denim Group, a team I was already familiar with thanks to my work with ISSA. When, in 2011, I presented Incident Response in Increasingly Complex Environments to the ISSA Alamo Chapter in San Antonio, TX, I met Lee Carsten and Dan Cornell of Denim Group. They've had continued growth and success in the three years since, and ThreadFix is part of that success. After pinging Lee regarding ThreadFix for toolsmith, he turned me over to Dan, who has been project lead for ThreadFix from its inception and provided me ample insight. Dan indicated that while working with their clients, they saw two common scenarios, teams just getting started with their software security programs and teams trying to find a way to scale their programs, and that ThreadFix is geared toward helping these groups. They'd seen lots of teams that had just purchased a desktop scanning tool, who'd run some scans, and the results would end up stored on a shared drive or in a SharePoint document repository. Dan pointed out that those results, though, were just blobs of data such as PDFs being emailed around to development teams with no action being taken. ThreadFix gives organizations in this situation an opportunity to start treating the results of their scanning as managed data so they can lay out their application portfolio, track the results of scanning over time, and start looking at their software security programs in a much more quantitative manner. Per Dan, this lets them have much more "grown up" conversations with management about application and software risk.
A natural byproduct of managed data is conversations that evolve from "Cross-site scripting is scary" to "We've only remediated 50% of the XSS vulnerabilities we've found, and on average it takes us 120 days, which is twice as slow as what others in our industry are doing." WHAT!? An informed conversation is more effective than a FUD conversation? Sweet! Dan described how more sophisticated organizations that track this "mean time to fix" metric better manage their window of exposure, and how public data sets, such as those released by Veracode and WhiteHat Security, can provide a basis for benchmarking. Amen, brother. Mean time to remediate is one of my favorite metrics.

Dan and the Denim team, while working with bigger organizations, saw huge struggles with teams getting bogged down trying to deal with different technologies across huge portfolios of applications. He cites the example of the Information Security group buying scanner X while the IT Audit group purchased scanning service Y and the QA team was starting to roll out static analysis engine Z. He summed this challenge up best with “The only thing worse than approaching a development team with a 300 page PDF report with a color graph on the front page is approaching them with two or three PDFs and expecting them to take action.” Everyone familiar with Excel hell? That’s where these teams and many like them languish, trying to track mountains of vulnerabilities and making no headway. Dan and Denim intended for ThreadFix to enable these teams to automatically normalize and consolidate the results of different scanning tools even across dynamic (DAST) and static (SAST) application security testing technologies. This is achieved with Hybrid Analysis Mapping as developed under a contract with the US Department of Homeland Security (DHS). According to Dan, with better data management, security teams can focus on high value tasks such as working with development teams to actually implement training and remediation programs. Security teams can take the data from ThreadFix and export it to developer defect tracking tools and IDEs that developers are already using. This reduces the friction in the remediation process and helps them fix more vulnerabilities, faster.
Great stuff from Dan. The drive to remediate has to be the primary goal. The industry has proven its ability to find vulnerabilities; the harder challenge, and the one I'm spending the vast majority of my focus on, is the remediation work. Threat modeling, security development lifecycles, and secure coding best practices are a great start, but one way to take your program to the next level is tuning your vulnerability data management efforts with ThreadFix. There is a Community Edition, free under the Mozilla Public License (MPL), which we'll focus on here; it includes a central dashboard, SAST and DAST scanner support, defect tracker integration, virtual patching via WAF/IDS/IPS options, trend analysis & reporting, and IDE integration.
If you seek an enterprise implementation you can upgrade for LDAP & Active Directory integration, role based user management, scan orchestration, enhanced compliance reporting, and technical support.

Preparing ThreadFix

First, I tested both the 2.0.1 stable version and the 2.1M1 development version and found the bleeding edge perfectly viable. ThreadFix includes a number of plugins, most importantly for our scenario those for OWASP ZAP and Burp Suite Pro. There is also a plugin for Eclipse, though for defect tracking and IDE I'm a Microsoft TFS/Visual Studio guy (shocker!). Under Defect Tracking there is support for TFS, but I can't wait until Dan and team implement a plugin for VS. :) To get started, ThreadFix installation is a download-and-run-it scenario. ThreadFix Community Edition is a self-contained .ZIP download containing a Tomcat web & servlet engine along with an HSQL database. That said, most production installations of ThreadFix use a MySQL database for scalability; if you wish to do so, instructions are provided. As ThreadFix uses Hibernate for data access, other database engines are also supported.
Once you’ve downloaded ThreadFix, navigate to your installation directory and double-click threadfix.bat on a Windows host or run sh threadfix.sh on *nix systems. Once the server has started, navigate to https://localhost:8443/threadfix/ in a web browser and log in with the username user and the password password. Then immediately proceed to change the password, please.
Click Applications on the ThreadFix menu and add a team, then an application you’ll be assessing and managing. My team is HolisticInfoSec and my application is Mutillidae as it has obvious flaws we can experiment with for remediation tracking.
After you download the appropriate plugins, unpack each (I did so as subdirectories in my ThreadFix path) and fire up the related tool. Big note here: Burp and ZAP default proxy ports conflict with ThreadFix's API interface; you'll have contention for port 8080 if you don't configure Burp and ZAP to run on different ports. For Burp, click the Extender tab, choose Add, navigate to the Burp plugin path, and select threadfix-release-2.jar. You'll then see a new ThreadFix tab in your Burp UI, which includes Import Endpoints and Export Scan. You'll need to generate API keys as follows: click the settings gear in the upper right of the menu bar and select API Keys, as seen in Figure 1.

FIGURE 1: Create ThreadFix API keys for plugin use
Click Export Scan and paste in the API key you created as mentioned above. Similarly in ZAP, choose File then Load Add-On File and choose threadfix-release-1.zap. After restarting ZAP you’ll see ThreadFix: Import Endpoints and ThreadFix: Export Scan under Tools.
You may find it just as easy to save scan results from Burp and ZAP in .xml format and upload them via the ThreadFix UI. Go to Applications, then Expand All, select your application, and click Upload Scan. You'll benefit from immediate results, as seen from incomplete Burp and ZAP scans of Mutillidae in Figure 2.

FIGURE 2: Scan results uploaded into ThreadFix
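If you'd rather script uploads than click through the UI, ThreadFix also exposes a REST API. The sketch below only builds the upload URL; the /rest/applications/{id}/upload path and apiKey parameter are assumptions based on my reading of the 2.x documentation, so verify them against your install before relying on them.

```python
from urllib.parse import urlencode

def upload_url(base, app_id, api_key):
    """Assumed ThreadFix 2.x REST upload endpoint; verify against your install.
    The scan .xml would then be sent as a multipart/form-data POST to this URL."""
    return f"{base}/rest/applications/{app_id}/upload?" + urlencode({"apiKey": api_key})

url = upload_url("https://localhost:8443/threadfix", 1, "my-api-key")
```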
The ThreadFix dashboard then updated to give me a status overview per Figure 3.

FIGURE 3: ThreadFix dashboard provides application vulnerability status
Drilling into your target via the Application menu will provide even more nuance and detail, with the ability to dig into each vulnerability as seen in Figure 4.

FIGURE 4: ThreadFix vulnerability details
In order to enable IDE support for the likes of Eclipse you'll need to take a few steps from here:
  • Have a team/application set up in ThreadFix
  • Have source code for an application linked in ThreadFix
  • Have a scan for the application in ThreadFix
  • Have the application's scan linked to a defect tracker
Once you have it configured, you can select specific vulnerabilities and submit them directly to your preferred defect tracker: under the Application view, click Action. This is vital if you're pushing repairs to the development team via the likes of Jira or TFS.
Additionally, if you're interested in virtual patching, first create a WAF under Settings and WAFs, where you choose from Big-IP ASM (F5), DenyAll rWeb, Imperva SecureSphere, Snort, and mod_security, which I selected and named HolisticInfoSec. Click Applications again, drill into the application you've added scans for, then click Action and Edit/Delete. The Edit menu will allow you to Set WAF; I selected HolisticInfoSec and clicked Add WAF. You can also simply add a new WAF here as well. Regardless, go back to Settings, then WAFs, then choose Rules. I selected HolisticInfoSec/Mutillidae and deny, then clicked Generate WAF Rules. The results, as seen in Figure 5, can then be imported directly into mod_security. Tidy!

FIGURE 5: ThreadFix generates mod_security rules
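Conceptually, that rule generation is string templating over vulnerability records. A toy Python sketch follows; the SecRule text below is illustrative only and not what ThreadFix actually emits:

```python
def modsec_deny_rule(rule_id, url_path, param):
    """Toy virtual-patch rule: deny requests sending the vulnerable parameter
    to the vulnerable URL. Illustrative; real generated rules are more precise."""
    return (f'SecRule REQUEST_URI "@streq {url_path}" '
            f'"id:{rule_id},phase:2,deny,chain"\n'
            f'  SecRule ARGS_NAMES "@streq {param}"')

rule = modsec_deny_rule(200001, "/mutillidae/index.php", "page")
```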
So many other useful features with ThreadFix too. Under Settings and Remote Providers you can configure ThreadFix to integrate with QualysGuard WAS, Veracode, and WhiteHat Sentinel. There are tons of reporting options including trending, snapshots (point in time), scan comparisons (Burp versus ZAP for this scenario), and vulnerability searching. Try the scan comparisons; you’ll often be surprised, amused, and angry all at the same time. That said, trending is vital for tracking mitigation performance over time and quite valuable for denoting improvement or decline.

In Conclusion

Make use of the ThreadFix wiki to fill in the plethora of detail I didn't cover here, and really, really consider standing up an instance if you're at all involved in application security discovery and repair. This tool is one I am absolutely making use of in more than one venue; you should do the same. You're probably used to me saying it every few months, but I'm really excited about ThreadFix as it is immediately useful in my everyday role. You will likely find it equally useful in your organization as you push harder for find and fix versus find and…
Ping me via email if you have questions (russ at holisticinfosec dot org).
Cheers…until next month.

Acknowledgements

Dan Cornell, CTO, Denim Group, ThreadFix project lead
Lee Carsten, Senior Manager, Business Development, Denim Group

Friday, November 01, 2013

toolsmith: OWASP Xenotix XSS Exploit Framework

Prerequisites
Current Windows operating system

Introduction
Hard to believe this month’s toolsmith marks seven full years of delivering dynamic content and covering timely topics on the perpetually changing threat-scape information security practitioners face every day. I’ve endeavored to aid in that process 94 straight months in a row, still enjoy writing toolsmith as much as I did day one, and look forward to many more to come. How better to roll into our eighth year than by zooming back to one of my favorite topics, cross-site scripting (XSS), with the OWASP Xenotix XSS Exploit Framework. I’d asked readers and Twitter followers to vote for November’s topic and Xenotix won by quite a majority. This was timely as I’ve also seen renewed interest in my Anatomy of an XSS Attack published in the ISSA Journal more than five years ago in June 2008. Hard to believe XSS vulnerabilities still prevail but according to WhiteHat Security’s May 2013 Statistics report:
1) While no longer the most prevalent vulnerability, XSS is still #2, behind only Content Spoofing.
2) While 50% of XSS vulnerabilities were resolved, up from 48% in 2011, it still took an average of 227 days for sites to deploy repairs.
Per the 2013 OWASP Top 10, XSS is still #3 on the list. As such, good tools for assessing web applications for XSS vulnerabilities remain essential, and OWASP Xenotix XSS Exploit Framework fits the bill quite nicely.
Ajin Abraham (@ajinabraham) is Xenotix’s developer and project lead; his feedback on this project supports the ongoing need for XSS awareness and enhanced testing capabilities.
According to Ajin, most of the current pool of web application security tools still don't give XSS the full attention it deserves, an assertion he supports with their less-than-optimal detection rates and high number of false positives. He has found that most of these tools use a payload database of about 70-150 payloads to scan for XSS. Most web application scanners, with the exception of a few top-notch proxies such as OWASP ZAP and Portswigger's Burp Suite, don't provide much flexibility, especially when dealing with headers and cookies. They typically have a predefined set of protocols or rules to follow, and from a penetration tester's perspective can be rather primitive. Overcoming some of these shortcomings is what led to the OWASP Xenotix XSS Exploit Framework.
Xenotix is a penetration testing tool developed exclusively to detect and exploit XSS vulnerabilities. Ajin claims that Xenotix is unique in that it is currently the only XSS vulnerability scanner with zero false positives. He attributes this to the fact that it uses live payload reflection-based XSS detection via its powerful triple browser rendering engines, including Trident, WebKit and Gecko. Xenotix apparently has the world's second largest XSS payload database, allowing effective XSS detection and WAF bypass. Xenotix is also more than a vulnerability scanner as it also includes offensive XSS exploitation and information gathering modules useful in generating proofs of concept.
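Reflection-based detection boils down to injecting a unique payload and checking whether the rendered response returns it unencoded. A simplified Python sketch of the idea (the render functions are stand-ins for real HTTP responses, not Xenotix's engine code):

```python
import html

# Unique marker payload; in a real scan this would be one of many payloads.
MARKER = "<script>xen0tix()</script>"

def vulnerable_render(user_input):
    return f"<p>You searched for: {user_input}</p>"             # no output encoding

def safe_render(user_input):
    return f"<p>You searched for: {html.escape(user_input)}</p>"  # encoded output

def reflects_payload(render):
    """True if the payload comes back intact, i.e. a likely XSS hit."""
    return MARKER in render(MARKER)
```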
For future releases Ajin intends to implement additional elements such as an automated spider and an intelligent scanner that can choose payloads based on responses to increase efficiency and reduce overall scan time. He's also working on an XSS payload set inclusive of OSINT gathering, which targets certain WAFs and web applications with specific payloads, as well as a better DOM scanner that works within the browser. Ajin welcomes support from the community. If you're interested in the project and would like to contribute or develop, feel free to contact him via @ajinabraham, the OWASP Xenotix site, or the OpenSecurity site.

Xenotix Configuration

Xenotix installs really easily. Download the latest package (4.5 as this is written), unpack the RAR file, and execute Xenotix XSS Exploit Framework.exe. Keep in mind that antimalware/antivirus on Windows systems will detect xdrive.jar as a Trojan Downloader. Because that’s what it is. ;-) This is an enumeration and exploitation tool after all. Before you begin, watch Ajin’s YouTube video regarding Xenotix 4.5 usage. There is no written documentation for this tool so the video is very helpful. There are additional videos for older editions that you may find useful as well. After installation, before you do anything else, click Settings, then Configure Server, check the Semi Persistent Hook box, then click Start. This will allow you to conduct information gathering and exploitation against victims once you’ve hooked them.
Xenotix utilizes the Trident engine (Internet Explorer 7), the Webkit engine (Chrome 25), and the Gecko engine (Firefox 18), and includes three primary module sets: Scanner, Information Gathering, and XSS Exploitation as seen in Figure 1.

FIGURE 1: The Xenotix user interface
We'll walk through examples of each below while taking advantage of intentional XSS vulnerabilities in the latest release of OWASP Mutillidae II: Web Pwn in Mass Production. We covered Jeremy Druin's (@webpwnized) Mutillidae in August 2012's toolsmith, and it's only gotten better since.

Xenotix Usage

These steps assume you’ve installed Mutillidae II somewhere, ideally on a virtual machine, and are prepared to experiment as we walk through Xenotix here.
Let's begin with the Scanner modules, using Mutillidae's DNS Lookup under OWASP Top 10 → A2 Cross Site Scripting (XSS) → Reflected (First Order) → DNS Lookup. The vulnerable GET parameter is page, and on POST it's target_host. Keep in mind that as Xenotix confirms vulnerabilities across all three engines, you'll be hard pressed to manage output, particularly if you run in Auto Mode; there is no real reporting function in this tool at this time. I therefore suggest testing in Manual Mode. This allows you to step through each payload, and as seen in Figure 2, we get our first hit with payload 7 (of 1530).

FIGURE 2: Xenotix manual XSS scanning
You can also try the XSS Fuzzer, where you replace parameter values with a marker, [X], and fuzz in Auto Mode. The XSS Fuzzer allows you to skip ahead to a specific payload if you know the payload position index. Circling back to the above-mentioned POST parameter, I used the POST Request Scanner to build a request, establishing http://192.168.40.139/mutillidae/index.php?page=dns-lookup.php as the URL and setting target_host in Parameters. Clicking POST then populated the form, as noted in Figure 3, and as with Manual Mode, our first hit came with payload 7.
FIGURE 3: Xenotix POST Request Scanner
You can also make use of Auto Mode; the DOM, Multiple Parameter, and Header Scanners; and a Hidden Parameter Detector.

The Information Gathering modules are where we can really start to have fun with Xenotix. You first have to hook a victim browser to make use of this tool set. I set the Xenotix server to the host IP where Xenotix was running (rather than the default localhost setting) and checked the Semi Persistent Hook checkbox. The resulting hook payload was then used with Mutillidae's Pen Test Tool Lookup to hook a victim browser on a different system running Firefox on Windows 8.1. With the browser at my beck and call, I clicked Information Gathering, where the Victim Fingerprinting module correctly identified the Firefox-on-Windows-8.1 victim. Again, entirely accurate. The Information Gathering modules also include WAF Fingerprinting, as well as Ping, Port, and Internal Network Scans. Remember that, as is inherent to its very nature, these scans occur in the context of the victimized browser's system as a function of cross-site scripting.

Saving the most fun for last, let's pwn this thang! A quick click of XSS Exploitation offers us a plethora of module options. Remember, the victim browser is still hooked. I sent my victim browser a message, as depicted in Figure 4, where I snapped the Send Message configuration and the result in the hooked browser.

FIGURE 4: A celebratory XSS message
Message boxes are cute, and Tabnabbing is pretty darned cool, but what does real exploitation look like? I first fired up the Phisher module with Renren (the Chinese Facebook) as my target site, resulting in a Page Fetched and Injected message and Renren ready for login in the victim browser, as evident in Figure 5. Note that my Xenotix server IP address is the destination IP in the URL window.

FIGURE 5: XSS phishing Renren
But wait, there's more. When the victim user logs in, assuming I'm also running the Keylogger module, yep, you guessed it: Figure 6 includes the keys logged.

FIGURE 6: Ima Owned is keylogged
Your Renren is my Renren. What? Credential theft is not enough for you? You want to deliver an executable binary? Xenotix includes a safe, handy sample.exe to prove your point during demos for clients and/or decision makers. Still not convinced? Need shell? You can choose from JavaScript, Reverse HTTP, and System Shell Access. My favorite, as shared in Figure 7, is reverse shell via a Firefox bootstrapped add-on as delivered by XSS Exploitation --> System Shell Access --> Firefox Add-on Reverse Shell. Just Start Listener, then Inject (assumes a hooked browser).

FIGURE 7: Got shell?
Assuming the victim happily accepts the add-on installation request (nothing a little social engineering can’t solve), you’ll have system level access. This makes pentesters very happy. There are even persistence options via Firefox add-ons, more fun than a frog in a glass of milk.

In Conclusion

While this tool won't replace proxy scanning platforms such as Burp or ZAP, it will enhance them most righteously. Xenotix is GREAT for enumeration, information gathering, and most of all, exploitation. Without question, add the OWASP Xenotix XSS Exploit Framework to your arsenal, and as always, have fun but be safe. Great work, Ajin; looking forward to more, and thanks to the voters who selected Xenotix for this month's topic. If you have comments, follow me on Twitter via @holisticinfosec, or if you have questions, email me via russ at holisticinfosec dot org.
Cheers…until next month.

Acknowledgements

Ajin Abraham, Information Security Enthusiast and Xenotix project lead

Moving blog to HolisticInfoSec.io

toolsmith and HolisticInfoSec have moved. I've decided to consolidate all content on one platform, namely an R markdown blogdown sit...