
Friday, July 07, 2017

Toolsmith #126: Adversary hunting with SOF-ELK

As we celebrate Independence Day, I'm reminded that we honor what was, of course, an armed conflict. Today's realities, when we think about conflict, are quite different than the days of lining troops up across the field from each other, loading muskets, and flinging balls of lead into the fray.
We live in a world of asymmetrical battles, often conflicts that aren't obvious in purpose and intent, and likely fought on multiple fronts. For one of the best reads on the topic, take the time, well spent, to read TJ O'Connor's The Jester Dynamic: A Lesson in Asymmetric Unmanaged Cyber Warfare. If you're reading this post, it's highly likely that your front is that of 1s and 0s, either as a blue team defender or as a red team attacker. I live in this world every day of my life as a blue teamer at Microsoft, and as a joint forces cyber network operator. We are faced, each day, with overwhelming amounts of data of varying quality, where the answers to our questions are likely hidden, but available to those who can dig deeply enough.
New platforms continue to emerge to help us in this cause. At Microsoft we have a variety of platforms that make the process of digging through the data easier, if no less arduous, and the commercial sector continues to expand its offerings. For those with limited budgets and resources, but a strong drive for discovery, there are outstanding offerings as well. Security Onion has been at the forefront for years, and is under constant development and improvement in the care of Doug Burks.
Another emerging platform, discussed here, is SOF-ELK, part of the SANS Forensics community, created by Phil Hagen, author and instructor of SANS FOR572, Advanced Network Forensics and Analysis. Count SOF-ELK firmly in the Network Forensic Analysis Tool (NFAT) family, and a strong player in the category at that.
SOF-ELK has a great README, don't be that person, read it. It's everything you need to get started, in one place. What!? :-)
Better yet, you can download a fully realized VM with almost no configuration requirements, so you can hit the ground running. I ran my SOF-ELK instance with VMWare Workstation 12 Pro and no issues other than needing to temporarily disable Device Guard and Credential Guard on Windows 10.
SOF-ELK offers some good test data to get you started right out of the gate, in /home/elk_user/exercise_source_logs, including syslog from a firewall, a router, converted Windows events, a Squid proxy, and a server named muse. You can drop these on your SOF-ELK server in the /logstash/syslog/ ingestion point for syslog-formatted data. Additionally, utilize /logstash/nfarch/ for archived NetFlow output, /logstash/httpd/ for Apache logs, /logstash/passivedns/ for logs from the passivedns utility, /logstash/plaso/ for log2timeline, and /logstash/bro/ for, yeah, you guessed it.
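Loading the test data really is as simple as copying files into the right ingestion directory. A minimal sketch from the VM's shell follows; the wildcard assumes everything in the exercise directory is syslog-formatted per the README, and the NetFlow file name is a placeholder of mine:
# drop the syslog-formatted exercise data (firewall, router, converted Windows events, Squid, muse) on the syslog ingestion point
cp /home/elk_user/exercise_source_logs/* /logstash/syslog/
# other formats have their own ingestion points, e.g. archived NetFlow exports
cp /path/to/netflow_export.txt /logstash/nfarch/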
I mixed things up a bit and added my own Apache logs for the month of May to /logstash/httpd/. The muse log set in the exercise offering also included a DNS log (named_log); for grins I threw that into /logstash/syslog/ as well, just to see how it would play.
Run down a few data rabbit holes with me; I swear I can linger for hours on end once I latch on to something to chase. We'll begin with a couple of highlights from my Apache logs. The SOF-ELK VM comes with three pre-configured dashboards: Syslog, NetFlow, and HTTPD. You can learn more on the start page for the SOF-ELK UI; my instance is http://192.168.50.110:5601/app/kibana. There are three panels, or blocks, for each dashboard's details at the bottom of the UI. I drilled through to the HTTPD Log Dashboard for this experiment and immediately reset the time period for analysis (click the time marker in the upper right-hand part of the UI). It defaults to the last 15 minutes; if you're reviewing older data, nothing will show until you adjust the window to match your time stamps. My data is from the month of May, so I selected an absolute window from the beginning of May to its end. You can also select quick or relative time options; it pays to get comfortable here quickly and early. The resulting opening visualizations made me very happy, as seen in Figure 1.
Figure 1: HTTPD Log Dashboard
Nice! An event count summary, source ASNs by count (you can immediately see where I scanned myself from work), a fantastic Access Source map, a records graph by HTTP verbs, and one by response codes.
The beauty of these SOF-ELK dashboards is that they're immediately interactive and allow you to drill right into interesting data points. The holisticinfosec.org website is intentionally flat and includes no active PHP or dynamic content. As a result, my favorite response code as a web application security tester, the 500 error, is notably missing. But in both timeline graphs we note a big traffic spike on 8 MAY 2017, which correlates nicely with my above-mentioned scan from work, as noted in the ASN hit count, and seen here in Figure 2.

Figure 2: Traffic spike from scan
This visualizes well but isn't really all that interesting or uncommon, particularly given that I know I personally ran the scan, and scans from the Intarwebs are a dime a dozen. What did jump out for me though, as seen back in Figure 1, was the presence of four PUT requests. That's usually a "bad thing" where some @$$h@t is trying to drop something on my server. Let's drill in a bit, shall we? After clicking the graph line with the four PUT requests, I quickly learned that two requests came from 204.12.194.234 AS32097: WholeSale Internet in Kansas City, MO and two came from 119.23.233.9 AS37963: Hangzhou Alibaba Advertising in Hangzhou, China. This is well represented in the HTTPD Access Source panel map (Figure 3).

Figure 3: Access Source
The PUT requests from each included a txt file attempt, specifically dbhvf99151.txt and htjfx99555.txt; both were rejected, redirected (302), and sent to my landing page (200).
Research on the IPs found that 119.23.233.9 was on the "real time suspected malware list as detected by InterServer's intrusion systems" as seen 22 MAY, and 204.12.194.234 was found twice in the AbuseIPDB, flagged on 18 MAY 2017 for Cknife Webshell Detected. Now we're talking. It's common to attempt a remote file include attack, or a PUT of what is in fact a web shell. I opened up SOF-ELK on that IP address and found eight total hits in my logs, all looking for common PHP opportunities, with the likes of GET and POST requests for /plus/mytag_js.php, noted in PHP injection attack attempts.
SOF-ELK made it incredibly easy to hunt down these details, as seen in Figure 4 from the HTTPD Discovery panel.
Figure 4: Discovery
That's a groovy little hunting trip through HTTPD logs, but how about a bit of Syslog? I spotted a likely oddity that could be correlated across a number of the exercise logs; we'll see if the correlation is real. You'll notice tabs at the top of your SOF-ELK UI; we'll use Discover for this experiment. I started from the Syslog Dashboard with my time range set broadly to the last two months. 7606 records presented themselves, sliced neatly by hosts and programs, as seen in Figure 5.

Figure 5: Syslog Dashboard
Squid proxy logs showed the predominance of host entries (6778, or 57.95% of 11,696, to be specific), so I started there. Don't laugh, but I'll often do keyword queries just to see what comes up; sometimes you land a pointer to a good rabbit hole. Within the body of 6778 proxy events, I searched for malware. Two hits came back: GET requests, via a JS redirector, to bleepingcomputer.com for your basic how-to on "random websites opening in Chrome". Ruh-roh.
Figure 6: Malware keyword
More importantly, we have an IP address to pivot on: 10.3.59.53. A search of that IP across the same 6778 Squid logs yielded 3896 entries specific to this IP, and lots to be curious about:
  • datingukrainewomen.com 
  • anastasiadate.com
  • YouTube videos for hair loss
  • crowdscience.com for "random pop-ups driving me nuts"
Do I need to build this user profile out for you, or are you with me? Proxy logs tell us so much, and are deeply worthy of your blue team efforts to collect and review.
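If you also like to spot-check the raw Squid access log alongside Kibana, the same keyword and IP pivots reduce to a couple of greps; the path and filename below point at a copy of the exercise log and are assumptions:
# keyword fishing, same as the dashboard search
grep -i 'malware' /home/elk_user/exercise_source_logs/squid_access_log
# how busy is our suspect host?
grep -c '10.3.59.53' /home/elk_user/exercise_source_logs/squid_access_log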
I jumped over to the named_log from the muse host to see what else might reveal itself. Here's where I jumped to Discover, the Splunk-like query functionality inherent to SOF-ELK (and ELK implementations). I did a reductive query to see what other oddities might surface: 10.3.59.53 AND dns_query: (*.co.uk OR *.de OR *.eu OR *.info OR *.cc OR *.online OR *.website). I chose these TLDs on the premise that bots using Domain Generation Algorithms (DGA) often favor them. See The DGA of PadCrypt to learn more, as well as ISC Diary handler John Bambenek's OSINT logic. The query results were quite satisfying: 29 hits, including a number of clearly randomly generated domains. Those that were most interesting all included the .cc TLD, so I zoomed in further, down to five hits with 10.3.59.53 AND dns_query: *.cc, as seen in Figure 7.
Figure 7: .cc TLD hits
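Outside the UI, the same TLD filter works against the raw exercise DNS log with grep; the path and filename are assumptions, and the GNU grep word boundary keeps .de from matching .dev and friends:
grep '10.3.59.53' /home/elk_user/exercise_source_logs/named_log | grep -E '\.(co\.uk|de|eu|info|cc|online|website)\b'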
Oh man, not good. I had a hunch now, and went back to the proxy logs with 10.3.59.53 AND squid_request:*.exe. And there you have it, ladies and gentlemen, hunch rewarded (Figure 8).

Figure 8: taxdocs.exe
If taxdocs.exe isn't malware, I'm a monkey's uncle. Unfortunately, I could find no online references to these .cc domains or the .exe sample or URL, but you get the point. Given that it's exercise data, Phil may have generated it to entice us to dig deeper.
When we think about the IOC patterns for Petya, a hunt like this is pretty revealing. Petya's "initial infection appears to involve a software supply-chain threat involving the Ukrainian company M.E.Doc, which develops tax accounting software, MEDoc". This is not Petya specifically (as far as I know), but we see pattern similarities for sure; one can learn a great deal about the sheep and the wolves. Be the sheepdog!
Few tools in the free and open source arsenal are better at helping you train and enhance your inner digital sheepdog than SOF-ELK. "I'm a sheepdog. I live to protect the flock and confront the wolf." ~ LTC Dave Grossman, from On Combat.

Believe it or not, there's a ton more you can do with SOF-ELK, consider this a primer and a motivator.
I LOVE SOF-ELK. Phil, well done, thank you. Readers rejoice, this is really one of my favorites for toolsmith, hands down, out of the now 126 unique tools discussed over more than ten years. Download the VM, and get to work herding. :-)
Cheers...until next time.

Monday, July 02, 2012

toolsmith: Collective Intelligence Framework






Prerequisites
Linux for server, stable on Debian Lenny and Squeeze, and Ubuntu v10
Perl for client (stable), Python client currently unstable

Introduction

As is often the case when plumbing the depths of my feed reader or the Dragon News Bytes mailing list, I found toolsmith gold. Kyle Maxwell's Introduction to the Collective Intelligence Framework (CIF) lit up on my radar screen. CIF parses data from sources such as ZeuS and SpyEye Tracker, Malware Domains, Spamhaus, Shadowserver, Dragon Research Group, and others. The disparate data is then normalized into a repository that allows chronological threat intelligence gathering. Kyle's article is an excellent starting point that you should definitely read, but I wanted to hear more from Wes Young, the CIF developer, who kindly filled me in with some background and a look forward. Wes is a Principal Security Engineer for REN-ISAC, whose mission is to aid and promote cyber security operational protection and response within the higher education and research (R&E) communities. As such, the tenor of his feedback makes all the more sense.
The CIF project has been an interesting experiment for us. When we first decided to transition the core components from incubation in a private trust-based community, to a more traditional open-source community model, it was merely to better support our existing community. We figured, if things were open-source, our community would have an easier time replicating our tools and processes to fit their own needs internally. If others outside the educational space benefited from that (private sector, government sector, etc), then that'd be the icing on the cake.
Years later, we discovered that ratio has nearly inverted itself. Now the CIF community has become lopsided, with the majority of users being from the international public and private spaces. Furthermore, the contribution in terms of testing, bug-fixes, documentation contributions and [more importantly] the word-of-mouth endorsements has driven CIF to become its own living organism. The demonstrated value it has created for threat analysts, who have traditionally had to beg-borrow-and-steal their own intelligence, has become immeasurable in relation to the minor investment of adoption.
As this project's momentum has given it a life all its own, future roadmaps will build off its current success. The ultimate goal of the CIF project is to create a uniform presence of your intelligence, somewhere you control. It'll read your blogs, your sandboxes, and yes, even your email (if you allow it), correlating and digging out threat information that's been traditionally locked in plain, wiki-fied or semi-formatted text. It has enabled organizations to defend their networks with up to the second intelligence from traditional data-sources as well as their peers. While traditional SEMs enable analysts to search their data, CIF enables your data to adapt your network, seamlessly and on the fly. It's your own personal Skynet. :)

Readers may enjoy Wes’ recent interview on the genesis of CIF, available as a FIRST 2012 podcast.
You may also wish to take a close look at Martin Holste’s integration of CIF with his Enterprise Log Search and Archive (ELSA) solution, a centralized syslog framework. Martin has utilized the Sphinx full-text search engine to create accelerated query functionality and a full web front end.

Installing CIF

The documentation found on the CIF wiki should be considered "must read" from top to bottom before proceeding. I won't repeat what's already been said (Kyle's article has some installation pointers too), but I went through the process a couple of times to get it right, so I'll share my experience. There are a number of elements to consider if implementing CIF in a production capacity. While I installed a test instance on insignificant hardware running Debian Squeeze, if you have a 64-bit system with 8GB of RAM or more, a minimum of four cores, and drive space to grow into, definitely use it for CIF. If you can also install a fresh OS, pay special attention to your disk layout while configuring partition mapping during the Logical Volume Manager (LVM) setup. Also follow the Postgres database configuration steps closely if working from a fresh install; you'll be changing ident sameuser to trust in pg_hba.conf for socket connections. On weak little systems such as my test server, Kyle's suggestion to update work_mem to 512MB and checkpoint_segments to 32 in postgresql.conf is a good one. The BIND setup is quite straightforward, but again per Kyle's feedback, make sure the forwarder IP addresses in /etc/resolv.conf match those you configure in /etc/bind/named.conf.options.
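For reference, those Postgres tweaks boil down to a few lines; the Debian config paths and PostgreSQL version below are assumptions, so adjust for your install:
# /etc/postgresql/8.4/main/pg_hba.conf -- switch socket connections from 'ident sameuser' to 'trust'
local   all   all   trust
# /etc/postgresql/8.4/main/postgresql.conf -- Kyle's suggestions for modest hardware
work_mem = 512MB
checkpoint_segments = 32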
From there, the install steps on the wiki can be followed verbatim. During the Load Data phase of configuration you may run into an XML parsing issue. After executing time /opt/cif/bin/cif_crontool -f -d && /opt/cif/bin/cif_crontool -d -p daily && /opt/cif/bin/cif_crontool -d -p hourly you may receive an error. The cif_crontool script is cron-like, as I hope you've sagely intuited for yourself: it traverses the CIF configuration files and calls cif_feedparser based on what it finds there. The error, :170937: parser error : Sequence ']]>' not allowed in content, crops up when cif_crontool attempts to parse the cleanmx feed definition in /opt/cif/etc/misc.cfg. You can resolve this by simply commenting out that definition. Wes is reaching out to clean-mx.de to get this fixed; for now, there is no option other than commenting out the feed.
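After commenting out the cleanmx definition in /opt/cif/etc/misc.cfg, re-run the load step; this is the same command from above, just wrapped for readability:
time /opt/cif/bin/cif_crontool -f -d && \
  /opt/cif/bin/cif_crontool -d -p daily && \
  /opt/cif/bin/cif_crontool -d -p hourly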
To install a client you need only follow the Client Setup steps, and in your ~/.cif file apply the apikey you created during the server install, as described in CIF Config. Don't forget to configure .cif to generate feeds, as also described in that section.
A final installation note: if you don’t feel like spending the time to do your own build you have the option to utilize a preconfigured Amazon EC2 instance (limited disk space, not production-ready).

Using CIF

Per the Server Install documentation, you should set the following up as a cron job, but for manual reference, if you wish to update your data at random intervals, run the steps as the cif user (sudo su - cif); a crontab sketch follows the list:
1) Set the path: PATH=/bin:/usr/local/bin:/opt/cif/bin
2) Pull feed data:
   a. cif_crontool -p daily -T low
   b. cif_crontool -p hourly -T low
3) Crunch the data: cif_analytic -d -t 16 -m 2500 (you can raise -t and -m on beefier systems, but it may grind a weaker system down)
4) Update the feeds: cif_feeds
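Here's that crontab sketch, installed as the cif user via crontab -e; the commands are straight from the list above, while the schedule itself is my assumption:
PATH=/bin:/usr/local/bin:/opt/cif/bin
# pull hourly feeds, crunch the data, then update the feeds (timing is an assumption)
15 * * * * cif_crontool -p hourly -T low && cif_analytic -d -t 16 -m 2500 && cif_feeds
# pull the daily feeds once a day
30 0 * * * cif_crontool -p daily -T low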
You can run cif from the command line; cif -h will give you all the options, and cif -q <query>, where the query string is an IP, URL, domain, etc., will get you started. Pay special attention to the -p parameter, as it helps you define output formats such as HTML or Snort.
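A few concrete examples, using only indicators that show up later in this walkthrough (the Snort plugin name casing for -p is an assumption):
cif -h                      # all available options
cif -q 193.106.31.68        # query a suspect IP
cif -q mazilla-update.com   # query a suspect domain
cif -q AS49335 -p Snort     # same idea, output formatted for Snort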
I immediately installed the Firefox CIF toolbar (you'll find details on the wiki under Client | Toolbars | Firefox), as it makes queries via the browser, leveraging the API, a no-brainer. See WebAPI on the wiki under API. Screenshots included hereafter are of CIF usage via this interface (easier than manually populating query URLs).
There are a number of client examples available on the wiki, but I'm always one to throw real-world scenarios at the tool du jour. As ZeuS developers continue to "innovate" and produce modules such as the recently discovered two-factor authentication bypass, ZeuS sees ever-increasing use by cybercriminals. In what is likely a common scenario, an end user on the network you try desperately to protect has called to say that they tried to update Firefox via a link "someone sent them," but it "didn't look right" and they were worried "something was wrong." You run netstat -ano on their system and see a suspicious connection, specifically 193.106.31.68. Ruh-roh, Rastro, that IP lives in Ukraine. Go figure. What does Master Cifu say? Figure 1 fills us in.

FIGURE 1: CIF says “here be dragons”
I love mazilla-update.com, bad guy squatter genius. You need only web search ASN 49335 to learn that NCONNECT-AS Navitel Rusconnect Ltd is not a good neighborhood for your end user to be playing in. Better yet, run cif -q AS49335 at the command line or drop AS49335 in the Firefox search box.
Figure 2 is a case in point; Navitel Rusconnect Ltd is definitely the wrong side of the tracks.

FIGURE 2: Can I catch a bus out of here?
 ZeuS configs and binaries, SpyEye, stolen credit card gateway, oh my.
This is a good time for a quick overview of taxonomy. Per the wiki, severity equates to seriousness, confidence denotes faith in the observation, and impact is a profile for badness (ZeuS, botnet, etc.).
Our above-mentioned user does show mazilla-update.com in their browser history; let's query it via CIF.
Figure 3 further validates suspicions.

FIGURE 3: Mazilla <> Mozilla
 You quickly discern that your end user downloaded bt.exe from mazilla-update.com. You take a quick md5sum of the binary and drop the hash in the CIF search box. 756447e177fc3cc39912797b7ecb2f92 bears instant fruit as seen in Figure 4.

FIGURE 4: CIF hash search
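From the command line, the same pivot looks like this; bt.exe and the hash are from the scenario above:
md5sum bt.exe
# 756447e177fc3cc39912797b7ecb2f92  bt.exe
cif -q 756447e177fc3cc39912797b7ecb2f92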
 Yep, looks like your end user might have gotten himself some ZeuS action.
With a resource such as CIF at your fingertips, you should be able to quickly envision the value added when using a DNS sinkhole (hello, 127.0.0.1) or DNS-BH from malwaredomains.com, where you serve up fake replies to any request for the likes of mazilla-update.com. Bonus! Beefy server for CIF: $2499. CIF licensing: $0. Bad guy fail? Priceless.
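If you go the DNS-BH route with BIND, each bad domain is a one-line zone declaration pointing at your local sinkhole zone; the file path below is an assumption (malwaredomains.com provides ready-made zone includes):
// named.conf entry answering for mazilla-update.com from a local sinkhole zone
zone "mazilla-update.com" { type master; file "/etc/bind/db.sinkhole"; };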

In Conclusion

Check out the Idea List in the CIF Projects Lab; there is some excellent work to be done, including a VMware appliance, further Snort integration, a VirusTotal analytic, and others. This project, like so many others we've discussed in toolsmith, grows and prospers with your feedback and contributions. Please consider participating by joining the CIF Google Group and jumping in. You'll also want to check out the DFIR Journal's CIF discussions, including integration with ArcSight, as well as EyeIS's CIF incorporation with Splunk. These are the same folks who brought us Security Onion 1.0 for Splunk, so I'm imagining all the possibilities for integration. Get busy with CIF, folks. It's a work in progress, but a damned good one at that.
Ping me via email if you have questions (russ at holisticinfosec dot org).
Cheers…until next month.

Acknowledgements

Wes Young, CIF developer, Principal Security Engineer, REN-ISAC

Tuesday, April 03, 2012

toolsmith: Log Parser Lizard









Prerequisites
Windows

Introduction
At RSA Conference 2012 I gave a presentation called Evil Through The Lens of Web Logs. The presentation is built on research I'm conducting for a SANS Gold paper for graduate school and pays particular attention to SQL injection and Remote File Include attacks. One of the tools discussed as very useful for analysis tactics is Log Parser Lizard. You're probably familiar with Log Parser, but I'll bet you didn't know there was a great GUI-based tool with which to leverage its raw power with ease. Log Parser Lizard (LPL) is the brainchild of Dimce Kuzmanov, a Macedonian software engineer who started Lizard Labs in 1998. In 2006, while also working as a part-time sysadmin on financial systems, Dimce recognized that he was using Log Parser on a daily basis for creating reports, analyzing logs, automatic error reporting, transferring data with text files, and more. Over time his collection of queries became unmanageable and difficult to maintain, so he created LPL for his personal use and, having benefited from free software himself, wanted to release a useful freeware product to give back to the community. While LPL very successfully harnesses Log Parser's capabilities, Dimce firmly believes that, as a great UI, it helps users learn and organize their queries with less effort. When he added log4net and regex input support, the Log Parser community really began to embrace LPL. LPL releases are a bit sporadic, usually driven by a few new features or bug fixes; future releases are planned, but not on a known frequency. Today LPL sees about 2,000 installations each month, based on trend analysis for the last three years, and approximately 80,000 users worldwide.
The current production release of LPL is 2.1 and features include:
  • Ability to organize queries, along with an improved source code editor that includes enhanced source navigation and analysis, syntax highlighting, automatic code completion, method insight, undo/redo, bookmarks, and more
  • Support for Facebook Query Language (FQL), introduced to help Facebook developers organize their queries
  • Code snippets (code templates) and constants; Log Parser Lizard also supports "constants" bound to static/shared properties from Microsoft .Net
  • Numerous other user-interface features, including an advanced grid with filtering and grouping, as well as support for charts without the Microsoft Office installation that a standalone instance of Log Parser depends on
  • Support for printing and exporting results to Excel and PDF documents
      ◦ For registered users ($26.51 USD)
  • Support for inline VB.Net code to create Log Parser SQL queries
Inline VB.net support allows you to drop your code between <% and %> marks; it will then be executed and the resulting string will be replaced in the query. Lizard Labs believes this feature will be very useful for LPL users. Before parsing logs you can move-copy-rename files, download via FTP, shutdown IIS, etc. You can also use .Net data types like DateTime for arithmetic operations and/or System.Environment settings in query parameters.

As I write this I’m testing the beta for LPL 2.5 and the new feature set includes:
  • Conditional field formatting (color, font, size, image) to identify required information; as an example, you can set conditions to color errors red and warnings yellow, or highlight a specific field if it contains a string value of interest
  • Store and organize queries in a SQL Server database for ease of use across multiple users and computers in an organization, along with backups, auditing, and all the other benefits that database storage allows
  • Excel-style row filtering
  • Ability to add columns with Excel-style formulas (with most Excel functions) and support for exporting in Excel 2007 format (more than 65,536 rows)

What would a toolsmith article be without a tool roadmap, so let's not break a good habit, eh? LPL 3.0 will likely include out-of-the-box queries for IIS web reports (as in other commercial log analysis products), support for query execution scheduling, reports sent via e-mail from LPL, command-line support, a query builder tool, a text file input format (where a single file is one record and fields can be extracted with RegEx or with Log Parser functions), and improved log4net input format. As with most of the tools we discuss, Dimce is certainly open to good ideas for the product and welcomes feedback and ideas from the user community. In total fantasy land, the future of LPL may even include queries "in the cloud", an LPL ASP.net web app that can be installed right on the server, a web service supporting LPL, mobile apps that can use that service, and a global query dictionary to which users can submit, comment on, and rate queries. "The future's so bright, I gotta wear shades." Whoa, 80's flashback, sorry.


Using Log Parser Lizard

Installing Log Parser Lizard is so straightforward it doesn’t even warrant a section. Ensure you have Log Parser and .Net 3.5 installed, then execute the LPL installer. Finito.

As described above, I've been working on research for a paper which includes analysis of a mass SQL injection attack, described in detail this past December by Mark Hofman on the SANS Internet Storm Center Diary. In addition to Mark's analysis, this popular post included many comments and replies from readers who had suffered or noted the attack in their logs, and even some helpful folks who submitted log samples. You likely remember the LizaMoon attack; the Lilupophilupop attack was quite similar. In both cases, injected sites offered a URL that then caused redirection to a fake antivirus offering. Specifically, an injected script reference to sl.php was embedded in victim sites, where sl.php bounced you to the likes of hxxp://ift72hbot.rr.nu, then on to rogue AV. I actually had to look up the .rr.nu TLD; it's the Republic of Moldova, and has been implicated recently in massive SPAM campaigns as well as the current WordPress hacks (as of this writing).
Figure 1 represents a victim site still exhibiting typical signs of compromise.

Figure 1: Lilupophilupop victim site
Victim sites were most often running ASP.net apps on IIS with MS-SQL back-ends. It was quickly learned that one identifying trait of the Lilupophilupop attack was a rather large hex blob evident in IIS logs. I've always found that checking logs for 500 errors when analyzing for SQL injection attacks can typically point you down the right path. Using a log file submitted by an ISC reader (anonymized for obvious reasons), I first built a query to seek ASP application errors from a default query included in LPL. I launched LPL, clicked IIS Logs, then ASP App Errors, replaced #IISW3C# in the FROM statement with the path to my anonymized log file, and finally clicked Run Query, as seen in Figure 2. Email me if you'd like me to send you the log file so you can experiment for yourself.

Figure 2: LPL parsing error messages
This query, including FROM D:\logs\lilupophilupop\ex111201anon.log WHERE (sc-status = 500) AND (cs-uri-stem LIKE '%.asp'), would have immediately narrowed the search vectors even before lilupophilupop was known as a keyword or part of an injected URL.
Also common to attacks of this nature might be a DECLARE statement (which defines variables) visible in logs. A query as seen in Figure 3 produced three results that included a DECLARE statement followed by a CAST (which converts an expression from one data type to another) statement, wherein an attempt to pass the hex blob to the back-end was noted.

Figure 3: LPL parsing DECLARE statements
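If you'd like to reproduce either hunt with plain Log Parser at the command line, sketches follow; the SELECT lists are mine, and the second query assumes the DECLARE/CAST payload landed in the query string:
LogParser.exe -i:IISW3C "SELECT date, time, c-ip, cs-uri-stem, sc-status FROM D:\logs\lilupophilupop\ex111201anon.log WHERE sc-status = 500 AND cs-uri-stem LIKE '%.asp'"
LogParser.exe -i:IISW3C "SELECT date, time, c-ip, cs-uri-query FROM D:\logs\lilupophilupop\ex111201anon.log WHERE cs-uri-query LIKE '%DECLARE%CAST%'"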
I clicked one of the results from 78.46.28.97, chose Select All, then Copy, and dropped the content into a text editor. I then grabbed the hex from just after the CAST statement to just prior to the AS VARCHAR statement, copied it into a Burp Suite decoder window, and chose decode as ascii hex.
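Burp's decoder is handy, but any hex-to-ASCII converter will do; assuming you saved the hex between CAST( and AS VARCHAR to a file, a one-liner gets you the same string:
# -p reads the input as a plain hex stream, -r reverses it back to raw bytes
xxd -r -p hexblob.txt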
Figure 4 shows the converted attack string.

Figure 4: Burp decoder converts hex
Long and short of it, the attack loops through all columns in all tables and updates their value by adding JavaScript to point to hxxp://lilupophilupop.com/sl.php.
This took all of 5 to 10 minutes with LPL and a little experimentation. Yes, you can do all of this with Log Parser at the command line but if you’re looking for strong query management, tidy reporting exports including charts, and downright convenience, LPL is the way to go.

In Conclusion

Log Parser Lizard is one of those indispensable tools that treads lightly on your system but offers a huge bang for the buck. Free or $26? Puhleeze. Keep in mind that while I used an IIS log sample for the article you can throw LPL at generic XML, CSV, TSV and W3C based logs all day long. Download it and put it to good use right away. Dimce would love to hear from you, and I look forward to hearing your success stories.
Ping me via email if you have questions (russ at holisticinfosec dot org).
Cheers…until next month.

Acknowledgements

Dimce Kuzmanov, lead developer and founder, Lizard Labs

Wednesday, September 30, 2009

Using OSSEC to monitor ModSecurity and Wordpress

As the October ISSA Journal begins to make the rounds, readers will note OSSEC as the topic of my toolsmith column.
The topic was chosen by Doug Burks of Security Onion as part of the Pick a Toolsmith Topic contest (we'll do it again).
As a result Doug won Zero Day Threat: The Shocking Truth of How Banks and Credit Bureaus Help Cyber Crooks Steal Your Money and Identity. Thanks again, Doug.
The article is available for all readers here.

While I discussed OSSEC as it pertains to Snort logs, PCI compliance, application (misuse) monitoring and auditing, as well as malware behavioral analysis, I spent very little time discussing the use of OSSEC with ModSecurity or Wordpress.
So here's where I magically tie it all together. ;-)
Given the title of the book Doug won, what's one way we might help prevent cyber crooks from stealing our money and identity?
Monitor our web applications, of course! With OSSEC. See how I did that?

OSSEC and mod_security

As an example, on an Ubuntu server running Apache and generating mod_security audit logs, include a localfile entry for your ModSecurity logging in ossec.conf (/var/ossec/etc).
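A minimal sketch follows; the log locations are Ubuntu/Apache2 defaults and will vary with your SecAuditLog configuration, so treat the paths as assumptions:
<!-- ModSecurity alert messages also surface in the Apache error log, which OSSEC's apache decoder understands -->
<localfile>
  <log_format>apache</log_format>
  <location>/var/log/apache2/error.log</location>
</localfile>
<!-- tail the serial ModSecurity audit log as well, if that's where your events land -->
<localfile>
  <log_format>apache</log_format>
  <location>/var/log/apache2/modsec_audit.log</location>
</localfile>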



OSSEC will then alert on mod_security events.
You'll need to tune and filter; you may receive quite a few alerts, but once optimized the results will be quite useful.



OSSEC and Wordpress

Using OSSEC HIDS with Wordpress is already nicely documented.

Highlights from OSSEC pages:
WPsyslog2 is a global log plugin for Wordpress that keeps track of all system events and writes them to syslog. It tracks events such as new posts, new profiles, new users, failed logins, logins, logouts, etc.
It also tracks the latest vulnerabilities and alerts if any of them are triggered, becoming very useful when integrated with a log analysis tool, such as OSSEC HIDS.



No matter what you wish to monitor, even if it's simple server well-being, you'll find OSSEC indispensable. Making use of it as part of your web application security arsenal is a giant step in the right direction.

Feedback welcome, as always, via comments or email.
Cheers.



Sunday, September 20, 2009

CSRF attacks and forensic analysis

Cross-site request forgery (CSRF) attacks exhibit an oft misunderstood yet immediate impact on the victim (not to mention the organization they work for) whose browser has just performed actions they did not intend, on behalf of the attacker.
Consider the critical infrastructure operator performing administrative actions via poorly coded web applications, who unknowingly falls victim to a spear phishing attack. The result is a CSRF-born attack utilized to create an administrative account on the vulnerable platform, granting the attacker complete control over a resource that might manage the likes of a nuclear power plant or a dam (pick your poison).

Enough of an impact statement for you?

There's another impact, generally less considered but no less important, resulting from CSRF attacks: they appear attributable to the known good user, and they occur in the context of an accepted browser session.
Thus, how is an investigator to fulfill her analytical duties if and when CSRF is deemed to be the likely attack vector?

I maintain two views relevant to this question.
The first is obvious. Vendors and developers should produce web applications that are not susceptible to CSRF attacks. Further, organizations, particularly those managing critical infrastructure and data with high business impact or personally identifiable information (PII), must conduct due diligence to ensure that the products used to provide their services are securely developed.

The second view places the responsibility squarely on the same organization to:
1) capture verbose and detailed web logs (especially the referrer)
2) store and retain browser histories and/or internet proxy logs for administrators who use hardened, monitored workstations, ideally with little or no internet access
Strong, clarifying policies and procedures are recommended to ensure both 1 & 2 are successful efforts.

DETAILED DISCUSSION

Web logs
Following is an attempt to clarify the benefits of verbose logging on web servers as pertinent to CSRF attack analysis, particularly where potentially vulnerable web applications (all?) are served. The example is supported by the correlative browser history. I've anonymized all examples to protect the interests of applications that are still pending repair.

A known good request for a web application administrative function, as seen in Apache logs, might appear as seen in Figure 1.


Figure 1

As expected, the referrer is http://192.168.248.102/victimApp/?page=admin: the local host making the request via the appropriate functionality provided by the application.

However, if an administrator has fallen victim to a spear phishing attempt intended to perform the same function via a CSRF attack, the log entry might appear as seen in Figure 2.


Figure 2

In Figure 2, although the source IP is the same as the known good request seen in Figure 1, it's clear that the request originated from an unexpected location, specifically http://badguy.com/poc/postCSRFvictimApp.html as seen in the referrer field.
Most attackers won't be so accommodating as to name their attack script something like postCSRFvictimApp.html, but the GET/POST should still stand out via the referrer field.
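This is exactly the sort of thing a quick pass over a combined-format access log can surface; a rough sketch, assuming the administrative functions live under /victimApp/ as in the example above, with the log path matching your CustomLog directive:
# print requests touching the app whose Referer (4th quoted field) is not the app's own host
awk -F'"' '/victimApp/ && $4 !~ /192\.168\.248\.102/' /var/log/apache2/access_log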

Browser history or proxy logs
Assuming time stamp matching and enforced browser history retention or proxy logging (major assumptions, I know), the log entries above can also be correlated. Consider the Firefox history summary seen in Figure 3.


Figure 3

The sequence of events shows the browser having made a request to badguy.com, followed by the addition of a new user via the vulnerable web application's add-user administrative function.

RECOMMENDATIONS

1) Enable the appropriate logging levels and format, and ensure that the referrer field is always captured.

For Apache servers consider the following log format:
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\"" combined
CustomLog log/access_log combined


For IIS servers be sure to enable cs(Referer) logging via IIS Manager.
Please note that it is not enabled by default in IIS and that W3C Extended Log File Format is required.

2) Retain and monitor browser histories and/or internet proxy logs for administrators who conduct high impact administrative duties via web applications. Ideally, said administrators should use hardened, monitored workstations, with little or no internet access.

3) Provide enforced policies and procedures to ensure that 1 & 2 are undertaken successfully.

Feedback welcome, as always, via comments or email.
Cheers.


