Improvise Adapt Overcome

Tuesday, April 10, 2012 Posted by Corey Harrell 3 comments

Everybody has a story about how they became involved in DFIR. Showing the different avenues people took to reach the same point can be helpful to others trying to break into the field. I’ve been thinking about my journey and the path that led me to become the forensicator I am today. This is my story …

My story doesn’t start with me getting picked up by another DFIR team, being shown the ropes by an experienced forensicator, or being educated in a digital forensic focused curriculum. My story starts many years ago when I took the oath and became a United States Marine. The Marines instilled in me the motto: improvise, adapt, and overcome. When I was in the Marines, I didn’t get the newest equipment, the latest tools, or other fancy gadgets. Things happened, and circumstances were not always the best, but I had to make do with what I had by improvising, adapting, and overcoming. This motto was taught to me when I first entered the Corps. Gradually it became a part of who I was; it became second nature when I was faced with any kind of adversity. Reflecting back on my journey, I can easily see I ended up in DFIR by improvising, adapting, and overcoming the various situations I found myself in. Before I discuss those situations, I think it’s necessary to define what exactly the Marines’ motto means:


jIIr (Star Wars Character)

Improvise: leverage the knowledge and resources available. You need to be creative to work through whatever situation you face.

Adapt: adjust to whatever situation you are facing. Whether it’s things not going as planned, a lack of resources, issues with employment, or just adversity while doing your job, you need to make adjustments and adapt to the situation at hand.

Overcome: prevail over the situation. With each situation conquered you come out more knowledgeable and in a better position to handle future adversity.

Did I Take the Wrong Job?


I was first exposed to the information security field in my undergraduate coursework and the field captivated my interest. However, at the time security jobs in my area were scarce, so I opted to go into I.T. One of my first jobs after I graduated did not offer the most ideal conditions, and I picked up on this on my first day. A few hours were spent showing me the building locations throughout the city, introducing me to a few people, and pointing out my desk. That was it; there was no guidance on what was expected of me, no explanation of the network, no training. In addition, hardly any resources were provided for us to do our jobs. To illustrate, we needed some basic equipment (cabling, crimpers, connectors, …), so I did research and identified the most cost-effective option, which came in around $300. My purchase request was denied, so I narrowed the equipment down to the bare minimum at a cost of about $70. This was still denied since it was $70 too much. This lack of support went across the board for everything in our office: you were asked to do so many things, but virtually no support was provided to make you successful. As I mentioned before, this was not the most ideal working condition.

I adapted to the environment by dedicating my own resources to increasing my skillset and knowledge. I didn’t have access to a budget, so I learned how to use free and open source software to get the job done. I couldn’t rely on any outside help, so I used my problem-solving skills to find my own answers to problems or come up with my own solutions. Within a short period of time I went from questioning my decision to take the job to managing the entire Windows network. I had the flexibility to try and do what I wanted on the network. I even used the position to increase my security skills by learning how to secure a Windows network. In the end the job became one of the best places I have worked, and my knowledge grew by leaps and bounds.

Landed My First InfoSec Gig


The way I improvised, adapted, and overcame the issues I faced at a previous employer helped me land my first information security position. I joined a network security unit within an organization’s auditing department. My initial expectation was to bring my technical expertise to the table to help perform security assessments against other New York State agencies. My first week on the job I encountered my first difficulty: the other technical person I was supposed to work with resigned, and his last week was my first week. My other co-worker was an auditor, so I didn’t have a technical person to bring me up to speed on what I needed to do. Adapting to this situation was easier because of the resources my organization provided me. I had at my disposal books, the Internet, a test network, servers, clients, great supervisors, access to previously completed work, and time. In addition to these resources, I drew on my years of experience in IT and the information security knowledge I gained in my Windows admin days. Over time I increased my knowledge about information security (at both the management and technical levels) and honed my skills in performing security assessments. On my first engagement, where I helped come up with the testing methodology against an organization, we were highly successful: within an extremely short period of time we had full control over their network and the data stored on it.

Welcome to DFIR


As I said, I’m in a security unit within an auditing department. One activity other units in my department perform is conducting fraud audits. As a result, at times auditors need assistance not only with extracting electronic information from networks but also with validating whether and how a fraud is occurring. I was tasked with setting up a digital forensic process to support these auditors even though I didn’t have any prior experience. I accepted the challenge, but I didn’t take it lightly because I understood the need to do forensics properly. I first drew on the evidence-handling experience I gained when I managed the video cameras not only mounted in vehicles but scattered throughout the city. I also reached out to a friend who was a LE forensicator, in addition to using the other resources I had available (training, books, Internet, test network, and time). I overcame the issue of setting up a digital forensic process from scratch: I established a process that went from supporting just my department to numerous departments within my organization, one capable of handling cases ranging from fraud to investigations to a sprinkle of security incidents.

Improvise – Adapt – Overcome


The Marines instilled in me how to overcome adversity in any type of situation. This mentality stayed with me as I moved on to other things in life, and it was a contributing factor to how I ended up working DFIR. Whenever you are faced with adversity just remember Gunny Highway’s words:


Forensic4cast Awards


Forensic4Cast released the 2012 award nominees. I was honored to see my name listed among them (Blog of the Year and Examiner of the Year). I am in outstanding company with Melia Kelley (Girl, Unallocated) and Eric Huber (A Fistful of Dongles), both of whom write outstanding blogs. For Examiner of the Year I’m accompanied by Kristinn Gudjonsson (log2timeline literally changed how I approach timelines) and Cindy Murphy, whose everyday efforts are improving our field. Both of these individuals are very deserving of the award. It’s humbling to see my work reflected in the Forensic4Cast awards, especially since it was only about four years ago that my supervisor’s simple request launched me into the DFIR community. I want to say thank you to those who nominated me, and I encourage anyone who hasn’t voted for the nominees to do so. People have put in a lot of their own time and resources to improve our community, and they deserve to be recognized for their efforts.

Tale as Old as Time: Don’t Talk To Strangers

Sunday, April 1, 2012 Posted by Corey Harrell 3 comments
I was enjoying my Saturday afternoon doing various things around the house. My phone started ringing, and the caller ID showed the call was from out of the area. I usually ignore these types of calls, but I answered this time because I didn’t want the ringing to wake my boys up from their nap. Dealing with a telemarketer is a lot easier than two sleep-deprived kids.

Initially when I answered there was a few seconds of silence, and then the line started ringing. My thought was “wait a minute, who is calling who here?” A female voice with a heavy accent picked up the phone; I immediately got flashbacks to my days dealing with foreign call centers when I worked in technical support. Then our conversation started:

Me: “Hello”
Female Stranger: “Is this Corey Harrell?”
Me: “Yes … who’s calling?”
Female Stranger: “This is Christina from Microsoft Software Maintenance Department calling about an issue with your computer. Viruses can be installed on computers without you knowing about it.”
Me: “What company are you with again?”
Female Stranger said something that sounded like “Esolvint”
Me in a very concerned tone: “Are you saying people can infect my computer without me even knowing it?”
Female Stranger: “Yes and your computer is infected.”

I knew immediately this was a telephone technical support scam, but I stayed on the line and pretended I knew nothing because I wanted to get first-hand experience about how these criminals operate. Conversation continued:

Female Stranger: “Are you at your computer?”
Me: “Yes”
Female Stranger: “Can you click the Start button then Run”
Me: “Okay …. The Start button then what? Something called Run”
Female Stranger: “What do you see?"
Me: “A box”
Female Stranger: “What kind of box”
Me: “A box that says Open With”
Female Stranger: “What do you see in the Open With path?”
Me: “Nothing” (At this point I had to withhold what I saw because then she might be on to me.)
Female Stranger: “You need to open the Event viewer to see your computer is infected”
Female Stranger: “Can you type in e-v-e-n-t-v-w-r”
Me: “I just typed in e-v-e-n-t-v-w-r”
Female Stranger: “Can you spell what is showing in the Open with path”
Me: “Eventvwr”
Female Stranger: “Can you spell what is showing in the Open with path”

The Female Stranger was taking too long to get to her point. I knew she was trying to get me to locate an error…any kind of error on my computer…to convince me my computer was infected. From there she would walk me through steps to either give her remote access to my computer, actually infect my computer with a real virus, or get my credit card information. I ran out of patience and changed the tone of the conversation.

Me: “Why are you trying to get me to access the Windows event viewer if you are saying I’m infected? The only thing in the Event viewer showing my computer was infected would be from an antivirus program but my computer doesn’t have any installed. The event viewer won’t show that my computer is infected”
Female Stranger sticking to the script: “You need to access the event viewer ….”
Me (as I rudely cut her off): “You can stop following your script now”
Female Stranger: complete silence
Me: “I know your scam and I know you are trying to get me to either infect my computer or give you remote access to my computer….”


She then hung up. I believe she knew I was on to her. It’s unfortunate, since I wish she had heard everything I had to say about people like her who try to take advantage of others. My guess is she wouldn’t have cared and just moved on to the next potential victim. Could that victim be you?


I’m sharing this cautionary tale so others remember the tale as old as time…”Don’t Talk To Strangers.” Especially when it comes to your private information….especially in the cyber world. Companies will not call you about some issue with your computer. Technical support will not contact you out of the blue knowing your computer is infected (unless it’s your help desk at work). Heck … even your neighborhood Geek won’t call you knowing there is something wrong with your computer.


If someone does then it’s a scam. Plain and simple some criminal is trying to trick you into giving them something. It might be to get you to infect your computer, give them access to your computer, or provide them with your credit card information. The next time you pick up a phone and someone on the other end says there is an issue with your computer let your spidey sense kick in and HANG UP.


Information about this type of scam is discussed in more detail at:


* Microsoft’s article Avoid Tech Support Phone Scams


* Sophos’ article Canadians Increasingly Defrauded by Fake Tech Support Phone Calls


* The Guardian’s article Virus Phone Scam Being Run from Call Centers in India



Updated links courtesy of Claus from Grand Stream Dreams:

Troy Hunt's Scamming the scammers – catching the virus call centre scammers red-handed

Troy Hunt's Anatomy of a virus call centre scam


I reposted my Everyday Cyber Security Facebook page article about this experience to reach a broader audience and warn others. The writing style is drastically different than what my blog readers are accustomed to. My wife even edits those articles to make sure they are understandable and useful to the average person.

Volume Shadow Copy Timeline

Sunday, March 25, 2012 Posted by Corey Harrell 2 comments
Windows 7 has various artifacts available to help provide context about files on a system. In previous posts I illustrated how the information contained in jump lists, link files, and Word documents helped explain how a specific document was created. The first post was Microsoft Word Jump List Tidbit, where I touched on how Microsoft Word jump lists contain more information than just the documents accessed because there were references to templates and images. I expanded on the information available in Word jump lists in my presentation Ripping VSCs – Tracking User Activity. In addition to jump list information I included data parsed from link files, documents’ metadata, and the documents’ content. The end result was that these three artifacts were able to show, at a high level, how a Word document inside a Volume Shadow Copy (VSC) was created. System timelines are a great technique to see how something came about on a system, but I didn’t create one for my fake fraud case study. That is, until now.

Timelines are a valuable technique to help better understand the data we see on a system. The ways timelines can be used are limitless, but the one commonality is providing context around an artifact or file. In my fake fraud case I outlined the information I extracted from VSC 12 to show how a document was created. Here’s a quick summary of the user’s actions: the document was created with the BlueBackground_Finance_Charge.dotx template, Microsoft Word accessed a Staples icon, and the document was saved. Despite the wealth of information extracted about the document, there were still some unanswered questions. Where did the Staples image come from? What else was the user doing when the document was being created? These are just two questions a timeline can help answer.

The Document of Interest


Creating VSC Timelines


Ripping VSCs is a useful technique to examine VSCs, but I don’t foresee using it for timeline creation. Timelines can contain a wealth of information from just one image or VSC, so extracting data across all VSCs to incorporate into a timeline would be way too much information. The approach I take with timelines is to initially include the artifacts that will help me accomplish my goals. If I see anything while working my timeline I can always add other artifacts, but starting out I prefer to limit the amount of stuff I need to look at. (For more about how I approach timelines check out the post Building Timelines – Thought Process Behind It). I wanted to know more about the fraudulent document I located in VSC 12, so I narrowed my timeline data to just that VSC. I created the timeline using the following five steps:

        1. Access VSCs
        2. Setup Custom Log2timeline Plug-in Files
        3. Create Timeline with Artifacts Information
        4. Create Bodyfile with Filesystem Metadata
        5. Add Filesystem Metadata to Timeline

Access VSCs


In previous posts I went into detail about how to access VSCs and I even provided references about how others access VSCs (one post was Ripping Volume Shadow Copies – Introduction). I won’t rehash the same information but I didn’t want to omit this step. I identified my VSC of interest was still numbered 12 and then I created a symbolic link named C:\vsc12 pointing to the VSC.
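The linking step can also be scripted. Below is a minimal Python sketch that builds the mklink command for a given VSC number; the helper name and default link path are my own illustration (matching the C:\vsc12 link used here), not code from vsc-parser. Note the trailing backslash on the device path, which is needed for the link to resolve.

```python
# Sketch: build the cmd.exe command that creates a symbolic link such as
# C:\vsc12 pointing at Volume Shadow Copy 12. Helper name is illustrative.

def build_vsc_link_command(vsc_number, link_path=None):
    """Return the mklink command for a given Volume Shadow Copy number."""
    if link_path is None:
        link_path = "C:\\vsc%d" % vsc_number
    # The trailing backslash on the device path is required for the
    # symbolic link to work as a directory.
    device = ("\\\\?\\GLOBALROOT\\Device\\"
              "HarddiskVolumeShadowCopy%d\\" % vsc_number)
    return 'mklink /d %s "%s"' % (link_path, device)

# The link used in this post for VSC 12:
print(build_vsc_link_command(12))
```

The returned string can then be run through cmd.exe (mklink is a cmd built-in, not a standalone executable).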

Setup Custom Log2timeline Plug-in Files


Log2timeline has the ability to use plug-in files so numerous plug-ins can run at the same time. I usually create custom plug-in files since I can specify the exact artifacts I want in my timeline. I set up one plug-in file to parse the artifacts located inside a specific user profile while a second plug-in file parses artifacts located throughout the system. I discussed in more depth how to create custom plug-in files in the post Building Timelines – Tools Usage. However, a quick way to create a custom file is to just copy and edit one of the built-in plug-in files. For my timeline I did the following on my Windows system to set up my two custom plug-in files.

        - Browsed to the folder C:\Perl\lib\Log2t\input. This is the folder where log2timeline stores the input modules including plug-in files.

        - Made two copies of the win7.lst plug-in file. I renamed one file to win7_user.lst and the other to win7_system.lst (the files can be named anything you want).

        - Modified the win7_user.lst to only contain iehistory and win_link to parse Internet Explorer browser history and Windows link files respectively.

        - Modified the win7_system.lst to only contain the following: oxml, prefetch, and recycler. These plug-ins parse Microsoft Office 2007 metadata, prefetch files, and the recycle bin.
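Since a .lst plug-in file is simply a list of input-module names, one per line, the steps above can be scripted. A rough Python sketch, using the file names and module lists described above (the helper names are my own):

```python
import os

# The two custom plug-in files described above; a .lst file is simply
# one log2timeline input-module name per line.
PLUGIN_FILES = {
    "win7_user.lst": ["iehistory", "win_link"],
    "win7_system.lst": ["oxml", "prefetch", "recycler"],
}

def lst_content(modules):
    """Return the text body of a .lst plug-in file."""
    return "\n".join(modules) + "\n"

def write_plugin_files(input_dir):
    """Write the custom .lst files into log2timeline's input folder,
    e.g. C:\\Perl\\lib\\Log2t\\input on the system described above."""
    for name, modules in PLUGIN_FILES.items():
        with open(os.path.join(input_dir, name), "w") as f:
            f.write(lst_content(modules))
```

Pointing write_plugin_files at the Log2t\input folder drops both files where log2timeline’s -f switch can find them.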

Create Timeline with Artifacts Information


The main reason I use custom plug-in files is to limit the number of log2timeline commands I need to run. I could have skipped the previous step, but that would have meant running five commands instead of the following two:

        - log2timeline.pl -f win7_user -r -v -w timeline.csv -Z UTC C:/vsc12/Users/harrell

        - log2timeline.pl -f win7_system -r -v -w timeline.csv -Z UTC C:/vsc12

The first command ran the custom plug-in file win7_user (-f switch) to recursively (-r switch) parse the IE browser history and link files inside the harrell user profile. The Users folder inside VSC 12 had three different user profiles, so pointing log2timeline at just the one let me avoid adding unnecessary data from the other profiles. The second command ran the win7_system plug-in file to recursively parse 2007 Office metadata, prefetch files, and recycle bins inside VSC 12. Both log2timeline commands stored their output in the file timeline.csv in UTC.

Create Bodyfile with Filesystem Metadata


At this point my timeline was created and it contained information from select artifacts inside VSC 12. The last item to add is data from the filesystem. Rob Lee discussed in his post Shadow Timelines And Other VolumeShadowCopy Digital Forensics Techniques with the Sleuthkit on Windows how to use the sleuthkit (fls.exe) to create a bodyfile from VSCs. I used the method discussed in his post to execute fls.exe directly against VSC 12, as shown below.

        - fls -r -m C: \\.\HarddiskVolumeShadowCopy12 >> bodyfile

The command made fls.exe recursively (-r switch) search VSC 12 for filesystem information and the output was redirected to a text file named bodyfile in mactime (-m switch) format.
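For reference, each line fls writes in -m mode is pipe-delimited body format: MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime, with timestamps as Unix epoch seconds. A small Python sketch of how one might split such a line into its fields; the sample line and field handling are illustrative, not data from the image:

```python
from datetime import datetime, timezone

# TSK body-format field order (version 3).
FIELDS = ["md5", "name", "inode", "mode", "uid", "gid", "size",
          "atime", "mtime", "ctime", "crtime"]

def parse_body_line(line):
    """Split one fls -m output line into a dict and convert the four
    epoch timestamps to UTC datetimes for easier reading."""
    rec = dict(zip(FIELDS, line.rstrip("\n").split("|")))
    for key in ("atime", "mtime", "ctime", "crtime"):
        rec[key] = datetime.fromtimestamp(int(rec[key]), tz=timezone.utc)
    return rec

# A made-up sample line (path and times are illustrative):
sample = ("0|C:/Users/harrell/file.docx|12345|r/rrwxrwxrwx|0|0|2048|"
          "1331345000|1331345050|1331345050|1331344900")
rec = parse_body_line(sample)
print(rec["name"], rec["mtime"].isoformat())
```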

Add Filesystem Metadata to Timeline


The timeline generated by Log2timeline is in csv format while the sleuthkit bodyfile is in mactime format. These two file formats are not compatible so I opted to convert the mactime bodyfile into the Log2timeline csv format. I did the conversion with the following command:

        - log2timeline.pl -f mactime -w timeline.csv -Z UTC bodyfile
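After the conversion, rows from the two log2timeline runs and the converted bodyfile are interleaved in timeline.csv, so a chronological sort makes review easier. A short Python sketch of that sort, assuming the classic l2t_csv layout where the first two columns are date (MM/DD/YYYY) and time (HH:MM:SS); the function names are my own:

```python
import csv
from datetime import datetime

def timeline_key(row):
    """Sort key for an l2t_csv row: parse the date and time columns."""
    return datetime.strptime(row[0] + " " + row[1], "%m/%d/%Y %H:%M:%S")

def sort_timeline(in_path, out_path):
    """Rewrite a timeline.csv in chronological order."""
    with open(in_path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        rows = sorted(reader, key=timeline_key)
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)
```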

Reviewing the Timeline


The timeline I created included the following information: filesystem metadata, Office documents’ metadata, IE browser history, prefetch files, link files, and recycle bin information. I manually included the information inside Microsoft Word’s jump list since I didn’t have the time to put together a script to automate it. The timeline provided more context about the fraudulent document I located as can be seen in the summary below.

1. Microsoft Word was opened to create the Invoice-#233-staples-Office_Supplies.docx (Office metadata)

2. BlueBackground_Finance_Charge.dotx Word template was created on the system (filesystem)

3. User account accessed the template (link files)

4. Microsoft Word accessed the template (jump lists)

5. User performed a Google search for staple (web history)

6. User visited Staples.com (web history)

7. User accessed the staples.png located in C:/Drivers/video/images/ (link files)

8. The staples.png image was created in the images folder (filesystem)

9. Microsoft Word accessed the staples.png image (jump lists)

10. User continued accessing numerous web pages on Staples.com

11. Microsoft Word document Invoice-#233-staples-Office_Supplies.docx was created on the system (office metadata and filesystem)

12. User accessed the Invoice-#233-staples-Office_Supplies.docx document (link files and jump lists)


Here are the screenshots showing the activity I summarized above.

Second Look at Prefetch Files

Monday, March 19, 2012 Posted by Corey Harrell 1 comments
The one thing I like about sharing is when someone opens your eyes to additional information in an artifact you frequently encounter. Harlan has been posting about prefetch files, and the information he shared changed how I look at this artifact. Harlan’s first post, Prefetch Analysis, Revisited, discussed how the artifact contains strings, such as file names and full paths to modules, that were either used or accessed by the executable. He also discussed how the data can not only provide information about what occurred on the system but also be used in data reduction techniques. One data reduction referenced was searching the file paths for words such as temp. Harlan’s second post was Prefetch Analysis, Revisited...Again..., and he expanded on what information is inside prefetch files. He broke down what was inside a prefetch file from one of my test systems where I ran Metasploit against a Java vulnerability. His analysis provided more context to what I found on the system and validated some of my findings by showing Java did in fact access the logs I identified. Needless to say, his two posts opened my eyes to additional information inside prefetch files. Information I didn’t see the first time through, but now I’m taking a second look to see what I find and to test how one of Harlan’s data reduction techniques would have made things easier for me.

Validating Findings

I did a lot of posts about Java exploit artifacts but Harlan did an outstanding job breaking down what was inside one of those Java prefetch files. I still have images from other exploit artifact testing so I took a look at prefetch files from an Adobe exploit and Windows Help Center exploit. The Internet Explorer prefetch files in both images didn’t contain any references to the attack artifacts but the exploited applications’ prefetch files did.

The CVE-2010-2883 (PDF CoolType) vulnerability is present in cooltype.dll and affects certain Adobe Reader and Acrobat versions. My previous analysis identified the following: the system had a vulnerable Adobe Reader version, a PDF exploit appeared on the system, the PDF exploit was accessed, and Adobe Reader executed. The strings in the ACRORD32.EXE-3A1F13AE.pf prefetch file helped validate the attack because they show that Adobe Reader did in fact access cooltype.dll, as shown below.

\DEVICE\HARDDISKVOLUME1\PROGRAM FILES\ADOBE\READER 9.0\READER\COOLTYPE.DLL

The prefetch file from the Windows Help Center URL Validation vulnerability system showed something similar to the cooltype.dll exploit. The Seclists Full disclosure author mentioned that Windows Media Player could be used in an attack against the Help Center vulnerability. The strings in the HELPCTR.EXE-3862B6F5.pf prefetch file showed the application did access a Windows Media Player folder during the exploit.

\DEVICE\HARDDISKVOLUME1\DOCUMENTS AND SETTINGS\ADMINISTRATOR\LOCAL SETTINGS\APPLICATION DATA\MICROSOFT\MEDIA PLAYER\

Finding Malware Faster

Prefetch files provided more information about the exploit artifacts left on a system. By itself this is valuable enough but another point Harlan mentioned was using the strings inside prefetch files for data reduction. One data reduction technique is to filter on files' paths. To demonstrate the technique and how effective it is at locating malware I ran strings across the prefetch folder in the image from the post Examining IRS Notification Letter SPAM. (note, strings is not the best tool to analyze prefetch files and I’m only using the tool to illustrate how data is reduced) I first ran the following command which resulted in 7,905 lines.

strings.exe -o irs-spam-email\prefetch\*.pf

I wanted to reduce the data by only showing the lines containing the word temp to see if anything launched from a temp folder. To accomplish this I ran grep against the strings output, which reduced my data to 84 lines (the grep -w switch matches on whole words and -i ignores case).

strings.exe -o irs-spam-email\prefetch\*.pf | grep -w -i temp

The number of lines went from 7,905 down to 84 which made it fairly easy for me to spot the following interesting lines.

\DEVICE\HARDDISKVOLUME1\DOCUME~1\ADMINI~1\LOCALS~1\TEMP\TEMPORARY DIRECTORY 1 FOR IRS%20DOCUMENT[1].ZIP\IRS DOCUMENT.EXE

\DEVICE\HARDDISKVOLUME1\DOCUME~1\ADMINI~1\LOCALS~1\TEMP\PUSK3.EXE

Using one filtering technique enabled me to quickly spot interesting executables, in addition to possibly finding the initial infection vector (a malicious zip file). This information was obtained by running only one command against the files inside a prefetch folder. In hindsight, my original analysis of prefetch files was fairly limited (executable paths, run counts, and filenames), but going forward I'll look at this artifact and the information it contains in a different light.
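The strings-and-grep pipeline above can also be approximated in Python. The sketch below pulls printable UTF-16LE strings (how prefetch files store their embedded paths) out of raw file data and keeps the ones mentioning temp. The function names are mine, and this is a quick filter over raw bytes, not a proper prefetch parser:

```python
import re

def utf16_strings(data, min_len=6):
    """Yield printable UTF-16LE strings of at least min_len characters
    found anywhere in the raw bytes (similar to strings output)."""
    pattern = re.compile(b"(?:[\x20-\x7e]\x00){%d,}" % min_len)
    for match in pattern.finditer(data):
        yield match.group().decode("utf-16-le")

def temp_hits(data):
    """Keep only the strings that mention a temp folder, the same
    data reduction as piping strings output through grep -w -i temp."""
    return [s for s in utf16_strings(data) if "TEMP" in s.upper()]
```

Running temp_hits over the bytes of each .pf file in a prefetch folder gives roughly the same short list the grep command produced.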

Ripping VSCs – Tracking User Activity

Tuesday, March 13, 2012 Posted by Corey Harrell 5 comments
For the past few months I have been discussing a different approach to examining Volume Shadow Copies (VSCs). I’m referring to the approach as Ripping VSCs and the two different methods to implement the approach are the Practitioner and Developer Methods. The multipart Ripping VSCs series is outlined in the Introduction post. On Thursday (03/15/2012) I’m doing a presentation for a DFIROnline Meet-up about tracking user activity through VSCs using the practitioner method. The presentation is titled Ripping VSCs – Tracking User Activity and the slide deck can be found on my Google sites page.

I wanted to briefly mention a few things about the slides. The presentation is meant to complement the information I’ve been blogging about in regards to Ripping VSCs. In my Ripping VSCs posts I outlined why the approach is important, how it works, and examples showing anyone can start applying the technique to their casework. I now want to put the technique into context by showing how it might apply to an examination. Numerous types of examinations are interested in what a user was doing on a computer, so talking about tracking someone’s activities should be applicable to a wider audience. To help put the approach into context I created a fake fraud case study to demonstrate how VSCs provide a more complete picture about what someone did on a computer. The presentation will be a mixture of slides and live demos against a live Windows 7 system. Below are the demos I have lined up (if I am short on time then the last demo is getting axed):

        - Previewing VSCs with Shadow Explorer
        - Listing VSCs and creating symbolic links to VSCs using vsc-parser
        - Parsing the link files in a user profile across VSCs using lslnk-directory-parse2.pl
        - Parsing Jump Lists in a user profile across VSCs using Harlan’s jl.pl
        - Extracting a Word document’s metadata across VSCs using Exiftool
        - Extracting and viewing a Word document from numerous VSCs using vsc-parser and Microsoft Word

I’m not covering everything in the slides but I purposely added additional information so the slides could be used as a reference. One example is the code for the batch scripts. Lastly, I’m working on my presentation skills so please lower your expectations. :)

Microsoft Word Jump List Tidbit

Sunday, March 11, 2012 Posted by Corey Harrell 12 comments
Performing examinations on the Windows 7 (and possibly 8) operating systems is going to become the norm. In anticipation of this, I’m preparing myself by improving my processes, techniques, and knowledge about the artifacts found on these operating systems. One artifact others brought to my attention but I never tested until recently is Jump Lists (Harlan has an excellent write-up about Jumplist Analysis). I wanted to share a quick tidbit about Microsoft Word’s Jump List.

I knew Jump Lists were a new artifact in Windows 7 that contain information about a user’s activity on a system. I thought the user activity information would resemble link files, showing what files were accessed along with timestamps. I didn’t fully realize how much more information may be available about a user’s activity in Jump Lists until I started using Harlan’s jl.pl script included with WFA 3/e (my WFA 3/e five-star review can be found here). I ran a simple test: create a Word document and see what information jl.pl parses from Word’s Jump List located in the AutomaticDestinations folder. The following is a snippet from the output:

C:\Export\jumplist-research\AutomaticDestinations\adecfb853d77462a.automaticDestinations-ms

Thu Mar 8 02:20:50 2012 C:\fake-invoice.docx
Thu Mar 8 02:17:20 2012 C:\logo.png
Thu Mar 8 02:17:03 2012 C:\Users\test\AppData\Roaming\Microsoft\Templates
C:\Users\test\AppData\Roaming\Microsoft\Templates\TP030002465.dotx

Now let’s break down the output above. I identified the Microsoft Word 2007 Jump List (adecfb853d77462a.automaticDestinations-ms) using the list of Jump List IDs on the Forensic Wiki. The most recent entry shows I accessed a document called fake-invoice.docx at 02:20:50 on 03/08/2012. The other two entries contain information that was previously not available when examining link files. The second entry shows I used Microsoft Word to access an image called logo.png about three and a half minutes before accessing the fake-invoice.docx document. In addition, the third entry shows the first thing I accessed was a Microsoft Office template. The recorded activity in the Jump List shows exactly how I created the document: I first selected a template for an invoice and made a few changes, then, to make the invoice look real, I imported a company’s image before saving the document for the first time at 02:20:50.

When analyzing user activity prior to Windows 7, we could gather a lot of information and use it to try to show how a document was created, but it wasn’t the play-by-play found in the Jump List. Microsoft Word records the files imported into a document, and this information may be useful for certain types of cases. For me this information is going to be helpful on financial cases where templates are used to create fraudulent documents. Not every Jump List exhibits this behavior though. I tested something similar with PowerPoint and the following snippet shows what was in its Jump List.

C:\Export\jumplist-research\AutomaticDestinations\f5ac5390b9115fdb.automaticDestinations-ms

Thu Mar 8 02:31:03 2012 C:\Users\Public\Videos\Sample Videos
Thu Mar 8 02:30:32 2012 C:\Users\Public\Pictures\Sample Pictures
Thu Mar 8 02:27:46 2012 C:\Users\test\Desktop
C:\Users\test\Desktop\Presentation1.pptx

As the output shows, PowerPoint only records the objects imported down to the folder level; the entries don’t show the filenames of the video and image I added to the presentation. Microsoft Word, however, does record the filenames, and this is something to be aware of going forward because it provides more information about what a user has been doing with the program.

Nothing ground breaking but just something I noticed while testing.

Digital Forensics Meets Forensic Auditing

Monday, March 5, 2012 Posted by Corey Harrell 2 comments
One of my employer’s responsibilities is to ensure taxpayers’ dollars are used “effectively and efficiently”. To accomplish this there are numerous auditing and investigation departments in my organization. As one might expect, I encounter a significant portion of fraud cases; from fraud audits to fraud investigations to a combination of the two. At times I have the opportunity to attend in-house trainings intended for auditors. Last week was an opportunity to attend Forensic Analytics: Methods and Techniques for Forensic Accounting Investigations by Mark Nigrini. The training covered the use of "statistical techniques such as Benford's Law, descriptive statistics, correlation, and time-series analysis to detect fraud and errors" in financial data. I try to keep an open mind with each training so I can identify anything that might help me in information security or Digital Forensics and Incident Response (DFIR). Forensic Analytics was an interesting training and I wanted to briefly discuss the better understanding I now have about the field I assist.

What is Digital Forensics and Forensic Auditing

Anyone who is involved with DFIR understands what our field entails. We perform digital forensic investigations, which is “a process to answer questions about digital states and events that is completed in a manner so the results can be entered into a court of law”. There are numerous reasons why digital forensics is performed, including supporting criminal investigations, internal investigations, incident response, and forensic auditing. The original purpose for digital forensics in my organization was to help support the forensic auditing function in the auditing departments. Despite both having forensics in their names, forensic auditing is a completely different field. It is “an examination of an organization's or individual's economic affairs, resulting in a report designed especially for use in a court of law”. Forensic audits are used whenever someone needs reliable data on an entity's financial status or activities. These audits can detect not only errors in financial data but also fraudulent activities.

Digital forensics and forensic auditing both involve extensive data analysis, but the examinations between the two are drastically different. The data examined in digital forensics can best be explained by Locard’s Exchange Principle, which states that when two objects come into contact there is a transfer between those objects. In the digital realm that transfer is data, and digital forensics analyzes that data. Whether we are trying to determine what a person or a program did on a computer, we are trying to understand the data left on the computer after the person or program came into contact with it. The analysis process to understand the data uses the scientific method.

Forensic auditing deals with datasets for specific periods of time. A few examples of potential datasets are invoices, payroll, receipts, and timesheets. Forensic auditing uses predictive analytics to detect fraud and errors in the data. Predictive analytics encompasses a variety of statistical techniques that analyze data to find anomalies. One example is Benford’s Law, which says that in many naturally occurring datasets the first digits are distributed in a specific logarithmic pattern (1 is the leading digit roughly 30% of the time, while 9 leads less than 5% of the time). This means a dataset can be tested to see which records don’t conform to the law. The picture shows data conforming to Benford’s Law; if there were numerous fraudulent records there could be more spikes in the data (more first digits of 6, 7, 8, or 9 and fewer 1s and 2s).
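The first-digit test is simple to sketch in code. The following is a minimal illustration (the invoice amounts and function names are made up for the example) comparing a dataset’s observed leading-digit frequencies against Benford’s expected log10(1 + 1/d) proportions:

```python
import math
from collections import Counter

def benford_expected(d):
    # Benford's Law: P(first digit = d) = log10(1 + 1/d)
    return math.log10(1 + 1 / d)

def first_digit(value):
    # First significant digit, ignoring sign, decimal point, leading zeros
    for ch in str(abs(value)):
        if ch in "123456789":
            return int(ch)
    return None  # value was zero

def first_digit_frequencies(values):
    digits = [d for d in (first_digit(v) for v in values) if d is not None]
    counts = Counter(digits)
    return {d: counts.get(d, 0) / len(digits) for d in range(1, 10)}

# Hypothetical invoice amounts; digits whose observed share sits far
# above the Benford expectation are candidates for closer review
amounts = [1234.56, 2045.00, 178.25, 9120.00, 1100.10, 345.67, 1875.00, 265.40]
observed = first_digit_frequencies(amounts)
for d in range(1, 10):
    print(f"{d}: observed {observed[d]:.2f}  expected {benford_expected(d):.2f}")
```

In practice an auditor would compute a summary statistic (such as chi-square or mean absolute deviation) over these frequencies rather than eyeball them; the sketch only shows the shape of the test.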

Benford’s Law is just one statistical technique leveraged in forensic auditing, but the basic examination process is to start with a dataset and then run different tests to identify anomalies. As I said before, this is drastically different than digital forensics, where the data is observed first and tests are run to disprove your theories.

I thought an analogy would be a good way to sum up the differences between Digital Forensics and Forensic Auditing. An office has a cabinet in the corner of the room which is filled with invoices for the previous five years. A forensic auditor would take those invoices and then analyze them to find any fraudulent activities. A digital forensic examiner would take those same invoices and tell the auditor everything about the paper the invoices are on, who created the invoices, information about how the cabinet got into the room, who may have accessed the cabinet, who was talking about the invoices, and identify other things in the office tied to the cabinet. The analogy does a fairly decent job reflecting how the two different fields can complement each other to provide a more complete understanding about the invoices in the cabinet.

Understanding My Customers (and co-workers)

I went into the Forensic Analytics training hoping for two things: to find a few techniques I could apply to my DFIR work and to get a better understanding of who I provide digital forensic assistance to. The techniques and tests discussed for the most part did not translate over to my DFIR work, but I did get a better understanding of who my customers are and how I can provide a better digital forensic service to them. Thinking back over the past few years I can now see I wasn’t asking the right questions because I never put myself in my customers’ shoes.

A typical statement I heard on fraud cases when I asked for additional information was the phrase “I’ll know it when I see it”. I thought maybe it was just me until I was talking to someone at PFIC last year who also supports financial investigators; he said people say the same phrase to him as well. I never completely understood what the phrase meant. In digital forensics, if I were to describe something I would try to put it into context: look for artifact X, and around X you may see Y and Z. I might also mention a few other artifacts to look for as well. I wouldn’t describe something by saying “I’ll know it when I see it”. Fraud auditing uses predictive analytics to see patterns in data. Tests are run against datasets to identify anomalies: data points that fall outside the expected pattern. Those data points are possible indications of errors or fraud. When running the tests against the datasets in training, I asked myself what fraud or errors would look like, and the answer to my question was “I’ll know it when I see it”.

The training gave me a better understanding about my customers (some are actually my co-workers but it’s easier to group everyone together) and the techniques they use to do their job in finding fraud. Going forward I have a better idea about how to phrase my questions so I can get more actionable information.

Preparing for the Future

I went into the training looking forward to learning about the different types of fraud, how they are detected, and spending a few days in the shoes of the people who send me the most work. I’ll admit there were a lot of times when I got distracted in the training. When a certain type of fraud was discussed my mind would start wandering to how I would approach an examination to validate whether the fraud was occurring. Instead of paying attention to how to use Excel to perform a statistical test against some financial data, I found myself reflecting on: what are the different ways to commit this kind of fraud? What potential artifacts might exist on a network and where? What questions should I ask? What data sources should I be interested in? My wandering was more of a thought exercise about how to process different types of fraud so I am better prepared for whatever the auditing and investigations departments throw my way next.

Previously, I said the techniques and tests discussed mostly didn’t apply to disk analysis. I said mostly because the predictive analytics portion of the training helped me figure out the final piece of a technique I’ve been working on. The technique is a way to quickly identify potentially fraudulent documents. This is a technique I could leverage tomorrow when faced with certain kinds of fraud. It could help reduce the number of documents to focus on, which in turn will enable me to provide information to the auditors/investigators faster. I also envision the technique being used not only by other digital forensic practitioners but also by fraud auditors and investigators to detect potential fraud. I’m hoping to have a paper complete sometime before summer.

Gaining a better understanding of the people who bring me the most work, and preparing myself to face what those people have in store for me tomorrow, wasn’t a bad way to spend two days after all.