Sunday, April 1, 2012
Posted by
Corey Harrell
I was enjoying my Saturday afternoon doing various things around the house. My phone started ringing, and the caller ID showed it was from out of the area. I usually ignore these types of calls, but I answered this time because I didn’t want the ringing to wake my boys up from their nap. Dealing with a telemarketer is a lot easier than two sleep-deprived kids.
Initially when I answered there were a few seconds of silence, then the line started ringing. My thought was “wait a minute, who is calling who here.” A female voice with a heavy accent picked up the phone; I immediately got flashbacks from my days dealing with foreign call centers when I worked in technical support. Then our conversation started:
Me: “Hello”
Female Stranger: “Is this Corey Harrell?”
Me: “Yes … who’s calling?”
Female Stranger: “This is Christina from Microsoft Software Maintenance Department calling about an issue with your computer. Viruses can be installed on computers without you knowing about it.”
Me: “What company are you with again?”
Female Stranger said something that sounded like “Esolvint”
Me in a very concerned tone: “Are you saying people can infect my computer without me even knowing it?”
Female Stranger: “Yes and your computer is infected.”
I knew immediately this was a telephone technical support scam, but I stayed on the line and pretended I knew nothing because I wanted to get first-hand experience about how these criminals operate. Conversation continued:
Female Stranger: “Are you at your computer?”
Me: “Yes”
Female Stranger: “Can you click the Start button then Run”
Me: “Okay …. The Start button then what? Something called Run”
Female Stranger: “What do you see?"
Me: “A box”
Female Stranger: “What kind of box”
Me: “A box that says Open With”
Female Stranger: “What do you see in the Open With path?”
Me: “Nothing” (At this point I had to withhold what I saw because then she might be on to me.)
Female Stranger: “You need to open the Event viewer to see your computer is infected”
Female Stranger: “Can you type in e-v-e-n-t-v-w-r”
Me: “I just typed in e-v-e-n-t-v-w-r”
Female Stranger: “Can you spell what is showing in the Open with path”
Me: “Eventvwr”
The Female Stranger was taking too long to get to her point. I knew she was trying to get me to locate an error, any kind of error on my computer, to convince me my computer was infected. From there she would walk me through steps to either give her remote access to my computer, actually infect my computer with a real virus, or hand over my credit card information. I ran out of patience and changed the tone of the conversation.
Me: “Why are you trying to get me to access the Windows event viewer if you are saying I’m infected? The only thing in the Event viewer showing my computer was infected would be from an antivirus program, but my computer doesn’t have any installed. The event viewer won’t show that my computer is infected”
Female Stranger sticking to the script: “You need to access the event viewer ….”
Me (as I rudely cut her off): “You can stop following your script now”
Female Stranger: complete silence
Me: “I know your scam and I know you are trying to get me to either infect my computer or give you remote access to my computer….”
She then hung up. I believe she knew I was on to her. It’s unfortunate, since I wish she had heard everything I had to say about people like her who try to take advantage of others. My guess is she wouldn’t have cared and just moved on to the next potential victim. Could that victim be you?
I’m sharing this cautionary tale so others remember the tale as old as time: “Don’t Talk To Strangers.” Especially when it comes to your private information, and especially in the cyber world. Companies will not call you about some issue with your computer. Technical support will not contact you out of the blue knowing your computer is infected (unless it’s your help desk at work). Heck … even your neighborhood Geek won’t call you knowing there is something wrong with your computer.
If someone does then it’s a scam. Plain and simple, some criminal is trying to trick you into giving them something. It might be to get you to infect your computer, give them access to your computer, or provide them with your credit card information. The next time you pick up a phone and someone on the other end says there is an issue with your computer, let your spidey sense kick in and HANG UP.
Information about this type of scam is discussed in more detail at:
* Microsoft’s article Avoid Tech Support Phone Scams
* Sophos’ article Canadians Increasingly Defrauded by Fake Tech Support Phone Calls
* The Guardian’s article Virus Phone Scam Being Run from Call Centers in India
Updated links courtesy of Claus from Grand Stream Dreams:
Troy Hunt's Scamming the scammers – catching the virus call centre scammers red-handed
Troy Hunt's Anatomy of a virus call centre scam
I reposted my Everyday Cyber Security Facebook page article about my experience to reach a broader audience and warn others. The writing style is drastically different than what my blog readers are accustomed to. My wife even edits the articles to make sure they are understandable and useful to the average person.
Sunday, March 25, 2012
Posted by
Corey Harrell
Windows 7 has various artifacts available to help provide context about files on a system. In previous posts I illustrated how the information contained in jump lists, link files, and Word documents helped explain how a specific document was created. The first post was Microsoft Word Jump List Tidbit, where I touched on how Microsoft Word jump lists contain more information than just the documents accessed because there were references to templates and images. I expanded on the information available in Word jump lists in my presentation Ripping VSCs – Tracking User Activity. In addition to jump list information I included data parsed from link files, documents’ metadata, and the documents’ content. The end result was that these three artifacts were able to show, at a high level, how a Word document inside a Volume Shadow Copy (VSC) was created. System timelines are a great technique to see how something came about on a system, but I didn’t create one for my fake fraud case study. That is, until now.
Timelines are a valuable technique to help better understand the data we see on a system. The ways timelines can be used are limitless, but the one commonality is providing context around an artifact or file. In my fake fraud case I outlined the information I extracted from VSC 12 to show how a document was created. Here’s a quick summary of the user’s actions: the document was created with the BlueBackground_Finance_Charge.dotx template, Microsoft Word accessed a Staples icon, and the document was saved. Despite the wealth of information extracted about the document, there were still some unanswered questions. Where did the Staples image come from? What else was the user doing when the document was being created? These are just two questions a timeline can help answer.
[Figure: The Document of Interest]
Creating VSC Timelines
Ripping VSCs is a useful technique to examine VSCs, but I don’t foresee using it for timeline creation. Timelines can contain a wealth of information from one image or VSC, so extracting data across all VSCs to incorporate into a timeline would be far too much information. The approach I take with timelines is to initially include the artifacts that will help me accomplish my goals. If I see anything while working my timeline I can always add other artifacts, but starting out I prefer to limit the amount of data I need to look at. (For more about how I approach timelines check out the post Building Timelines – Thought Process Behind It.) I wanted to know more about the fraudulent document I located in VSC 12, so I narrowed my timeline data to just that VSC. I created the timeline using the following five steps:
1. Access VSCs
2. Setup Custom Log2timeline Plug-in Files
3. Create Timeline with Artifacts Information
4. Create Bodyfile with Filesystem Metadata
5. Add Filesystem Metadata to Timeline
Access VSCs
In previous posts I went into detail about how to access VSCs and I even provided references about how others access VSCs (one post was Ripping Volume Shadow Copies – Introduction). I won’t rehash the same information but I didn’t want to omit this step. I identified my VSC of interest was still numbered 12 and then I created a symbolic link named C:\vsc12 pointing to the VSC.
Setup Custom Log2timeline Plug-in Files
Log2timeline has the ability to use plug-in files so numerous plug-ins can run at the same time. I usually create custom plug-in files since I can specify the exact artifacts I want in my timeline. I setup one plug-in file to parse the artifacts located inside a specific user profile while a second plug-in file parses artifacts located throughout the system. I discussed in more depth how to create custom plug-in files in the post Building Timelines – Tools Usage. However, a quick way to create a custom file is to just copy and edit one of the built-in plug-in files. For my timeline I did the following on my Windows system to setup my two custom plug-in files.
- Browsed to the folder C:\Perl\lib\Log2t\input. This is the folder where log2timeline stores the input modules including plug-in files.
- Made two copies of the win7.lst plug-in file. I renamed one file to win7_user.lst and the other to win7_system.lst (the files can be named anything you want).
- Modified the win7_user.lst to only contain iehistory and win_link to parse Internet Explorer browser history and Windows link files respectively.
- Modified the win7_system.lst to only contain the following: oxml, prefetch, and recycler. These plug-ins parse Microsoft Office 2007 metadata, prefetch files, and the recycle bin.
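The setup above can also be scripted. Here is a minimal Python sketch, assuming the .lst plug-in files are plain text with one input-module name per line (the file names and module lists follow this post; adjust the destination folder for your log2timeline install):

```python
# Sketch of scripting the custom plug-in file setup described above.
# Assumption: a log2timeline .lst file is one module name per line.
plugin_files = {
    "win7_user.lst": ["iehistory", "win_link"],           # user-profile artifacts
    "win7_system.lst": ["oxml", "prefetch", "recycler"],  # system-wide artifacts
}

for name, modules in plugin_files.items():
    with open(name, "w") as f:
        f.write("\n".join(modules) + "\n")
```

Dropping the generated files into log2timeline's input folder (C:\Perl\lib\Log2t\input on my system) makes them available via the -f switch.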
Create Timeline with Artifacts Information
The main reason why I use custom plug-in files is to limit the amount of log2timeline commands I need to run. I could have skipped the previous step which would have caused me to run five commands instead of the following two:
- log2timeline.pl -f win7_user -r -v -w timeline.csv -Z UTC C:/vsc12/Users/harrell
- log2timeline.pl -f win7_system -r -v -w timeline.csv -Z UTC C:/vsc12
The first command ran the custom plug-in file win7_user (-f switch) to recursively (-r switch) parse the IE browser history and link files inside the harrell user profile. The Users folder inside VSC 12 had three different user profiles, so pointing log2timeline at just that one let me avoid adding unnecessary data from the other user profiles. The second command ran the win7_system plug-in file to recursively parse 2007 Office metadata, prefetch files, and recycle bins inside VSC 12. Both log2timeline commands stored the output in the file timeline.csv in UTC format.
Create Bodyfile with Filesystem Metadata
At this point my timeline was created and it contained timeline information from select artifacts inside VSC 12. The last item to add to the timeline is data from the filesystem. Rob Lee discussed in his post Shadow Timelines And Other VolumeShadowCopy Digital Forensics Techniques with the Sleuthkit on Windows how to use the Sleuth Kit (fls.exe) to create bodyfiles from VSCs. I used the method discussed in his post to execute fls.exe directly against VSC 12 as shown below.
- fls -r -m C: \\.\HarddiskVolumeShadowCopy12 >> bodyfile
The command made fls.exe recursively (-r switch) search VSC 12 for filesystem information and the output was redirected to a text file named bodyfile in mactime (-m switch) format.
Add Filesystem Metadata to Timeline
The timeline generated by Log2timeline is in csv format while the sleuthkit bodyfile is in mactime format. These two file formats are not compatible so I opted to convert the mactime bodyfile into the Log2timeline csv format. I did the conversion with the following command:
- log2timeline.pl -f mactime -w timeline.csv -Z UTC bodyfile
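For anyone curious what that conversion works from, here is a minimal Python sketch of reading the TSK body format, assuming the version 3 layout (MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime, with times as Unix epoch seconds). It is not a substitute for log2timeline's mactime module, and the sample line is hypothetical:

```python
# Parse one line of a TSK (fls -m) bodyfile into a dict with
# human-readable UTC timestamps. Assumes the v3 body format.
from datetime import datetime, timezone

FIELDS = ["md5", "name", "inode", "mode", "uid", "gid",
          "size", "atime", "mtime", "ctime", "crtime"]

def parse_body_line(line):
    record = dict(zip(FIELDS, line.rstrip("\n").split("|")))
    # The last four fields are epoch seconds; render them as UTC strings
    for key in ("atime", "mtime", "ctime", "crtime"):
        epoch = int(record[key])
        record[key] = datetime.fromtimestamp(
            epoch, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
    return record

# Hypothetical line for a file inside VSC 12
line = ("0|C:/vsc12/Users/harrell/invoice.docx|12345|r/rrwxrwxrwx|0|0|"
        "24576|1331164850|1331164850|1331164850|1331164820")
rec = parse_body_line(line)
print(rec["name"], rec["mtime"])
```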
Reviewing the Timeline
The timeline I created included the following information: filesystem metadata, Office documents’ metadata, IE browser history, prefetch files, link files, and recycle bin information. I manually included the information inside Microsoft Word’s jump list since I didn’t have the time to put together a script to automate it. The timeline provided more context about the fraudulent document I located as can be seen in the summary below.
1. Microsoft Word was opened to create the Invoice-#233-staples-Office_Supplies.docx (Office metadata)
2. BlueBackground_Finance_Charge.dotx Word template was created on the system (filesystem)
3. User account accessed the template (link files)
4. Microsoft Word accessed the template (jump lists)
5. User performed a Google search for staple (web history)
6. User visited Staples.com (web history)
7. User accessed the staples.png located in C:/Drivers/video/images/ (link files)
8. The staples.png image was created in the images folder (filesystem)
9. Microsoft Word accessed the staples.png image (jump lists)
10. User continued accessing numerous web pages on Staples.com
11. Microsoft Word document Invoice-#233-staples-Office_Supplies.docx was created on the system (office metadata and filesystem)
12. User accessed the Invoice-#233-staples-Office_Supplies.docx document (link files and jump lists)
Here are the screenshots showing the activity I summarized above.
Monday, March 19, 2012
Posted by
Corey Harrell
The one thing I like about sharing is when someone opens your eyes to additional information in an artifact you frequently encounter. Harlan has been posting about prefetch files and the information he shared changed how I look at this artifact. Harlan’s first post Prefetch Analysis, Revisited discussed how the artifact contains strings, such as file names and full paths to modules, that were either used or accessed by the executable. He also discussed how the data can not only provide information about what occurred on the system but can also be used in data reduction techniques. One data reduction technique referenced was searching the file paths for words such as temp. Harlan’s second post was Prefetch Analysis, Revisited...Again... and he expanded on what information is inside prefetch files. He broke down what was inside a prefetch file from one of my test systems where I ran Metasploit against a Java vulnerability. His analysis provided more context to what I found on the system and validated some of my findings by showing Java did in fact access the logs I identified. Needless to say, his two posts opened my eyes to additional information inside prefetch files. Information I didn’t see the first time through, but now I’m taking a second look to see what I find and to test how one of Harlan's data reduction techniques would have made things easier for me.
Validating Findings
I did a lot of posts about Java exploit artifacts but Harlan did an outstanding job breaking down what was inside one of those Java prefetch files. I still have images from other exploit artifact testing so I took a look at prefetch files from an Adobe exploit and Windows Help Center exploit. The Internet Explorer prefetch files in both images didn’t contain any references to the attack artifacts but the exploited applications’ prefetch files did.
The CVE-2010-2883 (PDF Cooltype) vulnerability is present in cooltype.dll and affects certain Adobe Reader and Acrobat versions. My previous analysis identified the following: the system had a vulnerable Adobe Reader version, a PDF exploit appeared on the system, the PDF exploit was accessed, and Adobe Reader executed. The strings in the ACRORD32.EXE-3A1F13AE.pf prefetch file helped validate the attack because they show that Adobe Reader did in fact access cooltype.dll, as shown below.
\DEVICE\HARDDISKVOLUME1\PROGRAM FILES\ADOBE\READER 9.0\READER\COOLTYPE.DLL
The prefetch file from the Windows Help Center URL Validation vulnerability system showed something similar to the cooltype.dll exploit. The Seclists Full disclosure author mentioned that Windows Media Player could be used in an attack against the Help Center vulnerability. The strings in the HELPCTR.EXE-3862B6F5.pf prefetch file showed the application did access a Windows Media Player folder during the exploit.
\DEVICE\HARDDISKVOLUME1\DOCUMENTS AND SETTINGS\ADMINISTRATOR\LOCAL SETTINGS\APPLICATION DATA\MICROSOFT\MEDIA PLAYER\
Finding Malware Faster
Prefetch files provided more information about the exploit artifacts left on a system. By itself this is valuable enough but another point Harlan mentioned was using the strings inside prefetch files for data reduction. One data reduction technique is to filter on files' paths. To demonstrate the technique and how effective it is at locating malware I ran strings across the prefetch folder in the image from the post Examining IRS Notification Letter SPAM. (note, strings is not the best tool to analyze prefetch files and I’m only using the tool to illustrate how data is reduced) I first ran the following command which resulted in 7,905 lines.
strings.exe -o irs-spam-email\prefetch\*.pf
I wanted to reduce the data by only showing the lines containing the word temp to see if anything launched from a temp folder. To accomplish this I ran grep against the strings output, which reduced my data to 84 lines (the grep -w switch matches on whole words and -i ignores case).
strings.exe -o irs-spam-email\prefetch\*.pf | grep -w -i temp
The number of lines went from 7,905 down to 84 which made it fairly easy for me to spot the following interesting lines.
\DEVICE\HARDDISKVOLUME1\DOCUME~1\ADMINI~1\LOCALS~1\TEMP\TEMPORARY DIRECTORY 1 FOR IRS%20DOCUMENT[1].ZIP\IRS DOCUMENT.EXE
\DEVICE\HARDDISKVOLUME1\DOCUME~1\ADMINI~1\LOCALS~1\TEMP\PUSK3.EXE
Using one filtering technique enabled me to quickly spot interesting executables, in addition to possibly finding the initial infection vector (a malicious zip file). This information was obtained by running only one command against the files inside a prefetch folder. In hindsight, my original analysis of prefetch files was fairly limited (executable paths, run counts, and filenames), but going forward I'll look at this artifact and the information it contains in a different light.
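For environments without grep, the same whole-word, case-insensitive filter is easy to sketch in Python. The sample lines below are made up except for the PUSK3.EXE hit from the output above:

```python
# Data reduction sketch: keep only lines containing "temp" as a whole
# word, case-insensitively (the Python equivalent of grep -w -i temp).
import re

lines = [
    r"\DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\NTDLL.DLL",
    r"\DEVICE\HARDDISKVOLUME1\DOCUME~1\ADMINI~1\LOCALS~1\TEMP\PUSK3.EXE",
    r"\DEVICE\HARDDISKVOLUME1\WINDOWS\TEMPORARY.DAT",  # "TEMPORARY" is not the whole word "temp"
]

whole_word_temp = re.compile(r"\btemp\b", re.IGNORECASE)
hits = [line for line in lines if whole_word_temp.search(line)]
for hit in hits:
    print(hit)
```

The \b word boundaries are what make this match TEMP between path separators while skipping words like TEMPORARY.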
Tuesday, March 13, 2012
Posted by
Corey Harrell
For the past few months I have been discussing a different approach to examining Volume Shadow Copies (VSCs). I’m referring to the approach as Ripping VSCs and the two different methods to implement the approach are the Practitioner and Developer Methods. The multipart Ripping VSCs series is outlined in the Introduction post. On Thursday (03/15/2012) I’m doing a presentation for a DFIROnline Meet-up about tracking user activity through VSCs using the practitioner method. The presentation is titled Ripping VSCs – Tracking User Activity and the slide deck can be found on my Google sites page.
I wanted to briefly mention a few things about the slides. The presentation is meant to complement the information I’ve been blogging about in regards to Ripping VSCs. In my Ripping VSCs posts I outlined why the approach is important, how it works, and examples showing anyone can start applying the technique to their casework. I now want to put the technique into context by showing how it might apply to an examination. Numerous types of examinations are interested in what a user was doing on a computer, so talking about tracking someone’s activities should be applicable to a wide audience. To help put the approach into context I created a fake fraud case study to demonstrate how VSCs provide a more complete picture about what someone did on a computer. The presentation will be a mixture of slides and live demos against a live Windows 7 system. Below are the demos I have lined up (if I am short on time then the last demo is getting axed):
- Previewing VSCs with Shadow Explorer
- Listing VSCs and creating symbolic links to VSCs using vsc-parser
- Parsing the link files in a user profile across VSCs using lslnk-directory-parse2.pl
- Parsing Jump Lists in a user profile across VSCs using Harlan’s jl.pl
- Extracting a Word document’s metadata across VSCs using Exiftool
- Extracting and viewing a Word document from numerous VSCs using vsc-parser and Microsoft Word
I’m not covering everything in the slides but I purposely added additional information so the slides could be used as a reference. One example is the code for the batch scripts. Lastly, I’m working on my presentation skills so please lower your expectations. :)
Sunday, March 11, 2012
Posted by
Corey Harrell
Performing examinations on the Windows 7 (and possibly 8) operating systems is going to become the norm. In anticipation of this, I’m preparing myself by improving my processes, techniques, and knowledge about the artifacts found on these operating systems. One artifact others brought to my attention, but I never tested until recently, is Jump Lists (Harlan has an excellent write-up about Jumplist Analysis). I wanted to share a quick tidbit about Microsoft Word’s Jump List.
I knew Jump Lists were a new artifact in Windows 7 which contain information about a user’s activity on a system. I thought the user activity information would resemble link files, showing what files were accessed as well as timestamps. I didn’t fully realize how much more information may be available about a user’s activity in Jump Lists until I started using Harlan’s jl.pl script included with WFA 3/e (my WFA 3/e five star review can be found here). I ran a simple test: create a Word document and see what information jl.pl parses from Word’s Jump List located in the AutomaticDestinations folder. The following is a snippet from the output:
C:\Export\jumplist-research\AutomaticDestinations\adecfb853d77462a.automaticDestinations-ms
Thu Mar 8 02:20:50 2012 C:\fake-invoice.docx
Thu Mar 8 02:17:20 2012 C:\logo.png
Thu Mar 8 02:17:03 2012 C:\Users\test\AppData\Roaming\Microsoft\Templates
C:\Users\test\AppData\Roaming\Microsoft\Templates\TP030002465.dotx
Now let’s break down the output above. I identified the Microsoft Word 2007 Jump List (adecfb853d77462a.automaticDestinations-ms) using the list of Jump List IDs on the Forensic Wiki. The last entry shows I accessed a document called fake-invoice.docx at 02:20:50 on 03/08/2012. The other two entries contain information that was previously not available when examining link files. The second entry shows I used Microsoft Word to access an image called logo.png 30 seconds before accessing the fake-invoice.docx document. In addition, the third entry shows the first thing I accessed was a Microsoft Office template. The recorded activity in the Jump List shows exactly how I created the document. I first selected a template for an invoice and made a few changes. To make the invoice look real I imported a company’s image before I saved the document for the first time at 02:20:50.
When analyzing user activity prior to Windows 7 we could gather a lot of information about how a document was created, but it wasn’t like the play-by-play found in the Jump List. Microsoft Word records the files imported into a document and this information may be useful for certain types of cases. For me this information is going to be helpful on financial cases where templates are used to create fraudulent documents. Not every Jump List exhibits this behavior though. I tested something similar with PowerPoint and the following snippet shows what was in the Jump List.
C:\Export\jumplist-research\AutomaticDestinations\f5ac5390b9115fdb.automaticDestinations-ms
Thu Mar 8 02:31:03 2012 C:\Users\Public\Videos\Sample Videos
Thu Mar 8 02:30:32 2012 C:\Users\Public\Pictures\Sample Pictures
Thu Mar 8 02:27:46 2012 C:\Users\test\Desktop
C:\Users\test\Desktop\Presentation1.pptx
As the output shows, PowerPoint only records the objects imported down to the folder level. The entries don’t show the video and image’s filenames I added to the presentation. However, Microsoft Word records the filenames and this is something to be aware of going forward because it provides more information about what a user has been doing with the program.
Nothing ground breaking but just something I noticed while testing.
Monday, March 5, 2012
Posted by
Corey Harrell
One of my employer’s responsibilities is to ensure taxpayers’ dollars are used “effectively and efficiently”. To accomplish this there are numerous auditing and investigation departments in my organization. As one might expect, I encounter a significant portion of fraud cases; from fraud audits to fraud investigations to a combination of the two. At times I get the opportunity to attend in-house trainings intended for auditors. Last week was an opportunity to attend Forensic Analytics: Methods and Techniques for Forensic Accounting Investigations by Mark Nigrini. The training covered the use of "statistical techniques such as Benford's Law, descriptive statistics, correlation, and time-series analysis to detect fraud and errors" in financial data. I try to keep an open mind with each training so I can at least identify anything to help me in information security or Digital Forensics and Incident Response (DFIR). Forensic Analytics was an interesting training and I wanted to briefly discuss a better understanding I now have about the field I assist.
What is Digital Forensics and Forensic Auditing
Anyone who is involved with DFIR understands what our field entails. We perform digital forensic investigations, which is “a process to answer questions about digital states and events that is completed in a manner so the results can be entered into a court of law”. There are numerous reasons why digital forensics is performed, including supporting: criminal investigations, internal investigations, incident response, and forensic auditing. The original purpose for digital forensics in my organization was to help support the forensic auditing function in the auditing departments. Despite having forensics in both their names, Forensic Auditing is a completely different field. It is “an examination of an organization's or individual's economic affairs, resulting in a report designed especially for use in a court of law”. Forensic audits are used whenever someone needs reliable data on an entity's financial status or activities. These types of audits can not only detect errors in financial data but can also detect fraudulent activities.
Digital forensics and forensic auditing both involve extensive data analysis but the examinations between the two are drastically different. The data examined in digital forensics can best be explained by Locard’s Exchange Principle. The principle states that when two objects come into contact there is a transfer between those objects. In the digital realm that transfer is data and digital forensics analyzes that data. Whether we are trying to determine what a person or program did on a computer we are trying to understand the data left on a computer after the person/program came into contact with it. The analysis process to understand the data uses the scientific method.

Forensic auditing deals with datasets for specific periods of time. A few examples of potential datasets are: invoices, payroll, receipts, and timesheets. Forensic auditing uses predictive analytics to detect fraud and errors in the data. Predictive analytics encompasses a variety of statistical techniques that analyze data to find anomalies. One example is Benford’s Law, which says that in many naturally occurring datasets the first digit is distributed in a specific way. This means a dataset can be tested to see which records don’t conform to the law. The picture shows data conforming to Benford’s Law; if there were numerous fraudulent records then there could be more spikes in the data (more first digits of 6, 7, 8, or 9 and fewer 1s and 2s).
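A basic first-digit test against a dataset can be sketched in a few lines of Python. The invoice amounts below are made up for illustration, and a real forensic analytics tool would add significance testing (such as chi-square) on top of this comparison; Benford's Law expects the first digit d with probability log10(1 + 1/d):

```python
# Sketch of a Benford's Law first-digit comparison against a
# hypothetical set of invoice amounts.
import math
from collections import Counter

def first_digit(n):
    # Strip any leading zeros/decimal point, then take the first digit
    return int(str(abs(n)).lstrip("0.")[0])

def benford_expected(d):
    # P(first digit = d) = log10(1 + 1/d)
    return math.log10(1 + 1 / d)

amounts = [1243.50, 1890.00, 312.75, 45.00, 1033.10, 2450.00,
           118.99, 9275.00, 1570.25, 132.40]
observed = Counter(first_digit(a) for a in amounts)
total = len(amounts)

for d in range(1, 10):
    print(f"{d}: expected {benford_expected(d):.1%}, "
          f"observed {observed[d] / total:.1%}")
```

With a realistic dataset (hundreds of records or more), digits whose observed frequency spikes well above the expected curve are the anomalies worth pulling for review.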
Benford’s Law is just one statistical technique leveraged in forensic auditing, but the basic examination process is to start with a dataset and then run different tests to identify anomalies. As I said before, this is drastically different from digital forensics, where the data is observed first and tests are run to disprove your theories.
I thought an analogy would be a good way to sum up the differences between Digital Forensics and Forensic Auditing. An office has a cabinet in the corner of the room which is filled with invoices for the previous five years. A forensic auditor would take those invoices and then analyze them to find any fraudulent activities. A digital forensic examiner would take those same invoices and tell the auditor everything about the paper the invoices are on, who created the invoices, information about how the cabinet got into the room, who may have accessed the cabinet, who was talking about the invoices, and identify other things in the office tied to the cabinet. The analogy does a fairly decent job reflecting how the two different fields can complement each other to provide a more complete understanding about the invoices in the cabinet.
Understanding My Customers (and co-workers)
I went into the Forensic Analytics training hoping for two things: to find a few techniques that I could apply to my DFIR work and to get a better understanding about who I provide digital forensic assistance to. The techniques and tests discussed for the most part did not translate over to my DFIR work, but I did get a better understanding about who my customers are and how I can provide a better digital forensic service to them. Thinking back over the past few years, I can now see I wasn’t asking the right questions because I never put myself in my customers’ shoes.
A typical statement I heard on fraud cases when I asked for additional information was the phrase “I’ll know it when I see it”. I thought maybe it was just me until I was talking to someone at PFIC last year who also supports financial investigators. He said people say the same phrase to him as well. I never completely understood what the phrase meant. In digital forensics if I was to describe something I try to put it into context. Look for artifact X and around X you may see Y and Z. I might also mention a few other artifacts to look for as well. I wouldn’t describe something by saying “I’ll know it when I see it”. Fraud auditing uses predictive analysis to see patterns in data. Tests are run against datasets to identify anomalies which are data points that fall outside the expected pattern. Those data points are possible indications of errors or fraud. When running the tests against the datasets in training I was asking myself what would fraud/errors look like and the answer to my question was “I’ll know it when I see it”.
The training gave me a better understanding about my customers (some are actually my co-workers but it’s easier to group everyone together) and the techniques they use to do their job in finding fraud. Going forward I have a better idea about how to phrase my questions so I can get more actionable information.
Preparing for the Future
I went into the training looking forward to learning about the different types of frauds, how they are detected, and spending a few days in the shoes of the people who send me the most work. I’ll admit there were a lot of times when I got distracted in the training. When a certain type of fraud was discussed my mind would start wandering about how I would approach an examination to validate whether the fraud was occurring. Instead of paying attention to how to use Excel to perform a statistical test against some financial data I found myself reflecting on: what are the different ways to commit this kind of fraud? What potential artifacts might exist on a network and where? What questions should I ask? What data sources should I be interested in? My wandering was more of a thought exercise about how to process different types of frauds so I am better prepared for what the auditing and investigations departments throw my way next.
Previously, I said the techniques and tests discussed mostly didn’t apply to disk analysis. I said mostly because the predictive analysis portion of the training helped me figure out the final piece of a technique I’ve been working on: a way to quickly identify potentially fraudulent documents. This is a technique I could leverage tomorrow when faced with certain kinds of fraud. It could reduce the number of documents to focus on, which in turn will let me provide information to the auditors and investigators faster. I also envision the technique being used not only by other digital forensic practitioners but by fraud auditors and investigators as well to detect potential frauds. I’m hoping to have a paper complete sometime before summer.
Gaining a better understanding of the people who bring me the most work, and preparing myself for what those people have in store for me tomorrow, wasn’t a bad way to spend two days after all.
Sunday, February 26, 2012
Posted by
Corey Harrell
Last week I finished reading Windows Forensic Analysis 3rd Edition by Harlan Carvey. I think WFA 3/e will be a welcome addition to anyone’s Digital Forensic and Incident Response (DFIR) library. The book has a lot to offer, but its content about Windows 7 and about process is why I’m glad it’s in my library.
All about Windows 7
When thinking about the references available for performing digital forensic examinations on a Windows 7 system, not many come to mind. We have some great presentation slides (cough cough Troy Larson cough), a few blog posts, and the paper SWGDE Technical Notes on Microsoft Windows 7. Until now, however, there hasn’t been a DFIR book whose main focus is Windows 7. WFA 3/e comes out of the gates talking about Windows 7 in Chapter 3. The chapter goes into great detail about volume shadow copies (VSCs): what VSCs are, how to access them, different methods to examine them, and the different tools available to use against them. The Windows 7 theme continues into Chapter 4, File Analysis, with topics such as event logs and jumplists (a new artifact showing user activity). Rounding out the forensic nuggets about Windows 7 is Chapter 5, Registry Analysis. At first I was worried about rereading the same information I read in Windows Forensic Analysis 2nd Edition or Windows Registry Forensics, but my worries were unfounded. The author has said numerous times that WFA 3/e is not a rewrite of his other books but a companion to them. The registry analysis chapter shows how true that statement is because it focuses on what information can be pulled from Windows 7 registry hives, and the author highlights the differences between Windows 7 and previous Windows operating systems. If you are going to encounter Windows 7 systems, then WFA 3/e is one of the references to have within reaching distance.
Process, Process, Process
WFA 3/e discusses numerous Windows artifacts and the different tools capable of parsing those artifacts. The book also provides context for the artifacts and tools by discussing the DFIR processes behind them. Right off the bat the author lays the foundation by discussing Analysis Concepts in Chapter 1. There is even a section about tools versus processes. A quote I liked was “analysts can find themselves focusing on specific tool or application rather than the overall process”. I see a lot of DFIR discussions focus on tools instead of the overall process for how those tools could be used; I even fell into this trap earlier in my career. Whenever I read a DFIR book, or any analysis book for that matter, I want to see the author explain the overall process because it makes it easier for me to translate the information over to my own work. WFA 3/e does an outstanding job discussing processes, as can be seen in various chapters. The two I want to mention specifically are Chapters 6 and 7.
Chapter 6, Malware Detection, is dedicated to how the author goes about finding malware on a system. He lays out the overall process he follows (a checklist accompanies the book) and then goes into detail about what he is looking for and which tools he uses to carry out each step. The same approach is used in Chapter 7, Timeline Analysis, where the author discusses his process for performing timeline analysis: how he approaches timelines, how he builds them, and how he examines them.
It’s nice to see the processes someone else uses, and the case experiences shared by the author helped reinforce why process is important. WFA 3/e doesn’t disappoint because the author not only provides tools to do DFIR work but lays out a process that others can follow.
Don’t Overlook the Materials Accompanying the Book
The author made the supporting material for WFA 3/e available online (on this Google page), which is a welcome feature for those of us who bought the book’s electronic version. As with the author’s previous books I already mentioned, the materials accompanying this one are full of DFIR goodies such as:
* Jumplist parser (jl.pl): the author wrote a script to parse jumplists. This is the only command-line tool I know of that can parse jumplists. I tested the script against jumplists inside VSCs and the results were impressive.
* Malware detection capability: there are different scripts to help with detecting malware, including mbr.pl to find MBR infections and wfpchk.pl to check the contents of the dllcache.
* Checklists: there are a few different checklists that may be useful references during an examination.
* Source code: the source code is provided for all the scripts. I’m teaching myself Perl, so being able to read the code helps me understand not only how each script works but how the author puts his scripts together.
Clarification about ShadowExplorer
There were no significant improvements I could suggest to make WFA 3/e better. I could make a couple of minor suggestions, but there isn’t anything glaring. However, there was something I wanted to clarify. Chapter 3, Volume Shadow Copies Analysis, mentions using ShadowExplorer to access and browse VSCs. The author mentioned that ShadowExplorer will only show the VSCs available within the volume or drive on which the program is installed, and that ShadowExplorer has to be reinstalled on a drive in order to view its VSCs. The section I’m referring to is on Kindle page 1,366. I might have misunderstood this statement, and if I did then please ignore this section of my book review.
ShadowExplorer only needs to be installed on your forensic workstation, and it can then be used to view the VSCs of any volume mounted to the workstation. The drop-down menu next to the drive letter lets you select any drive letter on the workstation to view that volume’s VSCs. I’ve used ShadowExplorer this way to view VSCs on drives connected to my system through USB docks and to view the VSCs inside a mounted forensic image. It’s a nice way to preview VSCs.
Overall Five Star Review
Overall I give WFA 3/e a five-star review (Amazon rating from 0 to 5 stars). The book has a lot to offer, from Windows 7 artifacts to DFIR processes to a better understanding of the artifacts we encounter. As I said at the beginning of the post, the book is a welcome addition to anyone’s DFIR library and a great companion to the author’s other books about digital forensics on Windows systems.
I wanted to say how humbling it was to see the author mention my blog. Before I became more active online I lurked in the shadows, following a lot of people in the DFIR community, and Harlan is one of those people. Every time I see someone mention me I am still taken aback. Thank you, Harlan, for the recognition and for including an earlier version of my RegRipper VSC batch script in your materials (an updated version of the script can be found here).