Volume Shadow Copy Timeline

Sunday, March 25, 2012 Posted by Corey Harrell
Windows 7 has various artifacts available to help provide context about files on a system. In previous posts I illustrated how the information contained in jump lists, link files, and Word documents helped explain how a specific document was created. The first post was Microsoft Word Jump List Tidbit where I touched on how Microsoft Word jump lists contain more information than just the documents accessed, since there were also references to templates and images. I expanded on the information available in Word jump lists in my presentation Ripping VSCs – Tracking User Activity. In addition to jump list information I included data parsed from link files, documents’ metadata, and the documents’ content. The end result was that these three artifacts were able to show (at a high level) how a Word document inside a Volume Shadow Copy (VSC) was created. System timelines are a great technique for seeing how something came about on a system, but I didn’t create one for my fake fraud case study. That is, until now.

Timelines are a valuable technique to help better understand the data we see on a system. The ways timelines can be used are limitless, but the one commonality is that they provide context around an artifact or file. In my fake fraud case I outlined the information I extracted from VSC 12 to show how a document was created. Here’s a quick summary of the user’s actions: the document was created with the bluebackground_finance_charge.dotx template, Microsoft Word accessed a Staples icon, and the document was saved. Despite the wealth of information extracted about the document, there were still some unanswered questions. Where did the Staples image come from? What else was the user doing while the document was being created? These are just two questions a timeline can help answer.

The Document of Interest


Creating VSC Timelines


Ripping VSCs is a useful technique to examine VSCs but I don’t foresee using it for timeline creation. Timelines can contain a wealth of information from just one image or VSC, so extracting data across all VSCs to incorporate into a timeline would produce way too much information. The approach I take with timelines is to initially include only the artifacts that will help me accomplish my goals. If I see anything of interest when working my timeline I can always add other artifacts, but starting out I prefer to limit the amount of stuff I need to look at. (For more about how I approach timelines check out the post Building Timelines – Thought Process Behind It). I wanted to know more about the fraudulent document I located in VSC 12 so I narrowed my timeline data to just that VSC. I created the timeline using the following five steps:

        1. Access VSCs
        2. Setup Custom Log2timeline Plug-in Files
        3. Create Timeline with Artifacts Information
        4. Create Bodyfile with Filesystem Metadata
        5. Add Filesystem Metadata to Timeline

Access VSCs


In previous posts I went into detail about how to access VSCs and I even provided references about how others access VSCs (one post was Ripping Volume Shadow Copies – Introduction). I won’t rehash the same information but I didn’t want to omit this step. I identified my VSC of interest was still numbered 12 and then I created a symbolic link named C:\vsc12 pointing to the VSC.

Setup Custom Log2timeline Plug-in Files


Log2timeline has the ability to use plug-in files so numerous plug-ins can run at the same time. I usually create custom plug-in files since I can specify the exact artifacts I want in my timeline. I set up one plug-in file to parse the artifacts located inside a specific user profile while a second plug-in file parses artifacts located throughout the system. I discussed in more depth how to create custom plug-in files in the post Building Timelines – Tools Usage. However, a quick way to create a custom file is to just copy and edit one of the built-in plug-in files. For my timeline I did the following on my Windows system to set up my two custom plug-in files.

        - Browsed to the folder C:\Perl\lib\Log2t\input. This is the folder where log2timeline stores the input modules including plug-in files.

        - Made two copies of the win7.lst plug-in file. I renamed one file to win7_user.lst and the other to win7_system.lst (the files can be named anything you want).

        - Modified the win7_user.lst to only contain iehistory and win_link to parse Internet Explorer browser history and Windows link files respectively.

        - Modified the win7_system.lst to only contain the following: oxml, prefetch, and recycler. These plug-ins parse Microsoft Office 2007 metadata, prefetch files, and the recycle bin.
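As a sketch of what the two files above would look like (inferred from the post's description; a log2timeline plug-in file is simply a plain-text list of input module names, one per line, and the names must match log2timeline's built-in input modules), win7_user.lst would contain:

```
iehistory
win_link
```

and win7_system.lst would similarly contain the three lines oxml, prefetch, and recycler.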

Create Timeline with Artifacts Information


The main reason why I use custom plug-in files is to limit the number of log2timeline commands I need to run. Had I skipped the previous step I would have needed to run five commands instead of the following two:

        - log2timeline.pl -f win7_user -r -v -w timeline.csv -Z UTC C:/vsc12/Users/harrell

        - log2timeline.pl -f win7_system -r -v -w timeline.csv -Z UTC C:/vsc12

The first command ran the custom plug-in file win7_user (-f switch) to recursively (-r switch) parse the IE browser history and link files inside the harrell user profile. The Users folder inside VSC 12 had three different user profiles, so pointing log2timeline at just that one profile let me avoid adding unnecessary data from the other user profiles. The second command ran the win7_system plug-in file to recursively parse 2007 Office metadata, prefetch files, and recycle bins inside VSC 12. Both log2timeline commands stored the output in the file timeline.csv with timestamps in UTC.

Create Bodyfile with Filesystem Metadata


At this point my timeline was created and it contained timeline information from select artifacts inside VSC 12. The last item to add to the timeline is data from the filesystem. Rob Lee discussed in his post Shadow Timelines And Other VolumeShadowCopy Digital Forensics Techniques with the Sleuthkit on Windows how to use the sleuthkit (fls.exe) to create bodyfiles from VSCs. I used the method discussed in his post to execute fls.exe directly against VSC 12 as shown below.

        - fls -r -m C: \\.\HarddiskVolumeShadowCopy12 >> bodyfile

The command made fls.exe recursively (-r switch) search VSC 12 for filesystem information and the output was redirected to a text file named bodyfile in mactime (-m switch) format.
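The body file that fls writes is a simple pipe-delimited text format: MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime, with the four timestamps stored as Unix epoch values. As a rough illustration (my own sketch, not part of the original post; the sample line below is made up), a few lines of Python can split a body-file line into its fields:

```python
# Sketch: parse one line of a Sleuth Kit body file. The sample line is
# fabricated for illustration; the field layout is
# MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime.
from datetime import datetime, timezone

FIELDS = ["md5", "name", "inode", "mode", "uid", "gid",
          "size", "atime", "mtime", "ctime", "crtime"]

def parse_body_line(line):
    """Return a dict mapping field names to values for one body-file line."""
    values = line.rstrip("\n").split("|")
    record = dict(zip(FIELDS, values))
    # The four timestamps are Unix epoch seconds; 0 means "not set".
    for key in ("atime", "mtime", "ctime", "crtime"):
        record[key] = int(record[key])
    return record

sample = "0|C:/Users/harrell/staples.png|12345-128-1|r/rrwxrwxrwx|0|0|4096|1331164800|1331164800|1331164800|1331164800"
rec = parse_body_line(sample)
print(rec["name"], datetime.fromtimestamp(rec["mtime"], tz=timezone.utc))
```

This is essentially what the mactime plug-in does in the conversion step that follows: read each pipe-delimited record and re-emit it in the l2t csv layout.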

Add Filesystem Metadata to Timeline


The timeline generated by Log2timeline is in csv format while the sleuthkit bodyfile is in mactime format. These two file formats are not compatible so I opted to convert the mactime bodyfile into the Log2timeline csv format. I did the conversion with the following command:

        - log2timeline.pl -f mactime -w timeline.csv -Z UTC bodyfile

Reviewing the Timeline


The timeline I created included the following information: filesystem metadata, Office documents’ metadata, IE browser history, prefetch files, link files, and recycle bin information. I manually included the information from Microsoft Word’s jump list since I didn’t have the time to put together a script to automate it. The timeline provided more context about the fraudulent document I located, as can be seen in the summary below.

1. Microsoft Word was opened to create the Invoice-#233-staples-Office_Supplies.docx (Office metadata)

2. BlueBackground_Finance_Charge.dotx Word template was created on the system (filesystem)

3. User account accessed the template (link files)

4. Microsoft Word accessed the template (jump lists)

5. User performed a Google search for staple (web history)

6. User visited Staples.com (web history)

7. User accessed the staples.png located in C:/Drivers/video/images/ (link files)

8. The staples.png image was created in the images folder (filesystem)

9. Microsoft Word accessed the staples.png image (jump lists)

10. User continued accessing numerous web pages on Staples.com

11. Microsoft Word document Invoice-#233-staples-Office_Supplies.docx was created on the system (office metadata and filesystem)

12. User accessed the Invoice-#233-staples-Office_Supplies.docx document (link files and jump lists)


Here are the screenshots showing the activity I summarized above.

Second Look at Prefetch Files

Monday, March 19, 2012 Posted by Corey Harrell
The one thing I like about sharing is when someone opens your eyes to additional information in an artifact you frequently encounter. Harlan has been posting about prefetch files and the information he shared changed how I look at this artifact. Harlan’s first post Prefetch Analysis, Revisited discussed how the artifact contains strings, such as file names and full paths to modules, that were either used or accessed by the executable. He also discussed how the data can not only provide information about what occurred on the system but could also be used in data reduction techniques. One data reduction technique referenced was searching the file paths for words such as temp. Harlan’s second post was Prefetch Analysis, Revisited...Again... and he expanded on what information is inside prefetch files. He broke down what was inside a prefetch file from one of my test systems where I ran Metasploit against a Java vulnerability. His analysis provided more context to what I found on the system and validated some of my findings by showing Java did in fact access the logs I identified. Needless to say, his two posts opened my eyes to additional information inside prefetch files. It's information I didn’t see the first time through, but now I’m taking a second look to see what I find and to test out how one of Harlan's data reduction techniques would have made things easier for me.

Validating Findings

I did a lot of posts about Java exploit artifacts but Harlan did an outstanding job breaking down what was inside one of those Java prefetch files. I still have images from other exploit artifact testing so I took a look at prefetch files from an Adobe exploit and Windows Help Center exploit. The Internet Explorer prefetch files in both images didn’t contain any references to the attack artifacts but the exploited applications’ prefetch files did.

The CVE-2010-2883 (PDF Cooltype) vulnerability is present in cooltype.dll and affects certain Adobe Reader and Acrobat versions. My previous analysis identified the following: the system had a vulnerable Adobe Reader version, a PDF exploit appeared on the system, the PDF exploit was accessed, and Adobe Reader executed. The strings in the ACRORD32.EXE-3A1F13AE.pf prefetch file helped validate the attack because they show that Adobe Reader did in fact access cooltype.dll, as shown below.

\DEVICE\HARDDISKVOLUME1\PROGRAM FILES\ADOBE\READER 9.0\READER\COOLTYPE.DLL

The prefetch file from the Windows Help Center URL Validation vulnerability system showed something similar to the cooltype.dll exploit. The Seclists Full Disclosure author mentioned that Windows Media Player could be used in an attack against the Help Center vulnerability. The strings in the HELPCTR.EXE-3862B6F5.pf prefetch file showed the application did access a Windows Media Player folder during the exploit.

\DEVICE\HARDDISKVOLUME1\DOCUMENTS AND SETTINGS\ADMINISTRATOR\LOCAL SETTINGS\APPLICATION DATA\MICROSOFT\MEDIA PLAYER\

Finding Malware Faster

Prefetch files provided more information about the exploit artifacts left on a system. By itself this is valuable enough, but another point Harlan mentioned was using the strings inside prefetch files for data reduction. One data reduction technique is to filter on files' paths. To demonstrate the technique and how effective it is at locating malware, I ran strings across the prefetch folder in the image from the post Examining IRS Notification Letter SPAM (note: strings is not the best tool to analyze prefetch files; I’m only using it to illustrate how data is reduced). I first ran the following command, which resulted in 7,905 lines.

strings.exe -o irs-spam-email\prefetch\*.pf

I wanted to reduce the data by only showing the lines containing the word temp to see if anything launched from a temp folder. To accomplish this I ran grep against the strings output, which reduced my data to 84 lines (the grep -w switch matches on whole words and -i ignores case).

strings.exe -o irs-spam-email\prefetch\*.pf | grep -w -i temp

The number of lines went from 7,905 down to 84 which made it fairly easy for me to spot the following interesting lines.

\DEVICE\HARDDISKVOLUME1\DOCUME~1\ADMINI~1\LOCALS~1\TEMP\TEMPORARY DIRECTORY 1 FOR IRS%20DOCUMENT[1].ZIP\IRS DOCUMENT.EXE

\DEVICE\HARDDISKVOLUME1\DOCUME~1\ADMINI~1\LOCALS~1\TEMP\PUSK3.EXE

Using one filtering technique enabled me to quickly spot interesting executables in addition to possibly finding the initial infection vector (a malicious zip file). This information was obtained by running only one command against the files inside a prefetch folder. In hindsight, my original analysis of the prefetch files was fairly limited (executable paths, run counts, and filenames) but going forward I'll look at this artifact and the information it contains in a different light.
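The strings-plus-grep filter above can be approximated in a few lines of Python. Prefetch files store their embedded paths as UTF-16LE, so the sketch below (my own illustration, not a tool from the post, run against a fabricated byte buffer) decodes printable UTF-16 runs from raw bytes and keeps only the ones containing a whole word, mimicking grep -w -i:

```python
import re

def utf16_strings(data, min_len=4):
    """Extract printable UTF-16LE strings (ASCII range) from raw bytes."""
    # A printable ASCII char encoded as UTF-16LE is the char byte plus a null byte.
    pattern = re.compile(rb"(?:[\x20-\x7e]\x00){%d,}" % min_len)
    return [m.group().decode("utf-16-le") for m in pattern.finditer(data)]

def filter_word(strings, word):
    """Keep strings containing the whole word, case-insensitively (like grep -w -i)."""
    word_re = re.compile(r"\b%s\b" % re.escape(word), re.IGNORECASE)
    return [s for s in strings if word_re.search(s)]

# Fake prefetch-like buffer for illustration only.
buf = "\\DEVICE\\HARDDISKVOLUME1\\TEMP\\PUSK3.EXE".encode("utf-16-le") \
      + b"\x00\x00" \
      + "\\DEVICE\\HARDDISKVOLUME1\\WINDOWS\\SYSTEM32\\NTDLL.DLL".encode("utf-16-le")

hits = filter_word(utf16_strings(buf), "temp")
print(hits)
```

Note how the whole-word match behaves like grep -w: searching for "system" would not match SYSTEM32, because the digit 3 continues the word.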

Ripping VSCs – Tracking User Activity

Tuesday, March 13, 2012 Posted by Corey Harrell
For the past few months I have been discussing a different approach to examining Volume Shadow Copies (VSCs). I’m referring to the approach as Ripping VSCs and the two different methods to implement the approach are the Practitioner and Developer Methods. The multipart Ripping VSCs series is outlined in the Introduction post. On Thursday (03/15/2012) I’m doing a presentation for a DFIROnline Meet-up about tracking user activity through VSCs using the practitioner method. The presentation is titled Ripping VSCs – Tracking User Activity and the slide deck can be found on my Google sites page.

I wanted to briefly mention a few things about the slides. The presentation is meant to complement the information I’ve been blogging about in regards to Ripping VSCs. In my Ripping VSCs posts I outlined why the approach is important, how it works, and examples showing anyone can start applying the technique to their casework. I now want to put the technique into context by showing how it might apply to an examination. Numerous types of examinations are interested in what a user was doing on a computer, so talking about tracking someone’s activities should be applicable to a wide audience. To help put the approach into context I created a fake fraud case study to demonstrate how VSCs provide a more complete picture about what someone did on a computer. The presentation will be a mixture of slides and live demos against a live Windows 7 system. Below are the demos I have lined up (if I am short on time then the last demo is getting axed):

        - Previewing VSCs with Shadow Explorer
        - Listing VSCs and creating symbolic links to VSCs using vsc-parser
        - Parsing the link files in a user profile across VSCs using lslnk-directory-parse2.pl
        - Parsing Jump Lists in a user profile across VSCs using Harlan’s jl.pl
        - Extracting a Word document’s metadata across VSCs using Exiftool
        - Extracting and viewing a Word document from numerous VSCs using vsc-parser and Microsoft Word

I’m not covering everything in the slides but I purposely added additional information so the slides could be used as a reference. One example is the code for the batch scripts. Lastly, I’m working on my presentation skills so please lower your expectations. :)

Microsoft Word Jump List Tidbit

Sunday, March 11, 2012 Posted by Corey Harrell
Performing examinations on the Windows 7 (and possibly 8) operating systems is going to become the norm. In anticipation of this occurring, I’m preparing myself by improving my processes, techniques, and knowledge about the artifacts found on these operating systems. One artifact others brought to my attention, but that I never tested until recently, is Jump Lists (Harlan has an excellent write-up about Jumplist Analysis). I wanted to share a quick tidbit about Microsoft Word’s Jump List.

I knew Jump Lists were a new artifact in Windows 7 which contain information about a user’s activity on a system. I thought the user activity information would resemble something similar to link files, showing what files were accessed as well as timestamps. I didn’t fully realize how much more information may be available about a user’s activity in Jump Lists until I started using Harlan’s jl.pl script included with WFA 3/e (my WFA 3/e five star review can be found here). I ran a simple test: create a Word document and see what information jl.pl parses from Word’s Jump List located in the AutomaticDestinations folder. The following is a snippet from the output:

C:\Export\jumplist-research\AutomaticDestinations\adecfb853d77462a.automaticDestinations-ms

Thu Mar 8 02:20:50 2012 C:\fake-invoice.docx
Thu Mar 8 02:17:20 2012 C:\logo.png
Thu Mar 8 02:17:03 2012 C:\Users\test\AppData\Roaming\Microsoft\Templates
C:\Users\test\AppData\Roaming\Microsoft\Templates\TP030002465.dotx

Now let’s break down the output above. I identified the Microsoft Word 2007 Jump List (adecfb853d77462a.automaticDestinations-ms) using the list of Jump List Ids on the Forensic Wiki. The last entry shows I accessed a document called fake-invoice.docx at 02:20:50 on 03/08/2012. The other two entries contain information that was previously not available when examining link files. The second entry shows I used Microsoft Word to access an image called logo.png 30 seconds before accessing the fake-invoice.docx document. In addition, the third entry shows the first thing I accessed was a Microsoft Office template. The recorded activity in the Jump List shows exactly how I created the document. I first selected a template for an invoice and made a few changes. To make the invoice look real I imported a company’s image before I saved the document for the first time at 02:20:50.

When analyzing user activity prior to Windows 7 we could gather a lot of information about how a document was created. We could use the information to try to show how the document was created but it wasn’t like the play by play found in the Jump List. Microsoft Word records the files imported into a document and this information may be useful for certain types of cases. For me this information is going to be helpful on financial cases where templates are used to create fraudulent documents. Not every Jump List exhibits this behavior though. I tested something similar with PowerPoint and the following snippet shows what was in the Jump List.

C:\Export\jumplist-research\AutomaticDestinations\f5ac5390b9115fdb.automaticDestinations-ms

Thu Mar 8 02:31:03 2012 C:\Users\Public\Videos\Sample Videos
Thu Mar 8 02:30:32 2012 C:\Users\Public\Pictures\Sample Pictures
Thu Mar 8 02:27:46 2012 C:\Users\test\Desktop
C:\Users\test\Desktop\Presentation1.pptx

As the output shows, PowerPoint only records the objects imported down to the folder level. The entries don’t show the filenames of the video and image I added to the presentation. However, Microsoft Word records the filenames, and this is something to be aware of going forward because it provides more information about what a user has been doing with the program.
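Identifying which application a Jump List belongs to is just a lookup of the AppID (the hex string before .automaticDestinations-ms) against a published list such as the one on the Forensic Wiki. A minimal sketch, seeded only with the two AppIDs seen in this post:

```python
# Minimal AppID lookup sketch; the two entries below are the Jump Lists
# discussed in this post (Word 2007 and PowerPoint 2007). A real mapping
# would be populated from the Forensic Wiki's Jump List ID list.
APP_IDS = {
    "adecfb853d77462a": "Microsoft Word 2007",
    "f5ac5390b9115fdb": "Microsoft PowerPoint 2007",
}

def identify_jump_list(filename):
    """Map an *.automaticDestinations-ms file name to an application, if known."""
    app_id = filename.split(".")[0].lower()
    return APP_IDS.get(app_id, "unknown AppID: " + app_id)

print(identify_jump_list("adecfb853d77462a.automaticDestinations-ms"))
```

Unrecognized AppIDs come back flagged as unknown, which is itself useful: an AppID with user activity that maps to no known application is worth a closer look.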

Nothing ground breaking but just something I noticed while testing.

Digital Forensics Meets Forensic Auditing

Monday, March 5, 2012 Posted by Corey Harrell
One of my employer’s responsibilities is to ensure taxpayers’ dollars are used “effectively and efficiently”. To accomplish this there are numerous auditing and investigation departments in my organization. As one might expect I encounter a significant portion of fraud cases; from fraud audits to fraud investigations to a combination of the two. At times I get mandated, er, have the opportunity to attend in-house trainings intended for auditors. Last week was an opportunity to attend Forensic Analytics: Methods and Techniques for Forensic Accounting Investigations by Mark Nigrini. The training covered the use of "statistical techniques such as Benford's Law, descriptive statistics, correlation, and time-series analysis to detect fraud and errors" in financial data. I try to keep an open mind with each training so I can at least identify anything to help me in information security or Digital Forensics and Incident Response (DFIR). Forensic Analytics was an interesting training and I wanted to briefly discuss a better understanding I now have about the field I assist.

What is Digital Forensics and Forensic Auditing

Anyone who is involved with DFIR understands what our field entails. We perform digital forensic investigations, which is “a process to answer questions about digital states and events that is completed in a manner so the results can be entered into a court of law”. There are numerous reasons why digital forensics is performed, including supporting criminal investigations, internal investigations, incident response, and forensic auditing. The original purpose for digital forensics in my organization was to help support the forensic auditing function in the auditing departments. Despite having forensics in both their names, Forensic Auditing is a completely different field. It is “an examination of an organization's or individual's economic affairs, resulting in a report designed especially for use in a court of law”. Forensic audits are used whenever someone needs reliable data on an entity's financial status or activities. These types of audits can not only detect errors in financial data but can also detect fraudulent activities.

Digital forensics and forensic auditing both involve extensive data analysis but the examinations between the two are drastically different. The data examined in digital forensics can best be explained by Locard’s Exchange Principle. The principle states that when two objects come into contact there is a transfer between those objects. In the digital realm that transfer is data and digital forensics analyzes that data. Whether we are trying to determine what a person or program did on a computer we are trying to understand the data left on a computer after the person/program came into contact with it. The analysis process to understand the data uses the scientific method.

Forensic auditing deals with datasets for specific periods of time. A few examples of potential datasets are: invoices, payroll, receipts, and timesheets. Forensic auditing uses predictive analytics to detect fraud and errors in the data. Predictive analytics encompasses a variety of statistical techniques that analyze data to find anomalies. One example is Benford’s Law, which says that in many lists of numbers the first digits are distributed in a specific, logarithmic way. This means a dataset can be tested to see which records don’t conform to the law. The picture shows data conforming to Benford’s Law; if there were numerous fraudulent records then there could be more spikes in the data (more first digits of 6, 7, 8, or 9 and fewer 1s and 2s).
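Benford’s Law predicts that the leading digit d appears with probability log10(1 + 1/d), so a 1 leads about 30.1% of the time and a 9 only about 4.6%. As a small sketch of the idea (my own illustration; the invoice amounts below are hypothetical values chosen only to show the comparison), observed first-digit frequencies can be compared against those expected proportions:

```python
import math
from collections import Counter

def benford_expected():
    """Expected proportion of each leading digit 1-9 under Benford's Law."""
    return {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit_freq(amounts):
    """Observed proportion of each leading digit in a list of positive numbers."""
    # Strip leading zeros and decimal points so 0.07 yields a first digit of 7.
    digits = [int(str(abs(a)).lstrip("0.")[0]) for a in amounts]
    counts = Counter(digits)
    total = len(digits)
    return {d: counts.get(d, 0) / total for d in range(1, 10)}

# Hypothetical invoice amounts, for illustration only.
invoices = [120.50, 1834.00, 99.10, 1402.75, 230.00, 187.20, 1050.00, 940.00]
expected = benford_expected()
observed = first_digit_freq(invoices)
for d in range(1, 10):
    flag = " <-- spike?" if observed[d] > expected[d] + 0.10 else ""
    print(f"{d}: expected {expected[d]:.3f}  observed {observed[d]:.3f}{flag}")
```

A real Benford test on a dataset this small would be meaningless; auditors run it against thousands of records and follow up on the digits whose observed frequency deviates significantly from the expected curve.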

Benford’s Law is just one statistical technique leveraged in forensic auditing, but the basic examination process is to start with a dataset and then run different tests to identify anomalies. As I said before, this is drastically different from digital forensics, where the data is observed first and tests are run to disprove your theories.

I thought an analogy would be a good way to sum up the differences between Digital Forensics and Forensic Auditing. An office has a cabinet in the corner of the room which is filled with invoices for the previous five years. A forensic auditor would take those invoices and then analyze them to find any fraudulent activities. A digital forensic examiner would take those same invoices and tell the auditor everything about the paper the invoices are on, who created the invoices, information about how the cabinet got into the room, who may have accessed the cabinet, who was talking about the invoices, and identify other things in the office tied to the cabinet. The analogy does a fairly decent job reflecting how the two different fields can complement each other to provide a more complete understanding about the invoices in the cabinet.

Understanding My Customers (and co-workers)

I went into the Forensic Analytics training hoping for two things: to find a few techniques that I could apply to my DFIR work and to get a better understanding about who I provide digital forensic assistance to. The techniques and tests discussed for the most part did not translate over to my DFIR work, but I did get a better understanding about who my customers are and how I can provide a better digital forensic service to them. Thinking back over the past few years I can now see I wasn’t asking the right questions because I never put myself in my customers’ shoes.

A typical statement I heard on fraud cases when I asked for additional information was the phrase “I’ll know it when I see it”. I thought maybe it was just me until I was talking to someone at PFIC last year who also supports financial investigators. He said people say the same phrase to him as well. I never completely understood what the phrase meant. In digital forensics if I were to describe something I would try to put it into context. Look for artifact X and around X you may see Y and Z. I might also mention a few other artifacts to look for as well. I wouldn’t describe something by saying “I’ll know it when I see it”. Fraud auditing uses predictive analysis to see patterns in data. Tests are run against datasets to identify anomalies, which are data points that fall outside the expected pattern. Those data points are possible indications of errors or fraud. When running the tests against the datasets in training, I asked myself what fraud or errors would look like, and the answer to my question was “I’ll know it when I see it”.

The training gave me a better understanding about my customers (some are actually my co-workers but it’s easier to group everyone together) and the techniques they use to do their job in finding fraud. Going forward I have a better idea about how to phrase my questions so I can get more actionable information.

Preparing for the Future

I went into the training looking forward to learning about the different types of frauds, how they are detected, and spending a few days in the shoes of the people who send me the most work. I’ll admit there were a lot of times when I got distracted in the training. When a certain type of fraud was discussed my mind would start wandering about how I would approach an examination to validate if the fraud was occurring. Instead of paying attention to how to use Excel to perform a statistical test against some financial data I found myself reflecting on: what are the different ways to commit this kind of fraud? What potential artifacts might exist on a network, and where? What questions should I ask? What data sources should I be interested in? My wandering was more of a thought exercise about how to approach different types of frauds so I am better prepared for what the auditing and investigations departments throw my way next.

Previously, I said the techniques and tests discussed mostly didn’t apply to disk analysis. I said mostly because the predictive analysis portion of the training helped me figure out the final piece to a technique I’ve been working on. The technique is a way to quickly identify potential fraudulent documents. This is a technique I could leverage tomorrow when faced with certain kinds of fraud. It could help reduce the amount of documents to focus on which in turn will enable me to provide information to the auditors/investigators faster. I also envision the technique not only being used by other digital forensic practitioners but fraud auditors and investigators can use it as well to detect potential frauds. I’m hoping to have a paper complete sometime before summer.

Gaining a better understanding about the people who bring me the most work and preparing myself to face what those people have in store for me tomorrow wasn’t a bad way to spend two days after all.

Review of Windows Forensic Analysis 3rd Edition

Sunday, February 26, 2012 Posted by Corey Harrell
Last week I finished reading Windows Forensic Analysis 3rd Edition by Harlan Carvey. I think WFA 3/e will be a welcome addition to anyone’s Digital Forensic and Incident Response (DFIR) library. The book has a lot to offer, but the content about Windows 7 and processes is why I’m glad it’s in my library.

All about Windows 7

When thinking about the references we have available when performing digital forensic examinations on a Windows 7 system, there aren’t a lot that come to mind. We have some great presentation slides (cough cough Troy Larson cough), a few blog posts, and the paper SWDGE Technical Notes on Microsoft Windows 7. However, until now there hasn’t been a DFIR book whose main focus is Windows 7. WFA 3/e comes out of the gates talking about Windows 7 in Chapter 3. The chapter goes into great detail about volume shadow copies (VSCs): what VSCs are, how to access them, different methods to examine them, and different tools available to use against them. The Windows 7 theme continues into Chapter 4 File Analysis with topics such as event logs and jumplists (a new artifact showing user activity). Rounding out the forensic nuggets about Windows 7 is Chapter 5 Registry Analysis. At first I was worried about reading the same information I read in Windows Forensic Analysis 2nd Edition or Windows Registry Forensics, but my worries were unfounded. The author has said numerous times that WFA 3/e is not a rewrite of his other books but a companion book. The registry analysis chapter showed how true that statement is because it focused on what information can be pulled from Windows 7 registry hives. The author also highlighted the differences between Windows 7 and previous Windows operating systems. If anyone is going to be encountering Windows 7 systems then WFA 3/e will be one of the references to have within reaching distance.

Process, Process, Process

WFA 3/e discusses numerous Windows artifacts and different tools capable of parsing those artifacts. The book also provides context about the artifacts and tools by discussing the DFIR processes behind them. Right off the bat the author lays the foundation by discussing Analysis Concepts in Chapter 1. There is even a section about tools versus processes. A quote I liked was “analysts can find themselves focusing on specific tool or application rather than the overall process”. I see a lot of DFIR discussions focus on tools instead of the overall process on how those tools could be used. I even fell into this trap earlier in my career. Whenever I read a DFIR book or any analysis book for that matter I want to see the author explain the overall process because it makes it easier for me to translate the information over to my work. WFA 3/e did an outstanding job discussing processes which can be seen in various chapters. The two chapters I wanted to mention specifically are 6 and 7.

Chapter 6 Malware Detection is dedicated to how the author goes about finding malware on a system. The author lays out the overall process he follows (a checklist accompanies the book) and then goes into detail about what he is looking for and what tools he uses to carry out the process. The same approach is used in Chapter 7 Timeline Analysis. The author discusses his process for performing timeline analysis, including how he approaches timelines, how he builds timelines, and how he examines timelines.

It’s nice to see the processes someone else uses, and the case experiences shared by the author helped reinforce why the process is important. WFA 3/e doesn’t disappoint because the author not only provides tools to do DFIR work but lays out a process that others can follow.

Don’t Overlook the Materials Accompanying the Book

The author made the supporting material for WFA 3/e available online (on this Google page), which is a welcome feature for those of us who bought the book’s electronic version. Similar to the author’s previous books I already mentioned, the materials accompanying this book are full of DFIR goodies such as:

        * jumplist parser (jl.pl): the author wrote a script to parse jump lists. This is the only command-line tool I know of that can parse jump lists. I tested the script against jump lists inside VSCs and the results were impressive.

        * Malware detection capability: there are different scripts to help with detecting malware, including mbr.pl to find MBR infections and wfpchk.pl to check the contents of the dllcache.

        * Checklists: there are a few different checklists that may be useful references during an examination.

        * Source code: the source code is provided for all the scripts. I’m teaching myself Perl, so being able to read the code helps me better understand not only how each script works but how the author puts scripts together.

Clarification about ShadowExplorer

There were no significant improvements I could suggest to make WFA 3/e better. I could make a couple of minor suggestions but there isn’t anything glaring. However, there was something I wanted to clarify. Chapter 3 Volume Shadow Copies Analysis mentions using ShadowExplorer to access and browse VSCs. The author mentioned that ShadowExplorer will only show the VSCs available within the volume or drive on which the program is installed, and that ShadowExplorer has to be reinstalled on a drive in order to view its VSCs. The section I’m referring to is on Kindle page 1,366. I might have misunderstood this statement and if I did then please ignore this section of my book review.

ShadowExplorer only needs to be installed on your forensic workstation and it can be used to view any volume’s VSCs mounted to the workstation. The drop down menu next to the drive letter lets you select any drive letter on the workstation to view that volume’s VSCs. I’ve used ShadowExplorer in this manner to view VSCs for drives connected to my system through USB docks and to view the VSCs inside a mounted forensic image. It's a nice way to preview VSCs.

Overall Five Star Review

Overall I give WFA 3/e a five star review (Amazon rating from 0 to 5 stars). The book has a lot to offer, from Windows 7 artifacts to DFIR processes to better understanding the artifacts we encounter. As I said at the beginning of the post, the book is a welcome addition to anyone’s DFIR library and it’s a great companion to the author’s other books about digital forensics on Windows systems.

I wanted to say how humbling it was to see the author mention my blog. Before I became more active online I lurked in the shadows following a lot of people in the DFIR community. Harlan is one of those people. Every time I see someone mention me I am still taken aback. Thank you, Harlan, for the recognition and for including an earlier version of my RegRipper VSC batch script in your materials (an updated version of the script can be found here).


Examining VSCs with GUI Tools

Wednesday, February 22, 2012 Posted by Corey Harrell 0 comments
Over the past few posts I’ve been discussing how to examine data while it’s still inside Volume Shadow Copies (VSCs). I refer to the approach as Ripping VSCs because the concept behind it is to extract data from a system/forensic image as fast as possible so an examiner can start their analysis. This allows an examiner to start analyzing data within seconds instead of having to wait minutes to gather the information to analyze. The two different methods to rip VSCs are the Practitioner and Developer methods. Neither method necessarily uses tools with Graphical User Interfaces (GUIs) because those types of tools are not great for automation. However, GUI tools are viable options for parsing data inside VSCs and they shouldn’t be overlooked.

Running a GUI tool against a VSC requires that the VSC be exposed in a certain way. As I mentioned in a previous post, chapter 3 in Harlan Carvey’s WFA 3/e shows how to create a symbolic directory to a VSC. The other method I saw was in Troy Larson’s slide deck, where he exposes a VSC as a network share. Before I show how Harlan and Troy access VSCs I wanted to share my own failure in figuring this out so others know what didn’t work for me.

When I first started working with VSCs I created symbolic links to VSCs using the /j switch with mklink. The /j switch creates a Directory Junction, which worked well for my needs since I was running command-line tools against it. However, I was unable to get GUI tools to traverse a directory junction, and this limited the tools I could use to parse VSCs’ data. To get it to work I knew the VSC had to be exposed like a folder or drive, but my attempts were unsuccessful. I tried DiskShadow (which I did get to run on Windows 7 by leveraging the DLL search order vulnerability) and vshadow (included in the SDK), but neither program can mount a persistent VSC to a folder. The VSCs on Windows 7 and Vista systems are persistent, so at that point I didn’t have a way to expose them for GUI tools to work. That was until I saw what Harlan and Troy were doing.

Exposing VSCs as a Symbolic Folder

I already discussed how Harlan creates a symbolic directory to a VSC in the Practitioner Method post. If anyone wants more information than what I’m providing here I’d recommend checking out that post. The mklink command is used with the /d switch to create a symbolic directory to a VSC. The following command creates a symbolic directory named vsc1 pointing to the C volume’s first VSC, and the picture shows the result:

mklink /d c:\vsc1 \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\

Any GUI tool can then browse the VSC or parse any data inside it. Side note: to automate creating and removing symbolic links to VSCs I put together the access-vsc.bat script located here. See the following pictures for some examples:

Windows Explorer Browsing VSC


Mitec WFA Analyzing Prefetch Files


FTK Imager Browsing VSC
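The create-and-clean-up pattern above lends itself to scripting. The sketch below is my own minimal Perl illustration of the idea (it is not the actual access-vsc.bat script; the sub names, VSC range, and link folder are assumptions for illustration). It only builds the mklink /d and rmdir command strings; on a live Windows system you would hand each string to system() from an elevated prompt.

```perl
#!/usr/bin/perl
# Sketch: build the mklink /d and rmdir commands for a range of VSC numbers.
# Illustration only; pass the strings to system() on Windows to take effect.
use strict;
use warnings;

# Build the command that links $link_dir\vsc<num> to that VSC's device path
sub make_link_cmd {
    my ($num, $link_dir) = @_;
    return "mklink /d $link_dir\\vsc$num "
         . "\\\\?\\GLOBALROOT\\Device\\HarddiskVolumeShadowCopy$num\\";
}

# Build the matching cleanup command (symbolic directories are removed with rmdir)
sub remove_link_cmd {
    my ($num, $link_dir) = @_;
    return "rmdir $link_dir\\vsc$num";
}

foreach my $num (1 .. 3) {
    print make_link_cmd($num, 'c:'), "\n";
    print remove_link_cmd($num, 'c:'), "\n";
}
```

Separating the command-string building from the system() call also makes the logic easy to test without touching a live volume.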

Exposing VSCs as a Network Share

I could never do justice trying to explain the information Troy provides in his slides. That’s why I won’t even try to summarize it, and I recommend anyone reading my post who hasn’t seen the presentation I’m referencing check it out (here’s the link again). Slide 53 shows how to expose a VSC as a network share and I reposted the command below.

net share testshadow=\\.\HarddiskVolumeShadowCopy18\

After the command is run, the share testshadow points to VSC 18. To make browsing with GUI tools easier I’d map the share to a network drive. The command below creates a mapped drive using drive letter K.

net use K: \\127.0.0.1\testshadow

Similar to the symbolic directory, any GUI tool can browse the VSC or parse data inside VSCs. See the following pictures for some examples:

Windows Explorer Browsing VSC


MalwareBytes Scanning VSC

Ripping VSCs Summary

The majority of my casework involves Windows XP systems so I rarely encounter VSCs. In the few cases I did have involving Windows Vista and 7, VSCs played a critical role in my examinations since they allowed me to see how data evolved over time. As more organizations migrate from Windows XP to Windows 7 or 8, examining VSCs will become a common occurrence. Knowing the different approaches for examining VSCs will be vital for a successful examination. One of those approaches is to parse data while it’s still stored inside VSCs. The different methods to accomplish that include the Ripping VSCs Practitioner and Developer methods as well as manually using any GUI tool of choice.

Ripping VSCs – Developer Examples

Tuesday, February 14, 2012 Posted by Corey Harrell 3 comments
The previous post, Ripping VSCs – Developer Method, provided a detailed explanation of how data can be parsed directly inside Volume Shadow Copies (VSCs). Unlike the Practitioner Method, the Developer Method accesses data directly, bypassing the need to go through a symbolic link. The previous post explained how and why it’s possible to programmatically access files in VSCs. Ripping VSCs – Developer Examples picks up where the last post left off by demonstrating how existing scripts can be modified to parse data inside VSCs.

The Ripping VSCs – Developer Method post made two key points that need to be understood about accessing data in VSCs. The first takeaway is that reading or parsing data directly requires a handle to be opened to the object using the full UNC path. The line below shows how to open a handle to the IE9_main.log file in Volume Shadow Copy 18:

open FILE, '\\\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy18\WINDOWS\IE9_main.log' or die $!;

The second takeaway is that querying information by executing commands against a folder or file’s path requires a handle to the object to be opened into a variable. The line below shows how to open a handle to the IE9_main.log file in Volume Shadow Copy 18 into the variable $file:

open ($file, '\\\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy18\WINDOWS\IE9_main.log') or die $!;

Modifying or writing a script to parse data inside a VSC means one of the handles above has to be used. The easiest way I found to change existing scripts is to identify the points where the script interacts with an external file, then change that code to use a handle or to avoid executing commands against a file or directory path. This is how I approached the three scripts discussed in this post.
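The conversion pattern can be sketched in a few lines. This is my own minimal illustration, not code from any of the three scripts: the stat() against a path (shown commented out) is the kind of call that fails inside a VSC, while stat() against a handle opened into a variable works. The sample filename is only there so the sketch runs against an ordinary file; on a live Windows system $file_path could be a \\?\GLOBALROOT\... path.

```perl
#!/usr/bin/perl
# Sketch: path-based commands fail inside VSCs, handle-based ones work.
use strict;
use warnings;

my $file_path = shift // 'sample.txt';

# Create a small sample file when none is supplied (illustration only)
unless (-e $file_path) {
    open(my $out, '>', $file_path) or die $!;
    print $out "0123456789";
    close($out);
}

# Before: executes against the path and fails for VSC paths
# my $size = (stat($file_path))[7];

# After: open a handle into a variable and stat() the handle instead
open(my $fh, '<', $file_path) or die $!;
my $size = (stat($fh))[7];
print "$file_path $size bytes\n";
```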

The scripts I wanted to get working against VSCs are ones I’ve used a lot in the past. I knew what results to expect, so it was easier to identify any issues I caused. One script already worked against VSCs: Kristinn Gudjonsson’s read_open_xml_win.pl (parses Office 2007 metadata). I picked two other scripts to modify because the changes reinforce the two takeaways from the Ripping VSCs – Developer Method post. These scripts were Harlan Carvey’s RegRipper (parses registry hives) and lslnk.pl (parses Windows link files). I’m discussing the scripts starting with the one requiring no modifications and progressing to the more difficult changes.

Disclaimer: the modifications being made to these scripts are to demonstrate how they can be altered in order to support examining VSCs. As such, my recommendation for anyone wanting to make these changes for actual casework would be to reach out to Kristinn and Harlan (the authors) for feedback on the best way to alter their scripts.

read_open_xml_win.pl against VSCs

read_open_xml_win.pl is a script to read metadata from Microsoft Office 2007 documents. The script has no options and only takes the file path of the Office document. At the time I wrote this post the current version was 0.1, and by default it was able to parse files directly inside VSCs. The picture below shows the script parsing a Word document in VSC 18.


Reviewing the script and identifying where the code interacts with an external file brings you to line 92.

92. # read the parameter (the document)
93. $doc = $ARGV[0];
94.
95. # create a ZIP object
96. $zip = Archive::Zip->new();
97.
98. # read the Word document, that is the ZIP file
99. $zip->read( $doc ) == AZ_OK or die "Unable to open Office file\n";

As the code shows, the file path entered on the command line is stored in the $doc variable (line 92) and is then read into the ZIP object (line 99). The documentation for the module doing this work says “the Archive::Zip module allows a Perl program to create, manipulate, read, and write Zip archive files”. My research had identified opening a handle using the IO::File module, but the read_open_xml_win.pl script shows that other modules which open files can access VSCs as well.

RegRipper against VSCs

RegRipper is a tool to perform registry analysis in examinations. There is a command-line version (rip.pl) as well as a GUI version. The two switches I’m using in this post are -r to specify the registry hive and -p to specify a single plugin (note: -f specifies a plugins file). At the time I wrote this post the current version of RegRipper was 20090102, and by default it was unable to parse registry hives directly in VSCs. The picture below shows RegRipper failing to parse the UserInfo key in an ntuser.dat hive in VSC 18.


The error reported by RegRipper was that the ntuser.dat registry hive was not found. Opening rip.pl in a text editor and identifying the point where it interacts with a registry hive brings you to lines 89 and 170. I copied and pasted sections of the code below.

89. if ($config{file}) {
90. # First, check that a hive file was identified, and that the path is
91. # correct
92.      my $hive = $config{reg};
93.      die "You must enter a hive file path/name.\n" if ($hive eq "");
94.      die $hive." not found.\n" unless (-e $hive);

170. if ($config{plugin}) {
171. # First, check that a hive file was identified, and that the path is
172. # correct
173.      my $hive = $config{reg};
174.      die "You must enter a hive file path/name.\n" if ($hive eq "");
175.      die $hive." not found.\n" unless (-e $hive);

The first section (lines 89 to 94) appears to handle when a plugins file is run (-f switch) while the second section (lines 170 to 175) handles a single plugin (-p, which is what I ran). Lines 94 and 175 produce the error that appeared when RegRipper failed (ntuser.dat not found). Those two lines perform an error check to see if the registry hive is present. The issue is that the check is performed against a path inside a VSC. Remember the file size issue from the previous post? Commands can’t execute against a path into a VSC since they fail (at least in all my testing). To make RegRipper work with VSCs just change lines 94 and 175. One option is to comment out those lines completely and another is to remove the -e file test (which worked in my testing). For demonstration purposes I commented the lines out. The changed lines are below:

94. # die $hive." not found.\n" unless (-e $hive);

175. # die $hive." not found.\n" unless (-e $hive);

The picture below shows the command is now successful; the modified RegRipper successfully rips the ntuser.dat hive in VSC 18.


Lslnk.pl against VSCs

Lslnk.pl is a script included with WFA 2/e that parses Windows link files. This was the first script I changed to work with VSCs and it was the most difficult one to figure out. The picture below shows lslnk.pl failing to parse a link file (Receipt-#4-Walmart-shredder.docx.lnk) in VSC 18.


Looking at the code to see how it interacts with link files shows three areas of interest. The first is the portion where the file entered on the command line is stored in the $file variable (line 16) and a check is performed to see if the file is present (line 17).

16. my $file = shift || die "You must enter a filename.\n";
17. die "$file not found.\n" unless (-e $file);

The second portion is where the stat command is executed against the file path stored in the $file variable (line 65) and then the file path is printed before the file size (line 66).

64. # Get info about the file
65. my ($size,$atime,$mtime,$ctime) = (stat($file))[7,8,9,10];
66. print $file." $size bytes\n";

The third section is where the file stored in the $file variable is opened into the FH filehandle.

71. # Open file in binary mode
72. open(FH,$file) || die "Could not open $file: $!\n";

Those three sections need to be modified in order for lslnk.pl to parse files inside VSCs directly. The first change is to comment out line 72 because the filehandle needs to be opened at the beginning of the script. Remember that a handle needs to be used to parse files inside VSCs. Here is the line commented out, with my own comment explaining it.

# (corey) Had to move to first in script to access file in VSC
#open($file,$file) || die "Could not open $file: $!\n";

Continuing with the first change, the handle needs to be opened before any actions are taken against the external link file. The script uses the $file variable throughout, so the easiest thing to do is to create a new variable (I picked $file_path). The second change is to comment out the error check against the file path, while the third change is to open the file into the $file variable. Below are my changes made to the beginning of the script.

use strict;


# (corey) created variable to store file path. (without it this line won't work print $file." $size bytes\n";)
my $file_path = shift || die "You must enter a filename.\n";

# (corey) line below is not needed because of the line above
# my $file = shift || die "You must enter a filename.\n";

# (corey) added and changed the open command so handle is inside a variable
open(my $file,$file_path) || die "Could not open $file_path: $!\n";

# (corey)Line below caused error even though file opened
#die "$file not found.\n" unless (-e $file);


# Setup some variables

The last change is for reporting purposes. The line printing the file size contains the $file variable. Since $file now holds a filehandle instead of a path, this would print out a glob of characters instead of the file’s path. My $file_path variable contains the file’s path so it can be used with the print command as shown below.

# (corey) had to change the variable in the line below to print the path to the file
#print $file." $size bytes\n";
print "$file_path"." $size bytes\n";

In summary, the changes made lslnk.pl open a filehandle into a variable in order to access a file inside a VSC. The other changes were to avoid executing a command against the file’s path (the error check) and to change a variable so the file path prints correctly. The end result: lslnk.pl is now able to successfully parse the link file (Receipt-#4-Walmart-shredder.docx.lnk) in VSC 18.



Next and Last Post in Series: Examining VSCs with GUI Tools

Ripping VSCs – Developer Method

Sunday, February 12, 2012 Posted by Corey Harrell 5 comments
For the past couple of weeks I’ve been talking about the Ripping VSCs approach to examining Volume Shadow Copies (VSCs). I started using the approach out of necessity because it allowed me to parse data while it was still inside VSCs. In the Ripping VSCs - Introduction post I mentioned there were two different methods to Ripping VSCs and I already covered the first one which was the Practitioner Method. The second method is the Developer Method and this post will explain it in detail.

As I mentioned before, I’ve been using the Practitioner Method for some time now. I had a lot of time to work on and improve the approach, which is why it is a fully working solution for examining VSCs. I provided in-depth information about the method, working scripts for automation, detailed documentation for the scripts, and even a video demonstrating how to examine VSCs. Anyone can read about the Practitioner Method, grab the scripts, and start examining data on their cases right away. Unfortunately, the Developer Method is not as polished as the Practitioner Method. In fact, it was only about a month and a half ago that I figured this method out. I’m releasing my research on the Developer Method early not only to make the Ripping VSCs series well rounded but to share it with the coders and tool developers in the DFIR community. I think they could leverage the information I’m sharing to improve their tools or develop new ones better than I could (so far I’ve read 2.5 books about Perl).

Developer Method Overview

The Practitioner Method accesses VSC data by traversing a symbolic link. This method has worked flawlessly for me, but a more efficient method would be to access the data directly. This avoids the need to make and remove the symbolic links pointing to VSCs. The Developer Method is able to programmatically access the data directly inside VSCs, as can be seen in the picture below.


Unlike the Practitioner Method, to use the Developer Method one must know a programming language. The approach is broken down into two steps:

        1. Accessing VSCs
        2. Ripping Data

Both of those steps can be combined into the same script or tool. However, for clarity I will discuss them separately.

Accessing VSCs

There is one similarity between the Practitioner and Developer Methods in how they access VSCs. Both methods only work on mounted volumes (thus online VSCs) and both require VSCs’ full paths to be identified. VSC paths start with \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy# and each VSC has a unique number. How to identify a volume’s VSCs will depend on the person writing the code, but I'm currently researching a way to do this without using the vssadmin command. The need to identify the VSCs is where the similarity between the two methods ends, because how the VSCs are accessed is drastically different.
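Until a vssadmin-free approach pans out, one way to enumerate a volume's VSCs is to scrape vssadmin's output. The sketch below is my own assumption about that approach (the sub name is mine, and it assumes the English-locale output of vssadmin list shadows, which prints the device path on "Shadow Copy Volume:" lines). On a live Windows system the $text would come from backticks around the vssadmin command; here a canned sample stands in.

```perl
#!/usr/bin/perl
# Sketch: pull VSC device paths out of "vssadmin list shadows" output.
use strict;
use warnings;

# Return every device path named on a "Shadow Copy Volume:" line
sub parse_shadow_paths {
    my ($text) = @_;
    return $text =~ /Shadow Copy Volume:\s*(\S+)/g;
}

# On a live Windows system: my $text = `vssadmin list shadows`;
my $text = "Shadow Copy Volume: \\\\?\\GLOBALROOT\\Device\\HarddiskVolumeShadowCopy1\n"
         . "Shadow Copy Volume: \\\\?\\GLOBALROOT\\Device\\HarddiskVolumeShadowCopy2\n";

print "$_\n" for parse_shadow_paths($text);
```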

Quick note: I tested accessing VSCs directly using Perl (more specifically Perl v5.12.4 on Windows 7 Ultimate 64 and 32 bit versions). My assumption is this method should work with other programming languages as well because they should be using the same underlying Windows API function calls.

In Perl (and in different sections of the Windows System Programming book I’m reading), to read a file or directory a handle must first be created to that object. When Perl interacts with an external file, “Perl labels the connection (not the file itself) with a label called a "filehandle”. The following line shows the path stored in the $file_path variable being opened into a filehandle: open (FILE, $file_path). In this case, the filehandle is named FILE and whatever Perl wants to do with the external file is done against the FILE label. The simple script below prints to standard output the contents of a file entered on the command line.

        $file_path = shift || die "You must enter a filename.\n";
        open FILE, $file_path or die $!;
        print <FILE>;

First I’ll explain the script before showing what it does. The first line stores the filename entered on the command line in the variable $file_path. I already explained the second line, and the last line is what prints the file (notice print reads from the filehandle).

To see how the script works I ran it against a random log file I found in the Windows folder on my laptop. The screenshot below highlights the script and the filename entered on the command-line and the picture also shows the resulting output.


I went into so much detail explaining how a file is opened in Perl because it works the same way when dealing with VSCs. Opening a filehandle is done the same way whether the file is located in the system’s Windows folder or a VSC’s Windows folder. To illustrate, I’ll run the same script against the same file with one exception: I’m pointing it at a VSC that was created on February 4, 2012 (in case anyone has trouble seeing the screenshot, the full path I’m using is \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy18\WINDOWS\IE9_main.log).


To access data directly inside VSCs the only thing that has to be done is to use the full UNC path to the file. Other than that, the data can be treated as if it were stored anywhere else on a system. Once a handle is opened to a file or directory it can be read or parsed.

As Sticky Fingaz from ONYX would say, “but but but but wait”, there’s more. At times a file or directory’s attributes are queried for information about it. One example is obtaining a file’s size. In these instances a filehandle isn’t used because the commands are executed against the file or directory’s path. There is an issue with executing commands directly against paths to data inside VSCs. To see this issue I’ll use the script below to print the size of a file entered on the command line.

        $file_path = shift || die "You must enter a filename.\n";
        ($size) = (stat($file_path))[7];
        print " $size bytes\n";

The script works fine when files are located on a system but doesn’t execute properly against files inside VSCs. The screenshot below shows the script displaying the size of IE9_main.log in the Windows folder but failing against the copy in VSC 19.


There is a way to get around this issue: just open a filehandle into a variable. Below is a slight modification to the script above so it opens a filehandle into a variable (the changes were highlighted in red).

        $file_path = shift || die "You must enter a filename.\n";
        open ($file,$file_path) or die $!;
        ($size) = (stat($file))[7];
        print " $size bytes\n";

The screenshot below shows how the script now works properly.


Ripping Data

A friend of mine who is a coder always says “a loop is a loop”. He says this in reference to doing different things in programming because, when it comes down to it, all that is occurring is loops written in different ways. The Practitioner Method automated ripping data from VSCs by executing the same command in a loop inside a batch file. To rip data with the Developer Method a loop can be leveraged as well. Adding a loop to the file size script can show the file’s size in different VSCs. Below is one way to accomplish this:

        @vscs = (9..18);
        $file_path = '\\\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy';
        foreach $num (@vscs) {
                open ($file, "$file_path$num\\WINDOWS\\IE9_main.log");
                ($size) = (stat($file))[7];
                print "VSC$num IE9_main.log size: $size bytes\n";
                close($file);
        }

The screenshot shows the file’s size being ripped from 10 different VSCs.



Research behind Ripping VSCs – Developer Method

Treating files and directories inside VSCs the same as data stored elsewhere on a system may seem obvious after the fact. For me, coming to this conclusion took a lot of research and testing. In my previous posts I didn’t discuss any research, but I wanted to follow up the Developer Method post with the testing I did to shed light on why VSCs can be accessed directly.

At the time, I had been working with the Practitioner Method for some time and it never occurred to me to access VSCs directly. Things changed when I read PaulDotCom’s article Safely Dumping Hashes from Live Domain Controllers back in November. One line in the article jumped out at me, and I pasted it below.

copy \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy[X]\windows\system32\config\SYSTEM

I was parsing registry hives inside VSCs on live systems by traversing symbolic links, but the command in the article was copying files directly from shadow copies. I tried the Windows copy command myself and got the same results: it copied data directly from a VSC. I thought if a file could be copied then it could be parsed, but I didn’t get around to researching the idea until the following month.

First I wanted to get a better idea of how copy was able to access VSCs directly. I fired up Process Monitor and executed the copy command against a file inside VSC 19. The exact command I ran was:

copy \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy19\windows\aksdrvsetup.log

I examined Process Monitor’s output to see exactly what copy was doing at the point when the file aksdrvsetup.log was accessed. The screenshot below shows copy using function calls such as CreateFile, QueryDirectory, ReadFile, and CloseFile. These calls are part of the Windows File Management Functions.


If a file could be copied then I wondered what else could be done against a file. I reviewed the built-in Windows commands until I came across one that queries information about files. The attrib command "displays, sets, or removes the read-only, archive, system, and hidden attributes assigned to files or directories". I executed attrib against a file in a VSC not only to see if it would work but also to identify any similarities with the copy command. The command I ran is listed below:

attrib \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy19\windows\aksdrvsetup.log

Process Monitor’s output showed attrib using the same File Management Functions that copy used, as shown below.


At that point I had identified two different built-in Windows commands using the same function calls to access files directly inside VSCs. I concluded that to rip data directly from VSCs I had to use the same calls. At the time I wasn’t that knowledgeable about system calls, so I reached out to my coder friend and asked if those calls could be replicated through programming. He let me know they were just lower-level API calls and could be made when programming. After some research I found the Win32API::File module, which provides low-level access to Win32 system API calls for files and directories in Perl. I was able to put together a script using the module to directly access files in VSCs. However, I was only partially successful when I tried to print a log file to the screen: the output was only the first line of the log file. I was able to print the entire file using a loop, but this wasn’t a feasible option for parsing files. I was about to look into what I was doing wrong with the module when I saw that Win32API::File can be used like an IO::File object.

IO::File is a core Perl module whose purpose is to create filehandles to objects. I wanted to see what function calls Perl used when accessing files on a system, so I put together the script that prints a file's contents I referenced earlier. The Process Monitor output showed that Perl used the same File Management Functions as copy and attrib, as shown in the picture below. As a result, I never circled back to figuring out what I did wrong with the Win32API::File module because it wasn’t necessary to interact with VSCs’ files at such a low level.
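Since IO::File handles behave like the open() handles discussed earlier, the file-size idea can also be written with IO::File. This is a minimal sketch of my own, not code from any of the scripts above; the sample filename is only there so it runs against an ordinary file, and on Windows $path could just as well be a \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy#\... path.

```perl
#!/usr/bin/perl
# Sketch: an IO::File handle works with stat() the same as open(my $fh, ...).
use strict;
use warnings;
use IO::File;

my $path = shift // 'sample.log';

# Create a sample file when none is supplied (illustration only)
unless (-e $path) {
    open(my $out, '>', $path) or die $!;
    print $out "IE9 setup log line\n";
    close($out);
}

my $fh = IO::File->new($path, 'r') or die $!;
my $size = (stat($fh))[7];
print "$path $size bytes\n";
```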


At that point I knew files could be read inside VSCs but I wanted to confirm they could be parsed as well. I made some modifications to Harlan’s lslnk.pl script so it could parse files directly in VSCs. The modifications included the information I discussed in the Accessing VSCs section, and the changes enabled lslnk.pl to directly parse link files inside VSCs. The picture below shows the same link file being parsed (one copy was on the system while the other was inside a VSC). The picture on the left is the unmodified lslnk.pl script parsing the file on the system while the one on the right shows the modified lslnk.pl script parsing the same file in a VSC. The outputs from both scripts were exactly the same, thus validating that examining data in this manner produces the same results.



Next Up: Ripping VSCs - Developer Examples