Ripping VSCs – Developer Method

Sunday, February 12, 2012 Posted by Corey Harrell
For the past couple of weeks I’ve been talking about the Ripping VSCs approach to examining Volume Shadow Copies (VSCs). I started using the approach out of necessity because it allowed me to parse data while it was still inside VSCs. In the Ripping VSCs - Introduction post I mentioned there were two different methods to Ripping VSCs and I already covered the first one which was the Practitioner Method. The second method is the Developer Method and this post will explain it in detail.

As I mentioned before, I’ve been using the Practitioner Method for some time now. I had a lot of time to work on and improve the approach, which is why it is a fully working solution for examining VSCs. I provided in-depth information about the method, working scripts for automation, detailed documentation for the scripts, and even a video demonstrating how to examine VSCs. Anyone can read about the Practitioner Method, grab the scripts, and start examining data on their cases right away. Unfortunately, the Developer Method is not as polished as the Practitioner Method. In fact, it was only about a month and a half ago that I figured this method out. I’m releasing my research on the Developer Method early not only to make the Ripping VSCs series well rounded but to share it with the coders and tool developers in the DFIR community. I think they could leverage the information I’m sharing to improve their tools or develop new ones better than I could (so far I’ve read 2.5 books about Perl).

Developer Method Overview

The Practitioner Method accessed VSC data by traversing through a symbolic link. This method has worked flawlessly for me, but a more efficient method would be to access the data directly. This would avoid the need to make and remove the symbolic links pointing to VSCs. The Developer Method is able to programmatically access the data inside VSCs directly, as can be seen in the picture below.


Unlike the Practitioner Method, to use the Developer Method one must know a programming language. The approach is broken down into two steps:

        1. Accessing VSCs
        2. Ripping Data

Both of those steps can be combined into the same script or tool. However, for clarity I will discuss them separately.

Accessing VSCs

There is one similarity between the Practitioner and Developer Methods in how they both access VSCs. Both methods only work on mounted volumes (thus online VSCs) and both require the VSCs' full paths to be identified. VSC paths start with \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy# and each VSC has a unique number. How a volume’s VSCs are identified is up to the person writing the code; I'm currently researching a way to do this without using the vssadmin command. The need to identify the VSCs is where the similarities end between the two methods, because how the VSCs are accessed is drastically different.
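Until that research pans out, here is a rough sketch of what identifying VSC paths could look like in Perl. The probing loop is my assumption rather than a documented interface: it simply tries to open each candidate device path (the upper bound of 64 matches the default per-volume shadow copy limit) and records the ones that respond.

```perl
use strict;
use warnings;

# Build the device path for a given shadow copy number.
sub vsc_path {
    my ($num) = @_;
    return "\\\\?\\GLOBALROOT\\Device\\HarddiskVolumeShadowCopy$num";
}

# On a live Windows system, probe candidate paths to see which VSCs are
# online. Assumption: the numbers are roughly sequential, which matches
# what vssadmin reported on my test systems.
sub find_vscs {
    my @found;
    for my $num (1 .. 64) {    # 64 is the default per-volume VSC limit
        push @found, $num if opendir(my $dh, vsc_path($num) . "\\");
    }
    return @found;
}

print vsc_path(1), "\n";    # prints \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1
```

On a non-Windows system (or a volume with no shadow copies) find_vscs() simply returns an empty list, since every opendir() fails.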

Quick note: I tested accessing VSCs directly using Perl (more specifically Perl v5.12.4 on Windows 7 Ultimate 64 and 32 bit versions). My assumption is this method should work with other programming languages as well because they should be using the same underlying Windows API function calls.

In Perl (and in different sections of the Windows System Programming book I’m reading), to read a file or directory a handle must first be created to that object. When Perl interacts with an external file, Perl labels the connection (not the file itself) with a label called a "filehandle". The following line shows the path stored in the $file_path variable being opened into a filehandle: open (FILE, $file_path). In this case, the filehandle is named FILE and whatever Perl wants to do with the external file is done against the FILE label. The simple script below prints to standard output the contents of a file entered on the command-line.

        $file_path = shift || die "You must enter a filename.\n";
        open FILE, $file_path or die $!;
        print <FILE>;

First I’ll explain the script before showing what it does. The first line stores the filename entered on the command line in the variable $file_path. I already explained the second line, so the last line is what prints the file (notice print executes against the <FILE> filehandle).

To see how the script works I ran it against a random log file I found in the Windows folder on my laptop. The screenshot below highlights the script and the filename entered on the command-line and the picture also shows the resulting output.


I went into so much detail explaining how a file is opened in Perl because it works the same way when dealing with VSCs. Opening a filehandle is done the same way whether the file is located in the system’s Windows folder or a VSC’s Windows folder. To illustrate, I’ll run the same script against the same file with one exception: I’m pointing it at a VSC that was created on February 4, 2012 (in case anyone has trouble seeing the screenshot, the full path I’m using is \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy18\WINDOWS\IE9_main.log).


To access data directly inside VSCs, the only thing that has to be done is to use the full device path to the file. Other than that, the data can be treated as if it were stored anywhere else on a system. Once a handle is opened to a file or directory, it can be read or parsed.
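To make that concrete, here is a small hypothetical helper (the function name is mine) that rewrites a live-system path into the equivalent path inside a given shadow copy. Everything after the drive letter stays the same; only the volume prefix changes:

```perl
use strict;
use warnings;

# Hypothetical helper: map a live-system path (C:\...) to the same file
# inside shadow copy number $num.
sub to_vsc_path {
    my ($sys_path, $num) = @_;
    (my $rel = $sys_path) =~ s/^[A-Za-z]://;    # drop the drive letter
    return "\\\\?\\GLOBALROOT\\Device\\HarddiskVolumeShadowCopy$num$rel";
}

print to_vsc_path('C:\WINDOWS\IE9_main.log', 18), "\n";
# prints \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy18\WINDOWS\IE9_main.log
```

The returned path can then be handed straight to open() on a live Windows system, exactly as in the script above.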

As Sticky Fingaz from ONYX would say, “but but but but wait,” there’s more. At times a file or directory’s attributes are queried for information about it; one example is obtaining a file’s size. In these instances a filehandle isn’t used because the commands are executed against the file/directory’s path. There is an issue with executing commands directly against paths to data inside VSCs. To see this issue I’ll use a script (listed below) to print the size of a file entered on the command-line.

        $file_path = shift || die "You must enter a filename.\n";
        ($size) = (stat($file_path))[7];
        print " $size bytes\n";

The script works fine when files are located on a system but doesn’t execute properly against files inside VSCs. The screenshot below shows the script displaying the file IE9_main.log’s size located in the Windows folder but failing against the one in VSC 19.


There is a way to get around this issue: open a filehandle into a variable. Below is a slight modification to the script above so it opens a filehandle into a variable.

        $file_path = shift || die "You must enter a filename.\n";
        open ($file,$file_path) or die $!;
        ($size) = (stat($file))[7];
        print " $size bytes\n";

The screenshot below shows how the script now works properly.
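On ordinary files outside a VSC, both forms of stat() agree, which gives a quick way to sanity-check the workaround. A minimal, portable sketch using a temporary file (the file contents are arbitrary):

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Create a temp file with a known size (10 bytes).
my ($out, $path) = tempfile();
print $out "hello VSCs";
close $out;

# stat() against the path and against an open filehandle should agree
# on a normal file; inside a VSC only the filehandle form worked for me.
open(my $in, "<", $path) or die $!;
my $size_by_path   = (stat($path))[7];
my $size_by_handle = (stat($in))[7];
close $in;

print "$size_by_path == $size_by_handle\n";    # prints 10 == 10
```

The filehandle form is the one that carries over to VSC paths, which is why the modified script above uses it.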


Ripping Data

A friend of mine who is a coder always says “a loop is a loop”. He says this in reference to doing different things in programming because when it comes down to it all that is occurring is just loops being written in different ways. The Practitioner Method automated ripping data from VSCs by executing the same command in a loop inside a batch file. To rip data with the Developer Method a loop can be leveraged as well. Adding a loop to the file size script can show the file’s size in different VSCs. Below shows one way to accomplish this:

        @vscs = (9..18);
        $file_path = "\\\\?\\GLOBALROOT\\Device\\HarddiskVolumeShadowCopy";
        foreach $num (@vscs) {
                open ($file, "$file_path$num\\WINDOWS\\IE9_main.log");
                ($size) = (stat($file))[7];
                print "VSC$num IE9_main.log size: $size bytes\n";
                close($file);
        }

The screenshot shows the file’s size being ripped from 10 different VSCs.



Research behind Ripping VSCs – Developer Method

Treating files and directories inside VSCs the same as data stored on a system may seem obvious after the fact, but coming to this conclusion took a lot of research and testing. In my previous posts I didn’t discuss any research, but I wanted to follow up the Developer Method post with the testing I did to shed light on why VSCs can be accessed directly.

At the time, I was working with the Practitioner Method for some time and it never occurred to me to access VSCs directly. Things changed when I read PaulDotCom’s article Safely Dumping Hashes from Live Domain Controllers back in November. There was one line in the article that jumped out to me and I pasted it below.

copy \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy[X]\windows\system32\config\SYSTEM

I was parsing registry hives inside VSCs on live systems by traversing through symbolic links but the command in the article was copying files directly from shadow copies. I tried the Windows copy command myself and I got the same results. It copied data directly from a VSC. I thought if a file could be copied then it could be parsed but I didn’t get around to researching the idea until the following month.

First I wanted to get a better idea about how copy was able to access VSCs directly. I fired up Process Monitor and executed the copy command against a file inside VSC 19. The exact command I ran was:

copy \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy19\windows\aksdrvsetup.log

I examined Process Monitor’s output to see exactly what copy was doing at the point when the file aksdrvsetup.log was accessed. The screenshot below shows copy making different function calls such as CreateFile, QueryDirectory, ReadFile, and CloseFile. These calls are part of the Windows File Management Functions.


If a file could be copied then I wondered what else could be done against a file. I reviewed the built-in Windows commands until I came across one that queries information about files. The attrib command "displays, sets, or removes the read-only, archive, system, and hidden attributes assigned to files or directories". I executed attrib against a file in a VSC to not only see if it would work but to also identify any similarities with the copy command. The command I ran is listed below:

attrib \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy19\windows\aksdrvsetup.log

Looking at Process Monitor’s output showed attrib using the same File Management Functions that copy used as shown below.


At that point I had identified two different built-in Windows commands using the same function calls to access files directly inside VSCs. I concluded that to rip data directly from VSCs I had to use the same calls. At the time I wasn’t that knowledgeable about system calls, so I reached out to my coder friend and asked if those calls could be replicated through programming. He let me know they were just lower-level API calls and they can be made when programming. After some research I found the Win32API::File module, which provides low-level access to Win32 system API calls for files/dirs in Perl. I was able to put together a script using the module to directly access files in VSCs. However, I was only partially successful in my attempt when I tried to print a log file to the screen: the output was only the first line of the log file. I was able to print the entire file using a loop, but this wasn’t a feasible option for parsing files. I was about to look into what I was doing wrong with the module when I saw that Win32API::File can be used like an IO::File object.

IO::File is a core module in Perl and its purpose is to create filehandles to objects. I wanted to see what function calls Perl used when accessing files on a system so I put together the script that prints a file's contents I referenced earlier. The Process Monitor output showed that Perl used the same File Management Functions as copy and attrib as shown in the picture below. As a result, I never circled back to figuring out what I did wrong with the Win32API::File module because it wasn’t necessary to interact with VSCs’ files at such a low level.


At that point I knew files could be read inside VSCs, but I wanted to confirm they could be parsed as well. I made some modifications to Harlan’s lslnk.pl script so it parses files directly inside VSCs. The modifications incorporated the information I discussed in the Accessing VSCs section, and the changes enabled lslnk.pl to directly parse link files inside VSCs. The picture below shows the same link file being parsed twice: the picture on the left is the unmodified lslnk.pl script parsing the file on the system, while the one on the right shows the modified lslnk.pl script parsing the same file inside a VSC. The outputs from both scripts were exactly the same, thus validating that examining data in this manner produces the same results.



Next Up: Ripping VSCs - Developer Examples

Ripping VSCs – Practitioner Examples

Wednesday, February 8, 2012 Posted by Corey Harrell
The previous post, Ripping VSCs – Practitioner Method, provided a detailed explanation of the Practitioner Method for ripping Volume Shadow Copies (VSCs). The method executes programs against data inside VSCs by traversing through symbolic links, and the previous post provided a simple batch script to automate this. The Practitioner Method example discussed was parsing registry hives using the program RegRipper, and one simple loop showed how to automate parsing the Software hives across numerous VSCs. Ripping VSCs – Practitioner Examples picks up where the last post left off by demonstrating how to rip various data from VSCs using different free tools.

The Practitioner Method doesn’t leverage any programs with a Graphical User Interface (GUI). I’m not biased against tools with GUIs; heck, the majority of my tools are ones I interact with through a GUI. The method only uses command-line tools because these can be automated through scripting. The basic premise of ripping data is reducing the amount of time needed to extract information for analysis. The faster information can be presented to an examiner, the faster questions can be answered. I started to really understand this concept when using RegRipper. I used to perform registry analysis using a viewer and a paper listing of registry keys. The approach worked, but in hindsight it took forever to examine each registry key. Then I started using RegRipper and the tool extracted the data from the registry keys on my list. In mere seconds I could analyze information that took minutes for me to locate with a viewer. The same concept applies to ripping VSCs: extract the data from a system/forensic image and each VSC as fast as possible so an examiner can start their analysis. Scripting command-line tools to parse VSCs’ data takes only seconds or minutes, while manually processing the same data with tools (GUI or command-line) could take minutes or hours to complete. As a refresher from my previous post, to write scripts one just needs to understand the For loop in the template listed below:

@echo off
for /f %%f in (vscs-2-parse.txt) do (
do something against c:\vsc%%f
)

I’m not going into too much depth explaining the examples because other information accompanies this post. The scripts I’m releasing are loaded with comments explaining what is going on, there’s a readme document explaining how to use the scripts, and there’s a video demonstrating the scripts’ usage. Taken together, I hope this provides enough information and examples for others to understand how to leverage this method in their own casework.

Now on to some examples showing how to leverage the Practitioner Method to rip data from VSCs.

Extracting Data from VSCs

As I mentioned in the introduction, the QCCIS white paper and Richard Drinkwater (Forensics from the sausage factory) both used the Robocopy program to copy data from VSCs while preserving the files’ metadata. The batch script below shows how to extract the Users folder from every VSC that has a symbolic link and store the Users folders in a folder named Exported-folder.

@echo off
for /f %%f in (vscs-2-parse.txt) do (
robocopy.exe C:\vsc%%f\Users Exported-folder\vsc%%f
)

The Robocopy program has a lot of options which can be used to preserve files’ metadata and configure logging. To see the options I used, review the file-info-vsc.bat script in the archive linked below.

Hashing Files in VSCs

One step in almost every digital forensic examination is to hash one or more files. Sometimes only a few files may be hashed while at other times the contents of entire hard drives are hashed. It makes sense that there could be a need to hash all the files inside of VSCs. The script below shows how to hash every file inside linked VSCs using the program md5deep.

@echo off
for /f %%f in (vscs-2-parse.txt) do (
md5deep.exe -r -c c:\vsc%%f\ >> file-hashes-vsc%%f.txt
)

The -r option is for recursive mode, which means all subfolders and files are hashed. The -c option makes the output csv format (this is my personal preference; the -c option doesn’t have to be used). The output is stored in a text file whose name indicates where the hash list came from. For example, the output hash list for vsc1 would be named file-hashes-vsc1.txt.

Identifying Differences between VSCs

One question I see often about VSCs is how to tell what is different between them. I even asked this question myself since knowing the answer has numerous benefits. If data was deleted then identifying this difference could quickly show what was deleted. Knowing what files didn’t change can reduce the amount of data one has to analyze. When I first started examining VSCs, the one ability I wanted was to be able to determine the differences between a forensic image and each VSC. I wasn’t aware how to do this, and the questions I saw online at the time weren’t answered with anyone explaining how. Linux has a diff command that can identify the differences between files and folders. A version of diff has been ported to Windows and it’s available in the UnxUtils package (once extracted, the exe is located at UnxUtils\usr\local\wbin\diff.exe). The command below shows diff.exe comparing two symbolic links pointing to VSCs, which therefore compares the actual VSCs. The differences are then redirected to a text file.

diff.exe -i -r -q C:\vsc11 C:\vsc10 >> differences.txt

The -i switch is to ignore case, -r is for recursive mode (compare all subfolders and files), and the -q switch makes the output only indicate whether files differ (I didn’t want to identify the actual differences for time’s sake). The most time consuming activity I have encountered with ripping VSCs is comparing the differences between them. Despite the additional time required, the results are impressive. Not only are files identified that are present in one VSC and not the other, but files that have been modified are also highlighted. Check out the screenshot below.


Unlike the other examples I’ve shown so far, automating the comparison of VSCs was a little more challenging. The script isn’t as simple as copying the template because more logic is needed to make the comparison. Working my way through this issue is when I realized I had to change the For loop in my scripts to take a text file as input. The script below shows what I came up with to automate comparing VSCs. To any coders reading this, the logic may appear funky. My preference was to use a while loop inside a For loop, but there is no while loop in batch scripting, so I had to simulate it with a nested For loop.

@echo off
setlocal enabledelayedexpansion
set break=0
for /f %%f in (vscs-2-parse.txt) do (
        if !break! == 5 goto :exit
        set f=%%f
        for /f %%x in (vscs-2-parse.txt) do (
                set x=%%x
                if not !f! == !x! (diff.exe -i -r -q C:\vsc!f! C:\vsc!x! >> files-diff_vsc!f!-2-vsc!x!.txt)
                set f=!x!
                set break=5
        )
)
:exit

The variables in the script use exclamation points (!) instead of percent symbols (%). This is because variables set inside a batch For loop must be read with delayed expansion, which uses exclamation points. To compare VSCs the script needs two variables to hold the VSC numbers being compared. The first For loop starts the process by storing the first number from the text file in %%f. The break variable will exit the loop once the inner For loop is done. Before entering the inner loop, the number in %%f is stored in a variable named f (needed to compare numbers). The inner For loop does the rest of the work. The first time through, %%x also holds the first number in the text file, which is then stored in the x variable. A comparison is made between the x and f variables. If they are not equal then diff compares the links pointing to the two VSCs. The first time through, diff doesn’t execute since the x and f variables both hold the first number in the text file. The line set f=!x! moves the number in the x variable into the f variable, because the x variable will hold the next number from the text file on the second time through the loop. Lastly, set break=5 makes sure the break variable contains the number 5. The inner For loop keeps processing the text file until it reaches the last number, at which point control goes back to the outer For loop. The break variable equals 5, so the loop immediately exits. If anyone is interested in the exact code I used then I highly recommend reading the code in the scripts (file-info-vsc.bat) since I left comments explaining everything.

I took the time to explain this logic because it can be used to make other comparisons. One example is changing the code to run a program to compare registry hives.

VSC-Parser Scripts

I put together a few different scripts to rip VSCs into something I call vsc-parser (I am releasing version 1). The scripts are more of a Proof of Concept to demonstrate different activities that can be done with data stored inside VSCs. Please don’t let the PoC label fool you, though. These scripts work and I actually use them in my DFIR work (professional and personal). I only gave vsc-parser the PoC label because I have no intention of maintaining the scripts publicly. The vsc-parser_readme document accompanying the scripts outlines how to configure and use them.

I won’t repeat the information in this post, but I wanted to provide a little background about why the scripts were developed. The primary reason was that I needed this capability in my work. I wanted to access VSCs quickly and rip certain information. Some other functionality was added as part of my effort to get partial credit for a DC3 2011 challenge. This functionality was hashing (MD5 and SHA) and listing files in VSCs. The detailed readme file was also a result of the DC3 challenge.

Here is the download link to vsc-parser on my blog’s Google Code site. The following is an approximately five-minute video I put together demonstrating the Practitioner Method using these scripts on a live Windows 7 Ultimate system.





Ripping VSCs – Practitioner Method

Monday, February 6, 2012 Posted by Corey Harrell
Volume shadow copies (VSCs) store a wealth of data and there are different approaches to extracting that data for examination. One approach is to examine the data stored inside VSCs directly, thereby skipping the need to image or copy the data. My previous post (Ripping Volume Shadow Copies – Introduction) briefly provided an overview of this approach and the two different methods to implement it. One method is the Practitioner Method and this post will explain it in detail.

Background

I wanted to provide a little more background about the Ripping VSCs approach and why I needed this capability. At my day job the majority of my cases are fraud related, and one activity I need to do is track users’ activity so I can determine where financial data is located. As most examiners know, the registry stores information about what a user was doing on a Windows computer, including what files they accessed. Over time I grew accustomed to using RegRipper when performing registry analysis. A cool thing about RegRipper is it comes with some other useful tools, and one of them is RipXP. RipXP enables you to parse a registry key from a hive on a system and then extract that same key from every registry hive in the system restore points. On cases where I wanted to know what files were accessed, I would parse specific registry keys from my forensic image then parse those same keys from all system restore points. RipXP automates this process, which was not only a time saver but enabled me to get a more complete picture of a user’s activity over the course of time.

When I received my first few cases involving Windows 7 (and one Vista) systems, I lost the ability to use RipXP. The issue was that Windows 7/Vista replaced restore points with VSCs. VSCs have a different structure than system restore points, which means RipXP doesn’t work against them. I didn’t want to lose this capability when faced with Windows 7 systems, so I went on a quest to figure out my options for ripping registry hives in VSCs. I first reviewed others’ research about VSCs and their forensic significance. I proceeded to learn and attempt the two well-known approaches to VSC examination, including the robocopy method for copying data. I took what I learned and wanted to take the robocopy method to another level. My logic was that if robocopy can copy data from VSCs, then RegRipper could parse registry hives inside VSCs. I manually ran RegRipper against hives in VSCs through symbolic links, showing I was on the right track. For the technique to be useful I needed automation so I could replicate how RipXP worked. I had to run the RegRipper command in a loop to execute it repeatedly against registry hives in different VSCs. I was working on looping RegRipper through the command-line and reached out to Harlan for some help. The end result was a command that would rip registry hives across VSCs. Harlan shows the exact command in Chapter 3 of Windows Forensic Analysis 3rd Edition.

The technique worked and replicated RipXP’s functionality. However, all you have to do with RipXP is run a script, which meant the technique had to be scripted. I taught myself Windows batch scripting and created a few scripts to rip registry hives in VSCs, thereby getting my lost RipXP capability back. One of these initial scripts is included in the materials that accompany WFA 3/e (I have made some significant changes to the script since it was given to Harlan). Now my logic was that if RegRipper can parse registry hives, then any program can be automated to parse data inside VSCs. Again I was on the right track, and this is how the Ripping VSCs approach came about.

Practitioner Method Overview

The Practitioner Method uses existing tools to parse data inside a mounted volume’s VSCs by traversing through a symbolic link. I won’t rehash how to mount a volume of interest since it was discussed in the introduction. The method will be explained from the point after the volume was mounted, and below illustrates the examination process.


The method can be broken down into the following three steps:

        1. Accessing VSCs
        2. Ripping Data
        3. Removing Access to VSCs

Before breaking down the three steps I wanted to discuss one of the lost DFIR commandments: Thou Shall Not Fear the Command-line. This commandment should be kept in mind because the Practitioner Method doesn’t have a nice GUI since it’s command-line based. Those who don’t like to interact with the command-line should check out Girl, Unallocated’s Basic Command Prompt video since it may be a helpful tutorial. With all joking aside, the method does leverage command-line tools since they can be automated in scripts. I’ve found over time the technique is very powerful because of the sheer number of free DFIR tools that run from the command-line.

Accessing VSCs

I just started reading Harlan’s WFA 3/e Kindle version (I’m on chapter 4 at the time of this post) and chapter 3 does an outstanding job explaining the process of accessing VSCs: from explaining what VSCs are, to mounting a forensic image, to creating symbolic links to VSCs, to various tools for examining data inside VSCs. WFA 3/e goes into more depth than this post because I’m only providing a quick overview.

Accessing a VSC consists of identifying the VSC’s path and then creating a symbolic link pointing to that path; the built-in Windows vssadmin command can identify the paths. The command also displays a lot more information, including when each VSC was created. The command below will show the VSCs on the mounted volume with the drive letter C:

vssadmin list shadows /for=C:

The picture below shows the output from that command.


As can be seen in the screenshot, VSC paths start with \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy and each VSC has a unique number. For this specific volume the VSCs are numbered starting with 1 and increasing to 12, since there are 12 current VSCs. To access the first VSC, a symbolic link needs to be created pointing to the path \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1. The built-in Windows mklink command can create the symbolic links to VSCs. I’ve only been reading WFA 3/e for a few days and the book has already made me change how I approach VSCs. Harlan was using mklink’s /d switch to create a directory symbolic link to a VSC, which means the link acts like a normal folder. I updated my scripts to use the /d switch instead of /j (which creates a directory junction). The following command will create a symbolic link named vsc1 pointing to the C volume’s first VSC, and the picture shows the result:

mklink /d c:\vsc1 \\?\GLOBALROOT\Device\HardDiskVolumeShadowCopy1\


To access every VSC of interest, someone would have to execute the mklink command multiple times. To access all 12 VSCs on my C volume, I would need to type the mklink command 12 times. After working my first case involving Windows 7, I learned pretty quickly the need to automate VSC access. My post A Little Help with Volume Shadow Copies discussed and provided a batch script that automates creating symbolic links to VSCs. The script worked great but I have since updated it. One change was to incorporate mklink’s /d switch, but the more important change was making it easier to access specific VSCs.

Automate Accessing VSCs

If you are only interested in the access-vsc script then skip ahead to the last paragraph in this section for the script’s download link. Otherwise, you can continue reading to see what I changed in the script and why. I already mentioned the significance of mklink’s /d switch so I won’t rehash it here; just know that I did update my scripts to use this switch. The issue I encountered with my access-vsc script was the difficulty in narrowing my focus to specific VSCs that were not sequential. For example, if I wanted to identify the differences between VSC1, VSC3, and VSC6 then it was difficult due to the For loop used in the script. To show the issue I will discuss how the old For loop worked. The following was the loop in my old script (the command uses two % symbols since the command launches from a batch file):

for /l %%f in (start,step,stop) do echo %%f

Start represents the number the For loop should start at, which in this case is 1 for the first VSC. Step represents the number to increment by each time through the loop. Stop represents the number the loop should stop at, which in this case is 6 for the last VSC. If the step was set to 2 then the For loop’s output looks like the following:


As shown in the screenshot, setting the increment number to 2 only resulted in the numbers 1, 3, and 5 when starting at 1. This means symbolic links would only be created for VSC1, VSC3, and VSC5 while missing VSC6. The only increment number that works for automation is 1, and this was how my old script worked. Now the For loop’s output looks like the following:


The For loop now counts from 1 to 6, meaning symbolic links would be created for VSC1, VSC2, VSC3, VSC4, VSC5, and VSC6. That’s great since it provides access to the three VSCs of interest (VSC1, VSC3, and VSC6). However, it also accesses VSCs I didn’t want. That may seem like a minor annoyance, but it becomes significant when trying to automate comparing the differences between VSCs. To get around this issue I changed the For loop so it uses numbers listed in a text file:

for /f %%f in (vscs-2-parse.txt) do echo %%f

The text file contains one number per line, and in this case the numbers are 1, 3, and 6. Now the For loop’s output looks like the following:


As shown in the screenshot, the loop outputs 1, then 3, then 6. This provides access to only the three VSCs of interest. To some this issue may seem small because I’m using small numbers. The volume I keep referencing throughout the post has 12 VSCs. If I wanted to access VSC1, VSC6, and VSC12 then the old script would create symbolic links to every VSC. This also means whatever data I want to parse through automation would get parsed in every VSC instead of just the three I’m interested in. My new script provides access to only the VSCs someone wants; to see how, check out the For loop below:

for /f %%f in (vscs-2-parse.txt) do mklink /d c:\vsc%%f \\?\GLOBALROOT\Device\HardDiskVolumeShadowCopy%%f\

The only difference in this For loop compared to the previous ones I showed is that the echo command was replaced by the mklink command. To access VSCs 1, 6, and 12 means the text file (vscs-2-parse.txt) should contain these numbers. The end result is having access to specific VSCs as shown below.


The cool thing about using a text file is that the same file can be leveraged by other scripts to rip data inside VSCs. The access-vsc script works pretty much the same way as the old script. The only noticeable change is that it lets you create a text file with the VSC numbers of interest (the file gets dropped in the same directory as the script). The new script can be downloaded from my blog’s Google Code site here. I’m releasing the script since I previously blogged about and shared the old one. I’m also releasing a series of scripts that work together to rip VSCs and will provide a short demo video showing their capabilities in my next blog post.
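To make the difference between the two loop styles concrete, here is a small, portable sketch in Python (the function names are mine, purely illustrative). It does not touch any real VSCs; it only models which VSC numbers each cmd.exe loop style would produce, and therefore which symbolic links would be created.

```python
def for_l(start, step, stop):
    """Mimic cmd.exe's "for /l %%f in (start,step,stop)" counter loop."""
    # cmd's for /l includes the stop value when the step lands on it
    return list(range(start, stop + 1, step))

def for_f(lines):
    """Mimic cmd.exe's "for /f %%f in (file)" loop over a text file's lines."""
    return [int(line) for line in lines if line.strip()]

# Stepping by 2 from 1 to 6 skips VSC6 entirely:
print(for_l(1, 2, 6))            # [1, 3, 5]

# Stepping by 1 reaches VSC6 but also hits VSCs 2, 4, and 5:
print(for_l(1, 1, 6))            # [1, 2, 3, 4, 5, 6]

# Driving the loop from a vscs-2-parse.txt-style file hits only the
# VSCs of interest:
print(for_f(["1", "3", "6"]))    # [1, 3, 6]
```

The last call is the behavior the updated access-vsc script relies on: the text file, not a counter, decides which links get made.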

Ripping Data

Data can be parsed once the symbolic links pointing to VSCs are created. All that has to be done is to run a command against the data traversing through the symbolic link. For example, the Windows dir command can be executed directly against the symbolic link vsc1 to see what’s in the root directory of the VSC it’s linked to.


Continuing on with the example, the dir command can also show the files located in the Regripper folder inside VSC1.


Programs can run against VSCs’ data by going through the symbolic link. Just switch out the dir command with any other command-line program. Some days ago I switched out the dir command with Regripper. Here’s a screenshot showing Regripper parsing the Software hive’s uninstall registry key inside VSC1.


     Automate Ripping Data

The previous Regripper command was only executed against the Software hive in VSC1. To process the other Software hives that same command needs to be run against each VSC of interest. Examining VSCs in this manner is doable but the work is time consuming and tedious. Not only does it take longer to execute the commands manually but typing the commands is error prone. I remember my first case working with VSCs; I was manually creating the symbolic links to VSCs and parsing registry hives. With over 15 VSCs it got old really quick; I had typos that made the commands fail, wasted time trying to copy commands, and learned how boring it is to type the same thing over and over. That experience is what influenced me to learn batch scripting, and the same concept applies to ripping VSCs. Which option looks better: write a command once to extract information in seconds, or write a command many times and take minutes or hours to extract the same information? I bet most people would pick door number one; that door is the main motivation for automating ripping VSCs.

One doesn’t have to be an expert in batch scripting to automate ripping VSCs using the Practitioner Method. I think all someone needs to know is how a For loop works in a batch script. The Automate Accessing VSCs section in this post explained the For loop used in the script to create symbolic links. If you skipped that section, don’t worry; I won’t make you go back to re-read it. The important thing about the section is that the script uses a text file named vscs-2-parse.txt and the file contains a number on each line. The numbers are used to create symbolic links to each VSC. For example, the number 1 on a line results in a link named vsc1 that points to VolumeShadowCopy1. A For loop can be written which uses the same text file to rip data inside VSCs. The simple batch script below can be used as a template:

@echo off
for /f %%f in (vscs-2-parse.txt) do (
    rem replace this line with a command to run against c:\vsc%%f
)

The @echo off line turns off displaying the commands as the batch file runs them. The line isn’t needed but most people prefer not to display commands executing. The rest of the script is just a For loop. The /f switch makes the loop work against a text file, which is specified between the parentheses (vscs-2-parse.txt). %%f is the variable that holds the data on each line of the text file. Quick tip: a For loop in a batch file requires two percent symbols (%%) since the batch parser strips one out. However, only use one percent symbol (%) when running a For loop from the command line. This little nuance caused me a lot of headaches when I first started working with batch files. Everything between the next set of parentheses is the loop body, and it executes once for each line in the text file (vscs-2-parse.txt). To rip VSCs, a program needs to be pointed at the symbolic links using the loop’s variable (%%f).

I know this may seem complicated to those who have never worked with batch files before. I swear, it just seems that way writing and reading about it. All that really has to be done is to copy the template, insert whatever program is to be executed, and save the file with a .bat extension. The loop below will rip the Software hive’s uninstall registry key from each link pointing to a VSC, and all I did was copy the template.

@echo off
for /f %%f in (vscs-2-parse.txt) do (
    rip.exe -r C:\vsc%%f\Windows\System32\config\SOFTWARE -p uninstall
)

The output from the above batch script would just be displayed on the screen. A slight change – redirecting the output – can save it to a text file as shown below. Just make sure that >> is used so the output is appended to the text file.

@echo off
for /f %%f in (vscs-2-parse.txt) do (
    rip.exe -r C:\vsc%%f\Windows\System32\config\SOFTWARE -p uninstall >> C:\output.txt
)

The batch script template discussed is the foundation to ripping VSCs with the Practitioner Method and the next post will demonstrate how it can be used.
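For readers who want to see the template’s expansion spelled out, here is an illustrative sketch in Python (the helper name build_rip_commands is hypothetical, not part of the released scripts). Given the VSC numbers from vscs-2-parse.txt, it builds the exact command line the For loop would run against each symbolic link.

```python
def build_rip_commands(vsc_numbers):
    """Model the batch For loop: one rip.exe command per VSC number."""
    commands = []
    for n in vsc_numbers:
        # C:\vscN is the symbolic link the access-vsc script created for VSC N
        hive = rf"C:\vsc{n}\Windows\System32\config\SOFTWARE"
        commands.append(f"rip.exe -r {hive} -p uninstall")
    return commands

for cmd in build_rip_commands([1, 3, 6]):
    print(cmd)
# rip.exe -r C:\vsc1\Windows\System32\config\SOFTWARE -p uninstall
# rip.exe -r C:\vsc3\Windows\System32\config\SOFTWARE -p uninstall
# rip.exe -r C:\vsc6\Windows\System32\config\SOFTWARE -p uninstall
```

Swapping rip.exe for any other command-line tool changes nothing about the loop itself; only the command string inside the loop body changes.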


Up Next: Ripping VSCs – Practitioner Examples

Ripping Volume Shadow Copies – Introduction

Sunday, January 29, 2012 Posted by Corey Harrell 0 comments
Windows XP is the operating system I mostly encounter during my digital forensic work. Over the past year I’ve been seeing more and more systems running Windows 7. 2011 brought with it my first few cases where the corporate systems I examined (at my day job) were all running Windows 7. There was even a more drastic change for the home users I assisted with cleaning malware infections because towards the end of the year all my cases involved Windows 7 systems. I foresee Windows XP slowly becoming a relic as the corporate environments I face start upgrading the clients on their networks to Windows 7. One artifact that will be encountered more frequently in Windows 7 is Volume Shadow Copies (VSCs). VSCs can be a potential gold mine but for them to be useful one must know how to access and parse the data inside them. The Ripping Volume Shadow Copies series is discussing another approach on how to examine VSCs and the data they contain.

What Are Volume Shadow Copies

VSCs are not new to Windows 7 and have actually been around since Windows Server 2003. Others in the DFIR community have published a wealth of information on what VSCs are, their forensic significance, and approaches to examine them. I’m only providing a quick explanation since Troy Larson’s presentation slides provide an excellent overview of what VSCs are, as does Lee Whitfield’s Into the Shadows blog post. Basically, the Volume Shadow Copy Service (VSS) can back up data on a Windows system. VSS monitors a volume for any changes to the data stored on it and creates backups containing only those changes. These backups are referred to as shadow copies. According to Microsoft, the following activities will create shadow copies on Windows 7 and Vista systems:

        -  Manually (Vista & 7)
        -  Every 24 Hours (Vista)
        -  Every 7 Days (7)
        -  Before a Windows Update (Vista & 7)
        -  Unsigned Driver Installation (Vista & 7)
        -  A program that calls the Snapshot API (Vista & 7)

Importance of VSCs

The data inside VSCs may have a significant impact on an examination for a couple of reasons. The obvious benefit is the ability to recover files that may have been deleted or encrypted on the system. This rang true for me on the few cases involving corporate systems; if it wasn’t for VSCs then I wouldn’t have been able to recover the data of interest. The second, and possibly even more significant, benefit is the ability to see how systems and/or files evolved over time. I briefly touched on this in the post Ripping Volume Shadow Copies Sneak Peek. I mentioned how parsing the configuration information helped me know what file types to search for based on the installed software. Another example was how the user account information helped me verify a user account existed on the system and narrow down the timeframe when it was deleted. A system’s configuration information is just the beginning; documents, user activity, and programs launched are all great candidates for seeing how they changed over time.

To illustrate I’ll use a document as an example. When a document is located on a system without VSCs - for the most part - the only data that can be viewed in the document is what is currently there. Previous data inside the document might be able to be recovered from copies of the document or temporary files but won’t completely show how the document changed over time. To see how the document evolved would require trying to recover it at different points in time from system backups (if they were available). Now take that same document located on a system with VSCs. The document can be recovered from every VSC and each one can be examined to see its data. The data will only be what was inside the document when each VSC was created but it could cover a time period of weeks to months. Examining each document from the VSCs will shed light on how the document evolved. Another possibility is the potential to recover data that was in the document at some point in the past but isn't in the document that was located on the system. If system backups were available then they could provide additional information since more copies of the document could be obtained at other points in time.

Accessing VSCs

The Ripping Volume Shadow Copies approach works against mounted volumes. This means a forensic image or hard drive has to be mounted to a Windows system (Vista or 7) in order for the VSCs in the target volume to be ripped. There are different ways to see a hard drive or image’s VSCs and I highlighted some options:

        -  Mount the hard drive by installing it inside a workstation (option will alter data on the hard drive)

        -  Mount the hard drive by using an external hard drive enclosure (option will alter data on the hard drive)

        -  Mount the hard drive by using a hardware writeblocker

        -  Mount the forensic image using Harlan Carvey’s method documented here, here, and the slide deck referenced here

        -  Mount the forensic image using Guidance Software’s Encase with the PDE module (option is well documented in the QCCIS white paper Reliably recovering evidential data from Volume Shadow Copies)

Regardless of the option used to mount the hard drive or image, the Windows vssadmin command or the Shadow Explorer program can show what VSCs, if any, are available for a given mounted volume. The pictures below show the Shadow Explorer program and vssadmin command displaying some of the VSCs for the mounted volume with drive letter C.

Shadow Explorer Displaying C Volume VSCs

VSSAdmin Displaying C Volume VSCs

Picking VSCs to examine depends on the examination goals and what data is needed to accomplish those goals. However, time will be a major consideration. Does the examination need to review an event, document, or user activity for specific times or for all available times on a computer? Answering that question will help determine whether certain VSCs covering specific times are picked or every available VSC should be examined. Once the VSCs are selected they can be examined to extract the information of interest.

Another Approach to Examine VSCs

Before discussing another approach to examining VSCs it’s appropriate to reflect on the approaches practitioners are currently using. The first approach is to forensically image each VSC and then examine the data inside each image. Troy’s slide deck referenced earlier has a slide showing how to image a VSC and Richard Drinkwater's Volume Shadow Copy Forensics post from a few years ago shows imaging VSCs as well. The second popular approach doesn’t use imaging since it copies data from each VSC followed by examining that data. The QCCIS white paper referenced earlier outlines this approach using the robocopy program as well as Richard Drinkwater in his posts here and here. Both approaches are feasible for examining VSCs but another approach is to examine the data directly inside VSCs bypassing the need for imaging and copying. The Ripping VSCs approach examines data directly inside VSCs and the two different methods to implement the approach are: Practitioner Method and Developer Method.

Ripping VSCs: Practitioner Method

The Practitioner Method uses one’s existing tools to parse data inside VSCs. This means someone doesn’t have to learn a new tool or a programming language to write their own tools. All that’s required is for the tool to be command line and for the practitioner to be willing to execute the tool multiple times against the same data. The picture below shows how the Practitioner Method works.

Practitioner Method Process

Troy Larson demonstrated how a symbolic link can be used to provide access to VSCs. The mklink command can create a symbolic link to a VSC which then provides access to the data stored in the VSC. The Practitioner Method uses the access provided by the symbolic link to execute one’s tools directly against the data. The picture above illustrates a tool executing against the data inside Volume Shadow Copy 19 by traversing a symbolic link. One could quickly determine the differences between VSCs, parse registry keys in VSCs, examine the same document at different points in time, or track a user’s activity to see what files were accessed. Examining VSCs can become tedious when one has to run the same command against multiple symbolic links; this is especially true when dealing with 10, 20, or 30 VSCs. A more efficient and faster way is to use batch scripting to automate the process. With only a basic understanding of batch scripting (knowing how a For loop works), one can create powerful tools to examine VSCs. In future posts I’ll cover how simple batch scripts can be leveraged to rip data from any VSCs within seconds.

Ripping VSCs: Developer Method

I’ve been using the Practitioner Method for some time now against VSCs on live systems and forensic images. The method has enabled me to see data in different ways which was vital for some of my work involving Windows 7 systems. Recently I figured out a more efficient way to examine data inside VSCs. The Developer Method can examine data inside VSCs directly which bypasses the need to go through a symbolic link. The picture below shows how the Developer Method works.

Developer Method Process

The Developer Method programmatically accesses the data directly inside of VSCs. The majority of existing tools cannot do this natively so one must modify existing tools or develop new ones. I used the Perl programming language to demonstrate that the Developer Method for ripping VSCs is possible. I created simple Perl scripts to read files inside a VSC and I modified Harlan’s lslnk.pl to parse Windows shortcut files inside a VSC. Unlike the Practitioner Method, at the time of this post I have not extensively tested the Developer Method. I’m discussing the Developer Method not only for completeness when explaining the Ripping VSCs approach but also in the hope that releasing my research early can help spur the development of DFIR tools for examining VSCs.

What’s Up Next?

Volume Shadow Copies have been a gold mine for me on the couple of corporate cases where they were available. The VSCs enabled me to successfully process the cases and that experience is what pushed me towards a different approach to examining VSCs. This approach was to parse the data while it is still stored inside the VSCs. I’m not the only DFIR practitioner looking at examining VSCs in this manner. Stacey Edwards shared in her post Volume Shadow Copies and LogParser how she runs the program logparser against VSCs by traversing through a symbolic link. Rob Lee shared his work on Shadow Timelines where he creates timelines and lists deleted files in VSCs by executing the Sleuthkit directly against VSCs. Accessing VSCs’ data directly can reduce examination time while enabling a DFIR practitioner to see data temporally. Ripping Volume Shadow Copies is a six part series and the remaining five posts will explain the Practitioner and Developer methods in-depth.

        Part 1: Ripping Volume Shadow Copies - Introduction
        Part 2: Ripping VSCs - Practitioner Method
        Part 3: Ripping VSCs - Practitioner Examples
        Part 4: Ripping VSCs - Developer Method
        Part 5: Ripping VSCs - Developer Example
        Part 6: Examining VSCs with GUI Tools

Dual Purpose Volatile Data Collection Script

Monday, January 2, 2012 Posted by Corey Harrell 18 comments
When responding to a potential security incident a capability is needed to quickly triage the system to see what's going on. Is a rogue process running on the system, who's currently logged onto the system, what other systems are trying to connect over the network, and how do I document the actions I took on the system? These are valid questions during incident response whether the response is for an actual event or a simulation. One area to examine to get answers is the system's volatile data. Automating the collection of volatile data can save valuable time which in turn helps analysts examine the data faster in order to get answers. This post briefly describes (and releases) the Tr3Secure volatile data collection script I wrote.

Tr3Secure needed a toolset for responding to systems during attack simulations and one of the tools had to quickly collect volatile data on a system (I previously discussed what Tr3Secure is here). However, the volatile data collection tool had to provide dual functions. First and foremost it had to properly preserve and acquire data from live systems. The toolset is initially being used in a training environment but the tools and processes we are learning need to be able to translate over to actual security incidents. What good is mastering a collection tool that can’t be used during live incident response activities? The second required function was the tool had to help with training people on examining volatile data. Tr3Secure members come from different information security backgrounds so not every member will be knowledgeable about volatile data. Collecting data is one thing but people will eventually need to know how to understand what the data means. The DFIR community has a few volatile data collection scripts but none of the scripts I found provided the dual functionality for practical and training usage. So I went ahead and wrote a script to meet our needs.

Practical Usage

These were some considerations taken into account to ensure the script is scalable to meet the needs for volatile data collection during actual incident response activities.

        Flexibility

Different responses will have different requirements on where to store the volatile data that's collected. At times the data may be stored on the same drive where the DFIR toolset is located while at other times the data may be stored on a different drive. I took this into consideration and the volatile data collection script allows the output data to be stored on a drive of choice. If someone prefers to run their tools from a CD-ROM while someone else works with a large USB removable drive then the script can be used by both of them.

        Organize Output
 
Troy Larson posted a few lines of code from his collection script to the Win4n6 group some time ago. One thing I noticed about his script was that he organized the output data based on a case number. I incorporated his idea into my script: a case number needs to be entered when the script is run on a system. A case folder enables data collected from numerous systems to be stored in the same folder (the folder is named Data-Case#). In addition to organizing data into a case folder, the actual volatile data is stored in a sub-folder named after the system the data came from (the system's computer name is used to name the folder). To prevent overwriting data by running the script multiple times on the same system I incorporated a timestamp into the folder name (two-digit month, day, year, hour, and minute). Appending a timestamp to the folder name means the script can execute against the same system numerous times and all of the volatile data is stored in separate folders. Lastly, the data collected from the system is stored in separate sub-folders for easier access. The screenshot below shows the data collected for Case Number 100 from the system OWNING-U on 01/01/2012 at 15:46.


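The folder-naming scheme described above can be sketched in a few lines of Python. This is purely illustrative: the case folder name (Data-Case#) and the timestamp fields (two-digit month, day, year, hour, minute) come from the description, but the exact separators and field order in the real script are my assumption.

```python
from datetime import datetime

def output_folder(case_number, computer_name, now):
    """Build the per-system output path: Data-<case>\<computer>-<MMDDYYHHMM>.

    The timestamp keeps repeat runs against the same system in
    separate folders, so earlier collections are never overwritten.
    """
    stamp = now.strftime("%m%d%y%H%M")
    return f"Data-{case_number}\\{computer_name}-{stamp}"

# The example from the screenshot description: case 100, system OWNING-U,
# collected on 01/01/2012 at 15:46.
print(output_folder(100, "OWNING-U", datetime(2012, 1, 1, 15, 46)))
# Data-100\OWNING-U-0101121546
```

Because the timestamp resolves to the minute, two collections started a minute apart land in distinct folders under the same case.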
        Documentation

Automating data collection means that documentation can be automated as well. The script documents everything in a collection log. Each case has one collection log, so regardless of whether data is collected from one system or ten, an analyst only has to review one log.


The following information is documented both to the screen for an analyst to see and a collection log file: case number, examiner name, target system, user account used to collect data, drives for tools and data storage, time skew, and program execution. The script prompts the analyst for the case number, their name, and the drive to store data on. This information is automatically stored in the collection log so the analyst doesn’t have to worry about maintaining documentation elsewhere. In addition, the script prompts the analyst for the current date and time which is used to record the time difference between the system and the actual time. Every program executed by the script is recorded in the collection log along with a timestamp of when the program executed. This will make it easier to account for artifacts left on a system if the system is examined after the script is executed. The screenshot below shows the part of the collection log for the data collected from the system OWNING-U.


        Preservation

RFC 3227’s Order of Volatility outlines that evidence should be collected starting with the most volatile then proceeding to the less volatile. The script takes into account the order of volatility during data collection. When all data is selected for collection, the memory is first imaged then volatile data is collected followed by collecting non-volatile data. The volatile data collected is: process information, network information, logged on users, open files, clipboard, and then system information. The non-volatile data collected is installed software, security settings, configured users/groups, system's devices, auto-runs locations, and applied group policies. Another item the script incorporated from Troy Larson’s comment in the Win4n6 group is preserving the prefetch files before volatile data is collected. I never thought about this before I read his comment but it makes sense. Volatile data gets collected by executing numerous programs on a system and these actions can overwrite the existing prefetch files with new information or files. Preserving the prefetch files upfront ensures analysts will have access to most of the prefetch files that were on the system before the collection occurred (four prefetch files may be overwritten before the script preserves them). The script uses robocopy to copy the prefetch files so the file system metadata (timestamps, NTFS permissions, and file ownership) is collected along with the files themselves. The screenshot below shows the preserved files for system OWNING-U.


        Tools Executed

The readme file accompanying the script outlines the various programs used to collect data. The programs include built-in Windows commands and third party utilities. The screenshot below shows the tools folder where the third party utilities are stored.




I’m not going to discuss every program but I at least wanted to highlight a few. Windows diskpart command allows for disks, partitions, and volumes to be managed through the command line. The script leverages diskpart to make it easy for an analyst to see what drives and volumes are attached to a system. Hopefully, the analyst won’t need to open up Windows explorer to see what the removable media drive mappings are since the script displays the information automatically as shown below. Note, to make diskpart work a text file needs to be created in the tools folder named diskpart_commands.txt and the file needs to contain these two commands on separate lines: list disk and list volume.


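Since the text above spells out exactly what diskpart_commands.txt must contain, here is a tiny sketch that creates the file (the helper function is hypothetical, not part of the Tr3Secure script itself).

```python
def write_diskpart_script(path="diskpart_commands.txt"):
    """Create the diskpart script file the collection script expects.

    Per the description above, the file must contain the commands
    "list disk" and "list volume" on separate lines.
    """
    with open(path, "w") as f:
        f.write("list disk\n")
        f.write("list volume\n")
    return path

write_diskpart_script()
```

On the live system the collection script would then run something like diskpart /s diskpart_commands.txt, since diskpart's /s switch reads commands from a script file.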
Mandiant’s Memoryze is used to obtain a forensic image of the system’s memory. Memoryze supports a wide range of Windows operating systems which makes the script more versatile for dumping RAM. The key reason the script uses Memoryze is that it’s the only free memory imaging program I found that allows an image to be stored in a folder of your choice. Most programs will place the memory image in the same folder where the command line is opened. This wouldn’t work because the image would be dropped in the folder where the script is located instead of on the drive the analyst wants. Memoryze uses an XML configuration file to image RAM so I borrowed a few lines of code from the MemoryDD.bat batch file to create the XML file for the script. Note, the script only needs memoryze.exe; to obtain it, install Memoryze on a computer and copy memoryze.exe to the Tools folder.

PXServer’s Winaudit program obtains the configuration information from a system and I first became acquainted with the program during my time performing vulnerability assessments. The script uses Winaudit to collect some non-volatile data including the installed software, configured users/groups, and computer devices. Winaudit is capable of collecting a lot more information so it wouldn’t be that hard to incorporate the additional information by modifying the script.

Training Usage

These were the two items put into the script to assist with training members on performing incident response system triage.

        Ordered Output Reports

The script collects a wealth of information about a system and this may be overwhelming to analysts new to examining volatile data. For example, the script produces six different reports about the processes running on a system. A common question when faced with so many reports is how they should be reviewed. The script’s output reports are numbered in the suggested order for reviewing them. This provides a little assistance to analysts until they develop their own process for examining the data. The screenshots below show the process reports in the output folder and those reports opened in Notepad++.




        Understanding Tool Functionality and Volatile Data

The script needs to help people better understand what the collected data means about the system it came from. Two great references for collecting, examining, and understanding volatile data are Windows Forensic Analysis, 2nd edition and Malware Forensics: Investigating and Analyzing Malicious Code. I used both books when researching and selecting the script’s tools to collect volatile data. What better way to help someone understand the tools or data than by directing them to references that explain them? I placed comments in the script containing the page numbers where each tool and its data are discussed in both books. The screenshot below shows the portion of the script that collects process information; the references are highlighted in red.



Releasing the Tr3Secure Volatile Data Collection Script

There are very few things I do forensically that I think are cool; this script happens to be one of them. There are not many tools or scripts that work as intended while at the same time providing training. People who have more knowledge about volatile data can hit the ground running with the script investigating systems. The script automates imaging memory, collecting volatile/non-volatile data, and documenting every action taken on the system. People with less knowledge can leverage the tool to learn how to investigate systems. The script collects data, then the ordered output and the references in the comments can be used to interpret the data. Talk about killing two birds with one stone.

The following is the location to the zip file containing the script and the readme file (download link is here). Please be advised, a few programs the script uses require administrative rights to run properly.

Enjoy and Happy Hunting ...

***** Update *****

The Tr3Secure script has been updated and you can read about the changes in the post Tr3Secure Data Collection Script Reloaded

***** Update *****

Ripping Volume Shadow Copies Sneak Peek

Monday, December 19, 2011 Posted by Corey Harrell 4 comments
I was hesitant to do a sneak peek about a different approach to examine Volume Shadow Copies (VSCs). I personally don’t like sneak peeks and would rather wait to see the finished product. I think it’s along the lines of starting a movie then stopping it after 15 minutes and being forced to finish watching months later. If I don’t like sneak peeks then why am I putting others through it? I previously mentioned how I wanted to spend my furlough days putting together some posts about another approach to examining VSCs. Well, last week was my furlough week and my family wrote a new version of the carol The Twelve Days of Christmas: four out of town trips, three sick kids, two family emergencies, and one blogger quarantined to his room. Needless to say I had to spend my time focused on my family. I won’t have time to write the VSCs blog posts until next month so I at least wanted to show one example of how I use this method.

There are times when I get a system that has been altered and one change is removing financial software from the system. This is pretty important because if I’m trying to locate financial data then I need to know what software is on the system so I know what kind of files to look for. There is a chance some file types might initially be missed if I’m not aware a certain program was installed at some point in the past. Different registry keys can help determine what programs were installed or executed but you can get a more complete picture about a system by looking at those same registry keys at different points in time. Performing registry analysis in this manner has allowed me to quickly identify uninstalled financial applications which reduced the time needed to find the data. Anyone who has used Harlan’s RipXP understands the value in seeing registry keys at different points in time. I used the same concept with one exception: numerous registry keys can be queried at the same time when dealing with VSCs.

The system I used for this demonstration was a live Windows 7 Ultimate 32 bit system. In the past I have also used this technique against Windows 7 and Vista forensic images.

Obtaining General Operating System Information

I discussed previously that one initial examination step is to get a better understanding of the system I’m facing. I use a batch script with Regripper to obtain a wealth of information about how the system was configured when it was last powered on. That configuration information comes from only one point in time, but if the system has VSCs then the same information can be obtained from different points in time. Reviewing the same configuration information across VSCs shows how the system changed slightly over time, including what software was installed or uninstalled. To do this I made some modifications to the general operating system batch script which let me run it against any VSCs I have access to.

I’m not going to discuss accessing VSCs in this post. For information on how to access VSCs I’d check out Harlan’s Even More Stuff post since he provides a link to his slide deck he gave to the online DFIR meet-up on the topic. My Windows 7 system had 19 VSCs and for the demonstration I only used the following:

        - ShadowCopy19 12/13/2011 6:13:35 PM

        - ShadowCopy16 12/01/2011 8:08:50 AM

        - ShadowCopy3 11/28/2011 11:19:40 AM

        - ShadowCopy1 8/26/2011 12:15:34 PM
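On a live system, a listing like the one above comes from running vssadmin list shadows at an administrator prompt. As a rough sketch of how that output could be consumed programmatically, the snippet below pulls each copy’s device path and creation time out of the text. The SAMPLE string is a trimmed, hypothetical rendition of the Windows 7 output format, not a verbatim capture, so treat the parsing as an assumption to adjust against real output.

```python
import re

# Trimmed, hypothetical sample of `vssadmin list shadows` output
# (the real Windows 7 output contains additional fields per copy).
SAMPLE = """\
Contents of shadow copy set ID: {a1b2c3d4}
   Contained 1 shadow copies at creation time: 12/13/2011 6:13:35 PM
      Shadow Copy Volume: \\\\?\\GLOBALROOT\\Device\\HarddiskVolumeShadowCopy19
Contents of shadow copy set ID: {e5f6a7b8}
   Contained 1 shadow copies at creation time: 12/1/2011 8:08:50 AM
      Shadow Copy Volume: \\\\?\\GLOBALROOT\\Device\\HarddiskVolumeShadowCopy16
"""

def parse_shadows(text):
    """Pair each shadow copy volume device path with its creation time."""
    times = re.findall(r"creation time: (.+)", text)
    vols = re.findall(r"Shadow Copy Volume: (\S+)", text)
    return list(zip(vols, times))

for vol, ts in parse_shadows(SAMPLE):
    print(vol, ts)
```

The device paths recovered this way are what get fed to mklink when creating the symbolic links the scripts traverse.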

The screen shot below shows the main menu to the vsc-parser (most selections have sub menus). To review the system to identify software of interest I’m interested in selection 2: “Obtain General Operating System Information from Volume Shadow Copies”.


The selection will immediately execute my Regripper batch file against every VSC I have access to. The picture below shows the script running against my four VSCs. I highlighted the samparse and uninstall plug-ins that executed.
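For anyone curious what the batch file boils down to, the loop is: link the VSC, rip the hives, remove the link. The sketch below only builds the command lines rather than executing them, and the mount point, report naming, and plugin list are illustrative assumptions, not the actual vsc-parser internals.

```python
# Sketch of the per-VSC loop the batch script performs. Everything here
# (C:\vsc<N> mount points, report names, the plugin mapping) is assumed
# for illustration; only the mklink/rip.exe/rmdir pattern is the point.
VSCS = [19, 16, 3, 1]
PLUGINS = {"software": ["uninstall"], "sam": ["samparse"]}

def build_commands(vsc_numbers):
    cmds = []
    for n in vsc_numbers:
        dev = rf"\\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{n}" + "\\"
        link = rf"C:\vsc{n}"
        # Directory symbolic link into the shadow copy (note trailing slash)
        cmds.append(f"mklink /d {link} {dev}")
        for hive, plugins in PLUGINS.items():
            for p in plugins:
                report = f"{p}-vsc{n}.txt"  # report name keeps the VSC number
                cmds.append(
                    f"rip.exe -r {link}\\Windows\\System32\\config\\{hive} "
                    f"-p {p} > {report}")
        cmds.append(f"rmdir {link}")  # clean up the link afterwards
    return cmds

for c in build_commands(VSCS):
    print(c)
```

Embedding the VSC number in each report name is what makes it possible later to tell at a glance which point in time a report came from.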


The output from the script is nicely organized into different folders based on what the information is.


I’m interested in the software on the system which means I need the reports in the software-information folder. A report was created for each VSC I had access to (notice how the file name contains the VSC number it came from).


Now at this point I can review the reports and notice the slight differences between each VSC. I tend to look at the most recent VSC then work my way back to the oldest. It makes it easier to see how the system slowly changed over time, starting from the forensic image I examined first.


On a case I used this technique and it helped me identify a financial application that was removed from the system. In the end it saved a lot of time because this was one of my initial steps and I knew right off the bat I was looking for specific file types. Some may be wondering why I decided to highlight the samparse plug-in as well. At another time the same technique helped me verify a user account existed on the system and narrow down the timeframe when it was removed.
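Spotting what disappeared between two reports can be scripted too. The minimal sketch below diffs uninstall entries from an older and a newer VSC report as sets; the application names are made up for illustration, and in practice the sets would be read from the files in the software-information folder.

```python
# Hypothetical uninstall-plugin entries from two VSC reports. Higher
# ShadowCopy numbers are newer, so ShadowCopy3 predates ShadowCopy16.
older = {"Adobe Reader", "QuickBooks 2010", "Firefox"}  # e.g. ShadowCopy3
newer = {"Adobe Reader", "Firefox"}                     # e.g. ShadowCopy16

removed = older - newer  # present in the older VSC, gone in the newer one
added = newer - older    # appeared between the two snapshots

print("Removed:", sorted(removed))  # → ['QuickBooks 2010']
print("Added:", sorted(added))      # → []
```

A hit in `removed` is exactly the kind of lead described above: an application that was on the system at some point but is gone from the image being examined.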

I showed an example running Regripper against registry hives stored in VSCs on a live Windows 7 system. However, the approach is not only limited to registry hives or Regripper since you can pretty much parse any data stored in a VSC.

A Time of Reflection

Tuesday, December 13, 2011 Posted by Corey Harrell 7 comments
Certain events in life cause you to reflect on humility and put back into perspective the meaningful things in life. You remember that in time almost everything is replaceable. Another forensicator will fill your shoes at work and your organization will continue to go on. Another researcher will continue your research and the little that you did accomplish will eventually just be a footnote. Another person will step up to provide assistance to others in forensic forums and listservs. Your possessions and equipment will become someone else’s to enjoy and use. When looking at the big picture, the work we do and value will eventually fade away and life will go on as if we were never there. One of the only things remaining will be the impact we make on others in the little time we have available to us. One doesn’t need a lot of time or resources to make an impact; all that’s needed is having a certain perspective.

Everyone should look out not [only] for his own interests, but also for the interests of others. Philippians 2:4

Having an outlook that looks beyond one’s own self interests can positively impact others, and I think the statement holds true regardless of religious beliefs. A perspective that takes into consideration others’ interests is displayed every day in the Digital Forensic and Incident Response (DFIR) community. DFIR forums have thousands of members but there are only a few who regularly take the time to research and provide answers to others’ questions. DFIR listservs are very similar in that, despite their membership numbers, only a minority regularly try to help others. Look at the quality information (books, articles, blogs, white papers, etc.) available throughout the community: its authors are only a small fraction of the people in the community. These are just a few examples out of many of how individuals within the DFIR community use their time and resources in an effort to not only better themselves but to educate others as well.

When I look at the overall DFIR community I think there’s only a minority who are looking beyond their own interests in an effort to help others. A few people have helped me over my career which contributed to where I am today. They never asked for anything in return and were genuinely interested in trying to help others (myself included). If the DFIR community is what it is because of a few people giving up their time and resources to make a positive impact on others, then I can only wonder what our community would look like if the majority of people looked beyond their own interests to look after the interests of others. In the meantime all I can do is to continue to try to remember to look beyond myself in every aspect of my life. To try to consider those around me so I can help whoever crosses my path needing assistance. When the day is over one of the only things remaining will be the impact I have on others.