Monday, March 21, 2011
Posted by
Corey Harrell
Artifact Name
CVE-2010-0840 (Trusted Methods) Exploit Artifacts
Attack Vector Category
Exploit
Description
A vulnerability in the code responsible for the privileged execution of methods affects Oracle Java 6 prior to Update 19 and Java 5 prior to Update 23. Exploitation allows for the execution of arbitrary code under the context of the currently logged-on user.
Attack Description
This description was obtained using the Metasploit exploit reference and it involves having a user visit a malicious website.
Exploits Tested
Metasploit v3.6 multi\browser\java_trusted_chain
Target System Information
* Windows XP SP3 Virtual Machine with Java 6 update 16 using administrative user account
* Windows XP SP3 Virtual Machine with Java 6 update 16 using non-administrative user account
Different Artifacts based on Administrator Rights
No
Different Artifacts based on Software Versions
Not tested
Potential Artifacts
The potential artifacts include the CVE 2010-0840 exploit and the changes the exploit causes in the operating system environment. The artifacts can be grouped under the following three areas:
* Temporary File Creation
* Indications of the Vulnerable Application Executing
* Internet Activity
Note: the documentation of the potential artifacts attempts to identify the overall artifacts associated with the vulnerability being exploited, as opposed to the specific artifacts unique to Metasploit. As a result, the actual artifact storage locations and filenames are placed inside brackets to distinguish what may be unique to the testing environment.
* Temporary File Creation
- JAR file created in a temporary storage location on the system within the timeframe of interest. [C:/Documents and Settings/Administrator/Local Settings/Temp/jar_cache3590475423724669955.tmp] The contents of the JAR file contained a manifest file, and one class file was detected as the CVE 2010-0840 exploit. There were other class files whose MD5 hashes were not present in the VirusTotal database.
* Indications of the Vulnerable Application Executing
- Log files indicating Java was executed within the timeframe of interest. [C:/Documents and Settings/Administrator/Application Data/Sun/Java/Deployment/deployment.properties, C:/Documents and Settings/Administrator/Local Settings/Temp/java_install_reg.log, and C:/Documents and Settings/Administrator/Local Settings/Temp/jusched.log] The picture below shows the contents of the java_install_reg.log file.
- Prefetch files of Java executing. [C:/WINDOWS/Prefetch/JAVA.EXE-0C263507.pf]
- Registry modification involving Java executing. [HKCU-Admin/Software/JavaSoft/Java Update/Policy/JavaFX]
- Folder activity involving the Java application. [C:/Program Files/Java, C:/Documents and Settings/Administrator/Application Data/Sun/Java/Deployment/cache/, and C:/Documents and Settings/Administrator/Local Settings/Temp/hsperfdata_username]
* Internet Activity
- Web browser history of user accessing websites within the timeframe of interest. [Administrator user account accessed the computer -192.168.11.200- running Metasploit]
- Activity involving the Temporary Internet Files folder. [C:/Documents and Settings/Administrator/Local Settings/Temporary Internet Files]
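The artifact locations above lend themselves to a scripted triage pass. Below is a minimal Python sketch, assuming the image is mounted read-only at mount_root; the glob patterns are based on the paths observed in this XP test environment, and values such as the prefetch hash will differ per system.

```python
import glob
import os

# Artifact locations observed in the XP test environment above; the
# wildcards cover the user profile name and per-download filenames.
CHECKS = [
    "Documents and Settings/*/Local Settings/Temp/jar_cache*.tmp",
    "WINDOWS/Prefetch/JAVA.EXE-*.pf",
    "Documents and Settings/*/Application Data/Sun/Java/Deployment/cache/*",
    "Documents and Settings/*/Local Settings/Temp/hsperfdata_*",
]

def triage_java_artifacts(mount_root):
    """Return paths under mount_root matching any artifact pattern."""
    hits = []
    for pattern in CHECKS:
        hits.extend(glob.glob(os.path.join(mount_root, pattern)))
    return sorted(hits)
```

Any hits would still need to be reviewed in context, since these locations also fill up during legitimate Java use.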
Timeline View of Potential Artifacts
The images below show the above artifacts in a timeline of the file system from the Windows XP SP3 system with an administrative user account. The timeline includes the file system, registry, and Internet Explorer history entries.
References
Vulnerability Information
Mitre’s CVE http://cve.mitre.org/cgi-bin/cvename.cgi?name=2010-0840
NIST National Vulnerability Database http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2010-0840
Zero Day Initiative http://www.zerodayinitiative.com/advisories/ZDI-10-056/
SecurityFocus http://www.securityfocus.com/bid/39065
Exploit Information
Metasploit Exploit http://www.metasploit.com/modules/exploit/multi/browser/java_trusted_chain
Saturday, March 12, 2011
Posted by
Corey Harrell
Artifact Name
CVE-2010-0094 (RMIConnectionImpl) Exploit Artifacts
Attack Vector Category
Exploit
Description
A vulnerability within the deserialization of RMIConnectionImpl objects affects Oracle Java 6 Update 18, 5.0 Update 23, and earlier versions on Windows, Solaris, and Linux systems. Exploitation allows for the execution of arbitrary code under the context of the currently logged-on user.
Attack Description
This description was obtained using the Zero Day Initiative reference and it consists of having a user visit a malicious website.
Exploits Tested
Metasploit v3.6 multi\browser\java_rmi_connection_impl
Target System Information
* Windows XP SP3 Virtual Machine with Java 6 update 16 using administrative user account
* Windows XP SP3 Virtual Machine with Java 6 update 16 using non-administrative user account
Different Artifacts based on Administrator Rights
No
Different Artifacts based on Software Versions
Not tested
Potential Artifacts
The potential artifacts include the CVE 2010-0094 exploit and the changes the exploit causes in the operating system environment. The artifacts can be grouped under the following three areas:
* Temporary File Creation
* Indications of the Vulnerable Application Executing
* Internet Activity
Note: the documentation of the potential artifacts attempts to identify the overall artifacts associated with the vulnerability being exploited, as opposed to the specific artifacts unique to Metasploit. As a result, the actual artifact storage locations and filenames are placed inside brackets to distinguish what may be unique to the testing environment.
* Temporary File Creation
- JAR file created in a temporary storage location on the system within the timeframe of interest. [C:/Documents and Settings/Administrator/Local Settings/Temp/jar_cache8659615251018636226.tmp] The contents of the JAR file contained a manifest file and other files which were detected as the CVE-2010-0094 exploit. Exploit.class and PayloadClassLoader.class are two of the files detected as containing the exploit.
* Indications of the Vulnerable Application Executing
- Log files indicating Java was executed within the timeframe of interest. [C:/Documents and Settings/Administrator/Application Data/Sun/Java/Deployment/deployment.properties, C:/Documents and Settings/Administrator/Local Settings/Temp/java_install_reg.log, and C:/Documents and Settings/Administrator/Local Settings/Temp/jusched.log] The picture below shows the contents of the deployment.properties file.
- Prefetch files of Java executing. [C:/WINDOWS/Prefetch/JAVA.EXE-0C263507.pf]
- Registry modification involving Java executing. The last write time on the registry key is the same time that is reflected in the jusched.log file. [HKLM-Admin/Software/JavaSoft/Java Update/Policy/JavaFX. One of the entries in the jusched.log file was "SetDefaultJavaFXUpdateSchedule: Frequency:16, Schedule: 3:52" and this occurred when the registry key was modified]
- Folder activity involving the Java application. [C:/Program Files/Java/jre6/, C:/Documents and Settings/Administrator/Application Data/Sun/Java/Deployment/cache/, and C:/Documents and Settings/Administrator/Local Settings/Temp/hsperfdata_username]
* Internet Activity
- Web browser history of user accessing websites within the timeframe of interest. [Administrator user account accessed the computer -192.168.11.200- running Metasploit]
- Files located in the Temporary Internet Files folder. [C:/Documents and Settings/Administrator/Local Settings/Temporary Internet Files/Content.IE5/]
Timeline View of Potential Artifacts
The images below show the above artifacts in a timeline of the file system from the Windows XP SP3 system with an administrative user account. The timeline includes the file system, registry, event logs, and Internet Explorer history entries.
References
Vulnerability Information
Mitre’s CVE http://cve.mitre.org/cgi-bin/cvename.cgi?name=2010-0094
NIST National Vulnerability Database http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2010-0094
Zero Day Initiative http://www.zerodayinitiative.com/advisories/ZDI-10-051/
Exploit Information
Metasploit Exploit Information http://www.metasploit.com/modules/exploit/multi/browser/java_rmi_connection_impl
Sunday, March 6, 2011
Posted by
Corey Harrell
What's one of the new forensic artifacts a Kinect leaves on the Xbox 360 which may be beneficial to an investigation? Depending on the game or application using the Kinect, there could be photographic evidence, and this evidence could be used to determine the person using the Xbox, the other people in a room, or the state of a room over a period of time. The corporate environment doesn't deploy gaming systems to support the business, so I won't come across the Kinect's photographic evidence until the technology has a business use on Windows computers. The topic of this post is a little different than my usual content, but there's a Kinect in my house and I wanted to find the photos or videos created by any of the Kinect games.
What is the Kinect?
The Kinect is a peripheral for the Xbox 360 and, according to Microsoft, "controller-free gaming means full body play". The Kinect senses body movement, and this movement lets people interact with the Xbox whether it's playing a game or watching a movie. The Kinect was a Christmas present to my entire family, and if you do your research on the games then it really does work as advertised. I spike volleyballs by jumping in the air, my teenager scores goals by kicking a soccer ball, and my three year old runs in place while jumping over hurdles as he races down the track. Gaming systems have come a long way since my days of playing Contra and Super Mario Brothers using a controller with two buttons and a directional pad.
The Wired article How Motion Detection Works in Xbox Kinect describes the Kinect technology including the camera that's a part of the hardware. There are a few games that make use of the camera for entertainment purposes by providing slideshows of everyone who played the games. Certain games even store the captured pictures so people can access them at a later time.
Accessing the Multimedia the Xbox Way
Kinect Adventures comes bundled with the Kinect and is one of the games that takes pictures during gameplay. Kinect Adventures stores the pictures on the Xbox's hard drive so people can view the photos at a later time. The game's menu is used to access any of the created photos, as opposed to the Xbox menu. The photos can be uploaded to websites and services such as Kinectshare.com. I uploaded a few Kinect Adventures photos to Kinectshare. The image below shows which games support Kinectshare, and as you can see the Kinect Adventures game has uploaded photos (yup, that's my mug on the camera).
The pictures can be uploaded to Facebook, printed, or downloaded using Kinectshare. This is a downloaded picture with one of my sons.
Accessing the Multimedia the Post-mortem Way
An investigation may have some issues trying to use the photos or videos uploaded to Kinectshare. The first issue is Kinectshare uses the Windows Live ID associated with the Xbox live gamertag which will make it harder to access the uploaded files since the site is password protected. The second issue is the files are automatically deleted after 14 days which limits the timeframe of when the files can be accessed. Both of these issues can be avoided by directly accessing the Kinect multimedia stored on the Xbox's hard drive.
I mentioned previously I don't examine Xboxes but I was interested in the gaming photos. This post isn't intended to cover how to perform Xbox 360 examinations. If anyone is looking for this type of information there's a book called Xbox 360 Forensics published by Syngress (I came across this book while writing this post).
Right off the bat I found out that FTK Imager and EnCase don't display the partitions on the Xbox hard drive. A few quick Google searches not only provided me with a program to browse the hard drive but also explained the folder structure. Content is stored in a global area that applies to all users as well as in each user account's profile. The global area is located at /partition3/content/0000000000000000/TITLEID/OFFERID/ while the content in the user profiles is located at /partition3/content/PROFILEID/TITLEID/OFFERID/. The PROFILEID is the ID of the user account, the TITLEID is the name of the game or application that created the folder, and the OFFERID is the type of content the folder stores. I used Kingla's Xbox 360 HDD Folder List website to determine the TITLEID and OFFERID. The picture below shows the global content for my Xbox; the Kinect games' folders are 4D5308ED (Kinect Adventures), 4D5308C9 (Kinect Sports), and 545607D3 (Dance Central).
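To make the layout concrete, here is a small Python sketch that splits a content path into its components and labels the Kinect TITLEIDs mentioned above. The title map holds only the IDs from this post, so any other TITLEID would need to be looked up separately.

```python
# TITLEIDs observed on my Xbox; other IDs would come from a lookup
# resource such as Kingla's Xbox 360 HDD Folder List.
KNOWN_TITLES = {
    "4D5308ED": "Kinect Adventures",
    "4D5308C9": "Kinect Sports",
    "545607D3": "Dance Central",
}

def parse_content_path(path):
    """Split a /partition3/content/PROFILEID/TITLEID/OFFERID/ path
    into (profile_id, title_id, offer_id, title_name)."""
    parts = [p for p in path.split("/") if p]
    # parts: ['partition3', 'content', PROFILEID, TITLEID, OFFERID]
    profile_id, title_id, offer_id = parts[2], parts[3], parts[4]
    return profile_id, title_id, offer_id, KNOWN_TITLES.get(title_id, "unknown")
```

For example, the global Kinect Adventures photo folder parses to profile 0000000000000000 with the "Kinect Adventures" title.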

The Kinect Adventures photos are located in the global folder 4D5308ED. There were two content folders: one for photos (OFFERID 000000001) and the other for videos (OFFERID 00090000). The videos folder didn't contain any videos of people playing the Kinect. However, there were numerous photos stored in the 000000001 folder as illustrated below.
The names of the files are based on the date and time of when they were created. It doesn't help much in my case since the Xbox's time was wrong. The files contain the Kinect Adventures photos as well as additional data. Examining the files I noticed some consistent file offsets containing data.
* File offset 5778: name of the game and the data was K•i•n•e•c•t• •A•d•v•e•n•t•u•r•e•s
* File offset 5914: PNG image and the image was an icon
* File offset 22298: Same PNG image of an icon
* File offset 49152: file name and the data was M9_0_2005_11_22_7_9_38_784
* File offset 53328: JPG image which is the Kinect photo
I used a hex editor to copy out all of the data for the JPG image. As illustrated below the start of the JPG image is at file offset 53328.
The JPG data was copied and saved as a new file with a jpg file extension. The image was the Kinect photo showing my three year old playing Kinect Adventures while my teenager waits on the couch.
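The same carving can be scripted instead of done by hand in a hex editor. Below is a minimal Python sketch, assuming the photo begins at or after the known offset (53328 in the file examined above) and ends at the next JPEG end-of-image marker; since FF D9 can also appear inside embedded thumbnails, a carved result should still be verified in an image viewer.

```python
JPEG_SOI = b"\xff\xd8\xff"  # JPEG start-of-image marker
JPEG_EOI = b"\xff\xd9"      # JPEG end-of-image marker

def carve_jpeg(data, offset):
    """Carve a JPEG starting at or after offset; None if not found."""
    start = data.find(JPEG_SOI, offset)
    if start == -1:
        return None
    end = data.find(JPEG_EOI, start + len(JPEG_SOI))
    if end == -1:
        return None
    return data[start:end + len(JPEG_EOI)]
```

The returned bytes can then be written out to a new file with a .jpg extension, the same way the hex editor copy was saved.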
What's Next
Only certain games or applications create videos or photos with the Kinect. Kinect Adventures is one of the games that do and this game comes bundled with the Kinect. As I said before, this technology hasn't reached the corporate environment yet but I think it's only a matter of time before it does. A quick Google search provides a ton of hits of how various people adopted the Kinect technology for other uses including controlling a Windows 7 computer. Winrumors.com posted that Microsoft is going to be releasing its own Windows based Kinect SDK in the spring amid a growing community of "Kinect hackers". This could be the beginning of this technology extending beyond gaming and research to serve other purposes more suitable for the corporate environment. Time will tell what new forensic artifacts this technology will bring and how beneficial the artifacts are to an investigation.
Tuesday, February 22, 2011
Posted by
Corey Harrell
Finally
As I’m writing the first paragraph of a paper for my Master of Science program the only thought that keeps running through my mind is finally. I finally reached not only the last week of class but the last week of my master’s program. In a few days I will finally complete the MSIA program when I submit my paper, and my experience -not the knowledge- will gradually become a distant memory.
The second thought to run through my mind was everything on my to do list. My list has been piling up over the months and one of the more recent items on the list is the lack of my blog posts over the past few weeks. This will hopefully change once I’m done with school and a few of the future posts will cover some of the things I’m looking at including Java vulnerability exploit artifacts, my introduction to log analysis, and possibly a new crime scene camera that people are putting into their homes.
In the meantime here are a few links about timelines.
Timeline Analysis Links
Kristinn has an excellent post about analyzing timelines which can be found here. I previously blogged about reviewing timelines with Excel (post is here) and Calc (post is here). I created the timelines using mactime and redirected the output to a csv file which I then imported into Excel. Kristinn approaches analyzing timelines with Excel a different way. Kristinn mentioned that filtering is not optimal with mactime and Excel so he uses the CSV output module in log2timeline to create the timeline. One of the limitations I found with Excel was the limit on the number of variables you can filter on using basic filters (Calc had a higher limit but it was still only eight variables). This was one of the reasons I looked into using advanced filters. Kristinn's approach is really interesting since the CSV module breaks up the description field, which makes it easier to filter on using basic filters. His write-up is very informative and educational. Trying out this approach has been added to my to do list. Kristinn, thanks again for the write-up and sharing this information.
One of the downsides to being a state public sector employee -especially for New York State- is the lack of funds to attend trainings and conferences. This is the main reason why I like when speakers share their conference presentation slides, since it lets people who couldn’t attend the conference (aka me) see some of the presented material. Mandiant posted their DoD Cyber Crime 2011 presentations and one of them was Rob Lee’s Super Timeline Analysis presentation. My biggest takeaway from Rob's slides was his research on the Windows time rules (I was already familiar with the other content in the slides since I read Rob's posts on the SANS forensic blog about supertimelines and volume shadow copy timelines). The Windows time rules (slides 15 and 16) outline how the timestamps in the Standard Information Attribute and Filename Attribute are changed by actions taken against a file. For example, you can see the difference between the changes to a file's timestamps when it is moved locally as compared to being moved to another volume. The charts are a great reference and thank you Rob for sharing this information.
Monday, February 7, 2011
Posted by
Corey Harrell
I was working on a computer a few weeks ago (non-work related issue) when my antivirus scanner flagged two files. The names of the two files were 2371d6c6-2a6da032.idx and 2371d6c6-2a6da032; both files were located in the Sun Java cache folder. The first time I came across this type of artifact I mentioned it in the jIIr post Anatomy of Drive-by Part II. This time around things were different because I didn’t have to identify the files since the antivirus software marked them as containing the CVE-2010-0094 exploit. I thought this known Java exploit was a good candidate for a sample to practice on. Not only could the sample be used to learn how to analyze Java exploits with REMnux but it could also be used to try out a few recipes from the Malware Analyst’s Cookbook. This post is the examination of the 2371d6c6-2a6da032.idx and 2371d6c6-2a6da032 files which consists of the following:
* Understand the Java Cache folder
* Examine the IDX File
* Examine the JAR File
* Extract Java Source from the JAR File
* Examine Java Source
Understand the Java Cache folder
The 2371d6c6-2a6da032.idx and 2371d6c6-2a6da032 files were located in the following folder: Users\\AppData\LocalLow\Sun\Java\Deployment\cache\6.0\6\. This folder is the default location where Java stores temporary files on the computer so the files can be executed faster in the future. The picture below highlights where the temporary file location can be changed from its default value in the Java Control Panel. Note: the picture was taken from a Windows XP system but the samples came from a Windows 7 system which is why the path shown below is different than the one I mentioned.
Examine the IDX File
The storage location of the 2371d6c6-2a6da032.idx and 2371d6c6-2a6da032 files indicates they are temporary files. The files in the cache folder with an extension of IDX are Java applet cache indexes. The index tracks information about the temporary files in the cache folder such as: the file’s name, the URL the file came from, the IP address of the computer the file came from, the last modified date of the file, and what appears to be the date when the file was downloaded. The picture below shows this information stored in an index file I grabbed from my Java cache folder. Note: Skillport is a legitimate website so the information below is not malicious.

The temporary files’ indexes can be viewed using the View button in the Java Control Panel (the button is to the right of the Settings button in the picture of the Java Control Panel above). I verified this by comparing the contents of an IDX file on my computer with the information in the Java Control Panel viewer. The picture below highlights the relationship between the values in the IDX file and the viewer.
As can be seen in the picture above, the Java Cache Viewer doesn’t provide all of the information that’s available in the index file. For example, the last modified date only shows the date while the index file also contains the time. Another example is the Java Cache Viewer not showing the IP address of the computer where the file came from even though the address is present in the index.
The 2371d6c6-2a6da032.idx file is the index for the 2371d6c6-2a6da032 file and this index provides valuable information about where the file came from. The picture below shows the information in the 2371d6c6-2a6da032 file’s index (the file was viewed using the vi text editor in REMnux).
The index file contains some valuable information about the JAR file and some of that information is listed below.
* Filename: 7909df6ac8d.jar
* URL where file came from: hxxp://partersl(dot)com/new/2fcf33c783
* File size: 23996
* File's last modified date: Wed Oct 27, 2010 14:44:55 GMT
* IP address of the computer where file came from: 91.213.217.35
* Web software involved: Apache
* Deploy request content type: application/ java archive
* Date when file was downloaded: Sun Oct 31, 2010 21:49:25 GMT
Side note: I conducted a few tests on the download date. I'm not sure if any actions alter this date, but during my testing the date didn’t change when the same file was accessed on the server at a later time.
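Since the IDX format isn't formally documented, one low-tech way to pull the URL and IP address out of an index file is to treat it as opaque bytes and extract printable strings rather than parse specific field offsets. A Python sketch; the minimum string length and the URL/IP filters are arbitrary choices for this illustration.

```python
import re

def idx_strings(data, min_len=6):
    """Return ASCII strings of at least min_len bytes found in data."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, data)]

def idx_indicators(data):
    """Filter the strings down to ones that look like URLs or IPv4 addresses."""
    return [s for s in idx_strings(data)
            if s.startswith("http") or re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", s)]
```

Running idx_indicators over the raw bytes of an .idx file would surface entries like the source URL and the 91.213.217.35 address noted above, which can then be researched further.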
Examine the JAR File
The 2371d6c6-2a6da032.idx file provided some interesting information about the 2371d6c6-2a6da032 file. Further research could be done on some of this information such as the IP address or domain names; the Malware Analyst’s Cookbook has a few recipes for this type of research. The index file indicated that the 2371d6c6-2a6da032 file was an application/java archive, but there wasn't any other information about the file's purpose, so a closer examination was needed.
JAR files are packaged with the ZIP file format, which means JAR files can be used for ZIP tasks such as archiving files; the 2371d6c6-2a6da032 JAR file is therefore an archive. I wanted to become more familiar with the JD-GUI program so I downloaded it to REMnux in order to view the contents of the JAR file. The picture below shows the contents of the 2371d6c6-2a6da032 file (I added the .jar extension when I had an issue with JD-GUI not seeing the file).
A JAR contains a manifest which “is a special file that can contain information about the files packaged in a JAR file”. The information contained in the manifest enables the JAR file to be used for multiple purposes. This also means the manifest can help determine what the purpose of an unknown JAR is. The picture below shows 2371d6c6-2a6da032 file’s manifest.
The 2371d6c6-2a6da032.idx file indicated this JAR was an application, and if an application is bundled in a JAR file then there needs to be a way to indicate which class file within the JAR is the application’s entry point. The entry point is identified using the Main-Class header, and the picture above shows the main class that is the starting point for the bundled application. This information established the starting point for the examination of the Java code in the eight class files.
Extract Java Source from the JAR File
The SANS Internet Storm Center posted an entry -Java Exploits- and this post discussed how Java exploits can be analyzed. To examine the class files a Java decompiler is required in order to extract the Java source from the files. I wanted to become familiar with the jad decompiler in REMnux so I attempted to extract the source code using the method outlined in the post Java Exploits.
The class files were unzipped from the JAR.
Jad was then used to extract the source code, but the following error was encountered.
I looked into the error and came across a forum that stated jad has issues handling Java 5.0. To get around the issue I used JD-GUI to extract the source code.
Examine Java Source Code
This post is called (Almost) Cooked Up Some Java because I wasn't successful in examining the Java source code. I wanted to complete recipe 6-2 in the Malware Analyst’s Cookbook to see if the exploit could be identified in the Java code. This would have completed the examination of the 2371d6c6-2a6da032.idx and 2371d6c6-2a6da032 files. However, the exploit wasn't identified because of an error running the source code. I received the error when running the code through jsunpack-n and SpiderMonkey (REMnux has both programs). The screenshots of the errors are shown below.


The error referenced a missing semicolon, but a review of the code showed there wasn't a semicolon missing. I even ran the Java source code through Wepawet to see if the result would be different, but the site doesn't show whether an error occurred similar to jsunpack-n. I reached out for help on this issue and someone helped identify the lines in the code causing the errors. The lines had string variables with values containing numbers. When the numbers were removed, the error disappeared. However, removing the numbers wasn’t a solution to the error, so this meant I couldn’t examine the Java source code.
******** Update ********
A reader responded and pointed out the errors were due to the programs I was using. SpiderMonkey and jsunpack-n are used to analyze JavaScript, not Java code. Thank you again to the reader who took the time to contact me.
I was hoping someone would see what I was doing wrong because I still wanted to know how to examine the Java code in order to locate the exploit and its payload. If there are any more updates to this post in the future, I'll put them at the end of the post after the summary.
******** Update ********
At this point I thought the next step could be to conduct a few searches using keywords from the JAR file. I performed a few quick searches using different combinations of the names of the class files in the JAR file. I wasn’t able to find the same 2371d6c6-2a6da032 file but I found other files that had similar class file names. A few of the search hits are listed below.
* Oct 7, 2010: JAR file was run through ThreatExpert and there were no detections
* Oct 29, 2010: JAR file was run through ThreatExpert and there were no detections
* Oct 31, 2010: 2371d6c6-2a6da032 file being examined was downloaded
* Dec 06, 2010: JAR file was run through ThreatExpert and there was one detection
* Dec 23, 2010: Microsoft Malware Protection write-up on TrojanDownloader:Java/Rexec.C
* Feb 03, 2011: 2371d6c6-2a6da032 file being examined was run through ThreatExpert and there were numerous detections
Summary
The Java cache folder is one location on a system where there could be artifacts of a Java exploit. The folder’s location can be changed, but the default location on Windows 7 is \\AppData\LocalLow\Sun\Java\Deployment\cache\6.0\6\ while on Windows XP it is \\Application Data\Sun\Java\Deployment\cache\6.0.
Each temporary file downloaded into the Java cache folder results in two files being present. One file will be the actual temporary file while the second file will be the Java applet cache index for the temporary file. The index tracks information about the temporary file such as: the file’s name, the URL the file came from, and the file’s last modified date. The temp file and its index can provide valuable information about the file’s purpose and where it came from.
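The two-file pairing can be checked with a short script. A sketch, assuming the cache folder has already been exported from the image to a local directory:

```python
import os

def pair_cache_entries(cache_dir):
    """Pair each temp file in a Java cache folder with its .idx index."""
    names = set(os.listdir(cache_dir))
    return sorted((name[:-4], name)
                  for name in names
                  if name.endswith(".idx") and name[:-4] in names)
```

Each pair returned is a (temporary file, index) tuple such as (2371d6c6-2a6da032, 2371d6c6-2a6da032.idx), ready for the index to be reviewed for the source URL and IP address.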
Now back to my examination. I was unable to analyze the Java code in the JAR file and the few quick Google searches I performed didn’t provide a good hit (around the time of 10/31/2010) to confirm my suspicions about the file. However, I was able to quickly confirm my suspicions using the information from the index file. A quick Google search of the IP address 91.213.217.35 resulted in a hit for the Malc0de database. The Malc0de database entry is shown below.
Notice the first entry was on 11/01/2010 which was one day after the JAR file was downloaded to the system. The database entry contained the IP address and domain that was tracked in the 2371d6c6-2a6da032.idx file.
For anyone interested here is the VirusTotal report about the 2371d6c6-2a6da032 file. The report analyzed the file three months after the file was downloaded to the system I was looking at.
Thursday, January 27, 2011
Posted by
Corey Harrell
Wine is a program that lets Windows software run on other operating systems. This means Wine can be used to run Windows only forensic or malware analysis tools on the Sift workstation and REMnux. It’s so easy to get Wine up and running I wasn’t even sure if a blog post was needed. However, it never hurts to be informed. Here's a quick post on installing Wine and running Windows tools on Sift and REMnux.
Install Wine
The Sift v2 workstation and REMnux v2 both were built using Ubuntu Linux. The Wine website shows the different options for installing Wine on Ubuntu including using repositories, the GUI, or the command line. All of these options require the Sift and REMnux to have Internet access. I used the command line option since it only involved running the following commands:
* sudo add-apt-repository ppa:ubuntu-wine/ppa
* sudo apt-get update
* sudo apt-get install wine1.3
- Enter Yes to proceed with the installation
That’s right, just three commands to install Wine. The next few pictures show Wine being installed on the Sift workstation.
Running Windows Programs on the Sift and REMnux
Wine can be used to run standalone Windows programs or programs that require an installation process. I wanted Wine so I could run a few standalone Windows programs so this post won’t cover installing a program in Wine (the Wine website has information on this topic). To run a standalone Windows program the program needs to be launched with Wine. Most of the programs I’ve tested run without any issues but a couple programs required some tinkering. The pictures below show Windows programs running on the Sift and REMnux.
First up is Nirsoft’s IEHistoryView running on Sift.
Next is McAfee’s BinText running on REMnux.
Here is PEID running on REMnux.
As I mentioned before, not all of the Windows programs will run without any issues. For example, Digital Detective’s Dcode program fails to run because of a missing dll. This is shown below with the missing dll highlighted in the red box.
A quick search on a Windows system locates the msvbvm60.dll in the Windows\System32 folder (this search was done on a Windows XP system). To fix the missing dll error, just copy the msvbvm60.dll from a Windows system to Wine’s Windows\System32 folder as shown below.
Now here is the picture of Dcode running on the Sift. Some messages appear while Dcode runs so testing has to be done to make sure the program still converts all of the dates properly.
REMnux and Sift are great distributions since they come preconfigured with some of the tools I use. My main platform is Windows so REMnux and Sift save me a lot of time because I don’t have to setup my own Linux environments. At times I find myself switching between Windows and Linux to run certain tools. Wine gives me the option of bringing a few Windows tools over to the Linux so I won’t have to switch between the two operating systems as much.
Sunday, January 23, 2011
Posted by
Corey Harrell
I attended a lot of training as a communications technician in the Marines: formal training by outside parties, organized training by my unit, and formal education on my own to hone my troubleshooting skills. The goal of all that training was to prepare me to perform my job regardless of the situation I might face. Even though I left the Marines, I carried the same mentality over to my career in information security, especially digital forensics. I use a mixture of formal education, paid training, and a lot of self training to ensure I'm capable of performing my job regardless of the situation I might encounter. However, outside of forensic challenges or forensic datasets, I never had an organized way to approach self training until I wanted to learn about incident response investigations. This post explains the approach I've been using but haven't documented until now.
Forensicator readiness is the method I've been using to help prepare myself to investigate any situation regardless of the circumstances. If someone said "I need you to help figure out what caused this incident," I wanted to be able to hit the ground running. This is better than having to reply with "I'd like to help out, but first I have to attend a training which by the way costs a few thousand dollars." I'm sharing my approach because I think it might be useful to others. Experienced analysts/examiners can use it to learn how to investigate new types of cases, and students can use it to prepare for the common cases they might face. Forensicator readiness consists of the following six steps:
* Pick a Scenario
* Establish the Scenario’s Scope
* Collect Digital Information
* Examine Digital Information
* Scenario Simulation
* Identify Areas for Improvement
Ever since I read the Alexiou Principle (as described by Chris Pogue here and here) I've been using it in my cases. The principle has been helpful in planning out the investigation and keeping the investigation on track. I thought if it works for actual cases then it should work for simulations. Well, the principle does work in simulations so I use it in the forensicator readiness steps.
Pick a Scenario
There is always something to learn in digital forensics, whether it’s a student studying the field in school or a forensicator who has been in the field for years. It could be understanding how to investigate different types of cases, how to examine new data, or how to extract data from certain devices. The first step is to pick a scenario built around the item or situation to be learned. The scenario could be based on the common types of cases processed in your organization or on a potential situation someone might need to investigate. A few scenario examples: data leakage through a USB device, sexual harassment involving company email, or a malware infected server.
Next, determine whether it’s possible to set up test systems to simulate the scenario. Research is completed during the collection, examination, and simulation steps, and systems will be needed to run a few tests on. For example, I’d like to learn how to investigate a hacked database server. Unfortunately, I can’t use this scenario since I can’t simulate a SQL injection attack against a test database server. As a result, my focus is on the scenarios I can simulate in a test environment, such as a malware infected system.
Having a scenario by itself isn’t enough because an end goal hasn’t been established: what are you trying to accomplish? This is where the first question of the Alexiou Principle comes into play: “What question are you trying to answer?” Identify a few potential questions to help guide the goal of the investigation. For example, the two potential questions I've been using for a suspected malware infected system are: is the system infected, and how did the system become infected? The two questions identify what I’m trying to accomplish and guide my research in investigating a malware infected system.
Establish the Scenario’s Scope
The selected scenario has an end goal and can be replicated in a test environment. The next step is to determine the scope of the testing environment. Is the test environment going to be one computer or multiple computers? What operating systems are going to be on the computers? Are there going to be any networking devices such as routers, switches, or firewalls? Another consideration when determining the scope of the testing environment is what resources are available. Is there the necessary hardware and software to build the test environment? I'd like to be able to simulate a test environment of over 20 machines for my malware scenario but I can’t pull it off due to the lack of resources. I had to settle on just a few test systems and I’m still able to simulate my scenario. The above questions are only some of the things that have to be considered when scoping the test environment.
Setting up the test environment during the next few steps may take some time. Despite the time required, one of the benefits of building your own environment is learning about the technology as you install and configure it. For example, if your scenario requires a web server running IIS (Internet Information Services), then setting up IIS will provide a better understanding of its default settings and how it can be configured.
Collect Digital Information
At this point, the scenario has been identified, goals have been established, and the testing environment has been scoped. The next step is to collect the digital information. The second question in the Alexiou Principle asks, “What data do you need to answer that question?” The data sources in the test environment need to be evaluated to determine which ones can help answer your question(s). The data sources could include hard drives, memory, logs, or captured network traffic.
Once the data sources of interest are identified, it’s time to research how these sources should be collected and what tools can be used. The amount of research required for this step depends on the experience of the person conducting the exercise. In some instances, a person will already have a procedure in place for collecting the data sources and will know which tools to use, so additional research may not be necessary. On the other hand, the person may be facing new data sources, so there won’t be a collection procedure and the person won’t know which tools to use. For example, in one of my scenarios I wanted to collect a hard drive and volatile data. I had experience with hard drives, but collecting volatile data was new. I conducted research with the intention of modifying my collection procedures to include volatile data. The research involved reviewing RFC 3227, forensic books, blogs, and forums to determine what procedural steps were required to collect volatile data and what tools could be used to acquire it from a system.
The new procedural steps and tools will need to be evaluated to determine if they work as intended. This evaluation will require a small test environment. Continuing with my volatile data example, I tested the procedural steps I researched and my list of tools to see which one best met my needs. I ran a few tests against Windows XP virtual machines by acquiring the volatile data from them. This not only allowed me to see if the steps were correct and what tool worked best but it also showed me what changes I had to make to the collection steps.
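To make the collection idea concrete, here is a rough sketch of what a collection run might look like, ordered most-volatile-first per RFC 3227. The commented tool names are only examples of the kind of commands such a script would run, and in practice OUT would point at removable media rather than the local drive:

```shell
# Sketch of a volatile data collection run, ordered per RFC 3227
# (most volatile first). OUT would normally be removable media.
OUT="./collection"
mkdir -p "$OUT"

date > "$OUT/start_time.txt"              # document when collection began
# memory imager of your choice > "$OUT/memory.raw"   # physical memory first
# netstat -ano > "$OUT/netstat.txt"       # established network connections
# tasklist /v  > "$OUT/tasklist.txt"      # running processes
date > "$OUT/end_time.txt"                # document when collection ended
```

Timestamping the start and end of the run is part of documenting the collection, which matters as much in a simulation as it does in a real case.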
Examine Digital Information
At this point it's time to identify and extract the data required to answer the scenario questions. The Alexiou Principle's second question, “What data do you need to answer that question?”, also helps here by further identifying the data needed. The data sources have already been identified, so the next part is to identify what information in those sources can answer the questions. For example, in my scenario one of my data sources was volatile data, so I had to figure out what information I needed from it: the running processes, established network connections, and loaded drivers. Once the exact information in the data sources is identified, the third question of the Alexiou Principle comes into play: “How do you extract the data?” Using the volatile data example, this means determining how to extract the established network connections, loaded drivers, and running processes from the data.
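As one illustration of extraction, the three pieces of information named above can be pulled out of a memory image with a memory analysis framework such as Volatility. This is only an example tool choice, and the image name and profile below are placeholders:

```shell
# Extracting processes, connections, and drivers from a memory image with
# Volatility 2 (example tool; mem.raw and the profile are placeholders).
MEM="mem.raw"

if command -v vol.py >/dev/null 2>&1 && [ -f "$MEM" ]; then
    vol.py -f "$MEM" --profile=WinXPSP3x86 pslist    # running processes
    vol.py -f "$MEM" --profile=WinXPSP3x86 connscan  # network connections
    vol.py -f "$MEM" --profile=WinXPSP3x86 modules   # loaded drivers
else
    echo "Volatility or the image is missing; commands shown for illustration"
fi
```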
As might be expected, research has to be conducted to decide what information in the data sources is needed to answer the questions and how that information can be extracted. The same types of references used in the collection step can be used such as blogs, forums, and forensic books.
Similar to the collection step, the new examination steps and tools need to be evaluated to see if they work as intended. The evaluation can use available forensic datasets or a small test environment. Forensic datasets can be used for testing different types of data sources, and this is a faster option than setting up a test environment. The datasets available for testing the examination steps and tools for the volatile data example include the NIST CFReDS Project, Forensic Educational Datasets, Honeynet Challenges, and the memory images on the Forensic Incident Response blog. If there isn't an available dataset, then a small test environment has to be set up. The fourth question of the Alexiou Principle is “What does the data tell you?” This question should be kept in mind during the evaluation because the purpose of the examination steps and tools is to extract the information needed to answer the scenario questions. If the information doesn't help answer the question, then additional research may have to be performed so the examination steps and tools can be adjusted.
Scenario Simulation
This step is where all of the hard work of researching and evaluating pays off. The scenario simulation is when the test environment is created and the scenario is simulated in that environment. The first scenario I've been working with is a computer infected with malware, and one of the ways I simulated this scenario was by visiting known malicious websites with a computer running vulnerable software. After the scenario is simulated then the next step is to treat the test environment like a real investigation. The data sources of interest get collected, and information is extracted from those sources to answer the scenario's questions.
Identify Areas for Improvement
Now that the dust has settled from investigating the scenario in a test environment, it's time to reflect on what was done. The purpose of this step is to see if there is anything to improve upon. A few things to consider: did the tools perform as expected, were the procedures correct, what didn't work, and what can be done better? Something else to keep in mind during this reflection is whether additional research should be performed on any artifacts in order to better understand them. During my simulation, I didn't have a good understanding of the attack vector artifacts, such as those left by exploits. I spent some time researching a few of these artifacts so I'd have a better understanding the next time I come across something similar.
Summary
There isn’t a set timetable for completing the forensicator readiness steps. It could take days, weeks, or months, depending on the scenario and how deep an understanding someone wants. People prepare for things differently, and forensicator readiness is no different. If the steps accomplish the end goal of preparing someone to investigate an incident regardless of the circumstances - like they did for me - then the process has served its purpose.
So when someone said to me, "I need your help to figure out what caused this infection," I was ready to rock and roll. I’ve been successful numerous times locating malware on systems and identifying the attack vector that put it there. A few of the vectors were a malicious email attachment, a drive-by download using a malicious PDF, and third party content pointing to a website hosting Windows help center and Java exploits. I don't think my success is a run of luck; it's due to my preparation for a situation I thought I would face sooner or later. It just so happened to be sooner than I was expecting.