Tuesday, September 18, 2012
Posted by
Corey Harrell
Malware forensics can answer numerous questions. Is there malware on the system? Where is it? How long has it been there, and how did it get there in the first place? Despite all the questions malware forensics can answer, there are some it can't: What is the malware's purpose? What is its functionality? Is it capable of stealing data? Answering these questions requires malware analysis. Practical Malware Analysis defines malware analysis as "the art of dissecting malware to understand how it works, how to identify it, and how to defeat or eliminate it". I've been working on improving my malware analysis skills while thinking about the different ways organizations can benefit from the information gained by analyzing malware. One such benefit is empowering the help desk to combat malware. Every day, help desks are trying to find malware on computers using antivirus products that lack the signatures to detect it. Analyzing malware can provide enough information to build a custom antivirus signature, giving the help desk a capability to find the malware until the commercial antivirus signatures catch up. In this post I'm going through the whole process: from analyzing malware to creating a custom antivirus signature to using that signature in the portable apps ClamAV version.
The work presented in this post was originally put together for my hands on lab for a Tr3Secure meet-up. Also, the sample used was obtained from Contagio’s post APT Activity Monitor / Keylogger.
Disclaimer:
I'm not a malware analyst. I'm an information security and digital forensic practitioner who is working on improving my malware analysis skills. For anyone looking to become more knowledgeable on the subject, I highly recommend the books Practical Malware Analysis and the Malware Analyst's Cookbook.
Static Analysis
Static Analysis is when the malware is examined without actually running it. There are different static analysis steps to extract information; the two I’m discussing are: reviewing strings and reviewing the import table.
Reviewing Strings
Strings in malware can provide clues about the program, and I find them helpful since they make me more aware of the malware's potential functionality. However, conclusions cannot be drawn solely by looking at the strings. I usually first run HexDive on a sample to filter for the strings typically associated with malware, followed by Strings to make sure I see everything.
Below is the HexDive command running against AdobeInfo.exe and a snippet from its output.
C:\> Hdive.exe C:\Samples\AdobeInfo.exe
CreateFileA
SetFileAttributesA
CreateDirectoryA
GetCurrentDirectoryA
GetWindowTextA
GetForegroundWindow
GetAsyncKeyState
GetStartupInfoA
[Up]
[Num Lock]
[Down]
[Right]
[UP]
[Left]
[PageDown]
[End]
[Del]
[PageUp]
[Home]
[Insert]
[Scroll Lock]
[Print Screen]
[WIN]
[CTRL]
[TAB]
[F12]
[F11]
There were numerous Windows API function names in the strings, and looking the functions up in Practical Malware Analysis's Appendix A (commonly encountered Windows functions) provides some clues. The following are three of those function names and why they may be relevant:
- CreateFileA: creates new or opens existing file
- GetForegroundWindow: returns a handle to a window currently in the foreground of the desktop. Function is commonly used by keyloggers to determine what window the user is entering keystrokes in
- GetAsyncKeyState: used to determine whether a particular key is being pressed. Function is sometimes used to implement a keylogger
The other interesting strings were the names of keyboard keys such as [Down] and [Del]. The combination of the API names and keyboard key names indicates the malware could have some keylogging functionality.
Below is the Strings command running against AdobeInfo.exe and a snippet from its output.
C:\>Strings.exe C:\Samples\AdobeInfo.exe
---- %04d%02d%02d %02d:%02d:%02d ----------------
\UpdaterInfo.dat
\mssvr
The Active Windows Title: %s
In addition to the strings extracted with HexDive, Strings revealed some other text whose meaning didn't become clear until later in the examination.
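For readers curious what tools like Strings are doing under the hood, the ASCII extraction can be sketched in a few lines. This is my own illustration of the general technique, not code from any of the tools mentioned, and the sample buffer is fabricated for the example:

```python
import re

def extract_strings(data: bytes, min_len: int = 4):
    """Return printable ASCII runs of at least min_len characters,
    roughly what the Strings tool reports for ASCII text."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

# Fake buffer standing in for file contents; a real run would use
# open(r"C:\Samples\AdobeInfo.exe", "rb").read()
data = b"\x00\x01GetAsyncKeyState\x00\\mssvr\x00\xff[CTRL]\x02ab\x00"
print(extract_strings(data))   # ['GetAsyncKeyState', '\\mssvr', '[CTRL]']
```

HexDive goes a step further than this by filtering the extracted strings against a dictionary of terms typically associated with malware, which is why its output is so much easier to review.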
Reviewing the Import Table
"Imports are functions used by one program that are actually stored in a different program, such as code libraries that contain functionality common to many programs". Looking at the functions imported by a program provides better information about the malware's functionality than relying solely on its strings. I used CFF Explorer to review the import table as shown in the screenshot below.
The import table showed three DLLs: kernel32.dll, user32.dll, and msvcrt.dll. The functions imported from those DLLs matched the Windows API function names I found earlier in the strings. The functions give the sample the ability to create and open files and directories, copy text from a window's title bar, retrieve a handle to the active window, retrieve a handle to a loaded module, and monitor for when a key is pressed. All of which strengthens the indication that the sample is in fact a keylogger.
Dynamic Analysis
The opposite of static analysis is dynamic analysis, which is examining the malware as it runs on a system. There are different dynamic analysis steps and tools to use, but I'm only going to discuss one: monitoring program execution with Capture-Bat. Below is the output from the Capture-Bat log after AdobeInfo.exe executed on the system (the entries related to my monitoring tools were removed).
"2/8/2012 15:03:27.653","process","created","C:\WINDOWS\explorer.exe","C:\Samples\AdobeInfo.exe"
"2/8/2012 15:03:40.403","file","Write","C:\Samples\AdobeInfo.exe","C:\Samples\mssvr\UpdaterInfo.dat"
"2/8/2012 15:03:40.481","file","Write","C:\Samples\AdobeInfo.exe","C:\Samples\mssvr\UpdaterInfo.dat"
"2/8/2012 15:03:40.497","file","Write","C:\Samples\AdobeInfo.exe","C:\Samples\mssvr\UpdaterInfo.dat"
"2/8/2012 15:04:33.419","file","Write","C:\Samples\AdobeInfo.exe","C:\Samples\mssvr\UpdaterInfo.dat"
"2/8/2012 15:04:33.419","file","Write","C:\Samples\AdobeInfo.exe","C:\Samples\mssvr\UpdaterInfo.dat"
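The Capture-Bat log is plain quoted CSV ("time","type","action","source","target"), so pulling out which files a given process wrote can be scripted. A minimal sketch follows; the helper is my own illustration, while the log lines are taken from the run above:

```python
import csv
import io

def files_written_by(log_text: str, process: str):
    """Return the distinct paths a process wrote, per Capture-Bat's
    "time","type","action","source","target" log layout."""
    written = []
    for row in csv.reader(io.StringIO(log_text)):
        # Keep only file-write events whose source process matches.
        if len(row) == 5 and row[1:3] == ["file", "Write"] and row[3].endswith(process):
            if row[4] not in written:
                written.append(row[4])
    return written

log = ('"2/8/2012 15:03:40.403","file","Write","C:\\Samples\\AdobeInfo.exe","C:\\Samples\\mssvr\\UpdaterInfo.dat"\n'
       '"2/8/2012 15:04:33.419","file","Write","C:\\Samples\\AdobeInfo.exe","C:\\Samples\\mssvr\\UpdaterInfo.dat"\n')
print(files_written_by(log, "AdobeInfo.exe"))   # ['C:\\Samples\\mssvr\\UpdaterInfo.dat']
```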
Capture-Bat revealed that when AdobeInfo.exe runs it creates a folder named mssvr in the same folder where the executable is located, as well as a file named UpdaterInfo.dat inside that folder. Remember the suspicious strings from before? Now the strings \UpdaterInfo.dat and \mssvr make a lot more sense. A closer look at the UpdaterInfo.dat file showed it was a text file containing the captured data, as can be seen in the partial output below.
---- 20120802 15:07:38 ----------------
15:07:40 The Active Windows Title: Process Explorer - Sysinternals: www.sysinternals.com [XP-SP2\Administrator]
15:07:33 The Active Windows Title: C:\WINDOWS\system32\cmd.exe
15:07:39 The Active Windows Title: Process Explorer - Sysinternals: www.sysinternals.com [XP-SP2\Administrator]
c[CTRL]y
15:07:41 The Active Windows Title: Process Monitor - Sysinternals: www.sysinternals.com
15:07:44 The Active Windows Title: Process Monitor
15:07:38 The Active Windows Title: Process Monitor
15:07:38 The Active Windows Title: API Monitor v2 (Alpha-r12) 32-bit (Administrator)
15:07:44 The Active Windows Title: Process Monitor
15:07:45 The Active Windows Title: API Monitor v2 (Alpha-r12) 32-bit (Administrator)
15:07:35 The Active Windows Title: ApateDNS
Everything in the analysis so far identified the AdobeInfo.exe program as a keylogger that stores its captured data in a log file named UpdaterInfo.dat. All that was left was to confirm the functionality. I used Windows to create a password-protected zip file and then unzipped it. A quick look at the UpdaterInfo.dat log file afterwards confirmed the functionality (note: Windows made me enter the password twice).
16:07:30 The Active Windows Title: Program Manager
16:07:38 The Active Windows Title: Add to Archive
[CTRL]
16:07:58 The Active Windows Title: Program Manager
supersecretsupersecret
16:07:58 The Active Windows Title: Compressing
16:07:59 The Active Windows Title: Program Manager
16:07:21 The Active Windows Title: Extraction Wizard
16:07:30 The Active Windows Title: Password needed
16:07:37 The Active Windows Title: Extraction Wizard
supersecret
16:07:40 The Active Windows Title: Program Manager
16:07:42 The Active Windows Title: Windows Explorer
Creating Custom AntiVirus Signature
I'm not going into detail about how to create custom signatures for ClamAV, but I will point to the great references I found on the subject. The Malware Analyst's Cookbook talks about how to leverage ClamAV for malware analysis in the recipes: Recipe 3-1 (examining existing ClamAV signatures), Recipe 3-2 (creating a custom ClamAV database), and Recipe 3-3 (converting ClamAV signatures to Yara). Another resource is Alexander Hanel's post An Intro to Creating Anti-Virus Signatures. The ClamAV website has information on the subject as well, including the slide deck in PDF format for the webcast Writing ClamAV Signatures.
I spent some time creating different signatures: from a custom hash database to the extended signature format to a logical signature. To keep the post shorter I'm only going to cover how I created the logical signature. A ClamAV logical signature is based on hex strings found inside a file, and logical operators can be used to combine the hex strings in different ways. The format for the signature is below:
SignatureName;TargetDescriptionBlock;LogicalExpression;Sig0;Sig1;Sig2;
The SignatureName is self-explanatory; the TargetDescriptionBlock is the type of file the signature applies to (0 means any file); the LogicalExpression is how the subsignatures are combined using logical operators; and the Sig# entries are the actual hex strings. The completed signature is placed into a file with an .ldb extension.
Reviewing the strings in AdobeInfo.exe provided some good candidates for a signature; specifically \UpdaterInfo.dat, \mssvr, and [CTRL]. I used the portable apps ClamAV's sigtool to determine the hex encoding of those strings. I ran the following command for each string:
C:\>echo \UpdaterInfo.dat | App\clamwin\bin\sigtool --hex-dump
The end result provided me with the hex for each string.
\UpdaterInfo.dat
5c55706461746572496e666f2e646174
\mssvr
5c6d73737672
[CTRL]
5b4354524c5d
I then combined the strings into a logical signature as shown next.
AdobeInfo.exe;Target:0;0&1&2;5c55706461746572496e666f2e646174;5c6d73737672;5b4354524c5d
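Before deploying a signature like this one, the hex encoding and the 0&1&2 AND logic can be sanity-checked with a short script. The helper functions below are my own illustration, not ClamAV code; the signature strings come from the analysis above:

```python
def to_hex(s: str) -> str:
    """Mirror what sigtool --hex-dump produces: each byte as two lowercase hex digits."""
    return s.encode("latin-1").hex()

def build_ldb(name: str, strings) -> str:
    """Assemble a logical signature requiring every string (0&1&2...)."""
    expr = "&".join(str(i) for i in range(len(strings)))
    return ";".join([name, "Target:0", expr] + [to_hex(s) for s in strings])

def matches(data: bytes, strings) -> bool:
    """Evaluate the AND expression: every signature string must be present."""
    return all(s.encode("latin-1") in data for s in strings)

sigs = ["\\UpdaterInfo.dat", "\\mssvr", "[CTRL]"]
print(to_hex("\\mssvr"))                 # 5c6d73737672
print(build_ldb("AdobeInfo.exe", sigs))
```

build_ldb reproduces the signature line shown above, and matches mirrors how the logical expression only fires when every subsignature is present in the file.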
Finally, I ran the custom signature against the AdobeInfo.exe file and was successfully able to identify it. The command to run a custom scan from the command-line in portable ClamAV is:
App\clamwin\bin\clamscan -d adobe.ldb C:\Samples\
Empowering the Help Desk
I'm done, right? I was able to analyze the malware to determine its functionality, create a custom antivirus signature to detect it, and find a way to run the custom signature using the portable apps ClamAV version. I wouldn't be too quick to say my job is done though. Let's be honest: going to any help desk and telling the staff that from now on they have to use the command-line may not have the greatest chance of success. You need to provide options; one option most help desk staff will want is an antivirus program with a GUI similar to their commercial antivirus programs. ClamAV has a pretty nice graphical user interface that can be configured to use custom signatures, so we could leverage it.
The article How to create custom signatures for Immunet 3.0, powered by ClamAV explains how to write custom signatures and configure the Windows ClamAV version (Immunet) to use them. This is nice, but Immunet has to be installed on a computer. A cool option would be the ability to run scans from a removable drive, so when the help desk responds to a system all they need to do is plug in their thumb drive. This is where the portable apps ClamAV version comes into play since it can run scans from a thumb drive. I couldn't find any documentation about how to use custom signatures in the portable version, but after a little testing I figured it out. All that has to be done is to copy the custom signature to the same folder where ClamAV stores its signature files main.cvd and daily.cvd, which is the ClamWinPortable\Data\db folder. I copied my adobe.ldb custom signature to the Data\db folder and was able to locate the malware sample I had disguised as Notepad++.
Wednesday, August 22, 2012
Posted by
Corey Harrell
Knowing what programs ran on a system can answer numerous questions about what occurred. What was being used to communicate? What browsers are available to surf the web? What programs can create documents? Were any data spoliation programs run? Is the system infected? These are only a few of the questions that can be answered by looking at program execution. There are different artifacts showing program execution, one of which is the application compatibility cache. Mandiant's whitepaper Leveraging the Application Compatibility Cache in Forensic Investigations (blog post is here and paper is here) explains what the cache is in detail and why it's important to digital forensics. One important aspect of the cache is that it stores information about files such as names, sizes, and last modified times, all of which may be useful during a digital forensic examination. The application compatibility cache has provided me additional information I wouldn't have known about without it. As such I'm taking some time to write about this important new artifact.
I wanted to highlight the significance of the cache, but I didn't want to just regurgitate what Mandiant has already said. Instead I'm doing the DFIR equivalent of man versus machine. I'm no John Henry, but like him we are witnessing the impact modernization has on the way people do their jobs. One such instance is the way people try to determine if a system is infected with malware. A typical approach is to scan a system with antivirus software to determine if it is infected. There is a dependency on the technology (antivirus software) to do the work, and in essence the person is taken out of the process. Seems very similar to what John Henry witnessed with the steam-powered hammer replacing the human steel drivers. John Henry decided to demonstrate man's might by taking the steam-powered hammer head on in a race. I opted to do the same: to take on one of my most reliable antivirus scanners (Avast) in a head-on match to see who could first locate and confirm the presence of malware on a system. I didn't swing a hammer either. My tools of choice were RegRipper with the new appcompatcache plugin to parse the application compatibility cache, along with the Sleuthkit and Log2timeline to generate a timeline containing filesystem metadata. Maybe, just maybe, in some distant future in IT and security shops across the land, people will be singing songs about the race of the century: when Man took on the Antivirus Scanner.
The Challenge
The challenge was to find malware that an organization somewhere in the land is currently facing. Before worrying about what malware to use I first configured the test system. The system was a Windows XP fresh install with Service Pack 3. I only installed Adobe Reader version 9.3 and Java version 6 update 27. These applications were chosen to make it easier to infect the system through a drive-by. I wanted to use unknown malware as a way to level the playing field; I didn’t need nor want any advantages over the antivirus scanner. To find the malware I looked at the recently listed URLs on the Malware Domain List to find any capable of doing a drive-by. I found two potential URLs as shown below.
The first URL pointed to a Blackhole exploit pack. I entered the URL into Internet Explorer and after waiting for a little bit the landing page appeared as captured below.
I gave Blackhole some more time to infect the computer before I entered the second URL. That was when I saw the first indication the system was successfully infected with an unknown malware.
The race was now officially on. Whoever finds the malware and any other information about the malware first wins.
On Your Mark, Get Set
I mounted the system to my workstation using FTK Imager in order for tools to run against it. I downloaded and installed the latest Avast version followed by updating to the latest virus signature definitions. I configured Avast to scan the mounted image and all that was left was to click “Scan”. With my challenger all set I made sure I had the latest RegRipper Appcompatcache plugin. Next I fired up the command prompt and entered the following command:
rip.pl –p appcompatcache –r F:\Windows\System32\config\system > C:\appcompt.txt
The command is using RegRipper’s command-line version and says to run the appcompatcache plugin against the system registry hive in the mounted image’s config folder. To make it easier to review the output I redirected it to a text file.
My challenger is all set waiting at the starting line. I’m all set just waiting for one little word.
Go!
The Avast antivirus scan was started as I pressed enter to run the RegRipper’s appcompatcache plugin against the system registry hive.
0 minutes 45 seconds
I opened the text file containing the parsed application compatibility cache. One cool thing about the plugin is that Harlan highlights any executables in a temporary folder. In the past I quickly found malware by looking at any executables present in temp folders so I went immediately to the end of the output. I found the following suspicious files which I inspected closer.
Temp paths found:
C:\Documents and Settings\Administrator\Local Settings\Temp\gtbcheck.exe
C:\Documents and Settings\Administrator\Local Settings\Temp\install_flash_player_ax.exe
C:\Documents and Settings\Administrator\Local Settings\Temp\install_flashplayer11x32ax_gtbd_chrd_dn_aih[1].exe
C:\Documents and Settings\Administrator\Local Settings\Temp\gccheck.exe
C:\Documents and Settings\Administrator\Local Settings\Temporary Internet Files\Content.IE5\4967GLU3\install_flashplayer11x32ax_gtbd_chrd_dn_aih[1].exe
C:\Documents and Settings\Administrator\Local Settings\Temp\install_flashplayer11x32ax_gtbd_chrd_dn_aih[1].bat
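The highlighting the plugin performs, calling out executables that ran from temporary folders, is a simple heuristic that can be replicated against any list of paths. The filter below is my own approximation of the idea, not the plugin's code, using paths from the output above plus a benign one for contrast:

```python
def temp_executables(paths):
    """Flag executables whose path passes through a temp folder,
    a common staging area for drive-by payloads."""
    return [p for p in paths
            if p.lower().endswith((".exe", ".bat")) and "\\temp" in p.lower()]

paths = [
    r"C:\Documents and Settings\Administrator\Local Settings\Temp\gtbcheck.exe",
    r"C:\WINDOWS\system32\notepad.exe",
    r"C:\Documents and Settings\Administrator\Local Settings\Temp\gccheck.exe",
]
print(temp_executables(paths))   # the two Temp-folder executables
```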
3 minutes 4 seconds
My hopes of a quick win came crashing down when I found out the executables in the temporary folders were no longer present on the system. I went back to the beginning of the application compatibility cache’s output and started working my way through each entry one at a time. Avast was scanning the system at a fast pace because the image was so small.
5 minutes 10 seconds
Avast was still scanning the system but it still didn’t find the malware. That was good news for me because I found another suspicious entry in the application compatibility cache.
C:\Documents and Settings\Administrator\Local Settings\Application Data\armfukk.exe
ModTime: Tue Aug 21 20:34:04 2012 Z
UpdTime: Tue Aug 21 20:38:03 2012 Z
Size : 495616 bytes
The file path drew my attention to the program, and a check on the system showed it was still there. I quickly uploaded armfukk.exe to VirusTotal and stared at the Avast scan, waiting to see if it would flag the file before the VirusTotal scan completed.
VirusTotal delivered the verdict: 9 out of 42 antivirus scanners detected the armfukk.exe file as malware. Going head to head against Avast, I located a piece of malware in about 5 minutes while Avast was still scanning. As you probably expected, Avast still hadn't flagged any files as being malicious.
Avast was still running the race as it kept scanning the system. I continued my examination by turning to my next tool of choice; a timeline. A timeline would provide a wealth of information by showing the activity around the time the armfukk.exe file was created on the system. I ran the following Sleuthkit command to create a bodyfile containing the filesystem metadata:
fls.exe -m C: -r \\.\F: > C:\bodyfile
9 minutes 30 seconds
Avast was still chugging along scanning but it still didn’t flag any files. The bodyfile was finally created but I needed to convert it into a more readable format. I wanted the timeline in log2timeline’s csv format so I next ran the command:
log2timeline.pl -z local -f mactime -w timeline.csv C:\bodyfile
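As an aside, the intermediate bodyfile fls produces is a simple pipe-delimited format (the Sleuthkit's body file layout: MD5, name, inode, mode, UID, GID, size, then four epoch timestamps), which is why so many different tools can consume it. A sketch of reading one line; the parsing code and the sample entry are my own illustration:

```python
from datetime import datetime, timezone

FIELDS = ["md5", "name", "inode", "mode", "uid", "gid",
          "size", "atime", "mtime", "ctime", "crtime"]

def parse_bodyfile_line(line: str) -> dict:
    """Split one Sleuthkit body-file line into named fields and render
    the modification time the way a timeline tool would."""
    rec = dict(zip(FIELDS, line.rstrip("\n").split("|")))
    rec["mtime_utc"] = datetime.fromtimestamp(int(rec["mtime"]), tz=timezone.utc)
    return rec

# Hypothetical entry; epoch 1345581244 is 2012-08-21 20:34:04 UTC
line = ("0|C:/Documents and Settings/Administrator/Local Settings/"
        "Application Data/armfukk.exe|1234|r/rrwxrwxrwx|0|0|495616"
        "|1345581244|1345581244|1345581244|1345581244")
rec = parse_bodyfile_line(line)
print(rec["name"], rec["mtime_utc"])
```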
11 minutes 22 seconds
I imported the timeline into Excel and sorted the output. Just as I was getting ready to search on the “armfukk.exe” keyword Avast finally completed its scan with zero detections.
Shortly Thereafter
The race was over, but I wasn't basking in the glory of winning. I wanted to know how the malware actually infected the computer since I was so close to getting the answer. I searched on the armfukk.exe filename and found the entry showing when the file was created on the system.
There was activity showing Java was running and five seconds before the armfukk.exe file was created I came across an interesting file in the Java cache. VirusTotal gave me all the confirmation I needed.
Moral of the Story
As I said before, maybe, just maybe, in some distant future in IT and security shops across the land, people will be singing songs about the race of the century. Remembering the day when man demonstrated that people are needed in the process to locate malware on a system. Putting antivirus technology into perspective as a tool: a great tool to have available in the fight against malware. Remembering the day when man stood up and said "antivirus technology is not a replacement for having a process to respond to malware incidents, nor is it a replacement for the people who implement that process".
Wednesday, August 15, 2012
Posted by
Corey Harrell
In this Linkz edition I’m mentioning write-ups discussing tools. A range of items are covered from the registry to malware to jump lists to timelines to processes.
RegRipper Updates
Harlan has been pretty busy updating RegRipper. First RegRipper version 2.5 was released, then there were some changes to where RegRipper is hosted, along with some nice new plugins. Check out Harlan's posts for all the information. I wanted to touch on a few of the updates though. The updates to RegRipper included the ability to run directly against volume shadow copies and to parse big data. The significance of parsing big data is apparent in his new plugin that parses the shim cache, which is an awesome artifact (link up next). Another excellent addition to RegRipper is the shellbags plugin since it parses Windows 7 shell bags. Harlan's latest post Shellbags Analysis highlights the forensic significance of shell bags and why one may want to look at the information they contain. I think these are awesome updates; now one tool can be used to parse registry data where it used to take three separate tools. Not to be left out, the community has been submitting some plugins as well. To mention only a few, Hal Pomeranz provided some plugins to extract Putty and WinSCP information, and Elizabeth Schweinsberg added plugins to parse different Run keys. The latest RR plugin download has the plugins submitted by the community. Seriously, if you use RegRipper and haven't checked out any of these updates, then what are you waiting for?
Shim Cache
Mandiant's post Leveraging the Application Compatibility Cache in Forensic Investigations explained the forensic significance of the Windows Application Compatibility Database. Furthermore, Mandiant released the Shim Cache Parser script to parse the appcompatcache registry key in the System hive. The post, script, and information Mandiant released speak for themselves. Plain and simple, it rocks. So far the shim cache has been valuable for me on fraud and malware cases. Case in point: at times when working malware cases, programs execute on a system but the usual program execution artifacts (such as prefetch files) don't show it. I see this pretty frequently with downloaders, which are programs whose sole purpose is to download and execute additional malware. The usual program execution artifacts may not show the program running, but the shim cache has been a gold mine. Not only did it reflect the downloaders executing, but the information provided more context to the activity I saw in my timelines. What's even cooler than the shim cache? Well, there are now two different programs that can extract the information from the registry.
Searching Virus Total
Continuing with the malware topic, Didier Stevens released a virustotal-search program. The program searches for VirusTotal reports using a file's hash (MD5, SHA1, SHA256) and produces a csv file showing the results. One cool thing about the program is it only performs hash searches against VirusTotal, so a file never gets uploaded. I see numerous uses for this program since it accepts a file containing a list of hashes as input. One way I'm going to start using virustotal-search is for malware detection. One area I tend to look at for malware and exploits is the temporary folders in user profiles. It wouldn't take much to search those folders looking for any files with executable, Java archive, or PDF file signatures. Then, for each file found, perform a search on the file's hash to determine if VirusTotal detects it as malicious. Best of all, this entire process could be automated and run in the background as you perform your examination.
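The hash list virustotal-search takes as input can be built with nothing but the standard library. A sketch of the hashing step; the helper function is my own illustration, not part of Didier's tool:

```python
import hashlib

def file_hashes(data: bytes) -> dict:
    """Compute the three digest types virustotal-search accepts for one file."""
    return {name: hashlib.new(name, data).hexdigest()
            for name in ("md5", "sha1", "sha256")}

# In practice data would come from each suspicious file found under the
# user profiles' temp folders, e.g. open(path, "rb").read()
print(file_hashes(b"MZ...")["sha256"])
```

Writing one hash per line to a text file produces exactly the kind of input file the program expects, so the collection step and the lookup step can be chained together.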
Malware Strings
Rounding out my linkz about malware-related tools comes from the Hexacorn blog. Adam released HexDive version 0.3. In Adam's own words, the concept behind HexDive is to "extract a subset of all strings from a given file/sample in order to reduce time needed for finding 'juicy' stuff – meaning: any string that can be associated with a) malware b) any other category". Using HexDive makes reviewing strings so much easier. You can think of it as applying a filter across the strings to initially see only the relevant ones typically associated with malware. Afterwards, all of the strings can be viewed using something like BinText or Strings. It's a nice data reduction technique and is now my first go-to tool when looking at strings in a suspected malicious file.
Log2timeline Updates
Log2timeline has been updated a few times since I last spoke about it on the blog. The latest release is version 0.64. There have been quite a few updates ranging from small bug fixes to new input modules to changing the output of some modules. To see all the updates check out the changelog.
Most of the time when I see people reference log2timeline, they are creating timelines using either the default module lists (such as winxp) or log2timeline-sift. Everyone does things differently and there is nothing wrong with these approaches. Personally, neither approach exactly meets my needs. The majority of the systems I encounter have numerous user profiles stored on them, which means these profiles contain files with timestamps log2timeline extracts. Running a default module list (such as winxp) or log2timeline-sift against all the user profiles is an issue for me. Why should I include timeline data for all user accounts instead of the one or two user profiles of interest? Why include the internet history for 10 accounts when I only care about one user? Not only does it take additional time during timeline creation, but it results in a lot more data than I need, thus slowing down my analysis. I take a different approach; an approach that better meets my needs for all types of cases.
I narrow my focus down to specific user accounts. Either I confirm who the person of interest is, which tells me what user profiles to examine, or I check the user profile timestamps to determine which ones to focus on. What exactly does this have to do with log2timeline? The answer lies in the -e switch since it can exclude files or folders. The -e switch can be used to exclude all the user profiles I don't care about. There are 10 user profiles and I only care about 2 profiles, but I only want to run one log2timeline command. No problem if you use the -e switch. To illustrate, let's say I'm looking at the Internet Explorer history on a Windows 7 system with five user profiles: corey, sam, mike, sally b, and alice. I only need to see the browser history for the corey user account, but I don't want to run multiple log2timeline commands. This is where the -e switch comes into play, as shown below:
log2timeline.pl -z local -f iehistory -r -e Users\\sam,Users\\mike,"Users\\sally b",Users\\alice,"Users\\All Users" -w timeline.csv C:\
The exclusion switch eliminates anything containing the text used in the switch. I could have used sam instead of Users\\sam, but then I might exclude some important files, such as anything containing the text "sam". Using a file path limits the amount of data that is skipped while still eliminating any file or folder that falls within those user profiles (actually, anything under the C root directory containing the text Users\username). Notice the double backslashes (\\) and the quotes; these are needed for the command to work properly. What's the command's end result? The Internet history from every profile stored in the Users folder except for the sam, mike, sally b, alice, and All Users profiles is parsed. Most people don't run multiple log2timeline commands when generating timelines; they pick one of the default module lists. Taking the same scenario where I'm only interested in the corey user account on a Windows 7 box, check out the command below. It parses every Windows 7 artifact except for the excluded user profiles (note the command will still pick up the filesystem metadata for those accounts if the MFT is parsed as well).
log2timeline.pl -z local -f win7 -r -e Users\\sam,Users\\mike,"Users\\sally b",Users\\alice,"Users\\All Users" -w timeline.csv C:\
The end result is a timeline focused only on the user accounts of interest. Personally, I don't use the default module lists in log2timeline but I wanted to show different ways to use the -e switch.
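The -e switch's behavior, dropping any path that contains one of the exclusion substrings, can be approximated in a few lines of script. This is my own sketch of the matching described above, not log2timeline's code, using the same five-profile scenario:

```python
def apply_exclusions(paths, exclusions):
    """Drop any path containing one of the exclusion substrings, mirroring
    how log2timeline's -e switch matches anywhere in the path."""
    return [p for p in paths if not any(ex in p for ex in exclusions)]

paths = [
    r"C:\Users\corey\AppData\Local\Microsoft\Windows\History\index.dat",
    r"C:\Users\sam\AppData\Local\Microsoft\Windows\History\index.dat",
    r"C:\Users\sally b\NTUSER.DAT",
]
exclusions = [r"Users\sam", r"Users\mike", r"Users\sally b", r"Users\alice"]
print(apply_exclusions(paths, exclusions))   # only the corey profile survives
```

The substring match is also why anchoring the exclusion with the Users\ prefix is safer than a bare username: the longer substring is far less likely to accidentally match unrelated files.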
Time and Date Website
Daylight savings time does not start and end on the same days each year. One day I was searching the Internet for a website showing the exact dates of previous daylight savings time changes and came across the timeanddate.com website. The site has some cool features. There's a converter to change a date and time from one timezone to another. There's a timezone map showing where the various timezones are located. A portion of the site even explains what Daylight Savings Time is. The icing on the cake is the world clock, where you can select any timezone to get additional information, including the historical dates when Daylight Savings Time changes occurred. Here is the historical information for the Eastern Timezone for the years 2000 to 2009. This is a useful site when you need to make sure your timestamps properly take Daylight Savings Time into account.
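The same historical lookup can be done programmatically. A sketch using Python's zoneinfo database (this assumes Python 3.9+ with tz data available; the timezone name is an IANA identifier, and the helper is my own illustration):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

def utc_offset_hours(year, month, day, tz="America/New_York"):
    """Return the UTC offset at noon local time on a historical date,
    revealing whether Daylight Savings Time was in effect."""
    dt = datetime(year, month, day, 12, 0, tzinfo=ZoneInfo(tz))
    return dt.utcoffset() / timedelta(hours=1)

print(utc_offset_hours(2005, 7, 1))    # -4.0  (Eastern Daylight Time)
print(utc_offset_hours(2005, 1, 15))   # -5.0  (Eastern Standard Time)
```

Checking the offset on the exact date of an artifact's timestamp is a quick way to confirm whether an hour of skew in a timeline is a DST issue or something that needs explaining.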
Jump Lists
The day has finally arrived; over the past few months I've been seeing more Windows 7 systems than Windows XP. This means the artifacts available in the Windows 7 operating system are playing a greater role in my cases. One of those artifacts is jump lists, and Woanware released a new version of JumpLister, which parses them. The new version has the ability to parse out the DestList data and performs a lookup on the AppID.
Process, Process, Process
Despite all the awesome tools people release, they won't be much use if there isn't a process in place for using them. I could buy the best saws and hammers, but they would be worthless to me for building a house since I don't know the process one uses to build a house. I see digital forensics tools in the same light, and in hindsight maybe I should have put these links first. Lance is back blogging over at ForensicKB, and he posted a draft of the Forensic Process Lifecycle. The lifecycle covers the entire digital forensic process from the preparation steps to triage to imaging to analysis to report writing. I think this one is a gem, and it's great to see others outlining a digital forensic process to follow. If you live under a rock then this next link may be a surprise, but a few months back SANS released their Digital Forensics and Incident Response poster. The poster has two sides; one outlines various Windows artifacts while the other outlines the SANS process to find malware. The artifact side is great and makes a good reference hanging on the wall. However, I really liked seeing and reading about the SANS malware detection process since I've never had the opportunity to attend their courses or read their training materials. I highly recommend that anyone get a copy of the poster (paper and/or electronic versions). I've been slacking on updating my methodology page, but over the weekend I updated a few things. The most obvious change is adding links to my relevant blog posts. The other, maybe less obvious, change is that I moved around some examination steps so they are more efficient for malware cases. The steps reflect the fastest process I've found yet to not only find malware on a system but to determine how it got there. Just an FYI, the methodology is not limited to malware cases since I use the same process for fraud and acceptable use policy violations.
Sunday, August 12, 2012
Posted by
Corey Harrell
This past week I was vacationing with my family when my blog surpassed another milestone. It has been around for two years and counting. Around my blog’s anniversary I like to reflect back on the previous year and look ahead at the upcoming one. Last year I set out to write about various topics including: investigating security incidents, attack vector artifacts, and my methodology. It shouldn’t be much of a surprise then when you look at the topics in my most read posts from the past year:
1. Dual Purpose Volatile Data Collection Script
2. Finding the Initial Infection Vector
3. Ripping Volume Shadow Copies – Introduction
4. Malware Root Cause Analysis
5. More About Volume Shadow Copies
6. Ripping VSCs – Practitioner Method
Looking at the upcoming year there’s a professional change impacting a topic I’ve been discussing lately. I’m not talking about a job change but an additional responsibility in my current position. My casework will now include a steady dose of malware cases. I’ve been hunting malware for the past few years so now I get to do it on a regular basis as part of my day job. I won’t directly discuss any cases (malware, fraud, or anything else) that I do for my employer. However, I plan to share the techniques, tools, or processes I use. Malware is going to continue to be a topic I frequently discuss from multiple angles in the upcoming year.
Besides malware and any other InfoSec or DFIR topics that have my interest, there are a few research projects on my to-do list. First and foremost is to complete my finding fraudulent documents whitepaper and scripts. The second project is to expand on my current research about the impact virtual desktop infrastructure will have on digital forensics. There are a couple of other projects I’m working on and in time I’ll mention what those are. Just a heads up, at times I’m going to be focusing on these projects so expect some time periods when there isn’t much activity with the blog. As usual, my research will be shared either through my blog or another freely available resource to the DFIR community.
Again, thanks to everyone who links back to my blog and/or publicly discusses any of my write-ups. Each time I come across someone who says something I wrote helped them in some way, it makes all the time and work I put into the blog worth the effort. Without people forwarding along my posts, others might never become aware of information that could help them. For this I’m truly grateful. I couldn’t end a reflection post without thanking all the readers who stop by jIIr. Thank you, and you won’t be disappointed with what I’m gearing up to release over the next year.
Wednesday, July 11, 2012
Posted by
Corey Harrell
“You do intrusion and malware investigations, we do CP and fraud cases” is a phrase I saw Harlan mention a few times on his blog. To me the phrase is more about how different the casework is; about how different the techniques are for each type of case. Having worked both fraud and malware cases I prefer to focus on what each technique has to offer as opposed to their differences. How parts of one technique can be beneficial to different types of cases. How learning just a little bit about a different technique can pay big dividends by improving your knowledge, skills, and process that you can use on your current cases. To illustrate I wanted to contrast techniques for malware and fraud cases to show how they help one another.
Program Execution
Malware eventually has to run, and when it executes it may leave traces on a system. Understanding program execution and where these artifacts are located is a valuable technique for malware cases. Examining artifacts containing program execution information is a quick way to find suspicious programs. One such artifact is prefetch files. To show their significance I’m parsing them with Harlan’s updated pref.pl script. Typically I start examining prefetch files by first looking at what executables ran on a system and where they executed from. I saw the following suspicious programs in the output from the command “pref.exe -e -d Prefetch-Folder”.
TMP77E.EXE-02781D7C.pf Last run: Fri Mar 12 16:29:05 2010 (1)
\DEVICE\HARDDISKVOLUME1\DOCUME~1\ADMINI~1\LOCALS~1\TEMP\TMP77E.EXE
TMPDC7.EXE-2240CBB3.pf Last run: Fri Mar 12 16:29:07 2010 (1)
\DEVICE\HARDDISKVOLUME1\DOCUME~1\ADMINI~1\LOCALS~1\TEMP\TMPDC7.EXE
UPDATE.EXE-0825DC41.pf Last run: Fri Mar 12 16:28:57 2010 (1)
\DEVICE\HARDDISKVOLUME1\DOCUMENTS AND SETTINGS\ADMINISTRATOR\DESKTOP\UPDATE.EXE
ASD3.TMP.EXE-26CA54B1.pf Last run: Fri Mar 12 16:34:49 2010 (1)
\DEVICE\HARDDISKVOLUME1\DOCUME~1\ADMINI~1\LOCALS~1\TEMP\ASD3.TMP.EXE
ASD4.TMP.EXE-2740C04A.pf Last run: Fri Mar 12 16:34:50 2010 (1)
\DEVICE\HARDDISKVOLUME1\DOCUME~1\ADMINI~1\LOCALS~1\TEMP\ASD4.TMP.EXE
DRGUARD.EXE-23A7FB3B.pf Last run: Fri Mar 12 16:35:26 2010 (2)
\DEVICE\HARDDISKVOLUME1\PROGRAM FILES\DR. GUARD\DRGUARD.EXE
ASD2.TMP.EXE-2653E918.pf Last run: Fri Mar 12 16:34:27 2010 (1)
\DEVICE\HARDDISKVOLUME1\DOCUME~1\ADMINI~1\LOCALS~1\TEMP\ASD2.TMP.EXE
ASR64_LDM.EXE-3944C1CE.pf Last run: Fri Mar 12 16:29:06 2010 (1)
\DEVICE\HARDDISKVOLUME1\DOCUME~1\ADMINI~1\LOCALS~1\TEMP\ASR64_LDM.EXE
These programs stood out for a few different reasons. First, I noticed the path they executed from was a temporary folder in a user’s profile; unusual file paths are one way to spot malware on a system. The other thing I noticed is that some of the programs only executed once. This behavior resembles how downloaders and droppers work: their sole purpose is to execute once to either download additional malware or install malware. The last dead giveaway is they all executed within a few minutes of each other. The first sweep across the prefetch files netted some interesting programs that appear to be malicious. The next thing to look at is the individual prefetch files to see what file handles were open when the program ran. The TMP77E.EXE-02781D7C.pf prefetch file contained something interesting, shown below (the command used was “pref.pl -p -i -f TMP77E.EXE-02781D7C.pf”).
EXE Name : TMP77E.EXE
Volume Path : \DEVICE\HARDDISKVOLUME1
Volume Creation Date: Fri Nov 2 08:56:57 2007 Z
Volume Serial Number: 6456-B1FD
\DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\NTDLL.DLL
\DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\KERNEL32.DLL
\DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\UNICODE.NLS
*****snippet*****
EXEs found:
\DEVICE\HARDDISKVOLUME1\DOCUME~1\ADMINI~1\LOCALS~1\TEMP\TMP77E.EXE
\DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\NET.EXE
\DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\SC.EXE
\DEVICE\HARDDISKVOLUME1\DOCUME~1\ADMINI~1\LOCALS~1\TEMP\ASR64_LDM.EXE
DAT files found:
\DEVICE\HARDDISKVOLUME1\DOCUMENTS AND SETTINGS\ADMINISTRATOR\LOCAL SETTINGS\TEMPORARY INTERNET FILES\CONTENT.IE5\INDEX.DAT
\DEVICE\HARDDISKVOLUME1\DOCUMENTS AND SETTINGS\ADMINISTRATOR\COOKIES\INDEX.DAT
\DEVICE\HARDDISKVOLUME1\DOCUMENTS AND SETTINGS\ADMINISTRATOR\LOCAL SETTINGS\HISTORY\HISTORY.IE5\INDEX.DAT
Temp paths found:
\DEVICE\HARDDISKVOLUME1\DOCUME~1\ADMINI~1\LOCALS~1\TEMP\TMP77E.EXE
\DEVICE\HARDDISKVOLUME1\DOCUMENTS AND SETTINGS\ADMINISTRATOR\LOCAL SETTINGS\TEMPORARY INTERNET FILES\CONTENT.IE5\INDEX.DAT
\DEVICE\HARDDISKVOLUME1\DOCUMENTS AND SETTINGS\ADMINISTRATOR\LOCAL SETTINGS\TEMPORARY INTERNET FILES\CONTENT.IE5\M20M2OXX\READDATAGATEWAY[1].HTM
\DEVICE\HARDDISKVOLUME1\DOCUME~1\ADMINI~1\LOCALS~1\TEMP\ASR64_LDM.EXE
The file handle portion of the pref.pl output was trimmed to make it easier to read, but I left in the intelligence provided by the script through its filters. The filters highlight file handles involving exes, dats, and temporary folders. The exe filter shows that, in addition to a handle to the TMP77E.EXE file itself, there were handles to SC.EXE, NET.EXE, and ASR64_LDM.EXE. SC.EXE is a Windows program for managing services, including creating new services, while NET.EXE is a Windows program for doing various tasks, including starting services. ASR64_LDM.EXE was another suspicious program that ran on the system after TMP77E.EXE. The file handles inside each prefetch file of interest provided additional information which is useful during a malware case.
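The triage criteria used above (execution from temp folders, single run counts, and runs clustered in time) are easy to automate once the prefetch metadata has been parsed out. Here is a minimal Python sketch, assuming the records have already been extracted into (path, run count, last run time) tuples by a parser such as pref.pl; the record layout and the ten-minute window are assumptions for illustration, not pref.pl’s actual output format:

```python
from datetime import datetime, timedelta

# Hypothetical parsed prefetch records as (path, run_count, last_run) tuples.
# In practice these values would come from a parser such as pref.pl.
records = [
    (r"\DEVICE\HARDDISKVOLUME1\DOCUME~1\ADMINI~1\LOCALS~1\TEMP\TMP77E.EXE",
     1, datetime(2010, 3, 12, 16, 29, 5)),
    (r"\DEVICE\HARDDISKVOLUME1\PROGRAM FILES\DR. GUARD\DRGUARD.EXE",
     2, datetime(2010, 3, 12, 16, 35, 26)),
    (r"\DEVICE\HARDDISKVOLUME1\WINDOWS\SYSTEM32\NOTEPAD.EXE",
     14, datetime(2010, 3, 10, 9, 0, 0)),
]

def suspicious(path, run_count, last_run, all_times,
               window=timedelta(minutes=10)):
    """Apply the manual triage criteria to one prefetch record."""
    reasons = []
    if "\\TEMP\\" in path.upper():
        reasons.append("temp path")          # unusual execution location
    if run_count == 1:
        reasons.append("ran once")           # downloader/dropper behavior
    if any(t != last_run and abs(t - last_run) <= window for t in all_times):
        reasons.append("clustered runs")     # executed near other programs
    return reasons

all_times = [last_run for _, _, last_run in records]
for path, count, last in records:
    hits = suspicious(path, count, last, all_times)
    if hits:
        print(path.rsplit("\\", 1)[-1], "->", ", ".join(hits))
```

Run against the example records, the temp-folder executables get flagged while routine system programs fall through with no reasons.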
Program execution is vital for malware cases and I saw how the same technique can apply to fraud cases. On fraud cases a typical activity is to identify and locate financial data. At times this can be done by running keyword searches but most of the time (at least for me) the actual financial data is unknown. What I mean by this is a system is provided and it’s up to you to determine what data is financial. This is where the program execution technique comes into play. The programs that ran on the system can be reviewed to provide leads about what kind of financial data may be present on the system. Using the first sweep across the prefetch files I located these interesting programs (command used was “pref.exe -e -d Prefetch-Folder”). Note: the system being looked at is not from a fraud case but it still demonstrates how the data appears.
WINWORD.EXE-C91725A1.pf Last run: Tue Jul 10 16:42:26 2012 (42)
\DEVICE\HARDDISKVOLUME2\PROGRAM FILES\MICROSOFT OFFICE\OFFICE12\WINWORD.EXE
ACROBAT_SL.EXE-DC4293F2.pf Last run: Fri Jun 22 18:14:12 2012 (1)
\DEVICE\HARDDISKVOLUME2\PROGRAM FILES\ADOBE\ACROBAT 9.0\ACROBAT\ACROBAT_SL.EXE
EXCEL.EXE-C6BEF51C.pf Last run: Tue Jul 10 16:30:18 2012 (22)
\DEVICE\HARDDISKVOLUME2\PROGRAM FILES\MICROSOFT OFFICE\OFFICE12\EXCEL.EXE
POWERPNT.EXE-1404AEAA.pf Last run: Thu Jun 21 20:14:52 2012 (22)
\DEVICE\HARDDISKVOLUME2\PROGRAM FILES\MICROSOFT OFFICE\OFFICE12\POWERPNT.EXE
When I look at program execution for fraud cases I look for financial applications, applications that can create financial data, and programs associated with data spoliation. The system didn’t have any financial or data spoliation programs, but there were office productivity applications capable of creating financial documents such as invoices, receipts, proposals, etc. These programs were Microsoft Office and Adobe Acrobat, which means the data created on the system is most likely Word, Excel, PowerPoint, or PDF files. The number of executions for each program is also interesting. I look to see what applications are heavily used since it’s a strong indication of what program the subject uses. Notice Adobe only ran once while Word ran 42 times. The file handles inside individual prefetch files also contain information relevant to a fraud case. Below are a few sanitized handles I found in the WINWORD.EXE-C91725A1.pf prefetch file (the command used was “pref.pl -p -i -f WINWORD.EXE-C91725A1.pf”).
EXE Name : WINWORD.EXE
Volume Path : \DEVICE\HARDDISKVOLUME2
Volume Creation Date: Fri Aug 26 18:13:26 2011 Z
Volume Serial Number: E4DD-S23A
\DEVICE\HARDDISKVOLUME2\WINDOWS\SYSTEM32\NTDLL.DLL
\DEVICE\HARDDISKVOLUME2\WINDOWS\SYSTEM32\KERNEL32.DLL
\DEVICE\HARDDISKVOLUME2\WINDOWS\SYSTEM32\APISETSCHEMA.DLL
*****snippet*****
\DEVICE\HARDDISKVOLUME4\FORENSICS\RESEARCH\Folder\SANTIZED.DOCX
\DEVICE\HARDDISKVOLUME4\FORENSICS\RESEARCH\Folder\~$SANTIZED.DOCX
\DEVICE\HARDDISKVOLUME2\USERS\USERNAME\APPDATA\LOCAL\TEMP\MSO4F30.TMP
\DEVICE\HARDDISKVOLUME4\FORENSICS\RESEARCH\Folder\SANTIZED 2.DOCX
\DEVICE\HARDDISKVOLUME4\FORENSICS\RESEARCH\Folder\~$SANTIZED 2.DOCX
\DEVICE\HARDDISKVOLUME2\USERS\USERNAME\APPDATA\LOCAL\TEMP\MSO7CF3.TMP
\DEVICE\HARDDISKVOLUME2\USERS\USERNAME\APPDATA\LOCAL\TEMP\20517251.OD
\DEVICE\HARDDISKVOLUME4\FORENSICS\RESEARCH\Folder\SANTIZED 3.DOCX
\DEVICE\HARDDISKVOLUME4\FORENSICS\RESEARCH\Folder\~$SANTIZED 3.DOCX
\DEVICE\HARDDISKVOLUME4\$MFT
\DEVICE\HARDDISKVOLUME2\USERS\USERNAME\APPDATA\LOCAL\MICROSOFT\WINDOWS\TEMPORARY INTERNET FILES\CONTENT.MSO\SANTIZED.JPEG
\DEVICE\HARDDISKVOLUME4\FORENSICS\RESEARCH\Folder\SANTIZED 3.DOCX:ZONE.IDENTIFIER
*****snippet*****
The file handles show that documents stored in the folder FORENSICS\RESEARCH\Folder\ on a different volume were accessed with Word. I think this is significant because not only does it provide filenames to look for, but it also shows another storage location the subject may have used. Wherever there is storage accessible to the subject, there’s a chance that’s where they are storing some financial data. Also, notice in the output how the last line shows one of the documents was downloaded from the Internet (Zone.Identifier alternate data stream).
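The same handle review (documents of interest, paths on other volumes, Zone.Identifier streams) can be scripted once the handle list is extracted. A minimal Python sketch follows, with example paths modeled on the sanitized output; the system volume prefix and the document extension list are assumptions:

```python
# Example handle paths modeled on the sanitized pref.pl output above.
handles = [
    r"\DEVICE\HARDDISKVOLUME2\WINDOWS\SYSTEM32\NTDLL.DLL",
    r"\DEVICE\HARDDISKVOLUME4\FORENSICS\RESEARCH\Folder\SANTIZED.DOCX",
    r"\DEVICE\HARDDISKVOLUME4\FORENSICS\RESEARCH\Folder\SANTIZED 3.DOCX:ZONE.IDENTIFIER",
]

SYSTEM_VOLUME = r"\DEVICE\HARDDISKVOLUME2"   # volume the program ran from
DOC_EXTS = (".DOC", ".DOCX", ".XLS", ".XLSX", ".PPT", ".PPTX", ".PDF")

def classify(path):
    """Tag a handle path with the observations made during manual review."""
    tags = []
    base = path.upper()
    if base.endswith(":ZONE.IDENTIFIER"):
        tags.append("downloaded")            # Internet-zone alternate data stream
        base = base[: -len(":ZONE.IDENTIFIER")]
    if base.endswith(DOC_EXTS):
        tags.append("document")              # potential financial document
    if not base.startswith(SYSTEM_VOLUME):
        tags.append("other volume")          # possible secondary storage location
    return tags

for h in handles:
    print(h, "->", classify(h) or ["-"])
```

The system DLL comes back untagged, while the research documents are tagged as documents on another volume, with the last one additionally flagged as downloaded.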
User Activity
The program execution showed how fraud cases benefit from a technique used in malware cases. Now let’s turn the tables to see how malware cases can benefit from fraud. As I mentioned before, most of the time I have to find where financial data is located, whether it’s on the system or on network shares. The best approach I’ve found is to look at artifacts associated with user activity; specifically file, folder, and network share access. My reasoning is that if someone is suspected of using the computer to commit fraud, then they will have been accessing financial data from the computer to carry out the fraud. Basically, I let their user activity show me where the financial data is located, and this approach works regardless of whether the data is in a hidden folder or stored on a network. There are numerous artifacts containing file, folder, and network share access, and one of them is link files. To show their significance I’m parsing them with TZWorks LNK Parser Utility. When I examine link files I parse both the Recent and Office\Recent folders. This results in some duplicates, but it catches link files found in one folder and not the other. I’ve seen people delete everything in the Recent folder while not realizing the Office\Recent folder exists. I saw some interesting target files, folders, and network shares by running the command “dir C:\Users\Username\AppData\Roaming\Microsoft\Windows\Recent\*.lnk /b /s | lp -pipe -csv > fraud_recent.txt”.
{CLSID_MyComputer}\E:\Forensics\Research\Folder\sanatized.doc
{CLSID_MyComputer}\E:\Forensics\Research\Folder\sanitized 2.doc
{CLSID_MyComputer}\E:\Forensics\Research\Folder\sanitized 3.doc
{CLSID_MyComputer}\C:\Atad\material\sanitized 1.pdf
{CLSID_MyComputer}\F:\Book1.xls
\\192.168.200.55\ share\TR3Secure
The output has been trimmed (it only shows the target file column) and sanitized since it’s from one of my systems. The link files show that files and folders on removable media and a network share were accessed, in addition to a folder not inside the user profile. I’ve used this same technique on fraud cases to figure out where financial data was stored. One time it was some obscure folder on the system, while another time it was a share on a server located on the network.
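Deduplicating targets across the Recent and Office\Recent folders and flagging the ones pointing outside the user profile is straightforward to script. A minimal Python sketch, using sanitized example targets; the profile prefix checks are assumptions for illustration:

```python
# Link-file targets gathered from both Recent folders (sanitized examples).
recent = [
    r"{CLSID_MyComputer}\E:\Forensics\Research\Folder\sanitized 2.doc",
    r"{CLSID_MyComputer}\C:\Atad\material\sanitized 1.pdf",
]
office_recent = [
    r"{CLSID_MyComputer}\E:\Forensics\Research\Folder\sanitized 2.doc",  # dup
    r"\\192.168.200.55\share\TR3Secure",
]

def flag(target):
    """Classify a link target the way the manual review above did."""
    if target.startswith("\\\\"):
        return "network share"
    if not target.startswith(r"{CLSID_MyComputer}\C:"):
        return "other volume"                # removable media, second disk, etc.
    if not target.startswith(r"{CLSID_MyComputer}\C:\Users"):
        return "outside user profile"        # e.g. an obscure folder like C:\Atad
    return None

def interesting_targets(*folders):
    seen, flagged = set(), []
    for folder in folders:
        for target in folder:
            if target in seen:
                continue                     # same link in Recent and Office\Recent
            seen.add(target)
            reason = flag(target)
            if reason:
                flagged.append((target, reason))
    return flagged

for target, reason in interesting_targets(recent, office_recent):
    print(reason, "->", target)
```

The duplicate target is reported once, and each unique target comes back with the reason it drew attention.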
Tracking user activity is a great way to locate financial data on fraud cases, and I saw how this same technique can apply to malware cases. On malware cases it can help answer the question of how the computer became infected. Looking at the user activity around the time of the initial infection can help shed light on what attack vector was used to compromise the system. Did the user access a network share, malicious website, removable media, email attachment, or peer to peer application? The user activity provides indications about what the account was doing that contributed to the infection. On the malware-infected system there were only two link files in the Recent folder; shown below are the target create time and target name (the command used was “dir "F:\Malware_Recent\*.lnk" /b /s | lp -pipe -csv > malware_recent.txt”).
3/12/2010 16:17:04.640 {CLSID_MyComputer}\C:\downloads
3/12/2010 16:18:59.609 {CLSID_MyComputer}\C:\downloads\link.txt
These link files show the user account accessed the downloads folder and the link text file just before the suspicious programs started executing on the system. Looking at this user activity jogged my memory about how the infection occurred. I was researching a link from a spam email and I purposely clicked the link from a system; I just never got around to actually examining the system. However, even though this system was infected on purpose, examining user activity on the malware cases I’ve worked has helped answer the question of how the system became infected.
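Bracketing an infection this way, by pulling the user activity that occurred shortly before the first suspicious execution, can be sketched in a few lines of Python. The timestamps echo the examples above; the thirty-minute window is an assumption:

```python
from datetime import datetime, timedelta

# Link-file target timestamps and the earliest suspicious prefetch run time.
lnk_activity = {
    r"C:\downloads": datetime(2010, 3, 12, 16, 17, 4),
    r"C:\downloads\link.txt": datetime(2010, 3, 12, 16, 18, 59),
}
first_suspicious_run = datetime(2010, 3, 12, 16, 28, 57)  # e.g. UPDATE.EXE

def activity_before(activity, infection_time, window=timedelta(minutes=30)):
    """Return user actions in the window leading up to the first malicious run."""
    return sorted(
        (ts, path) for path, ts in activity.items()
        if timedelta(0) <= infection_time - ts <= window
    )

for ts, path in activity_before(lnk_activity, first_suspicious_run):
    print(ts, path)
```

Both accesses fall inside the window here, which is exactly the kind of correlation that pointed back to the clicked spam link.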
Closing Thoughts
DFIR has a lot of different techniques for dealing with the casework we face. Too many times we tend to focus on the differences; the different tools, different processes, and different meanings of artifacts. Focusing on the differences distracts from seeing what the techniques have to offer; what parts of them can strengthen our processes and make us better regardless of what case we are up against. If I didn’t focus on what the techniques had to offer then I would have missed an opportunity: a chance to develop a better DFIR process by combining malware and fraud techniques; a process that I think is far better than if each technique stood on its own.
Wednesday, July 4, 2012
Posted by
Corey Harrell
A penetration test is a method of locating weaknesses in an organization’s network by simulating how an attacker might circumvent the security controls. The Preface indicated Metasploit: The Penetration Tester’s Guide was written specifically so “readers can become competent penetration testers”. The book goes on to describe a penetration tester as someone who is able to find ways in which a “hacker might be able to compromise an organization’s security and damage the organization as a whole”. I’ve occasionally seen people talk about the book favorably, but their comments related to penetration testing. I wanted to review Metasploit: The Penetration Tester’s Guide from a different angle; from the Digital Forensic and Incident Response (DFIR) perspective. As a DFIR professional it is important to not only understand the latest attack techniques but also to be aware of what artifacts are left by those techniques. This is the perspective I used when reviewing the book, and I walked away thinking: if you want to bring your Digital Forensic and Incident Response skills to the next level, then throw Metasploit in your toolbox and work your way through this book.
From Methodology to Basics to Exploitation
The book starts out discussing the various phases of a penetration test: pre-engagement interactions, intelligence gathering, threat modeling, exploitation, and post exploitation. After covering the methodology, an entire chapter is dedicated to Metasploit basics. I liked how the basics were covered before diving into the different ways to perform intelligence gathering with the Metasploit framework. Not only did the intelligence gathering chapter cover running scans with the Metasploit built-in scanners, it also discussed running scans with nmap and then building a database in Metasploit to store the nmap results. Before getting into exploitation, an entire chapter was dedicated to using vulnerability scanners (Nessus and Nexpose) to identify vulnerabilities in systems. After Chapter 4 the remainder of the book addresses exploitation and post-exploitation techniques. I liked how the book discussed simpler attacks before leading up to more advanced ones such as client-side, spear phishing, web, and SQL injection attacks. The book even covered advanced topics such as building your own Metasploit module and creating your own exploits. I think the book fulfilled the purpose for which it was designed: to “teach you everything from the fundamentals of the Framework to advanced techniques in exploitation”.
Prepare, Prepare, and Prepare
Appendix A in the book walks you through setting up some target machines, one of which is a vulnerable Windows XP box running a web server, a SQL server, and a vulnerable web application. Setting up the target machines means you can try out the attacks as you work your way through the book. I found it a better learning experience to try things out as I read about them. One additional benefit is that it provides you with a system to analyze: you can attack the system, then examine it afterwards to see what artifacts were left behind. I think this is a great way to prepare and improve your skills for investigating different kinds of compromises. Start out with simple attacks before proceeding to the more advanced ones.
This is where I think this book, along with Metasploit, can bring your skills to the next level. There are numerous articles about how certain organizations were compromised, but the articles never mention what artifacts were found indicating how the compromise occurred. Does the following story sound familiar? Media reports said a certain organization was compromised due to a targeted email that contained a malicious attachment. The reports never mentioned what incident responders should keep an eye out for, nor did they provide anything about how to spot this attack vector on a system. To fill in these gaps we can simulate the attack against a system to see for ourselves how it looks from a digital forensic perspective. The spear-phishing attack vector is covered on page 137, and the steps to conduct the attack are very similar to how those organizations were compromised. The simulated attacks don’t have to stop at spear phishing either: Java social engineering (page 142), client-side web exploits also known as drive-bys (page 146), web jacking (page 151), a multipronged attack (page 153), or pivoting onto other machines (page 89) are a few possibilities one could simulate against the target machines. It’s better to prepare ourselves to see these attacks in advance than it is to wait until we are tasked with analyzing a compromised system.
Where’s the Vulnerability Research?
Metasploit: The Penetration Tester’s Guide is an outstanding book and a great resource for anyone wanting to better understand how attacks work. However, there was one thing I felt the book was missing: the process for identifying and researching what vulnerabilities are present in the specific software you want to exploit. The book mentions how Metasploit exploits can be located by keyword searches, but it doesn’t go into detail about how to leverage online resources to help figure out which exploits to use. A search can be done online for a program or service name and version to list all discovered vulnerabilities in that program, along with additional information explaining what a successful exploit may result in, such as remote code execution or a denial of service. This approach has helped me when picking which vulnerabilities to go after, and I thought a book trying to make competent penetration testers would have at least mentioned it.
Four Star Review
If someone wants to know how to better secure a system then they need to understand how the system can be attacked. If someone wants to know how to investigate a compromised system then they need to understand how attacks work and what those attacks look like on a system. As DFIR professionals it is extremely important for us to be knowledgeable about different attacks and what artifacts those attacks leave behind. This way when we are looking at a system or network it’s easier to see what caused the compromise; a spear phish, drive-by, SQL injection, or some other attack vector. I think the following should be a standard activity for anyone wanting to investigate compromises. Pick up Metasploit The Penetration Tester’s Guide, add Metasploit to your toolbox, and work your way through the material getting shells on test systems. You will not only have a solid understanding about how attacks work but you will pick up some pen testing skills along the way. Overall I gave Metasploit The Penetration Tester’s Guide a four star review (4 out of 5).