Introducing the Digital Forensics Search

Saturday, April 9, 2011 Posted by Corey Harrell 7 comments
Have you ever run a *insert search engine* search to locate information about an artifact only to find a listing of mostly irrelevant hits? A lot of time is wasted going through the irrelevant hits to locate the few websites with information that helps you better see how the artifacts fit into your forensic examination. Wouldn't it be better if the majority of the search hits were in the context of digital forensics or incident response, thereby making the hits more relevant to your forensic examination? Here is the formal introduction of the Digital Forensic Search engine.

The combination of the Yahoo Win4n6 group's discussion about David Kovar's post The Fragmentation of the digital forensics community, hooked-on-mnemonics blog post Malware Analysis Search, and writing my last post on searching RSS feeds inspired me to want to search for information a different way. A more effective way is to use a custom search engine that's configured to only search blogs, groups, forums, or any other sites related to digital forensics and incident response. Digital Forensic Search is a custom Google search and in a way I think it harnesses the collective knowledge and research of the people/organizations who share information back to the forensics community.

Digital Forensic Search returns more hits that are in the realm of digital forensics and incident response. Depending on the artifact being researched, the search hits may include information on the artifact, tools to extract data from the artifact, and how the artifact affected other practitioners' examinations. For example, perform a search for the keyword "link file" (include the quotes) in your favorite search engine. The first 10 hits in my search included only one digital forensics hit while the other hits were for information not beneficial to any type of forensic investigation. Run the same search in the Digital Forensic Search and the majority of the hits are directly related to link files in the context of a digital forensic examination. Three of the hits on the first page were an article about the Evidentiary Value of Link Files on Forensic Focus, Richard Drinkwater's blog post Link Files in System Restore Points, and the article The Meaning of Link Files in Forensic Examinations on the Computer Forensics Miscellany website.

If anyone still isn't convinced of the value of a custom search then I recommend performing a couple of comparison searches between *insert search engine* and Digital Forensic Search. A few potential topics to search on are: comdlg32, tool validation, evidence collection, timeline analysis, or volume shadow copies. The searches should show that Digital Forensic Search returns more hits relevant to digital forensics and incident response, which makes it an effective method to locate information.

This post is where I'm going to be maintaining the list of sites included in the Digital Forensic Search so any updates to the index will be reflected below. The repository tries to focus on sites containing information on digital forensics and incident response as opposed to tool specific sites. With this in mind, if you see any sites missing or URLs with too much noise (such as job postings) then post a comment or send me an email.

Digital Forensic Search can be found at the top of jIIr or directly at this link:

http://www.google.com/cse/home?cx=011905220571137173365:7eskxxzhjj8
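For scripted lookups, a query URL for the same engine can be built by hand. This is a minimal sketch, assuming Google's hosted custom search endpoint accepts the cx identifier from the link above together with a q query parameter (the parameter handling is an assumption, not something documented in this post):

```python
from urllib.parse import urlencode

# Engine ID taken from the Digital Forensic Search link above
CSE_ID = "011905220571137173365:7eskxxzhjj8"

def dfs_search_url(query):
    """Build a URL that runs a query against the custom search engine.

    Assumes the public CSE endpoint accepts cx and q parameters.
    """
    params = urlencode({"cx": CSE_ID, "q": query})
    return "http://www.google.com/cse?" + params

# The "link file" example from the post, quotes included
print(dfs_search_url('"link file"'))
```

Opening the printed URL in a browser runs the query against the indexed DFIR sites instead of the whole web.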


**********Sites Last Updated on 02/15/2015**********

The following is the listing of sites indexed by the Digital Forensic Search:

DFIR Blogs

A Geek Raised by Wolves  http://jessekornblum.livejournal.com/
A Renaissance Security Professional  http://renaissancesecurity.blogspot.com/
Adventures in Security http://securitykitten.github.io/
An Eye on Forensics  http://eyeonforensics.blogspot.com/
Active Security  http://active-security.blogspot.com/
Andrew Hay  http://www.andrewhay.ca
All things time related http://blog.kiddaland.net/
American Destroyer http://megadeus.com/
Another Forensics Blog  http://az4n6.blogspot.com/
Anton Chuvakin  http://blogs.gartner.com/anton-chuvakin
appointments-uk  http://appointments-uk.blogspot.com/
Ball In Your Court  http://ballinyourcourt.wordpress.com/
binary foray http://binaryforay.blogspot.com/
Blog Matt Churchill  http://mattchurchill.net/blog/
Bradley Schatz on the intersection of technology and the law  http://blog.schatzforensic.com.au/
BriMor Labs  http://brimorlabs.blogspot.com
Browser Forensics  http://www.browserforensics.com/
c-APT-ure  http://c-apt-ure.blogspot.com/
cci  http://takahiroharuyama.github.io/
Cellular.Sherlock - Mobile Forensics from the front lines  http://blog.csvance.com/
Cheeky4n6Monkey - Learning About Digital Forensics  http://cheeky4n6monkey.blogspot.com/
Chip_DFIR  http://chip-dfir.techanarchy.net/
Chris Sanders  http://chrissanders.org/
Christa Miller  http://christammiller.com/
CnW Recovery  http://cnwrecovery.blogspot.com/
Codeslack  http://codeslack.blogspot.com/
Command Line Kung Fu  http://blog.commandlinekungfu.com/
Computer Forensic Blog  http://computer.forensikblog.de/en/
Computer Forensic Graduate  http://computerforensicgraduate.wordpress.com
Computer Forensic Source  http://forensicsource.blogspot.com/
Computer Forensics and IR - What's New  http://newinforensics.blogspot.com/
Computer Forensics, Malware Analysis & Digital Investigations  http://www.forensickb.com/
Computer Forensics-E-Discovery Tips-Tricks and Information  http://cfed-ttf.blogspot.com/
ComputerForensicSource.com  http://www.computerforensicsource.com/
Consortium of Digital Forensic Specialists CDFS Blog  http://www.cdfs.org/blog/
copgeek018  http://copgeek018.wordpress.com/
Crucial Security Forensics Blog http://crucialsecurityblog.harris.com/
CSITech - Computer Forensics  http://nickfurneaux.blogspot.com/
Cyber Security Maven -- Techie  http://cybersecuritymave-techie.blogspot.com
CyberSpeak's Podcast  http://cyberspeak.libsyn.com/
Cylance Blog  http://blog.cylance.com
Dancho Danchev's Blog - Mind Streams of Information Security Knowledge  http://ddanchev.blogspot.com/
Default Deny  http://kurtaubuchon.blogspot.com/
Derek Newton « Information Security Insights http://dereknewton.com/
DF Procedures and Musings  http://dfprocedures.blogspot.com
DFF and Open Source Digital Forensics blog http://www.digital-forensic.org/blog/
Digital Forensics Solutions  http://dfsforensics.blogspot.com/
Enterprise Detection & Response  http://detect-respond.blogspot.com
Every Bit Counts  http://forensicmatt.blogspot.com

Ex Forensis  http://exforensis.blogspot.com/
FireEye Malware Intelligence Lab  http://blog.fireeye.com/research/
Forensic 4cast  http://www.forensic4cast.com/
forensic . seccure . net  http://seccure.blogspot.com/
Forensic Artifacts  http://forensicartifacts.com/
Forensic Computing — Digital forensics from the view of a computer scientist  http://www.forensicblog.org/
Forensics For the Newbs  http://forensicnewbs.wordpress.com/
Forensic Incident Response  http://forensicir.blogspot.com/
Forensic interviews  http://f-interviews.com/
Forensic Methods http://forensicmethods.com/
Forensic Photoshop  http://forensicphotoshop.blogspot.com/
Forensicaliente - because digital forensics is "hot"  http://forensicaliente.blogspot.com/
Forensically sound(ing off) http://marshalla99.wordpress.com/
Forensicator Of The Dead  http://forensicotd.blogspot.com/
Forensics from London  http://forensiccontrol.blogspot.com/
Forensics from the sausage factory  http://forensicsfromthesausagefactory.blogspot.com/
ForensicZone  http://forensiczone.blogspot.com/
Fun with Lost Bits n Bytes  http://blog.roberthaist.com
G33k G1r1 goes Binary  http://g33k-g1rl.blogspot.com/
Geoff Black's Forensic Gremlins - Everything that gives you fits in Digital Forensics and E-Discovery  http://www.geoffblack.com/
Ghetto Forensics  http://www.ghettoforensics.com
Girl, Unallocated  http://girlunallocated.blogspot.com/
GPS Evidence Tracking Issues http://gpsevidence.blogspot.com/
Grand Stream Dreams  http://grandstreamdreams.blogspot.com/
Hacking Exposed Computer Forensics blog  http://hackingexposedcomputerforensicsblog.blogspot.com/
HandlerDiaries  http://blog.handlerdiaries.com
Happy As A Monkey  http://happyasamonkey.wordpress.com/
Hexacorn Blog  http://www.hexacorn.com/blog/
HeX-OR Forensics  http://nicoleibrahim.com
HolisticInfoSec http://holisticinfosec.blogspot.com/
InfoSec Insights  http://www.seanmason.com
integriography A Journal of Broken Locks, Ethics, and Computer Forensics  http://integriography.wordpress.com/
Internet Storm Center Diary  http://isc.sans.edu/
JonRajewski  http://www.jonrajewski.com/cyberblog/
Journey into Incident Response  http://journeyintoir.blogspot.com/
JustAskWeg  http://justaskweg.com
Lenny Zeltser on Information Security  http://blog.zeltser.com
Linux Sleuthing  http://linuxsleuthing.blogspot.com/
Lowmanio (digital forensic category)  http://www.lowmanio.co.uk/blog/categories/digital-forensics/
Macaroni Forensics  http://macaroniforensics.blogspot.com/
man allyn-blog http://allynstott.blogspot.com/
Matthieu Suiche’s blog ! - Happiness only real when shared.  http://www.msuiche.net/
Memory Forensics  http://memoryforensics.blogspot.com/
MetaDatum  http://metadatum.me
MNIN Security  http://www.malwarecookbook.com/
MNIN Security Blog  http://mnin.blogspot.com/
Mobile Device Forensics  http://mobileforensics.wordpress.com/
Mobile Forensics Inc Blogger  http://blog.mobileforensicsinc.com/
Mobile Telephone Evidence  http://trewmte.blogspot.com/
Post Humorous  http://www.posthumorous.com/
Practical Digital Forensics http://practicaldigitalforensics.blogspot.com/
Propeller Head Forensics  http://propellerheadforensics.com/
Push the Red Button  http://moyix.blogspot.com/
RAM Slack – Random Thoughts from a Computer Forensic Examiner  http://ramslack.wordpress.com/
Riij morf tnetnoc siht elots I  http://journeyintoir.blogspot.com
Ryan Stillions  http://ryanstillions.blogspot.com

SANS Penetration Testing Blog  http://pen-testing.sans.org/blog
Sketchymoose's Blog  http://sketchymoose.blogspot.com/
Security Ripcord  http://www.cutawaysecurity.com/blog/
Securosis Blog  https://securosis.com/blog
Sempersecurus http://sempersecurus.blogspot.com/
Sergio Hernando http://www.sahw.com/wp/
Scudette in Wonderland  http://scudette.blogspot.com/
Student of Security http://mikeahrendt.blogspot.com/
Sucuri Blog  http://blog.sucuri.net
System Forensics  http://www.sysforensics.org/
Seculert  http://blog.seculert.com/
Secureartisan http://secureartisan.wordpress.com/
Security Braindump  http://securitybraindump.blogspot.com/
TaoSecurity  http://taosecurity.blogspot.com/
Taksati  http://www.taksati.org/
The Cave  http://cyb3rdaw6.harpermountain.net/
The Digital Standard  http://thedigitalstandard.blogspot.com/
The Digital4rensics Blog  http://www.digital4rensics.com/
The Forensics Ferret Blog http://forensicsferret.wordpress.com/
The Last Line of Defense  http://blog.tllod.com/
Trace Evidence  http://traceevidence.blogspot.com
trustedsignal -- blog  http://trustedsignal.blogspot.com/
Unchained Forensics  http://unchainedforensics.blogspot.com/
Unmask Parasites blog  http://blog.unmaskparasites.com/
ViaForensics  https://viaforensics.com/blog/
Volatility Advanced Memory Forensics  http://volatility.tumblr.com/
Windows Incident Response  http://windowsir.blogspot.com/
WriteBlocked  http://writeblocked.org/
Wyatt Roersma Blog  http://www.wyattroersma.com/
Yogesh Khatri's forensic blog  http://www.swiftforensics.com/

DFIR Websites

Brian Carrier Digital Investigation - Forensics and Evidence Research  http://www.digital-evidence.org/
CERIAS Reports and Papers Archive  https://www.cerias.purdue.edu/apps/reports_and_papers/
Computer Crime & Intellectual Property Section US DOJ  http://www.justice.gov/criminal/cybercrime/
Computer Forensics Miscellany  http://computerforensics.parsonage.co.uk/
Craig Ball Helping Lawyers Master Technology  http://www.craigball.com/
DFRWS (Digital Forensics Research Conference)  http://www.dfrws.org/
Digital Forensics Magazine supporting the professional computer security industry  http://www.digitalforensicsmagazine.com/
Digital Forensics Solutions' Research http://www.digitalforensicssolutions.com/research.shtml
ENISA CERT  http://www.enisa.europa.eu/act/cert/
E-Evidence Information Center - Home  http://www.e-evidence.info/
FIRST - Improving security together  http://www.first.org/
Forensic Focus  http://www.forensicfocus.com/
Forensic Magazine Issues  http://www.forensicmag.com/
Forensics Wiki  http://www.forensicswiki.org/
HolisticInfoSec toolsmith http://holisticinfosec.org/toolsmith
Inside the registry  http://www.insidetheregistry.com/regdatabase/
I-Sight's Investigations http://i-sight.com/investigation/
International Journal of Digital Evidence on Utica College  http://www.utica.edu/academic/institutes/ecii/ijde/
Into The Boxes  http://intotheboxes.wordpress.com/
IronGeek's InfoSec Articles http://www.irongeek.com/i.php?page=security/
Journal of Digital Forensics, Security and Law  http://www.jdfsl.org/
Lenny Zeltser  http://zeltser.com/
log2timeline  http://log2timeline.net/
mnin.org  http://www.mnin.org/
Mobile Forensics Central  http://www.mobileforensicscentral.com/
National Institute of Justice Publications  http://nij.gov/nij/pubs-sum/
National White Collar Crime Center  http://www.nw3c.org/
Network Forensics Puzzle Contest  http://forensicscontest.com/
NIST Computer Security Division Special Publications  http://csrc.nist.gov/publications/nistpubs/
Open Source Digital Forensics  http://www2.opensourceforensics.org/
SANS Computer Forensics  http://computer-forensics.sans.org/
SANS InfoSec Reading Room - Forensics  http://www.sans.org/reading_room/whitepapers/forensics/
SANS InfoSec Reading Room - Incident Handling  http://www.sans.org/reading_room/whitepapers/incident/
SANS InfoSec Reading Room - Malicious Code  http://www.sans.org/reading_room/whitepapers/malicious/
SANS InfoSec Reading Room - Steganography  http://www.sans.org/reading_room/whitepapers/stenganography/
SANS Summit Archives  http://digital-forensics.sans.org/summit-archives
Small Scale Digital Device Forensics Journal  http://www.ssddfj.org/
SWGDE  http://www.swgde.org/
The Honeynet Project Challenges  https://www.honeynet.org/challenges/
Welcome AppleExaminer  http://www.appleexaminer.com/
Williballenthin.com  http://williballenthin.com

DFIR Webpages

AuSCERT Forming an Incident Response Team  http://www.auscert.org.au/render.html?it=2252&cid=1938
Cybercrime.gov searching and seizing manual  http://www.cybercrime.gov/ssmanual/index.html
Daubert v. Merrell Dow Pharmaceuticals  http://www.law.cornell.edu/supct/html/92-102.ZS.html
Default Processes in Windows 2000  http://support.microsoft.com/kb/263201
Digital Evidence: Standards and Principles  http://www.fbi.gov/about-us/lab/forensic-science-communications/fsc/april2000/swgde.htm
Digitalcorpora Disk Images  http://digitalcorpora.org/corpora/disk-images/
FileSignatures Table  http://www.garykessler.net/library/file_sigs.html
Forensically interesting spots in the Windows 7, Vista and XP file system and registry (and anti-forensics)  http://www.irongeek.com/i.php?page=security/windows-forensics-registry-and-file-system-spots
Microsoft Windows XP - Default settings for services  http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/sys_srv_default_settings.mspx?mfr=true
QCCIS Whitepapers  http://qccis.com/resources/publications/
RFC 3227 - Guidelines for Evidence Collection and Archiving  http://www.rfc-archive.org/getrfc.php?rfc=3227
SEI Handbook for Incident Response Teams  http://www.sei.cmu.edu/library/abstracts/reports/03hb002.cfm
Windows 7 Default Services and Suggested Startup Mode  http://www.windowsnetworking.com/articles_tutorials/Windows-7-Default-Services-Suggested-Startup-Mode.html

DFIR Groups

Yahoo Win4n6 Group  http://tech.groups.yahoo.com/group/win4n6/
Yahoo Linux Forensics Group  http://tech.groups.yahoo.com/group/linux_forensics/ 
The Vol-users Archives  http://lists.volatilesystems.com/pipermail/vol-users/

DFIR Tool Websites

Digital Forensics Framework Wiki  http://wiki.digital-forensic.org/
Jafat Archive of Forensic Analysis Tools  http://jafat.sourceforge.net/
Joakim Schicht  https://github.com/jschicht
Live View  http://liveview.sourceforge.net/
md5deep and hashdeep  http://md5deep.sourceforge.net/
mft2csv  http://code.google.com/p/mft2csv
MiTec  http://www.mitec.cz/
My SecTools  http://www.mysectools.com/
NirSoft  http://www.nirsoft.net/
OpenSourceForensics  http://code.google.com/p/opensourceforensics/
plaso - home of the super timeline  http://plaso.kiddaland.net
pydetective  http://code.google.com/p/pydetective/
Registry Decoder  http://code.google.com/p/registrydecoder/
Registry Decoder Live  http://code.google.com/p/regdecoderlive/
RegRipper  http://regripper.wordpress.com/
Rekall Memory Forensic Framework  http://www.rekall-forensic.com
Shadow Explorer  http://www.shadowexplorer.com/
Sleuthkit  http://www.sleuthkit.org/
TZWorks LLC  http://www.tzworks.net/
Volatility An advanced memory forensics framework  http://code.google.com/p/volatility/
Winforensicaanalysis  http://code.google.com/p/winforensicaanalysis/
Windows Forensic Environment  http://winfe.wordpress.com/
Woanware  http://www.woanware.co.uk/

DFIR Tool Webpages

Digital Detective - Free Tools  http://www.digital-detective.net/digital-forensic-software/free-tools/
Forensic Control Free Computer Forensic Tools  http://forensiccontrol.com/resources/free-software/
HB Gary Free Security Tools  http://www.hbgary.com/free-tools
Mandiant Free Software  http://www.mandiant.com/products/free_software
QCC Information Security Free Forensic Tools  http://www.qccis.com/forensic-tools
RedWolf Computer Forensics http://redwolfcomputerforensics.com/index.php?option=com_content&task=view&id=42&Itemid=55
Sanderson Forensics Free Utilities  http://www.sandersonforensics.com/content.asp?page=15

How do you use your feeds?

Tuesday, April 5, 2011 Posted by Corey Harrell 0 comments
A feed reader is a valuable resource since the software manages the content from websites such as news sites, blogs, or other online publishers. A reader not only enables you to stay informed of the latest content from the sites but it also enables content to be leveraged to help keep your knowledge current and to assist with research for your investigations. This post is about how I’ve been using RSS feeds to help keep my knowledge current and conduct research.

Before I discovered the value of RSS feeds I wasted a lot of time and energy on trying to stay current with the latest content from information security and digital forensics websites. Periodically I checked the sites to see if anything was new, I wasted time trying to find an article I read but couldn’t remember where, and I struggled to remember all of the articles/posts I wanted to read on new sites I came across. Needless to say this was the wrong approach so I turned to RSS feeds to help me manage this content.

Getting Started with Feeds

RSS (Rich Site Summary) is a “format for delivering regularly changing web content”. A feed reader is software that downloads feeds from various sites and stores them for a person to read and use. The first and only program I tested was FeedReader and this has become my reader of choice. The software has no fees and a range of capabilities to read, collect, and organize web content using RSS or Atom feeds. I’m not going to go into detail about FeedReader’s features or its configuration since I wanted to focus on the benefit of feeds.

Right away I knew the one feature I wanted in any reader was portability. I use numerous computers between work and home so I didn’t want to be tied to one system or have to worry about syncing content between systems. FeedReader can either be installed on a computer using the installer or the zip package can be used to run the program from a thumb drive. I opted for the latter option and this has allowed me to have access to the web content no matter where I am. An additional benefit is being able to access the content stored in the database without needing Internet access.

Adding Feeds

There are different ways to find digital forensics and incident response related websites. Most blogs have an area where the authors share links or blogs they follow. Authors may also include links to content on other sites in their posts/articles. Following all of these links can lead to interesting sites that can be used to create a collection of feeds. In addition to blogs and news sites, I’ve been working on adding social media sites, such as Twitter, to my feed collection. After the sites are located then the next step is to determine if a site supports RSS or Atom feeds. One quick way to determine this is to look for the icon in the web browser. The picture below shows the icon highlighted in Firefox and Internet Explorer.
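The browser icon works because sites advertise their feeds in the HTML head with link tags. The same auto-discovery can be scripted; this is a rough stdlib sketch, and the sample HTML and feed URL below are made up for illustration rather than taken from any specific site:

```python
from html.parser import HTMLParser

class FeedLinkFinder(HTMLParser):
    """Collect RSS/Atom feed URLs advertised in a page's <link> tags --
    the same auto-discovery mechanism behind the browser feed icon."""
    FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("type") in self.FEED_TYPES:
            self.feeds.append(a.get("href"))

# Illustrative page source; a real script would fetch the site's HTML
html = '''<html><head>
<link rel="alternate" type="application/rss+xml"
      href="http://example.com/feeds/posts/default?alt=rss"/>
</head><body>...</body></html>'''

finder = FeedLinkFinder()
finder.feed(html)
print(finder.feeds)
```

Any URL the finder collects can then be pasted straight into a reader such as FeedReader.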

Adding feeds to a reader will vary depending on the program being used. FeedReader supports adding the following types: feeds, smartfeeds, and search feeds. My current FeedReader database consists of 159 feeds, 20,031 news (downloaded web content), 141 unread news, and the database is only 76 MB. I organized the content into folders to make it easier to manage. The picture below shows FeedReader’s interface and the web content downloaded from jIIr. Unread items are highlighted in bold and the numbers to the left of the folders show the amount of unread content in that folder.

Leverage the Feeds

FeedReader automatically downloads feeds from sites and this saves me a lot of time since I no longer have to periodically check sites for new content. The reader allows me to stay informed about the latest content and helps me organize the content. This isn’t the only benefit of a reader because another benefit is the ability to search the content for research or investigations. To see how it's possible I’ll perform three different searches against my FeedReader’s database.
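FeedReader's built-in search does this work for me, but the idea is easy to illustrate: downloaded feed content is just XML, and a case-insensitive keyword match over item titles and descriptions is all a basic search needs. A minimal stdlib sketch, with a made-up sample feed standing in for the reader's database:

```python
import xml.etree.ElementTree as ET

# Inline sample standing in for stored feed content; a real reader's
# database would hold thousands of downloaded items.
SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Sample DFIR feed</title>
  <item><title>Internet Explorer InPrivate URL Artifacts</title>
        <link>http://example.com/inprivate</link>
        <description>Areas that may contain InPrivate URL artifacts.</description></item>
  <item><title>Prefetch and UserAssist</title>
        <link>http://example.com/userassist</link>
        <description>Determining how often a program was run.</description></item>
</channel></rss>"""

def search_feed(xml_text, keyword):
    """Return (title, link) pairs for items whose title or description
    contains the keyword, matched case-insensitively."""
    root = ET.fromstring(xml_text)
    hits = []
    for item in root.iter("item"):
        title = item.findtext("title", "")
        desc = item.findtext("description", "")
        if keyword.lower() in (title + " " + desc).lower():
            hits.append((title, item.findtext("link", "")))
    return hits

print(search_feed(SAMPLE_RSS, "inprivate"))
```

Running the same function with the keywords from the searches below (inprivate, userassist) against a full database is the essence of what the reader's search feature provides.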

The first search will be on a random topic, and Internet Explorer 8's InPrivate browsing feature is the first thing I thought about. The feature enables users to surf the web without leaving any traces of their activity on the computer being used. To obtain information about this feature I performed a search against my feeds using the keyword inprivate. The following is a summary of three of the keyword hits:

* Derek Newton’s blog post Internet Explorer InPrivate URL Artifacts. The post discusses a few areas that could contain InPrivate URL artifacts and how those areas can be searched.

* Digital Detective’s blog post NetAnalysis v1.50 - New Release, which advertises that NetAnalysis can recover data from InPrivate browsing.

* Computer Forensics and IR – What’s New blog post Internet Evidence Finder - new release and more, which mentions how IEF is able to recover IE8 URLs.

The previous search showed how to locate information on a random topic. The search located research on InPrivate browsing artifacts and three possible ways to try to recover data from InPrivate browsing. The next search will illustrate how the feeds can help in obtaining more information about an artifact found during an investigation. If the investigation involves the activity of a user account then one of the artifacts of interest could be the UserAssist key in the Ntuser.dat registry hive. A search was conducted using the keyword userassist and the following is a summary of some of the hits.

* ForensicArtifacts blog post UserAssist which is a write-up about what the key is and contains useful references about the key.

* Richard Drinkwater’s Forensics from the Sausage Factory blog post Prefetch and User Assist. This write-up was about determining how often a program was run, and one of the areas that provided this information was the UserAssist key.

* Harlan Carvey’s Windows IR blog post Accessing Volume Shadow Copies where he discusses how the registry key could be analyzed in Volume Shadow Copies.

* Chris Pogue’s Digital Standard blog post The “Not So” Perfect Keylogger. In this write-up the UserAssist key showed the initial execution of a keylogger.

* Into the Boxes Digital Forensics and Incident Response Magazine Issue 0x0. Didier Stevens wrote an article for this issue about the Windows 7 Userassist Registry key.

* Dave Hull’s post Digital Forensics: Detecting time stamp manipulation on the SANS forensics blog. This write-up was about identifying time stamp manipulation, and the UserAssist key was one of the artifacts included in a timeline.

The previous search showed the potential wealth of information that could be obtained about an artifact of interest. The last search will illustrate how the feeds can help in conducting research about an item such as an email. The picture below shows an email that was in one of my throw away email accounts and this email will be used for this demonstration.

The email appears to be a notification from the United Parcel Service and the attachment is supposed to contain the tracking number and more information about a shipment. This is the type of email I would do additional research on so I can learn more about the spam campaign and the artifacts left on a system by opening the attachment. The first keyword I searched for was the name of the attachment, which was upsnotify. This only resulted in one hit in my feeds: the post Spamvertised United Parcel Service notifications serve malware on Dancho Danchev's blog - Mind Streams of Information Security Knowledge. His post was about the current spam campaign impersonating UPS for malware serving purposes. The information covered the detection rates for the attachment contents, additional executables downloaded, and domains contacted. I wanted more information so I ran another search using the keyword united parcel service. The following is a summary of some of the keyword hits:

* MXLab blog post “United Parcel Service notification” from UPS contains trojan. The post discusses how MXlab started receiving a new trojan distribution campaign by email with the subject “United Parcel Service notification" and it provides some information about the email.

* MXLab blog post “United Parcel Service notification 48161” from UPS contains trojan. This write-up is about the SPAM campaign and provides details about the spoofed email address, URLs the Trojan downloads data from, payload artifacts created on the system, and processes started on system.

* Microsoft Malware Protection Center post Trojan downloader Chepvil on the UPSwing. The post discusses the email campaign and the attachment that was detected as TrojanDownloader:Win32/Chepvil.I.

* There were a couple of tweets mentioning the SPAM email as well.

The searches against my feeds provided a wealth of information. I was able to determine an email sitting in my Inbox was part of a spam campaign and identified some of the potential artifacts on a system where the attachment was opened. The two other searches located information on how to recover the InPrivate browsing data and detailed information about the UserAssist key.

The best part about moving to a feed reader is that I have access to the information at any time since it is stored in the RSS feed database on the thumb drive. Sometimes it feels like I have a portable Google in my pocket.

CVE-2010-0840 (Trusted Methods) Exploit Artifacts

Monday, March 21, 2011 Posted by Corey Harrell 0 comments
Artifact Name

CVE-2010-0840 (Trusted Methods) Exploit Artifacts

Attack Vector Category

Exploit

Description

Vulnerability present in the code responsible for privileged execution of methods affects Oracle Java 6 prior to update 19 and 5 prior to update 23. Exploitation allows for the execution of arbitrary code under the context of the currently logged on user.

Attack Description

This description was obtained using the Metasploit exploit reference and it involves having a user visit a malicious website.

Exploits Tested

Metasploit v3.6 multi\browser\java_trusted_chain

Target System Information

* Windows XP SP3 Virtual Machine with Java 6 update 16 using administrative user account

* Windows XP SP3 Virtual Machine with Java 6 update 16 using non-administrative user account

Different Artifacts based on Administrator Rights

No

Different Artifacts based on Software Versions

Not tested

Potential Artifacts

The potential artifacts include the CVE 2010-0840 exploit and the changes the exploit causes in the operating system environment. The artifacts can be grouped under the following three areas:

     * Temporary File Creation
     * Indications of the Vulnerable Application Executing
     * Internet Activity

Note: the documentation of the potential artifacts attempts to identify the overall artifacts associated with the vulnerability being exploited as opposed to the specific artifacts unique to Metasploit. As a result, the actual artifact storage locations and filenames are inside of brackets in order to distinguish what may be unique to the testing environment.

     * Temporary File Creation

          - JAR file created in a temporary storage location on the system within the timeframe of interest. [C:/Documents and Settings/Administrator/Local Settings/Temp/jar_cache3590475423724669955.tmp] The contents of the JAR file contained a manifest file and one class file was detected as the CVE 2010-0840 exploit. There were other class files whose MD5 hash was not present in the VirusTotal database.

     * Indications of the Vulnerable Application Executing

          - Log files indicating Java was executed within the timeframe of interest. [C:/Documents and Settings/Administrator/Application Data/Sun/Java/Deployment/deployment.properties, C:/Documents and Settings/Administrator/Local Settings/Temp/java_install_reg.log, and C:/Documents and Settings/Administrator/Local Settings/Temp/jusched.log] The picture below shows the contents of the java_install_reg.log file.

          - Prefetch files of Java executing. [C:/WINDOWS/Prefetch/JAVA.EXE-0C263507.pf]

          - Registry modification involving Java executing. [HKCU-Admin/Software/JavaSoft/Java Update/Policy/JavaFX]

          - Folder activity involving the Java application. [C:/Program Files/Java, C:/Documents and Settings/Administrator/Application Data/Sun/Java/Deployment/cache/, and C:/Documents and Settings/Administrator/Local Settings/Temp/hsperfdata_username]

     * Internet Activity

          - Web browser history of user accessing websites within the timeframe of interest. [Administrator user account accessed the computer -192.168.11.200- running Metasploit]

          - Activity involving the Temporary Internet Files folder. [C:/Documents and Settings/Administrator/Local Settings/Temporary Internet Files]
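The temporary-file artifact above lends itself to simple triage. The sketch below is hypothetical (the mount point and profile layout are assumptions for illustration, not part of the testing above): it globs each XP profile's Temp folder for jar_cache files and computes their MD5s, which can then be checked against a hash database such as VirusTotal as was done during this testing.

```python
import glob
import hashlib
import os

def md5_of(path):
    """MD5 of a file's contents, suitable for hash-database lookups."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def find_jar_cache(profile_root):
    """Find Java jar_cache*.tmp files under each user profile's
    Local Settings/Temp folder (Windows XP profile layout)."""
    pattern = os.path.join(profile_root, "*", "Local Settings",
                           "Temp", "jar_cache*.tmp")
    return sorted(glob.glob(pattern))

# Hypothetical usage against a mounted image; the mount path is an
# assumption and would vary per examination.
for p in find_jar_cache(r"C:\mounted_image\Documents and Settings"):
    print(md5_of(p), p)
```

Streaming the file in 64 KB chunks keeps memory use flat even if a temp file turns out to be large.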

Timeline View of Potential Artifacts

The images below show the above artifacts in a timeline of the file system from the Windows XP SP3 system with an administrative user account. The timeline includes the file system, registry, and Internet Explorer history entries.





References

Vulnerability Information

Mitre’s CVE http://cve.mitre.org/cgi-bin/cvename.cgi?name=2010-0840

NIST National Vulnerability Database http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2010-0840

Zero Day Initiative http://www.zerodayinitiative.com/advisories/ZDI-10-056/

SecurityFocus http://www.securityfocus.com/bid/39065

Exploit Information

Metasploit Exploit http://www.metasploit.com/modules/exploit/multi/browser/java_trusted_chain

CVE-2010-0094 (RMIConnectionImpl) Exploit Artifacts

Saturday, March 12, 2011 Posted by Corey Harrell 2 comments
Artifact Name

CVE-2010-0094 (RMIConnectionImpl) Exploit Artifacts

Attack Vector Category

Exploit

Description

Vulnerability present within the deserialization of RMIConnectionImpl objects affects Oracle Java 6 Update 18 and 5.0 Update 23 and earlier versions on Windows, Solaris and Linux systems. Exploitation allows for the execution of arbitrary code under the context of the currently logged on user.

Attack Description

This description was obtained from the Zero Day Initiative reference: the attack consists of having a user visit a malicious website.

Exploits Tested

Metasploit v3.6 multi\browser\java_rmi_connection_impl

Target System Information

* Windows XP SP3 Virtual Machine with Java 6 update 16 using administrative user account

* Windows XP SP3 Virtual Machine with Java 6 update 16 using non-administrative user account

Different Artifacts based on Administrator Rights

No

Different Artifacts based on Software Versions

Not tested

Potential Artifacts

The potential artifacts include the CVE-2010-0094 exploit and the changes the exploit causes in the operating system environment. The artifacts can be grouped under the following three areas:

     * Temporary File Creation
     * Indications of the Vulnerable Application Executing
     * Internet Activity

Note: the documentation of the potential artifacts attempts to identify the overall artifacts associated with the vulnerability being exploited, as opposed to the specific artifacts unique to Metasploit. As a result, the actual storage locations and filenames are enclosed in brackets to distinguish what may be unique to the testing environment.

     * Temporary File Creation

          - JAR file created in a temporary storage location on the system within the timeframe of interest. [C:/Documents and Settings/Administrator/Local Settings/Temp/jar_cache8659615251018636226.tmp] The JAR file contained a manifest file and other files which were detected as the CVE-2010-0094 exploit; Exploit.class and PayloadClassLoader.class were two of the files detected as containing the exploit.

     * Indications of the Vulnerable Application Executing

          - Log files indicating Java was executed within the timeframe of interest. [C:/Documents and Settings/Administrator/Application Data/Sun/Java/Deployment/deployment.properties, C:/Documents and Settings/Administrator/Local Settings/Temp/java_install_reg.log, and C:/Documents and Settings/Administrator/Local Settings/Temp/jusched.log] The picture below shows the contents of the deployment.properties file.

          - Prefetch files of Java executing. [C:/WINDOWS/Prefetch/JAVA.EXE-0C263507.pf]

          - Registry modification involving Java executing. The last write time on the registry key is the same time reflected in the jusched.log file. [HKLM-Admin/Software/JavaSoft/Java Update/Policy/JavaFX] One of the entries in the jusched.log file was "SetDefaultJavaFXUpdateSchedule: Frequency:16, Schedule: 3:52" and this occurred when the registry key was modified.

          - Folder activity involving the Java application. [C:/Program Files/Java/jre6/, C:/Documents and Settings/Administrator/Application Data/Sun/Java/Deployment/cache/, and C:/Documents and Settings/Administrator/Local Settings/Temp/hsperfdata_username]

     * Internet Activity

          - Web browser history of user accessing websites within the timeframe of interest. [Administrator user account accessed the computer -192.168.11.200- running Metasploit]

          - Files located in the Temporary Internet Files folder. [C:/Documents and Settings/Administrator/Local Settings/Temporary Internet Files/Content.IE5/]

Timeline View of Potential Artifacts

The images below show the above artifacts in a timeline of the file system from the Windows XP SP3 system with an administrative user account. The timeline includes the file system, registry, event logs, and Internet Explorer history entries.





References

Vulnerability Information

Mitre’s CVE http://cve.mitre.org/cgi-bin/cvename.cgi?name=2010-0094

NIST National Vulnerability Database http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2010-0094

Zero Day Initiative http://www.zerodayinitiative.com/advisories/ZDI-10-051/

Exploit Information

Metasploit Exploit Information http://www.metasploit.com/modules/exploit/multi/browser/java_rmi_connection_impl

Smile for the Camera

Sunday, March 6, 2011 Posted by Corey Harrell 2 comments
What's one of the new forensic artifacts a Kinect leaves on the Xbox 360 which may be beneficial to an investigation? Depending on the game or application using the Kinect, there could be photographic evidence, and this evidence could be used to determine the person using the Xbox, the other people in a room, or the state of a room over a period of time. The corporate environment doesn't deploy gaming systems to support the business, so I won't come across the Kinect's photographic evidence until the technology has a business use on Windows computers. The topic of this post is a little different than my usual content, but there's a Kinect in my house and I wanted to find the photos or videos created by any of the Kinect games.

What is the Kinect?

The Kinect is a peripheral for the Xbox 360 and, according to Microsoft, "controller-free gaming means full body play". The Kinect senses body movement, and this movement lets people interact with the Xbox whether it's playing a game or watching a movie. The Kinect was a Christmas present to my entire family and if you do your research on the games then it really does work as advertised. I spike volleyballs by jumping in the air, my teenager scores goals by kicking a soccer ball, and my three year old runs in place while jumping over hurdles as he races down the track. Gaming systems have come a long way since my days of playing Contra and Super Mario Brothers using a controller with two buttons and a directional pad.

The Wired article How Motion Detection Works in Xbox Kinect describes the Kinect technology including the camera that's a part of the hardware. There are a few games that make use of the camera for entertainment purposes by providing slideshows of everyone who played the games. Certain games even store the captured pictures so people can access them at a later time.

Accessing the Multimedia the Xbox Way

Kinect Adventures comes bundled with the Kinect and this is one of the games which take pictures during game play. Kinect Adventures stores the pictures on the Xbox's hard drive and people can view the photos at a later time. The game's menu is used to access any of the created photos as opposed to the Xbox menu. The photos can be uploaded to websites and services such as Kinectshare.com. I uploaded a few Kinect Adventures photos to Kinectshare. The image below shows which games support Kinectshare and as you can see the Kinect Adventures game has uploaded photos (yup, that's my mug on the camera).

The pictures can be uploaded to Facebook, printed, or downloaded using Kinectshare. This is a downloaded picture with one of my sons.

Accessing the Multimedia the Post-mortem Way

An investigation may have some issues trying to use the photos or videos uploaded to Kinectshare. The first issue is Kinectshare uses the Windows Live ID associated with the Xbox live gamertag which will make it harder to access the uploaded files since the site is password protected. The second issue is the files are automatically deleted after 14 days which limits the timeframe of when the files can be accessed. Both of these issues can be avoided by directly accessing the Kinect multimedia stored on the Xbox's hard drive.

I mentioned previously I don't examine Xboxes but I was interested in the gaming photos. This post isn't intended to cover how to perform Xbox 360 examinations. If anyone is looking for this type of information there's a book called Xbox 360 Forensics published by Syngress (I came across this book while writing this post).

Right off the bat I found out that FTK Imager and EnCase don't display the partitions on the Xbox hard drive. A few quick Google searches not only provided me with a program to browse the hard drive but the searches also explained the folder structure. The folder structure stores content in a global area that applies to all users, and content is also stored in each user account's profile. The global area is located at /partition3/content/0000000000000000/TITLEID/OFFERID/ while the content in the user profiles is located at /partition3/content/PROFILEID/TITLEID/OFFERID/. The PROFILEID is the ID of the user account, the TITLEID is the name of the game or application that created the folder, and the OFFERID is the type of content the folder stores. I used Kingla's Xbox 360 HDD Folder List website to determine the TITLEID and OFFERID. The picture below shows the global content for my Xbox and the Kinect games' folders are 4D5308ED (Kinect Adventures), 4D5308C9 (Kinect Sports), and 545607D3 (Dance Central).

The Kinect Adventures photos are located in the global folder 4D5308ED. There were two content folders with one for photos (OFFERID 000000001) and the other for videos (OFFERID 00090000). The videos folder didn't contain any videos of people playing the Kinect. However, there were numerous photos stored in the 000000001 folder as illustrated below.
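To keep the IDs straight, the folder layout described above can be sketched as a small parser. This is only an illustrative helper: the TITLEID mappings are the ones observed on my Xbox, and the function itself is an assumption about the path layout, not a documented API.

```python
# Rough sketch: split an Xbox 360 content path into its ID components.
# The TITLEID values below are the ones observed on my Xbox; treat any
# other mapping as unknown.
KNOWN_TITLE_IDS = {
    "4D5308ED": "Kinect Adventures",
    "4D5308C9": "Kinect Sports",
    "545607D3": "Dance Central",
}

def parse_content_path(path):
    """Return (profile_id, title_id, offer_id) from a content path."""
    parts = [p for p in path.strip("/").split("/") if p]
    # Expected layout: partition3/content/PROFILEID/TITLEID/OFFERID
    if len(parts) < 5 or parts[0] != "partition3" or parts[1] != "content":
        raise ValueError("not an Xbox 360 content path: %s" % path)
    return parts[2], parts[3], parts[4]

profile_id, title_id, offer_id = parse_content_path(
    "/partition3/content/0000000000000000/4D5308ED/000000001/")
print(profile_id)                     # 0000000000000000 (the global area)
print(KNOWN_TITLE_IDS.get(title_id))  # Kinect Adventures
print(offer_id)                       # 000000001
```

A PROFILEID of all zeros indicates the global content area rather than a user profile.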

The names of the files are based on the date and time of when they were created. That doesn't help much in my case since the Xbox's time was wrong. The files contain the Kinect Adventures photos as well as additional data. While examining the files I noticed data at some consistent file offsets.

          * File offset 5778: name of the game and the data was K•i•n•e•c•t• •A•d•v•e•n•t•u•r•e•s
          * File offset 5914: PNG image and the image was an icon
          * File offset 22298: Same PNG image of an icon
          * File offset 49152: file name and the data was M9_0_2005_11_22_7_9_38_784
          * File offset 53328: JPG image which is the Kinect photo
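The embedded file name at offset 49152 looks like it ends in year, month, day, hour, minute, second, and millisecond fields. The decoding below is a guess based on this single sample (the 2005 date is consistent with the Xbox's wrong clock), so treat it as an assumption rather than a documented format.

```python
# Illustrative guess at the embedded name's structure: a prefix followed
# by year_month_day_hour_minute_second_millisecond fields. This decoding
# is an assumption from one sample, not a documented format.
from datetime import datetime

def decode_name_timestamp(name):
    fields = name.split("_")
    year, month, day, hour, minute, sec, ms = (int(f) for f in fields[-7:])
    return datetime(year, month, day, hour, minute, sec, ms * 1000)

ts = decode_name_timestamp("M9_0_2005_11_22_7_9_38_784")
print(ts.isoformat())  # 2005-11-22T07:09:38.784000
```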

I used a hex editor to copy out all of the data for the JPG image. As illustrated below the start of the JPG image is at file offset 53328.

The JPG data was copied and saved as a new file with a jpg file extension. The image was the Kinect photo showing my three year old playing Kinect Adventures while my teenager waits on the couch.
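The hex editor copy-out can also be scripted. Below is a minimal, generic JPEG carver that looks for the start-of-image marker at or after a given offset; with one of my files the 53328 offset would be passed as the starting point. This is a sketch: the end-of-image bytes can also occur inside compressed image data, so a real carver would validate the result.

```python
# Minimal JPEG carver: find the start-of-image marker (FF D8 FF) at or
# after a starting offset and keep everything through the first
# end-of-image marker (FF D9) that follows it. Note: FF D9 can appear
# inside compressed data, so a real carver validates the carved image.
JPEG_SOI = b"\xff\xd8\xff"
JPEG_EOI = b"\xff\xd9"

def carve_jpeg(data, start=0):
    soi = data.find(JPEG_SOI, start)
    if soi == -1:
        return None
    eoi = data.find(JPEG_EOI, soi + len(JPEG_SOI))
    if eoi == -1:
        return None
    return data[soi:eoi + len(JPEG_EOI)]

# Synthetic demo: a fake container with padding before a JPEG-like blob.
container = (b"\x00" * 64 + JPEG_SOI + b"\xe0fake image bytes"
             + JPEG_EOI + b"\x00" * 8)
photo = carve_jpeg(container)
print(photo[:3] == JPEG_SOI, photo[-2:] == JPEG_EOI)  # True True
```

Against a real Kinect Adventures file you would read the file's bytes and call carve_jpeg(data, start=53328), then write the returned bytes out with a .jpg extension.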

What's Next

Only certain games or applications create videos or photos with the Kinect. Kinect Adventures is one of the games that do, and this game comes bundled with the Kinect. As I said before, this technology hasn't reached the corporate environment yet but I think it's only a matter of time before it does. A quick Google search provides a ton of hits showing how various people have adapted the Kinect technology for other uses, including controlling a Windows 7 computer. Winrumors.com posted that Microsoft is going to release its own Windows-based Kinect SDK in the spring, amid a growing community of "Kinect hackers". This could be the beginning of this technology extending beyond gaming and research to serve other purposes more suitable for the corporate environment. Time will tell what new forensic artifacts this technology will bring and how beneficial the artifacts are to an investigation.

Finally... Timeline Analysis Links

Tuesday, February 22, 2011 Posted by Corey Harrell 0 comments
Finally

As I’m writing the first paragraph of a paper for my Master of Science program, the only thought that keeps running through my mind is finally. I finally reached not only the last week of class but the last week of my master’s program. In a few days I will finally complete the MSIA program when I submit my paper, and my experience (not the knowledge) will gradually become a distant memory.

The second thought to run through my mind was everything on my to-do list. The list has been piling up over the months, and one of the more recent items on it is addressing the lack of blog posts over the past few weeks. This will hopefully change once I’m done with school, and a few future posts will cover some of the things I’m looking at, including Java vulnerability exploit artifacts, my introduction to log analysis, and possibly a new crime scene camera that people are putting into their homes.

In the meantime here are a few links about timelines.

Timeline Analysis Links

Kristinn has an excellent post about analyzing timelines which can be found here. I previously blogged about reviewing timelines with Excel (post is here) and Calc (post is here). I created the timelines using mactime and redirected the output to a csv file which I then imported into Excel. Kristinn approaches analyzing timelines with Excel a different way. Kristinn mentioned that filtering is not optimal with mactime and Excel, so he uses the CSV output module in log2timeline to create the timeline. One of the limitations I found with Excel was the limit on the number of variables you can filter on using basic filters (Calc had a higher limit but it was still only eight variables). This was one of the reasons I looked into using advanced filters. Kristinn's approach is really interesting since the CSV module breaks up the description field, which makes it easier to filter on using basic filters. His write-up is very informative and educational. Trying out this approach has been added to my to-do list. Kristinn, thanks again for the write-up and sharing this information.
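As a rough illustration of the filter-variable limitation, a few lines of scripting avoid any cap entirely. The sketch below assumes a mactime-style CSV with a "File Name" column; the rows and keywords are made-up stand-ins, not output from a real timeline.

```python
# Filter timeline rows matching any of an arbitrary list of keywords --
# no eight-variable cap like a spreadsheet's basic filter. The rows
# below are made-up stand-ins for mactime CSV output.
import csv
import io

timeline_csv = io.StringIO(
    "Date,Size,Type,Mode,UID,GID,Meta,File Name\n"
    "Sun Oct 31 2010 21:49:25,23996,m...,r/rrwxrwxrwx,0,0,1181,/Temp/jar_cache.tmp\n"
    "Sun Oct 31 2010 21:49:26,1024,.a..,r/rrwxrwxrwx,0,0,1182,/WINDOWS/Prefetch/JAVA.EXE-0C263507.pf\n"
    "Mon Nov 01 2010 09:00:00,512,m...,r/rrwxrwxrwx,0,0,1183,/Documents/notes.txt\n"
)

keywords = ["jar_cache", "Prefetch", "Temporary Internet Files"]
hits = [row for row in csv.DictReader(timeline_csv)
        if any(k.lower() in row["File Name"].lower() for k in keywords)]
print(len(hits))  # 2
```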

One of the downsides to being a state public sector employee (especially for New York State) is the lack of funds to attend trainings and conferences. This is the main reason why I like when speakers share their conference presentation slides, since it lets people who couldn’t attend the conference (aka me) see some of the presented material. Mandiant posted their DoD Cyber Crime 2011 presentations and one of them was Rob Lee’s Super Timeline Analysis presentation. My biggest takeaway from Rob's slides was his research on the Windows time rules (I was already familiar with the other content in the slides since I read Rob's posts on the SANS forensic blog about supertimelines and volume shadow copy timelines). The Windows time rules (slides 15 and 16) outline how the timestamps in the Standard Information Attribute and Filename Attribute are changed by actions taken against a file. For example, you can see the difference between the changes to a file's timestamps when it is moved locally as compared to being moved to another volume. The charts are a great reference, and thank you Rob for sharing this information.

(Almost) Cooked Up Some Java

Monday, February 7, 2011 Posted by Corey Harrell 4 comments
I was working on a computer a few weeks ago (non-work related issue) when my antivirus scanner flagged two files. The names of the two files were 2371d6c6-2a6da032.idx and 2371d6c6-2a6da032; both files were located in the Sun Java cache folder. The first time I came across this type of artifact I mentioned it in the jIIr post Anatomy of Drive-by Part II. This time around things were different because I didn’t have to identify the files since the antivirus software marked them as containing the CVE-2010-0094 exploit. I thought this known Java exploit was a good candidate for a sample to practice on. Not only could the sample be used to learn how to analyze Java exploits with REMnux but it could also be used to try out a few recipes from the Malware Analyst’s Cookbook. This post is the examination of the 2371d6c6-2a6da032.idx and 2371d6c6-2a6da032 files which consists of the following:

        * Understand the Java Cache folder
        * Examine the IDX File
        * Examine the JAR File
        * Extract Java Source from the JAR File
        * Examine Java Source

Understand the Java Cache folder

The 2371d6c6-2a6da032.idx and 2371d6c6-2a6da032 files were located in the following folder: Users\\AppData\LocalLow\Sun\Java\Deployment\cache\6.0\6\. This folder is the default location where Java stores temporary files on the computer so the files can be executed faster in the future. The picture below highlights where the temporary file location can be changed from its default value in the Java Control Panel. Note: the picture was taken from a Windows XP system but the samples came from a Windows 7 system which is why the path shown below is different than the one I mentioned.

Examine the IDX File

The storage location of 2371d6c6-2a6da032.idx and 2371d6c6-2a6da032 indicates they are temporary files. The files in the cache folder with an extension of IDX are Java applet cache indexes. The index tracks information about the temporary files in the cache folder such as: the file’s name, the URL the file came from, the IP address of the computer the file came from, the last modified date of the file, and what appears to be the date of when the file was downloaded. The picture below shows this information stored in an index file I grabbed from my Java cache folder. Note: Skillport is a legitimate website so the information below is not malicious.

The temporary files’ indexes can be viewed using the View button in the Java Control Panel (the button is to the right of the Settings button in the picture of the Java Control Panel above). I verified this by comparing the contents of an IDX file on my computer with the information in the Java Control Panel viewer. The picture below highlights the relationship between the values in the IDX file and the viewer.

As can be seen in the picture above, the Java Cache Viewer doesn’t provide all of the information that’s available in the index file. For example, the last modified date only shows the date while the index file also contains the time. Another example is the Java Cache Viewer not showing the IP address of the computer where the file came from even though the address is present in the index.

The 2371d6c6-2a6da032.idx file is the index for the 2371d6c6-2a6da032 file and this index provides valuable information about where the file came from. The picture below shows the information in the 2371d6c6-2a6da032 file’s index (the file was viewed using the vi text editor in REMnux).

The index file contains some valuable information about the JAR file and some of that information is listed below.

        * Filename: 7909df6ac8d.jar
        * URL where file came from: hxxp://partersl(dot)com/new/2fcf33c783
        * File size: 23996
        * File's last modified date: Wed Oct 27, 2010 14:44:55 GMT
        * IP address of the computer where file came from: 91.213.217.35
        * Web software involved: Apache
        * Deploy request content type: application/ java archive
        * Date when file was downloaded: Sun Oct 31, 2010 21:49:25 GMT

Side note: I conducted a few tests on the download date. I'm not sure if any actions alter this date, but during my testing the date didn’t change when the same file was accessed on a server at a later time.

Examine the JAR File

The 2371d6c6-2a6da032.idx file provided some interesting information about the 2371d6c6-2a6da032 file. Further research could be done on some of this information, such as the IP address or domain names; the Malware Analyst’s Cookbook has a few recipes for this type of research. The index file indicated that the 2371d6c6-2a6da032 file was an application/java archive, but there wasn't any other information about what the file did. A closer examination was needed to find out the file’s purpose.

JAR files are packaged with the ZIP file format; this means JAR files can be used for ZIP tasks such as archiving files, and it means the 2371d6c6-2a6da032 JAR file is an archive. I wanted to become more familiar with the JD-GUI program, so I downloaded it to REMnux in order to view the contents of the JAR file. The picture below shows the contents of the 2371d6c6-2a6da032 file (I added the .jar extension when I had an issue with JD-GUI not seeing the file).

A JAR contains a manifest which “is a special file that can contain information about the files packaged in a JAR file”. The information contained in the manifest enables the JAR file to be used for multiple purposes. This also means the manifest can help determine the purpose of an unknown JAR file. The picture below shows the 2371d6c6-2a6da032 file’s manifest.

The 2371d6c6-2a6da032.idx file indicated this JAR was an application, and if an application is bundled in a JAR file then there needs to be a way to indicate which class file within the JAR is the application’s entry point. The entry point is identified using the Main-Class header, and the picture above shows the main class that serves as the starting point for the bundled application. This information established the starting point for examining the Java code in the eight class files.
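Since a JAR is just a ZIP archive, listing its entries and pulling the Main-Class header out of the manifest can both be done with Python's standard zipfile module. The tiny JAR built below is a made-up fixture (the Exploit class name mirrors one seen in the earlier CVE-2010-0094 post); it's a sketch of the technique, not the actual file I examined.

```python
# A JAR is a ZIP archive: build a tiny stand-in JAR in memory, list its
# entries, and read the Main-Class header from META-INF/MANIFEST.MF.
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as jar:
    jar.writestr("META-INF/MANIFEST.MF",
                 "Manifest-Version: 1.0\r\nMain-Class: Exploit\r\n\r\n")
    jar.writestr("Exploit.class", b"\xca\xfe\xba\xbe")  # class-file magic

with zipfile.ZipFile(buf) as jar:
    names = jar.namelist()
    manifest = jar.read("META-INF/MANIFEST.MF").decode()

# The Main-Class header names the class file that is the entry point.
main_class = next(line.split(":", 1)[1].strip()
                  for line in manifest.splitlines()
                  if line.startswith("Main-Class:"))
print(names)       # ['META-INF/MANIFEST.MF', 'Exploit.class']
print(main_class)  # Exploit
```

Against a real sample you would open the cached file directly with zipfile.ZipFile(path) instead of building the in-memory fixture.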

Extract Java Source from the JAR File

The SANS Internet Storm Center posted an entry -Java Exploits- and this post discussed how Java exploits can be analyzed. To examine the class files, a Java decompiler is required in order to extract the Java source from the files. I wanted to become familiar with the jad decompiler in REMnux, so I attempted to extract the source code using the method outlined in the post Java Exploits.

The class files were unzipped from the JAR.

Jad was then used to extract the source code, but the following error was encountered.

I looked into the error and came across a forum that stated jad has issues handling Java 5.0. To get around the issue I used JD-GUI to extract the source code.

Examine Java Source Code

This post is called (Almost) Cooked Up Some Java because I wasn't successful in examining the Java source code. I wanted to complete recipe 6-2 in the Malware Analyst’s Cookbook to see if the exploit could be identified in the Java code. This would have completed the examination of the 2371d6c6-2a6da032.idx and 2371d6c6-2a6da032 files. However, the exploit wasn't identified because of an error running the source code. I received the error when running the code through jsunpack-n and SpiderMonkey (REMnux has both programs). The screenshots of the errors are shown below.


The error referenced a missing semicolon, but a review of the code showed there wasn't a semicolon missing. I even ran the Java source code through Wepawet to see if the result would be different, but the site doesn't show whether an error occurred similar to jsunpack-n. I reached out for help on this issue and someone helped identify the lines in the code causing the errors. The lines had string variables with values containing numbers. When the numbers were removed, the error disappeared. However, removing the numbers wasn’t a solution to the error, so this meant I couldn’t examine the Java source code.

               ******** Update ********

A reader responded and pointed out the errors were due to the programs I was using: SpiderMonkey and jsunpack-n are used to analyze JavaScript, not Java code. Thank you again to the reader who took the time to contact me.

I was hoping someone would see what I was doing wrong because I still wanted to know how to examine the Java code in order to locate the exploit and its payload. If there are any more updates to this post in the future, I'll put them at the end of the post after the summary.


               ******** Update ********

At this point I thought the next step could be to conduct a few searches using keywords from the JAR file. I performed a few quick searches using different combinations of the names of the class files in the JAR file. I wasn’t able to find the same 2371d6c6-2a6da032 file but I found other files that had similar class file names. A few of the search hits are listed below.

        * Oct 7, 2010: JAR file was run through ThreatExpert and there were no detections
        * Oct 29, 2010: JAR file was run through ThreatExpert and there were no detections
        * Oct 31, 2010: 2371d6c6-2a6da032 file being examined was downloaded
        * Dec 06, 2010: JAR file was run through ThreatExpert and there was one detection
        * Dec 23, 2010: Microsoft Malware Protection write-up on TrojanDownloader:Java/Rexec.C
        * Feb 03, 2011: 2371d6c6-2a6da032 file being examined was run through ThreatExpert and there were numerous detections

Summary

The Java cache folder is one location on a system where there could be artifacts of a Java exploit. The folder’s location can be changed, but the default location for Windows 7 is \\AppData\LocalLow\Sun\Java\Deployment\cache\6.0\6\ while for Windows XP it is \\Application Data\Sun\Java\Deployment\cache\6.0.

Each temporary file downloaded into the Java cache folder results in two files being present. One file is the actual temporary file while the second is the Java applet cache index for the temporary file. The index tracks information about the temporary file such as: the file’s name, the URL the file came from, and the file’s last modified date. The temp file and its index can provide valuable information about the file’s purpose and where it came from.
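The pairing can be automated with a short sketch that matches each .idx file to its data file. The throwaway fixture directory below is a stand-in so the example runs anywhere; in practice you would point cache_dir at a real .../Sun/Java/Deployment/cache/6.0/<n>/ folder.

```python
# Sketch: pair each Java applet cache index (.idx) with its data file.
# A temporary fixture directory keeps the example self-contained.
import os
import tempfile

def cache_pairs(cache_dir):
    pairs = []
    for name in sorted(os.listdir(cache_dir)):
        if name.endswith(".idx"):
            data_name = name[:-len(".idx")]
            data_path = os.path.join(cache_dir, data_name)
            pairs.append((name, data_name if os.path.exists(data_path) else None))
    return pairs

# Fixture mimicking the pair examined in this post.
cache_dir = tempfile.mkdtemp()
for fname in ("2371d6c6-2a6da032.idx", "2371d6c6-2a6da032"):
    open(os.path.join(cache_dir, fname), "wb").close()

print(cache_pairs(cache_dir))  # [('2371d6c6-2a6da032.idx', '2371d6c6-2a6da032')]
```

A None in the second position would flag an orphaned index whose data file has been deleted, which is itself worth noting in an examination.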

Now back to my examination. I was unable to analyze the Java code in the JAR file and the few quick Google searches I performed didn’t provide a good hit (around the time of 10/31/2010) to confirm my suspicions about the file. However, I was able to quickly confirm my suspicions using the information from the index file. A quick Google search of the IP address 91.213.217.35 resulted in a hit for the Malc0de database. The Malc0de database entry is shown below.

Notice the first entry was on 11/01/2010 which was one day after the JAR file was downloaded to the system. The database entry contained the IP address and domain that was tracked in the 2371d6c6-2a6da032.idx file.

For anyone interested here is the VirusTotal report about the 2371d6c6-2a6da032 file. The report analyzed the file three months after the file was downloaded to the system I was looking at.

Forget The Beer I Will Take Wine

Thursday, January 27, 2011 Posted by Corey Harrell 4 comments
Wine is a program that lets Windows software run on other operating systems. This means Wine can be used to run Windows only forensic or malware analysis tools on the Sift workstation and REMnux. It’s so easy to get Wine up and running I wasn’t even sure if a blog post was needed. However, it never hurts to be informed. Here's a quick post on installing Wine and running Windows tools on Sift and REMnux.

Install Wine

The Sift v2 workstation and REMnux v2 both were built using Ubuntu Linux. The Wine website shows the different options for installing Wine on Ubuntu including using repositories, the GUI, or the command line. All of these options require the Sift and REMnux to have Internet access. I used the command line option since it only involved running the following commands:

          * sudo add-apt-repository ppa:ubuntu-wine/ppa
          * sudo apt-get update
          * sudo apt-get install wine1.3
               - Enter Yes to proceed with the installation

That’s right, just three commands to install Wine. The next few pictures show Wine being installed on the Sift workstation.






Running Windows Programs on the Sift and REMnux

Wine can be used to run standalone Windows programs or programs that require an installation process. I wanted Wine so I could run a few standalone Windows programs so this post won’t cover installing a program in Wine (the Wine website has information on this topic). To run a standalone Windows program the program needs to be launched with Wine. Most of the programs I’ve tested run without any issues but a couple programs required some tinkering. The pictures below show Windows programs running on the Sift and REMnux.

First up is Nirsoft’s IEHistoryView running on Sift.


Next is McAfee’s BinText running on REMnux.


Here is PEID running on REMnux.


As I mentioned before, not all of the Windows programs will run without any issues. For example, Digital Detective’s Dcode program fails to run because of a missing dll. This is shown below with the missing dll highlighted in the red box.


A quick search on a Windows system locates the msvbvm60.dll in the Windows\System32 folder (this search was done on a Windows XP system). To fix the missing dll error, just copy the msvbvm60.dll from a Windows system to Wine’s Windows\System32 folder as shown below.


Now here is the picture of Dcode running on the Sift. Some messages appear while Dcode runs so testing has to be done to make sure the program still converts all of the dates properly.



REMnux and Sift are great distributions since they come preconfigured with some of the tools I use. My main platform is Windows so REMnux and Sift save me a lot of time because I don’t have to setup my own Linux environments. At times I find myself switching between Windows and Linux to run certain tools. Wine gives me the option of bringing a few Windows tools over to the Linux so I won’t have to switch between the two operating systems as much.



Forensicator Readiness

Sunday, January 23, 2011 Posted by Corey Harrell 0 comments
I attended a lot of trainings as a communications technician in the Marines. There were formal trainings by outside parties, organized trainings by my unit, and formal education on my own to hone my troubleshooting skills. The goal of all of those trainings was to prepare me to perform my job regardless of the situation I might face. Even though I left the Marines, I carried the same mentality over to my career in information security, especially digital forensics. I use a mixture of formal education, paid training, and a lot of self-training to ensure I'm capable of performing my job regardless of the situation I might encounter. However, outside of forensic challenges or forensic datasets, I never had an organized way to approach self-training until I wanted to learn about incident response investigations. This post will explain the approach I've been using but haven't documented until now.

Forensicator readiness is the method I've been using to help prepare myself to investigate any situation regardless of the circumstances. If someone says "I need you to help figure out what caused this incident," I want to be able to hit the ground running. This is better than having to reply with "I'd like to help out but first I have to attend a training which, by the way, costs a few thousand dollars." I'm sharing my approach because I think it might be useful to others. Experienced analysts/examiners can use it to learn how to investigate new types of cases, or students can use it to help prepare for the common cases they might face. Forensicator readiness consists of the following six steps:

          * Pick a Scenario
          * Establish the Scenario’s Scope
          * Collect Digital Information
          * Examine Digital Information
          * Scenario Simulation
          * Identify Areas for Improvement

Ever since I read the Alexiou Principle (as described by Chris Pogue here and here) I've been using it in my cases. The principle has been helpful in planning out the investigation and keeping the investigation on track. I thought if it works for actual cases then it should work for simulations. Well, the principle does work in simulations so I use it in the forensicator readiness steps.

Pick a Scenario

There is always something to learn in digital forensics whether it’s a student studying the field in school or a forensicator who has been in the field for a number of years. It could be trying to understand how to investigate different types of cases, how to examine new data, or how to extract data from certain devices. The first step is to pick a scenario containing the item or situation of what is to be learned. The scenario could be based on the common types of cases processed in your organization or on a potential situation someone might need to investigate. A few scenario examples are: data leakage through USB device, sexual harassment involving company email, or a malware infected server.

Next a determination needs to be made to see if it’s possible to set up test systems to simulate the scenario. Research is completed in the collection, examination, and simulation steps and systems will be needed to run a few tests on. For example, I’d like to learn how to investigate a hacked database server. Unfortunately, I can’t use this scenario since I can’t simulate a SQL injection attack against a test database server. As a result, my focus is on the scenarios I can simulate in a test environment such as a malware infected system.

Having a scenario by itself isn't enough because an end goal hasn't been established: what are you trying to accomplish? This is where the first question of the Alexiou principle comes into play: "what question are you trying to answer?" Identify a few potential questions to help guide the goal of the investigation. For example, the two questions I've been using for a suspected malware infected system are: is the system infected, and how did the system become infected? These two questions identify what I'm trying to accomplish, and they guided my research into investigating a malware infected system.

Establish the Scenario’s Scope

The selected scenario has an end goal and can be replicated in a test environment. The next step is to determine the scope of that environment. Is it going to be one computer or multiple computers? What operating systems will the computers run? Will there be any networking devices such as routers, switches, or firewalls? Another consideration is what resources are available: is the necessary hardware and software on hand to build the test environment? I'd like to simulate an environment of over 20 machines for my malware scenario, but I can't pull it off due to a lack of resources. I had to settle on just a few test systems, and I'm still able to simulate my scenario. These questions are only some of the things to consider when scoping the test environment.

Setting up the test environment during the next few steps may take some time. Despite the time required, one of the benefits of setting up your own environment is learning about the technology as you install and configure it. For example, if your scenario requires a web server running IIS (Internet Information Services), then setting up IIS will provide a better understanding of what the default settings are and how it can be configured.

Collect Digital Information

At this point, the scenario has been identified, goals have been established, and the testing environment has been scoped. The next step is to collect the digital information. The second question in the Alexiou principle asks "what data do you need to answer that question?" The data sources in the test environment need to be evaluated to determine which ones can help answer your question(s). The data sources could include hard drives, memory, logs, or captured network traffic.

Once the data sources of interest are identified, it's time to research how they should be collected and what tools can be used. The amount of research required for this step depends on the experience of the person conducting the exercise. In some instances, a person will already have a procedure in place for collecting the data sources and will know which tools to use, so additional research may not be necessary. On the other hand, the person may be facing new data sources, so there won't be a collection procedure and the person won't know which tools to use. For example, in one of my scenarios I wanted to collect a hard drive and volatile data. I had experience with hard drives, but collecting volatile data was new. I conducted research with the intention of modifying my collection procedures to include volatile data. The research involved reviewing RFC 3227, forensic books, blogs, and forums to determine what procedural steps were required to collect volatile data and what tools could acquire it from a system.

The new procedural steps and tools need to be evaluated to determine if they work as intended. This evaluation requires a small test environment. Continuing with my volatile data example, I tested the procedural steps I researched and my list of tools to see which ones best met my needs. I ran a few tests against Windows XP virtual machines by acquiring the volatile data from them. This not only showed me whether the steps were correct and which tool worked best, but also what changes I had to make to the collection steps.
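One small piece of a collection procedure can even be scripted. The sketch below is only an illustration, not part of my actual procedure (the directory and file names are hypothetical): it hashes every output file produced during a collection and records the SHA-256 values in a manifest, so the integrity of the collected data can be verified later.

```python
import hashlib
import os

def write_manifest(evidence_dir, manifest_path):
    """Hash every collected output file and record it in a manifest."""
    with open(manifest_path, "w") as manifest:
        for name in sorted(os.listdir(evidence_dir)):
            path = os.path.join(evidence_dir, name)
            if not os.path.isfile(path):
                continue
            # Hash in chunks so large evidence files don't exhaust memory
            sha256 = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    sha256.update(chunk)
            manifest.write("%s  %s\n" % (sha256.hexdigest(), name))
```

Recording hashes at collection time is a cheap way to demonstrate later that the collected data hasn't changed between collection and examination.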

Examine Digital Information

At this point it's time to identify and extract the data required to answer the scenario questions. The Alexiou principle's second question, "what data do you need to answer that question?", can further narrow down the data needed. The data sources have already been identified, so the next part is to identify what information in those sources can answer the questions. For example, one of my data sources was volatile data, so I had to figure out what information I needed from it: the running processes, established network connections, and loaded drivers, among other things. Once the exact information in the data sources is identified, the third Alexiou question comes into play: "how do you extract the data?" Using the volatile data example, this means determining how to extract the established network connections, loaded drivers, and running processes from the collected data.
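To make the extraction idea concrete, here is a minimal sketch. It assumes the network connections were captured as `netstat -ano` output during collection (the column layout below matches that format); it pulls the established TCP connections out of the saved text:

```python
def established_connections(netstat_output):
    """Pull the ESTABLISHED TCP connections out of saved
    `netstat -ano` text, returning (local, remote, pid) tuples."""
    connections = []
    for line in netstat_output.splitlines():
        fields = line.split()
        # Data rows have five columns: Proto, Local, Foreign, State, PID
        if len(fields) == 5 and fields[0] == "TCP" and fields[3] == "ESTABLISHED":
            connections.append((fields[1], fields[2], fields[4]))
    return connections
```

The PID in each tuple is what ties an established connection back to a running process in the collected process listing.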

As might be expected, research has to be conducted to decide what information in the data sources is needed to answer the questions and how that information can be extracted. The same types of references used in the collection step apply here: blogs, forums, and forensic books.

Similar to the collection step, the new examination steps and tools need to be evaluated to see if they work as intended. The evaluation can use available forensic datasets or a small test environment. Forensic datasets can be used for testing different types of data sources, and this is a faster option than setting up a test environment. The datasets available for testing the examination steps and tools in the volatile data example include: the NIST CFReDS Project, Forensic Educational Datasets, Honeynet Challenges, and the memory images on the Forensic Incident Response blog. If there isn't an available dataset, then a small test environment has to be set up. The fourth question of the Alexiou principle is "what does the data tell you?" Keep this question in mind during the evaluation, because the purpose of the examination steps and tools is to extract the information needed to answer the scenario questions. If the information doesn't help answer the questions, additional research may have to be performed so the examination steps and tools can be adjusted.
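Answering "what does the data tell you" often comes down to comparing the extracted information against what is expected. As a toy illustration (the process names below are made up), a known-good baseline can be diffed against the process list pulled from a suspect system to surface candidates for "is the system infected":

```python
def unexpected_processes(baseline, extracted):
    """Return process names seen on the suspect system but absent
    from the known-good baseline (case-insensitive)."""
    known = {name.lower() for name in baseline}
    return sorted({name for name in extracted if name.lower() not in known})
```

Anything this flags still has to be researched by hand; an unexpected name isn't proof of malware, only a lead.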

Scenario Simulation

This step is where all of the hard work of researching and evaluating pays off. The scenario simulation is when the test environment is created and the scenario is played out in that environment. The first scenario I've been working with is a computer infected with malware, and one of the ways I simulated it was by visiting known malicious websites with a computer running vulnerable software. After the scenario is simulated, the next step is to treat the test environment like a real investigation: the data sources of interest get collected, and information is extracted from those sources to answer the scenario's questions.

Identify Areas for Improvement

Now that the dust has settled from investigating the scenario in a test environment, it's time to reflect on what was done. The purpose of this step is to see if there is anything to improve upon. A few things to consider: did the tools perform as expected, were the procedures correct, what didn't work, and what could be done better? Also decide whether additional research has to be performed on any artifacts in order to better understand them. During my simulation, I didn't have a good understanding of the attack vector artifacts, such as those left by exploits. I spent some time researching a few of these artifacts so I'd have a better understanding the next time I come across something similar.

Summary

There isn't a set timetable for completing the forensicator readiness steps. It could take days, weeks, or months. The time all depends on the scenario and how deep an understanding someone wants. People prepare for things differently, and forensicator readiness is no different. If the steps accomplish the end goal of preparing someone to investigate an incident regardless of the circumstances - like they did for me - then the process has served its purpose.

So when someone said to me "I need your help to figure out what caused this infection", I was ready to rock and roll. I've been successful numerous times at locating malware on systems and identifying the attack vector that put it there. A few of the vectors were: a malicious email attachment, a drive-by download using a malicious PDF, and third-party content pointing to a website hosting Windows Help Center and Java exploits. I don't think my success is a run of luck. It's due to my preparation for a situation I thought I would face sooner or later. It just so happened to be sooner than I was expecting.