Building Timelines – Thought Process Behind It

Saturday, September 17, 2011 Posted by Corey Harrell 0 comments
Timelines are a valuable technique to have at your disposal when processing a case. They can reveal activity on a system that may not be readily apparent, or show the lack of certain activity, helping to rule theories out. Timelines can be used on cases ranging from human resource policy violations to financial investigations to malware infections to even auditing. Before the technique can be used one must first know how to build them.

This is the first post in my two-part series on building timelines. Part 1 discusses the thought process behind building timelines while Part 2 demonstrates different tools and methods to build them.

Things to Consider

There are two well known approaches to building timelines. On the one hand is the minimalist approach; only include the exact data needed. On the other hand is the kitchen sink approach; include all data that is available. My approach falls somewhere in the middle. I put the data I definitely need into timelines and some data I think I may need. The things I take into consideration when selecting data are:

        - Examination’s Purpose
        - Identify Data Needed
        - Understand Tools’ Capabilities
        - Tailor Data List to System

Examination’s Purpose

The first thing I consider when building timelines is the examination’s purpose. Every case should have a specific purpose or purposes the DF analyst needs to accomplish. For example: did an employee violate an acceptable usage policy, how was a system infected, how long was a web server compromised, or where are all the Word documents on a hard drive?

Identify Data Needed

The next area to consider is what data is needed to accomplish the purpose(s). This is where I make a judgment about the artifacts I think will contain relevant information and the artifacts that could contain information of interest. A few potential data sources and their artifacts are:

        - Hard drives: file system, web browsing history, registry hives, Windows shortcut files, firewall logs, restore points, volume shadow copies, prefetch files, email files, or Office documents

        - Memory: network connections, processes, loaded dlls, or loaded drivers

        - Network shares: email files (including archives), office documents, or PDFs

        - Network logs: firewall logs, IDS logs, proxy server logs, web server logs, print/file server logs, or authentication server logs

I take into account the case type and examination's purpose(s) when picking the artifacts I want. To illustrate the effect case type has on my choice I'll use a malware infected system and an Internet usage policy violation as examples. For the malware infected system I'd definitely be interested in the artifacts showing program execution, firewall logs, antivirus logs, and file system metadata. The additional items I'd throw into the timeline would be the user's web browsing history, removable media usage, and registry keys' last write times since those artifacts might show information about the initial infection vector and persistence mechanism. For an Internet usage policy violation I'd only include the file system metadata and web browsing history since my initial interest is limited to the person’s web browsing activities.

The examination purpose(s) will point to other artifacts of interest. Say the Internet usage policy violation's purpose was to determine whether an employee was surfing pornographic websites and whether they were saving pornographic images to a company-issued thumb drive. In addition to file system metadata and web history, I’d now want to include artifacts showing recent user activity such as Windows shortcut files or the userassist registry key.

I try to find a balance between the data I know I'll need and the data that may contain relevant information. I don't want to put everything into the timeline (kitchen sink approach) but I'm trying to avoid frequently adding more data to the timeline (minimalist approach). Finding a balance between the two lets me create one main timeline with the ability to create mini timelines using spreadsheet filters. Making the call about what data to select is not going to be perfect initially. Some data may not contain any information related to the examination while other left out data is going to be important. The important thing to remember is building timelines is a process. Data can be added or removed at later times which means thinking about data to incorporate into a timeline should occur continuously. This is especially true as more things are learned while processing the case.
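To make the filter idea concrete, here is a minimal Python sketch of carving a mini timeline out of a main one. The rows, column layout, and source names below are invented for illustration; real timeline output will differ by tool.

```python
# Hypothetical timeline rows (timestamp, source, description); real tool
# output will have different columns, but the filtering idea is the same.
timeline = [
    ("2011-09-17 09:02:11", "WEBHIST", "visited hxxp://bad.example/landing"),
    ("2011-09-17 09:02:15", "PREFETCH", "BAD.EXE first run"),
    ("2011-09-17 09:02:16", "MFT", "bad.exe created in user profile"),
    ("2011-09-17 09:03:40", "EVT", "event 7035 - service started"),
]

def mini_timeline(rows, source):
    """Emulate a spreadsheet column filter: keep rows from one artifact source."""
    return [row for row in rows if row[1] == source]

web_activity = mini_timeline(timeline, "WEBHIST")
```

The main timeline stays intact; each mini timeline is just a filtered view of it, the same way a spreadsheet filter works.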

Understand Tools’ Capabilities

After the examination’s purpose(s) are understood and the potential data required to accomplish them is identified, the next consideration is understanding my tools’ capabilities. Timeline tools provide different support for the artifacts they can parse. I review the items I want to put into my timeline against the artifacts supported by my tools to identify what in my list I can’t parse. If any items are not supported then I decide whether the item is really needed and whether a different tool will work. Another benefit to making this comparison is that it helps identify artifacts I might not have thought about. The picture below shows some artifacts supported by the tools I’ll discuss in the post Building Timelines – Tools Usage.


Some may be wondering why I don’t think about the tools’ capability before I consider the data I need to accomplish the examination’s purpose(s). My reason is because I don’t want to restrict myself to the capability provided by my tools. For example, none of my commercial tools are able to create the timelines I’m talking about. If I based my decision on how to accomplish what I need to do solely on my commercial tools then timelines wouldn’t even be an option. I’d rather first identify the data I want to examine then determine if my tools can parse it. This helps me see the shortcomings in my tools and lets me find other tools to get the job done.
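The comparison between the artifacts I want and what my tools can parse boils down to a set difference. A small Python sketch of both checks, using hypothetical artifact names:

```python
# Hypothetical artifact names; the point is the comparison, not the lists.
wanted = {"mft", "prefetch", "userassist", "ie_history", "kaspersky_av_log"}
tool_supported = {"mft", "prefetch", "userassist", "ie_history", "evt_logs",
                  "restore_points"}

# Items I want but my tools can't parse: find another tool or review manually.
unsupported = wanted - tool_supported

# Items my tools support that I hadn't considered: candidates to add.
overlooked = tool_supported - wanted
```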

Tailor Data List to System

At this point in the thought process potential data has been identified to put into a timeline. A timeline could be built now even though the artifact list is pretty broad. My preference is to tailor the list to the system under examination. To see what I mean I’ll discuss a common occurrence I encounter when building timelines, which is including a user account’s web browser history. Based on my tools' supported artifacts, the web browsing artifacts could be from: Google Chrome, Firefox 2, Firefox 3, Internet Explorer, Opera, or Safari. Is it really necessary to have my tools search for all these artifacts? If the system only has Internet Explorer (IE) installed then why spend time looking for the other items? If the same system has 12 loaded user profiles but the examination is only looking at one user account then why parse the IE history for all 12 user profiles? To minimize the time spent building timelines and reduce the amount of data in them, the artifact list needs to be tailored to the system. A few examination checks will be enough to narrow down the list. The exact checks will vary by case but one step that holds across all cases is obtaining information about the operating system (OS) and its configuration.
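One way to script the browser check might look like the following Python sketch. The mount point and artifact paths are hypothetical XP-era examples, not authoritative locations; tailor both to the image and operating system actually under examination.

```python
from pathlib import Path

# Hypothetical XP-era relative paths under a user profile; adjust to the
# image at hand before relying on them.
BROWSER_ARTIFACTS = {
    "Internet Explorer": "Local Settings/History/History.IE5/index.dat",
    "Firefox": "Application Data/Mozilla/Firefox/Profiles",
    "Chrome": "Local Settings/Application Data/Google/Chrome/User Data",
}

def browsers_present(profile_dir):
    """Return only the browsers whose artifacts actually exist in a profile,
    so the timeline tools aren't asked to parse history that isn't there."""
    profile = Path(profile_dir)
    return [name for name, rel in BROWSER_ARTIFACTS.items()
            if (profile / rel).exists()]
```

Running the function once per in-scope user profile trims the browser list down to what actually exists before any parsing starts.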

I previously discussed this examination step in the post Obtaining Information about the Operating System and it covers the three different information categories impacting the artifact list. The first category is the General Operating System Information and it shows the operating system version. The version will dictate whether certain artifacts are actually on the system since some are OS specific. The second category is the User Account Information which shows the user accounts (local accounts as well as accounts that logged on) associated with the system. When building a timeline it’s important to narrow the focus to the user accounts under examination; this is even more so on computers shared by multiple people. Identifying the user accounts can be done by confirming the account assigned to a person, looking at the user account names, or looking at when the user accounts were last used. The third and final category is the Software Information. The category shows information about programs installed and executed on the system. The software on a system will dictate what artifacts are present. A quick review of the artifacts supported by my tools (picture above) shows how many are associated with specific applications. This one examination step can take a broad list and make it more focused to the environment where the artifacts are coming from.

Select Data for the Timeline

I reflect on the things I considered when coming up with a plan for how to build the timeline. The examination's purpose outlined what I need to accomplish, potential data I want to examine was identified, my tools' capabilities were reviewed to see what artifacts can be parsed, and then checks were made to tailor the artifact list to the system I’m looking at. The list I’m left with afterwards is what gets incorporated into my first timeline. Working my way through this thought process reduces the number of artifacts going into a timeline, thus reducing the amount of data I’ll need to weed through.

Thought Process Example

The thought process I described may appear to be pretty extensive but that is really not the case. The length is because I wanted to do a good job explaining it since I feel it’s important. The process only takes a little time to complete and most of it is already done when processing a case. Follow along with a DF analyst on a hypothetical case to see how the thought process works in coming up with a plan to build a timeline. Please note, the case only mentions a few artifacts to get my point across but an actual case may use more.

Friend: “Damn … Some program keeps saying I’m infected and won’t go away. Let me call the DF analyst since he does something with computers for a living. He can fix it.”

Phone rings and DF analyst picks up

Friend: “DF analyst … Some program keeps saying I’m infected with viruses and blocks me from doing anything.”

DF analyst: “Do you have any security programs installed such as antivirus software, and if so is that what you’re seeing?”

Friend: “I think I have Norton installed but I’ve never seen this program before. Wait … hold on … Oh man, now pornographic sites are popping up on my screen.”

DF analyst: “Yup, sounds like you’re infected.”

Friend: “I know I’m infected. That’s what I told you this program has been telling me.”

DF analyst: “Umm .. The program saying you are infected is actually the virus.”

Friend: “Hmmmm….”

DF analyst: “Just power down the computer and I’ll take a look at it later today.”

Computer powering down

DF analyst: “When did you start noticing the program?”

Friend: “Today when I was using the computer.”

DF analyst: “What were you doing?”

Friend: “Stuff… Surfing the web, checking email, and working on some documents. I really need my computer. Can you just get rid of the virus and let me know if my wife or kids did this to my computer?”

Later that day

DF analyst has the system back in the lab. He thinks about what he needs to do, which is to remove the malware from the system and determine how it got there. The potential data list he came up with to accomplish those tasks was: known malware files, the system’s autostart locations, programs executed (prefetch, userassist, and muicache), file system metadata, registry hives, event logs, web browser history, AV logs, and restore points/volume shadow copies.

Wanting to know what launches when his friend logs onto the computer, the DF analyst uses the Sysinternals Autoruns utility in offline mode to find out. Sitting in one run key was an executable with a folder path to his friend’s user profile. A Google search using the file’s MD5 hash confirmed the file was malicious and his friend’s system was infected. DF analyst decided to leverage a timeline to see what else was dropped onto the system and what caused it to get dropped in the first place.

DF analyst pulls out his reference showing the various artifacts supported by his timeline tools. He confirms that all the potential data he identified is supported. Then he moves on to his first examination step, which is examining the hard drive’s layout. Two partitions: one is the Dell recovery partition formatted with FAT32 while the other holds the operating system and is formatted with NTFS. The DF analyst just added NTFS artifacts ($MFT) to his potential data list. To get a better idea about the system he uses RegRipper to rip out the general operating system information. Things he learned from the RegRipper reports and the decisions he made based on the information:

         - OS version is XP (restore points are in play while shadow copies are out. Need to parse event logs with evt file extensions)

        - Three user accounts were used in the past week (initial focus for certain artifacts will be from friend’s user account since malware was located there. The two other user accounts may be analyzed depending on what the file system metadata shows)

        - Internet Explorer was only web browser installed (all other web browser artifacts won’t be parsed at this time)

        - Kaspersky antivirus software was installed (tools don’t support this log format. AV log will be reviewed and entries will be put into the timeline manually)

DF analyst performs a few other checks. The prefetch folder has files in it and his friend’s user account’s recycle bin has numerous files in it. Both were added to the timeline artifact list. The final list contains items from the system and one user account. The system data has: prefetch files, event logs (evt), system restore points, and the Master File Table. The artifacts from one user account are: the userassist registry key, muicache registry key, IE history, and the Recycle Bin contents. DF analyst is ready to build his timeline …. Stay tuned for the post "Building Timelines – Tools Usage" to see one possible way to do it.
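Tool usage is saved for Part 2, but the heart of the build step can be sketched in a few lines of Python: pull timestamped events out of each artifact source and merge them into one chronologically sorted list. The events below are made up for illustration, not output from any real tool.

```python
# Hypothetical events gathered from several artifact sources,
# as (timestamp, source, description) tuples.
events = [
    ("2011-09-17 14:01:05", "MFT", "bad.exe $STANDARD_INFO created"),
    ("2011-09-17 13:59:58", "IE_HISTORY", "visited hxxp://bad.example"),
    ("2011-09-17 14:00:02", "USERASSIST", "setup.exe executed"),
    ("2011-09-17 14:00:03", "PREFETCH", "SETUP.EXE-1A2B3C4D.pf created"),
]

# ISO-style timestamps sort correctly as plain strings, so sorting the
# tuples produces the merged timeline.
timeline = sorted(events)

for when, source, description in timeline:
    print(f"{when}  [{source}]  {description}")
```

Even this toy run shows why the technique works: the web visit, the program execution, and the file creation line up in sequence instead of sitting in separate artifact reports.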


I'd like to hear feedback about how others approach building timelines, especially if it's different from what I wrote. It's helpful to see how other analysts are building timelines.

Linkz 4 Advice

Monday, September 12, 2011 Posted by Corey Harrell 2 comments
There won’t be any links pointing to Dr. Phil, Dear Abby, or Aunt Cleo. Not that there’s anything wrong with that… They just don’t provide advice on a career in DFIR.

Getting Started in DFIR

Harlan put together the post Getting Started which contains great advice for people looking to get into DF. I think his advice even applies to folks already working in the field. DF is huge with a lot of areas for specialization. Harlan’s first tip was to pick something and start there. How true is that advice for us since we aren’t Abby from NCIS (a forensic expert in everything)? People have their expertise: Windows, Macs, cell phones, Linux, etc. but there is always room to expand our knowledge and skills. The best way to expand into other DF areas is to “pick something and start there”.

Another tip is to have a passion for the work we do. In Harlan’s words “in this industry, you can't sit back and wait for stuff to come to you...you have to go after it”. I completely agree with this statement and DF is not the field to get complacent in. There needs to be a drive deep down inside to continuously want to improve your knowledge and skills. For example, it would be easy to become complacent and maintain knowledge only about the Windows XP operating system if it’s the technology normally faced. However, that would be ignoring the fact that at some point in the near future encounters with Windows 7 boxes and non-Windows systems will be the norm. A passion for DF is needed to push yourself so you can learn and improve your skills on your own without someone (i.e. an employer) telling you what you should be doing.

I wanted to touch on those two tips but the entire post is well worth the read, regardless of whether you are looking to get into DF or have already arrived.

Speaking about a Passion

Little Mac over at the Forensicaliente blog shared his thoughts about needing a drive to succeed in DF. I’m not musically inclined but he uses a good analogy to explain what it takes to be successful. Check out his post Is Scottish Fiddle like Digital Forensics?.

Breaking into the Field

Lenny Zeltser discussed How to Get Into Digital Forensics or Security Incident Response on his blog last month. One issue facing people looking to break into the field is that organizations may not be willing to spend the time and resources to train a person new to the field. Lenny suggested people should leverage their current positions to acquire relevant DFIR skills.

Lenny’s advice doesn’t apply to how I broke into the field since DFIR was basically dropped into my lap when I was tasked with developing the DF capability for my organization. However, his advice is spot on for how I was able to land my first position in the information security field (which is what led me into DFIR). I was first exposed to security during my undergraduate studies when I took a few courses on the topic. It was intriguing but the reality was there weren’t a lot of security jobs in my area, which meant my destination was still IT operations. I continued down the track pushing me further into IT but I always kept my desire for security work in mind. After graduation I took a position in an IT shop where I had a range of responsibilities including networking and server administration. In this role, I wanted to learn how to secure the technology I was responsible for managing and what techniques to use to test security controls. This was due diligence as a system admin but it also allowed me to gain knowledge and some skills in the security field. In addition to operational security, I even tried to push an initiative to develop and establish an information security policy. Unfortunately, the initiative failed and it was my first lesson in how nothing will be successful without management’s support. All was not lost because the experience and my research taught me a lot about security being a process that supports the business. This is a key concept about security and up until that point my focus was on security's technical aspects.

I leveraged the position I was in to acquire knowledge and skills about my chosen field (security). My actions weren’t completely self-serving since my employer benefited from having someone to help secure their network. I didn’t realize how valuable it was to expand my knowledge and skills until my first security job interview. Going in I thought I lacked the skills and knowledge but over the course of the interview I realized I had a lot more to offer. I took the initiative to expand my skillset and it was an important factor in helping me land in the security field. My experience is very similar to Lenny’s advice except his post is about getting into the DFIR field.

Get a plan before going into the weeds

Rounding out the links providing sound guidance, Bill over at the Unchained Forensics blog gave some good advice in his recent post Explosions Explosions. He shared his thoughts on how he approaches examinations. One comment he made that I wanted to highlight was “more and more of my most efficient time is being used at the case planning stage”. He mentions how he thinks about his plan to tackle the case, including identifying potential data of interest, before he even starts his examination. I think it’s a great point to keep reinforcing for people new and old to DFIR.

I remember when I was new to the field. I had a newly established process and skillset but I lacked certain wisdom in how to approach cases. As expected, I went above and beyond in examining my first few cases. I even thought I was able to do some “cool stuff” the person requesting DF assistance would be interested in. There was one small issue I overlooked. The person was only interested in specific data’s content while I went beyond that, way beyond that. I wasted time and the cool stuff I thought I did was never even used. I learned two things from the experience. First was to make sure I understand what I’m being asked to do; even if it means asking follow-up questions or educating the requestor about DF. The second lesson was to think about what I’m going to do before I do it. What data do I need? What steps in my procedures should I complete? What procedural steps can be omitted? What’s my measure for success telling me when the examination is complete? Taking the time beforehand to gather your thoughts and develop a plan helps to keep the examination focused on the customer’s needs while limiting the “cool stuff” that’s not even needed.

Books On demand

If someone were to ask me what is the best training I have ever taken I know exactly what I would say: a book, a computer, Google, and time. That’s it, and the cost is pretty minimal since only the book needs to be purchased. I’m not knocking training courses but classes cannot compare to educating yourself through reading, researching, and testing. I never heard about Books24x7 until I started working for my current employer. Books24x7 is a virtual library providing access to “in-class books, book summaries, research reports and best practices”. The books in my subscription include topics on: security, DFIR, certification, business, programming, operating systems, networking, and databases. I can find the information I’m looking for by searching numerous books whether I’m researching, testing, or working. A quick search for DFIR books located: Malware Forensics: Investigating and Analyzing Malicious Code, Windows Registry Forensics: Advanced Digital Forensic Analysis of the Windows Registry, Windows Forensic Analysis Toolkit Second Edition, Malware Analyst's Cookbook: Tools and Techniques for Fighting Malicious Code, EnCase Computer Forensics: The Official EnCE: EnCase Certified Examiner Study Guide, and UNIX and Linux Forensic Analysis Toolkit. That’s only a few books from the pages and pages of search results for DFIR. Talk about a wealth of information at your fingertips.

The cost may be a little steep for an individual but it might be more reasonable for an organization. If an organization’s employees have a passion for their work and take the initiative to acquire new skills then Books24x7 could be an option as a training expense. Plus, it could save money from not having to purchase technical books for staff. Please note, I don’t benefit in any way by mentioning this service on my blog. I wanted to share the site since it’s been a valuable resource when I’m doing my job or self training to learn more about DFIR and security.

What’s a Timeline

Wednesday, September 7, 2011 Posted by Corey Harrell 3 comments
Timeline analysis is a great technique to determine the activity that occurred on a system at a certain point in time. The technique has been valuable for me on examinations ranging from human resource policy violations to financial investigations to malware infections. Here is an analogy I came up with to explain what timelines are.

Not Even Close To a Timeline

The picture below shows how data looks on a hard drive using the operating system. It does a decent job if you are using the computer but the method doesn’t work for a forensic examination. There’s a lot of missing data such as: file system artifacts, hidden files/folders, and the metadata stored in files/folders.


In technical books, cabinets are used to explain how hard drives function since they store items similar to how drives store data. Using the operating system to view data on a hard drive is the equivalent of looking at the cabinet as pictured below. You are unable to see what lies beneath.


Getting Closer To a Timeline

The picture below shows how data on a hard drive looks using a digital forensic tool. The tool does a better job than the operating system since it displays a lot more data. File system artifacts, hidden files/folders, and file system metadata can now be examined. However, the tool does not readily show some data such as the metadata stored inside of files. The picture highlights the need for additional steps to extract the data inside prefetch files.


The cabinet’s contents can now be seen since the doors are opened. There are containers, pots, and pans. However, additional steps need to be taken to determine what is inside those items, just like more steps are required in EnCase to see prefetch files’ metadata.


This is What I’m Talking About

The picture below shows how data looks on a hard drive using a timeline. It might not look as pretty as a Graphical User Interface but it provides so much more data. The timeline section shown contains: both timestamps from the Master File Table (MFT), data stored in prefetch files, events from an event log, and registry keys.


The opened cabinet doors allowed the pots, pans, and containers’ contents to be examined. To the untrained eye it might look like chaos but to the knowledgeable observer they can now see what was stored in the cabinet including the now visible measuring cups. It's kind of like how a timeline makes visible activity on a system that may not have been readily apparent.


Batch Scripting References

Tuesday, August 30, 2011 Posted by Corey Harrell 3 comments
“Give a man a fish; you have fed him for today. Teach a man to fish; and you have fed him for a lifetime”—Author unknown.


My ability to use my weak kung fu to put together batch scripts has only been a recent occurrence. For the most part I was always constrained by my tools. If my tool wasn’t able to automate a process then I’d adapt and take a little bit more time to complete a task. If my tools didn’t perform a task then I’d search for another tool or script to accomplish what I needed. Basically, I had to adapt to my tools to perform a task instead of making my tools adapt to the task at hand. Things changed when I spent a week working on a case and realized that knowing how to script was a necessity. I’m sharing the references I came across that did a decent job of teaching me how to write batch files.

The first reference was what taught me how to fish. Batch Files (Scripts) in Windows provides an introductory overview about batch files. The article starts out explaining what a batch file is and how to construct one before it covers more advanced topics. A few topics include explanations about using if statements and for loops in scripts. The author provides links pointing to explanations about terms the reader may want more information on. The article taught me the basics of writing batch files and afterwards I was able to write simple scripts without needing to do any more research. In a way the article converted me from being a person who receives fish from others (scripts) to one who is able to catch my own fish (write my own scripts).

The scripts I’ve been writing automate repetitive tasks such as running the same command against different folders. The for loop is one option to complete repetitive tasks and this is where the next reference comes into play. ss64.com’s For loop pages break down the syntax for the different ways to implement a for loop. The information on the site gave me a better understanding of how to write for loops. If Batch Files (Scripts) in Windows taught me how to fish then ss64 helped me improve my casting.

Despite having a pretty decent cast, I’m still fishing with a bobber. Beginner fishermen may have a tough time knowing when to set the hook in the fish’s mouth so a bobber helps them. Bobbers are a visual indicator that a fish is biting your line which alerts the fisherman when to set the hook. Similar to a beginner fisherman, I still need to learn a lot more. Rob van der Woude’s Scripting Pages website has a few pages discussing batch scripting. So far the site has helped me solve a few scripting problems I encountered but there’s still a wealth of information I haven’t even read.

One item that makes batch scripting a little easier is that native Windows commands can be used in addition to third-party tools. Microsoft’s Command-line reference A-Z is a great resource for learning about commands. The command-line reference is the equivalent of adding additional lures and bait to your tackle box so you can catch bigger and better fish.

The last reference and one that shouldn’t be overlooked is having a person to bounce ideas off of. The person doesn’t need to be an expert either. My coworker is in the same boat as me and is trying to learn how to write batch files. It’s been helpful to have someone to provide feedback on what I’m trying to do and to help me work through complex code. A person is like a fishing buddy who can provide you with some tips, better ideas, or helps you become a better fisherman.

Learning how to write batch scripts has been an awakening. I’m leveraging my tools to extract data in different ways and I'm cutting the time required to complete some tasks in half. I constantly reflect on what tasks can be automated with scripting and how I can present extracted data to better suit my needs. Paraphrasing the quote I referenced throughout my post is the best way to illustrate how I benefited from learning how to script.

“Give a man a script; you have solved his issue for today. Teach a man to script; and you help him solve his own issues for a lifetime.”

Where Is the Digital Forensics Threat Report

Monday, August 22, 2011 Posted by Corey Harrell 8 comments
Every year brings a new round of reports outlining the current trends. Information security threats, data breaches, and even cyber crime are covered in the reports. The one commonality across every report is that they lack the digital forensic perspective. The reports address the question of what current threats potentially affect your information and systems. However, the DFIR point of view asks the follow-up questions: how would you investigate a current threat that materialized on your systems and what would the potential artifacts look like? If a DF Threat Report existed then I think those two questions would be answered.

What Could the DF Threat Report Contain?

I’d like to see the report use case examples to illustrate a specific threat on a system. I find it easier to understand an investigative method and potential artifacts by following along an examination from start to finish. A simple way to demonstrate the threats would be to just replicate them on a test system. The use of test systems would enable the threat to be discussed in detail without revealing any specific case details. The current trend reports would just be guides highlighting what threats to focus on.

To see what I’m talking about I’ll walk through the process of how threats can be identified for the DF Threat Report. The threats can then be simulated against test systems in order to answer the questions I'm bringing up. In the past two weeks I read the Sophos Security Threat Report Mid-Year 2011 and Securelist Exploit Kits Attack Vector – Mid-year Update reports. I’m using both reports since they are fresh in my mind but the areas I’m highlighting as lacking are common to most threat reports I’ve read (I’m not trying to single out these two organizations).

Example DF Threat Report Topics

The Sophos Security Threat Report Mid-year 2011 talked about the different ways malware is distributed. The threats covered included web threats, social networking, and email SPAM / spearphishing. The Securelist Exploit Kits Attack Vector Mid-year Update discussed the popular exploit kits in use and what vulnerabilities are targeted by the two new kits in the list (Blackhole and Incognito). There are threats in both reports that merit further discussion and would fit nicely in a DF Threat Report.

Sophos stated they “saw an average of 19,000 new malicious URLs every day” with more than 80% of the URLs belonging to legitimate companies whose websites were hacked. The report provided some statistics on the URLs before moving on to the next threat. The DF Threat Report could take two different angles in explaining the web threat: the server or client angle. If the phone rang at your company and the person on the other end said your website was serving up malware then how would you investigate that? What are the potential artifacts to indicate if malware is actually present? How would you determine the attack vector used to compromise the website? Now for the client angle, a customer comes up to you saying there is a rogue program holding their computer hostage. What approach would you use to identify the initial infection vector? What are the potential artifacts on the system to indicate the malware came from a compromised website as opposed to an email? These are valid follow-up questions that should be included in the web threat’s explanation.

The next threat in the Sophos report was Blackhat search engine optimization (SEO). SEO is a marketing technique to draw visitors to companies’ websites but the same technique can be used to lure people to malicious websites. “Attackers use SEO poisoning techniques to rank their sites highly in search engine results and to redirect users to malicious sites”. As expected the report doesn’t identify what the potential artifacts are on a system to indicate SEO poisoning. I could guess what the system would look like based on my write-up on the potential artifacts from Google image search poisoning. However, answering the question by examining a test system is a better option than making an assumption.

Another threat in the Sophos report was the ongoing attacks occurring on Facebook, Twitter, and LinkedIn. “Scams on Facebook include cross-site scripting, clickjacking, survey scams and identity theft”. On Twitter attackers are using shortened URLs to redirect people to malicious websites. LinkedIn malicious invitation reminders contain links to redirect people to malicious websites. Again, the investigative method and artifacts on a system were missing. The same questions apply to this threat as well. What are the potential artifacts on a system to indicate the attack originated from a social networking site?

Rounding out the Sophos threats I’m discussing is SPAM / spearphishing. Several of the high profile breaches this year were covered and a few of them involved spearphishing attacks. Unfortunately, there was no mention of the artifacts tying malware to a specific email containing an exploit. Nor was there any mention of how your investigation method should differ if there is even a possibility spearphishing was involved.

The Securelist Exploit Kits Attack Vector Mid-year Update report identified the top exploit kits used in the first half of the year. One interesting aspect in the report was the comparison between the vulnerabilities targeted by the Blackhole and Incognito exploit kits. The comparison showed the kits pretty much target the same vulnerabilities. The DF Threat Report may not be able to cover all the vulnerabilities in the list but it could dissect one or two of them to identify the potential artifacts left on a system from exploitation.

Conclusion

The process I walked through to identify content for the DF Threat Report used reports related to security threats. However, a DF Threat Report could cover various topics ranging from security to cybercrime. The report’s sole purpose would be to make people more aware of how to investigate a threat that materialized on their systems and what the potential artifacts might look like. In my short time researching and documenting attack vector artifacts I’ve found the information valuable when examining a system. I’m more aware of what certain attacks look like on a system and this helps me determine the attack vector used (and not used). I think the DF Threat Report could have a similar effect on the people who read it.

It would take an effort to get an annual/semi-annual DF Threat Report released. People would be needed to organize its creation, research/test/document threats, edit the report, and release it. I wouldn’t only be an occasional author researching, testing, and documenting threats; I’d also be a reader eager to see the DF Threat Report with each new year. Maybe it’s just wishful thinking on my part that one day, when reading a report outlining the year’s trends, there will actually be useful DFIR information that could be used when investigating a system.

Links 4 Everyone

Wednesday, August 10, 2011 Posted by Corey Harrell 1 comments
In this edition of Links I think there is a little bit of something for everyone, regardless of whether your interest is forensics, malware, InfoSec, security auditing, or even a good rant.

Digital Forensic Search Updates

The Digital Forensic Search index has been slowly growing since it was put together four months ago. Last Sunday’s update brought the sites in the index to: 103 DFIR blogs, 38 DFIR websites, 13 DFIR web pages, and 2 DFIR groups. The initial focus of DFS was to locate information related to specific artifacts as opposed to locating tools to parse those artifacts. My reasoning was that I didn’t want to weed through a lot of irrelevant search hits. Most tool websites only provide a high-level overview of the artifacts a tool parses instead of in-depth information. It made sense to leave out tool-specific sites to reduce the amount of noise, but things change.

A question I ask myself at times is what tool can parse artifact XYZ. I’m not alone asking the question because I see others asking the same thing. To make things easier in locating tools I’m now adding tool-specific sites to the Digital Forensic Search. So far 15 websites and 7 web pages are indexed. I ran a few tests and the search results seem to be a good mixture of hits for information and tools. My testing was limited so if anyone sees too much noise then just shoot me an email telling me who the culprit is.

Let me know of any links missing from DFS

Windows Shortcut File Parser Update

My post Triaging My Way mentions a need I had for a command line tool to parse Windows shortcut files. In my quest for a tool I modified the lslnk.pl perl script to produce the output I wanted. One of the modifications I made to the script was to examine all of the files in a folder and to only parse files with the lnk file extension. I was running lslnk-directory-parse.pl (the modified script) against some shortcut files when the script abruptly stopped. The parsed information from the last file only contained the file system timestamps. Examination of the file showed that it was empty and this was what caused lslnk-directory-parse.pl to die. I made a slight modification to lslnk-directory-parse.pl so the script checks each file's header to confirm it is indeed a Windows shortcut file. I uploaded the new scripts (lslnk-directory-parse.pl and lslnk-directory-parse2.pl) to the Yahoo Win4n6 group and added a version number (v1.1) in the comments.
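For anyone curious what that header check looks like, here's a rough sketch in Python rather than perl (the function names are mine, not the script's). A shortcut file starts with a 4-byte header size of 0x0000004C followed by the LinkCLSID, so anything failing that test, such as an empty file, gets skipped instead of killing the parser:

```python
import os

# Shell Link header: 4-byte HeaderSize (0x0000004C) followed by the LinkCLSID
# 00021401-0000-0000-C000-000000000046 stored little-endian.
LNK_MAGIC = bytes([0x4C, 0x00, 0x00, 0x00,
                   0x01, 0x14, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00,
                   0xC0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x46])

def is_lnk_file(path):
    """Return True only if the file begins with a valid shell link header."""
    with open(path, 'rb') as f:
        return f.read(len(LNK_MAGIC)) == LNK_MAGIC

def lnk_files(folder):
    """Yield files in folder that have the .lnk extension AND a valid header,
    so empty or misnamed files are skipped instead of stopping the run."""
    for name in sorted(os.listdir(folder)):
        path = os.path.join(folder, name)
        if (name.lower().endswith('.lnk') and os.path.isfile(path)
                and is_lnk_file(path)):
            yield path
```

The extension filter alone is what tripped up the original run; checking the magic bytes as well is the v1.1 fix described above.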

There are always different ways to accomplish something. When faced with trying to parse all of the Windows shortcut files in a folder I opted to modify an existing script to meet my needs. The Linux Sleuthing blog took a different approach in the post Windows Link Files / Using While Loops. The author uses a while loop with an existing script to parse all of the shortcut files in a folder. Their approach is definitely simpler and quicker than what I tried to do. I learned a lot from the approach I took since I had to understand what modifications to make to an existing script in order to get the output I wanted.

How to Mount a Split Image

Speaking of the Linux Sleuthing blog, they provided another useful tip in the post Mounting Split Raw Images. As the name of the post implies it is about how to mount a split image in a Linux environment. I can’t remember the last time I dealt with a split image since I no longer break up images. However, when I used to create split images I remember asking myself how to mount them in Linux. To others the question may be simple but I didn’t have a clue besides concatenating to make a single image. The Mounting Split Raw Images post shows that sharing information, no matter how simple it may appear, will benefit someone at some point in time.
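For what it's worth, the concatenation fallback I mentioned is easy enough to script. Here's a rough Python sketch (the segment naming scheme is an assumption; adjust the pattern for how your imaging tool names segments):

```python
import glob
import shutil

def join_split_image(pattern, output):
    """Concatenate split raw image segments (e.g. image.001, image.002, ...)
    in order into a single dd image that can then be mounted the usual way.
    Assumes the segments sort correctly by name."""
    segments = sorted(glob.glob(pattern))
    if not segments:
        raise ValueError('no segments matched %s' % pattern)
    with open(output, 'wb') as out:
        for seg in segments:
            with open(seg, 'rb') as f:
                shutil.copyfileobj(f, out)
    return segments
```

The Linux Sleuthing post is the better option since it avoids duplicating the image on disk; the sketch above is just the brute-force alternative I used to fall back on.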

$UsnJrnl Goodness

Bugbear over at Security Braindump put together a great post, Dear Diary: AntiMalwareLab.exe File_Created. I recommend anyone who will be encountering a Windows Vista or 7 system read the post even if malware is not typically encountered during examinations. The $UsnJrnl change journal is an NTFS file system artifact that is enabled by default in Vista and 7. Bugbear discusses what the $UsnJrnl is and how to manually examine it before discussing tools to automate the examination.

What I really like about the post is the way he presented the information. He explains an artifact, how to parse the artifact, a tool to automate the parsing, and then shares an experience of how the artifact factored into one of his cases. I think the last part is important since sharing his experience provides context to why the artifact matters. His experience involved files created/deleted on the system as a result of a malware infection. Providing context makes it easier to see the impact of $UsnJrnl on other types of investigations. For example, a recurring activity on my cases is determining what files were deleted from a system around a certain time. Data in the $UsnJrnl may not only show when the files of interest were deleted but could highlight what other files were deleted around the same time.
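To give a feel for what's inside the journal, here's a rough Python sketch of parsing a single USN_RECORD_V2 based on the documented structure layout. This is only illustrative and no substitute for Bugbear's post or the tools he covers:

```python
import struct
from datetime import datetime, timedelta

# Two of the documented USN reason flags.
USN_REASON_FILE_CREATE = 0x00000100
USN_REASON_FILE_DELETE = 0x00000200

def parse_usn_record_v2(buf, offset=0):
    """Parse one USN_RECORD_V2 from buf at offset.
    Returns (record_dict, offset_of_next_record)."""
    (length, major, minor, file_ref, parent_ref, usn, timestamp,
     reason, source_info, security_id, attributes,
     name_len, name_off) = struct.unpack_from('<IHHQQQQIIIIHH', buf, offset)
    # File name is UTF-16LE at name_off, name_len bytes long.
    name = buf[offset + name_off:offset + name_off + name_len].decode('utf-16-le')
    # FILETIME: 100-nanosecond intervals since 1601-01-01.
    when = datetime(1601, 1, 1) + timedelta(microseconds=timestamp // 10)
    record = {'usn': usn, 'name': name, 'timestamp': when, 'reason': reason,
              'created': bool(reason & USN_REASON_FILE_CREATE),
              'deleted': bool(reason & USN_REASON_FILE_DELETE)}
    return record, offset + length
```

Walking the journal is then just calling this in a loop over the $J data stream; the reason flags are what let you pull out the create/delete activity around a time of interest.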

Memory Forensic Image for Training

While I’m on the topic of malware I wanted to pass along a gem I found in my RSS feeds and have seen others mention. The MNIN Security Blog published Stuxnet's Footprint in Memory with Volatility 2.0 back in June but I didn’t read it until recently. The post demonstrates Volatility 2.0’s usage by examining a memory image of a system infected with Stuxnet. A cool thing about the write-up is the author makes available the memory image they used. This means the write-up and the memory image can be used as a guide to better understand how to use Volatility. Just download Volatility, download the memory image, read the post, and follow along by running the same commands against the memory image. Not bad for a free way to improve your Volatility skills.

Easier Way to Generate Reports from Vulnerability Scans

Different methods are used to identify known vulnerabilities on systems. Running various vulnerability scanners, web application scanners, and port scanners are all options. One of the more tedious but important steps in the process is to correlate all of the tools’ outputs to identify: what vulnerabilities are present, their severity, and their exposure on the network. Obtaining this kind of information from the scans was a manual process since there wasn’t a way to automate it. James Edge over at Information Systems Auditing is trying to address this issue in something he calls the RF Project (Reporting Framework Project). The RF Project is able to take scans from Nessus, eEye Retina, Nmap, HP WebInspect, AppScan, AppDetective, Kismet, and GFI LANguard so custom reports can be created. Want to know the potential vulnerabilities detected by Nessus, Retina, and Nmap against server XYZ? Upload the scans to the reporting framework and create a custom report showing the answer instead of manually going through each report to identify the vulnerabilities. I tested an earlier version of the framework a few years ago when it only supported Nessus and Retina. It’s great to see he continued with the project and added support for more scans.
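To illustrate the correlation step the framework automates, here's a simplified Python sketch. The (host, vulnerability, severity) shape is my assumption; a real implementation like the RF Project first has to normalize each tool's native output into something comparable:

```python
from collections import defaultdict

def correlate(scans):
    """Merge findings from multiple scanners into one per-host view.
    `scans` maps a scanner name to a list of (host, vulnerability, severity)
    tuples.  Tracking which scanners saw each finding shows both corroborated
    hits and single-tool outliers."""
    by_host = defaultdict(dict)
    for scanner, findings in scans.items():
        for host, vuln, severity in findings:
            entry = by_host[host].setdefault(
                vuln, {'severity': severity, 'seen_by': []})
            entry['seen_by'].append(scanner)
    return dict(by_host)
```

Answering the "what did Nessus, Retina, and Nmap find on server XYZ" question then becomes a lookup in the merged view instead of a walk through three separate reports.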

James’ site has some useful stuff besides the RF Project. He has a few hacking tutorials and some technical assessment plans for external enumeration, Windows operating system enumeration, and Windows passwords.

Good InfoSec Rant

I like a good rant every once in a while. Assuming the Breach’s I do it for the Lulz explains the reason the author works in security. It’s not about the money, job security, or prestige; he works in security because it’s a calling. The post was directed at the InfoSec field but I think the same thing applies to digital forensics. Take the following quote:

“Technology, and especially information security has always been more than a job to me. More than even a career. It's a calling. Don't tell my boss, but I'd do this even if they didn't pay me. It's what I do. I can't help it.”

I can’t speak for others but digital forensics is the most rapidly changing field I’ve ever worked in. Technology (hardware and software) is constantly changing in how it stores data, and the tools I use to extract information are also evolving. Digital forensics can’t be treated as a normal 8-to-4 job with any chance of being successful. Five days a week and eight hours each day is not enough time for me to keep my knowledge and skills current about the latest technology, tool update, threat, or analysis technique. It’s not a job; it’s my passion. My passion enables me to immerse myself in DFIR so I can learn constantly and apply my skills in different ways outside of work for my employer.

I wouldn’t last if digital forensics was only a day job. Seriously, how could I put myself through some of the things we do if there is no passion? We read whitepapers dissecting artifacts and spend countless hours researching and testing to improve our skills. Doing either of these things would be brutal to someone who lacks passion for the topic. For example, I couldn’t hack it being a dentist because I lack the passion for dentistry. I wouldn’t have the will power to read a whitepaper explaining some gum disease or spend hours studying different diagnoses. Dentistry would just be an 8-to-4 day job that pays the bills until I could find something else. DFIR on the other hand is another story as I spend my evenings blogging about it after spending the day working on a case.

Happy Birthday jIIr

Saturday, August 6, 2011 Posted by Corey Harrell 2 comments
It’s hard to believe a year has gone by since I launched my blog. I didn’t know what to expect when I took an idea and put it into action. All I knew was I wanted to talk about investigating security incidents but at the time I didn’t have the IR skillset. I also wanted to provide useful content but I was short on personal time to research, test, and write. I went ahead anyway despite the reasons discouraging me from blogging.

The experience has been rewarding. I’m a better writer from explaining various topics in a way that others can learn from my successes and failures. I have a better understanding about DFIR from the feedback I received. The feedback also helps to validate what I'm thinking and doing. Different opportunities arose, such as talking with other forensicators, as a direct result of my willingness to share information.

The top six posts of the year covered a range of topics from detecting security incidents to examining an infected system to a book review. The most read posts of the year were:

     1.  Google the Security Incident Detector
     2.  Introducing the Digital Forensics Search
     3.  Reviewing Timelines with Excel
     4.  Review of Digital Forensics with Open Source Tools
     5.  Smile for the Camera
     6.  Anatomy of a Drive-by Part 2

I’m looking forward to another year and there is a range of ideas in the hopper. I’ll still touch on investigating security incidents as well as researching attack vector artifacts. However, my focus will gradually extend from the artifacts on a single system to the artifacts located on different network devices. Besides IR, I’m planning on talking about supporting financial investigations, Windows 7 (and Server 2008) artifacts, my methodology, different information security topics, and random DFIR thoughts inspired by things I come across along the way.

Thanks to everyone who keeps stopping by jIIr. There’s no need to be a stranger when there’s a comment feature to let me know what you think. ;) A special thank you to all of the other bloggers and authors who link to my blog and share their thoughts about my posts. I'm thankful for the additional traffic you send my way since it helps to let others know about the blog.