Week 5 – Advanced Windows Security Course

Firstly, apologies for taking this long to get back to the week 5 blog.  It has been a manic couple of months and it doesn’t look like it is going to let up either!  Anyway, this week we are going to cover forensic techniques and security monitoring operations.

Forensic Techniques

Forensics first and foremost comes down to preserving the current state.  This is one of the main reasons why it is critical to have an incident response plan: you do not want to turn off, reboot, or uninstall anything until you have a copy for later analysis.

Now, for virtual machines it is trivial to take a snapshot, and from that you can create a new isolated VM any time you like.  However, for physical machines it can be trickier.  You need to capture a memory dump as well as a full copy of all of the files on the disks, including all non-file data such as the NTFS journal, Volume Shadow Copies (VSS), and so on.

Searching for a trace on Disk

Once we have a copy of the machine or its data then we need to know where to look.  There are many places to find traces of what has been done on a machine.  These include:

  • Security Log, RDP Operational Log, Application Logs
  • User Profiles (NTUser.dat)
  • Run dialog
  • Most Recently Used list
  • Management Console
  • Remote Desktop Connections
  • Prefetch Files
  • Recent Documents
  • Automatic Destinations (lnk files)
  • Temporary Internet Files
  • Deleted files
  • NTFS Structures
  • Hiberfil.sys
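Prefetch files are a good example of how much you can recover from filenames alone: each one is named EXENAME-HASH.pf.  As a minimal sketch (Python here, with the prefetch directory passed in as a parameter so it also works against a mounted image rather than only a live C:\Windows\Prefetch), you could recover the names of executables that have been run:

```python
from pathlib import Path

def list_prefetch_executables(prefetch_dir):
    """Recover executable names from prefetch filenames (EXENAME-HASH.pf)."""
    names = []
    for pf in Path(prefetch_dir).glob("*.pf"):
        stem = pf.stem                       # e.g. "NOTEPAD.EXE-D8414F97"
        exe, sep, _hash = stem.rpartition("-")
        if sep:                              # filename follows the convention
            names.append(exe)
    return sorted(set(names))
```

Real prefetch parsers go much further (run counts, last-run timestamps, loaded file lists live inside the .pf file itself), but even the directory listing tells you what has been executed.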

Of course, intruders will try to evade investigators through a variety of methods, such as:

  • Manipulating logs (erasing data or overwriting)
  • Dual booting a machine to access data offline
  • File modifications (metadata, NTFS journal, deleting files)

And naturally all of these leave a trace of some sort 🙂

There are a variety of ways to manipulate files and disks in order to hide data:

  • Changing file extensions
  • Joining files together
  • Using NTFS Alternate Data Streams
  • Embedding one file within another
  • Playing with the content of the data itself
  • Steganography (hiding data within data)
  • Deletion
  • Hiding files on disk
  • Encryption
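Some of these tricks, changing extensions in particular, can be caught by comparing a file’s magic bytes against its extension.  A minimal sketch in Python — the signature table here is tiny and purely illustrative; real tools carry far larger databases:

```python
import os

# A few well-known magic numbers; a real tool would use a much larger table.
MAGIC = {
    b"MZ":           {".exe", ".dll"},    # PE executables
    b"\x89PNG":      {".png"},
    b"\xff\xd8\xff": {".jpg", ".jpeg"},
    b"PK\x03\x04":   {".zip", ".docx", ".xlsx"},
}

def extension_mismatch(path):
    """Return True if the file's magic bytes don't match its extension."""
    ext = os.path.splitext(path)[1].lower()
    with open(path, "rb") as f:
        header = f.read(8)
    for magic, exts in MAGIC.items():
        if header.startswith(magic):
            return ext not in exts         # known type, wrong extension?
    return False                           # unknown type: nothing to flag
```

So an executable renamed to holiday.jpg gets flagged, because its content still starts with the PE "MZ" marker.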

Searching for a trace in Memory

Memory has all kinds of structures and lists that we can investigate including

  • Handles
  • Processes
  • Hidden Processes
  • Memory mapped files
  • Threads
  • Modules
  • Registry
  • API Hooks
  • Services
  • UserAssist
  • Shellbags
  • ShimCache
  • Event Logs

There are two kinds of memory dumps that can be taken: process dumps and system dumps.  You can use tools like Process Explorer, Process Hacker, Task Manager and ProcDump to create a dump of a specific process.  Similarly, you can use tools like WinDbg, MemDD, WinDD, DumpIt and the system crash dump settings (under Startup and Recovery) to create a dump of the system as a whole.

The best thing about memory forensics is that it is very hard for an intruder to hide!  After all, it is hard to run on a computer and not consume some memory of some sort somewhere!
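One simple first pass over a raw memory dump is string carving, much like the classic `strings` tool: pulling printable runs (command lines, URLs, registry paths) out of the raw bytes before moving on to proper structure parsing.  A minimal sketch in Python:

```python
import re

def carve_strings(dump_bytes, min_len=6):
    """Extract printable ASCII runs from raw dump bytes, with their offsets.

    A minimal version of what the classic `strings` tool does; dedicated
    memory forensics tools go much further and parse kernel structures,
    but string carving is often a useful first pass.
    """
    pattern = re.compile(rb"[\x20-\x7e]{%d,}" % min_len)
    return [(m.start(), m.group().decode("ascii"))
            for m in pattern.finditer(dump_bytes)]
```

The offsets matter: once something interesting turns up, the surrounding bytes are where you go digging next.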

Handles are a great place to start looking, since handles are used by every process and can refer to processes, threads, registry keys, files, mutexes, etc.

Helpful tools

Any investigator needs their tools, and the following tools are useful to a forensic expert

Yara (http://virustotal.github.io/yara/) is a malware identification and classification tool

MemGator (http://orionforensics.com/w_en_page/MemGator.php) automates the extraction of data from memory

Memoryze (https://www.fireeye.com/services/freeware/memoryze.html) is a live analysis tool used by professional forensic investigators

Security Monitoring Operations

Performance Monitor (perfmon)

Whilst Performance Monitor isn’t strictly speaking a security tool, it can be useful for finding out what is happening on a machine and what the impact is.  A counter path is made up of \\Computer\Object(Instance)\Counter:

  • Computer: the name of the machine
  • Object: the logical grouping of counters (Processor, Memory, etc.)
  • Instance: a specific occurrence of the counter on this machine (_Total)
  • Counter: the name of the measurement (% Processor Time)

For example: \\MyComputer\Processor(_Total)\% Processor Time
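The counter path format is easy to parse programmatically, which is handy when post-processing logged data.  A small sketch in Python — note the instance part is optional, as in \\MyComputer\Memory\Available MBytes:

```python
import re

# \\Computer\Object(Instance)\Counter, with the (Instance) part optional.
COUNTER_PATH = re.compile(
    r"^\\\\(?P<computer>[^\\]+)"
    r"\\(?P<object>[^\\(]+)"
    r"(?:\((?P<instance>[^)]*)\))?"
    r"\\(?P<counter>.+)$"
)

def parse_counter_path(path):
    """Split a Perfmon counter path into its four components."""
    m = COUNTER_PATH.match(path)
    if not m:
        raise ValueError("not a valid counter path: %r" % path)
    return m.groupdict()
```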

The _Total instance is a special counter instance.  Be careful how you interpret it, since sometimes it is an average, other times it is a sum, and other times it is a point-in-time measurement.

By default Perfmon distinguishes multiple instances of the same process with an instance counter (svchost#1, svchost#2, etc.).  There is a registry value that, when set to 2, makes Perfmon append the process ID instead.

Data Collector Sets

You can get Perfmon to save data out to disk by using Data Collector Sets.  You can choose counters individually or use wildcards; however, be careful, since there are a lot of counters and some generate masses of data.

  • Choose Objects if you are doing troubleshooting work and need to take a short term log but don’t want to miss anything
  • Choose Counters if you’re doing longer term logging and you have an idea what to look for
  • Choose Instances if you are interested in long term trend analysis

The default collection frequency is 15 seconds, which may not be the most appropriate depending on the length of the trace.  As a starting point, take the total amount of time you want to trace for and divide that by 500.  For example, if you want to trace for 4 hours (14,400 seconds), then trace every 30 seconds in order to end up with approximately 500 measurements.
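That rule of thumb is easy to capture as a helper.  A sketch in Python, where the 500-sample target and the one-second floor are my assumptions, not anything Perfmon enforces:

```python
def suggested_interval_seconds(trace_duration_seconds, target_samples=500):
    """Rule of thumb: sample interval = trace duration / ~500 samples.

    Floored at 1 second, since Perfmon intervals are whole seconds.
    """
    return max(1, round(trace_duration_seconds / target_samples))
```

So a 4-hour trace comes out at roughly a 29–30 second interval, matching the worked example above.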

Perfmon supports four different output formats, but generally speaking use the binary log file, as you can convert it afterwards with other tools.

Use circular logging for problems that occur ‘randomly’.  It is similar to an aircraft black box in that the file always contains the last x MB of log data.
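The black-box idea is essentially a bounded ring buffer: keep appending, and evict the oldest data once you exceed the cap.  A minimal in-memory illustration in Python, with a byte cap standing in for Perfmon’s file size limit:

```python
from collections import deque

class CircularLog:
    """Fixed-size circular log: always keeps only the most recent entries,
    much like Perfmon's circular logging keeps the last x MB of data."""

    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.entries = deque()
        self.size = 0

    def append(self, line):
        data = line.encode()
        self.entries.append(data)
        self.size += len(data)
        while self.size > self.max_bytes:   # evict oldest until under cap
            self.size -= len(self.entries.popleft())

    def contents(self):
        return [e.decode() for e in self.entries]
```

However long the machine runs, the log never grows past the cap, and whenever the ‘random’ problem finally strikes, the most recent history is still there.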

The one tool you should definitely know about and use if you’re doing anything performance related is PAL (http://pal.codeplex.com/).  It is written by Microsoft and has input from the product teams on what to log.  You choose the type of logging you want to do (system, Exchange, Active Directory, etc.) and it creates an XML file you can use to build a Data Collector Set recording the data that PAL thinks is useful.  Then, when you’re done, use PAL to analyse the logs against thresholds set by the product groups and let it highlight what you should investigate further.  Simply brilliant!