This repository contains the software developed for the paper Introducing the Temporal Dimension to Memory Forensics [1], available here.
In this paper we proposed to add a new dimension, which we called the temporal dimension, to memory forensics, and we used it to study the inconsistencies caused by memory smearing.
For this purpose we modified LiME, a well-known memory acquisition tool, to capture the temporal dimension while the memory is collected, and Volatility to make use of this information.
Moreover, to reduce the impact of memory smearing, we also proposed a new approach to memory acquisition, dubbed locality-based in the paper, which is divided into two steps.
In the first step, the physical pages containing critical forensic information (for example page tables, `task_struct`s, memory mappings, kernel modules, ...) are selected and acquired, while the rest of the memory is acquired sequentially in the second step.
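The ordering can be sketched as follows. This is only an illustration of the two-step idea, not the actual LiME implementation; the page lists and the `dump_page` callback are hypothetical placeholders:

```python
# Minimal sketch of the locality-based (two-step) acquisition order.
# `all_pages`, `critical_pages` and `dump_page` are hypothetical placeholders,
# not names taken from the actual LiME module.

def locality_based_acquisition(all_pages, critical_pages, dump_page):
    acquired = set()

    # Step 1: dump the pages holding critical forensic structures first
    # (page tables, task_structs, memory mappings, kernel modules, ...).
    for pfn in critical_pages:
        dump_page(pfn)
        acquired.add(pfn)

    # Step 2: dump the remaining physical memory sequentially.
    for pfn in sorted(all_pages):
        if pfn not in acquired:
            dump_page(pfn)
```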
Building LiME is pretty straightforward: install the kernel headers and run `cd lime-pagetime/src/ && make`.
This modified version of LiME captures the timing information by default. Running `insmod lime.ko path=/path/to/dump.lime format=lime timeout=0` saves the memory dump in `/path/to/dump.lime` and the timing information in `/path/to/dump.lime.times`.
To enable the locality-based memory acquisition, add `smart=1` when inserting the module. Since the locality-based pages are stored at the beginning of the memory dump file, they must be merged back into place by running `python scripts/patch_smart_dump.py /path/to/dump.lime`. This command saves the merged file in `/path/to/dump.lime.clean`.
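If you want to double-check the merged dump, a short script can walk the LiME segment headers and print the physical ranges it contains. The sketch below is not part of the repository; it only assumes the standard LiME file format, where each range is preceded by a 32-byte header (magic, version, start address, end address, 8 reserved bytes):

```python
# Sketch: list the physical memory ranges contained in a LiME dump,
# e.g. to sanity-check the output of patch_smart_dump.py.
# Assumes the standard 32-byte LiME range header.
import struct
import sys

LIME_MAGIC = 0x4C694D45
HEADER = struct.Struct("<IIQQ8s")  # magic, version, s_addr, e_addr, reserved

def list_ranges(path):
    with open(path, "rb") as f:
        while True:
            raw = f.read(HEADER.size)
            if len(raw) < HEADER.size:
                break
            magic, version, s_addr, e_addr, _ = HEADER.unpack(raw)
            if magic != LIME_MAGIC:
                raise ValueError("unexpected header at offset %d" % (f.tell() - HEADER.size))
            print("0x%016x - 0x%016x (version %d)" % (s_addr, e_addr, version))
            f.seek(e_addr - s_addr + 1, 1)  # skip the memory contents of this range

if __name__ == "__main__":
    list_ranges(sys.argv[1])
```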
The timing information is saved in a human-readable format. Each line has one of the following formats:
1. `[PTIME] PHYSICAL_MEM_ADDRESS : RELATIVE_TIME`
2. `[PEND] RELATIVE_TIME`
3. `[STIME] PHYSICAL_MEM_ADDRESS : RELATIVE_TIME`
4. `[END] RELATIVE_TIME`
Formats 1 and 3 timestamp the acquisition of a physical page in the first and second step, respectively, while formats 2 and 4 record the end of each of the two steps.
All the timestamps are relative to the beginning of the acquisition process. The only difference is that format 1 is emitted for each and every locally-acquired page, while the timings saved with format 3 are sampled. The sampling frequency can be tweaked by changing the `TP_DELTA` macro in `lime.h`.
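As a quick illustration, the timing file can be parsed with a few lines of Python. This is only a sketch based on the format described above; the function name and the returned structure are arbitrary, and the physical addresses are kept as strings to avoid assuming a particular base:

```python
# Sketch of a parser for the .times file produced by the modified LiME.
# Addresses are kept as strings; timestamps are relative to the start of the
# acquisition, as described above.

def parse_times(path):
    step1, step2 = {}, {}              # physical address -> relative time
    step1_end = step2_end = None
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            tag = parts[0]
            if tag == "[PTIME]":       # first step: one entry per page
                step1[parts[1]] = float(parts[3])
            elif tag == "[STIME]":     # second step: sampled entries
                step2[parts[1]] = float(parts[3])
            elif tag == "[PEND]":      # end of the first step
                step1_end = float(parts[1])
            elif tag == "[END]":       # end of the second step
                step2_end = float(parts[1])
    return step1, step2, step1_end, step2_end
```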
We modified Volatility to parse the timing information generated by LiME and to keep track of every access to the memory dump.
This feature can be enabled by specifying the `--pagetime` argument on the Volatility command line.
For example, running `python volatility-pagetime/vol.py -f ram.lime --profile=Linuxatomicity16 --pagetime linux_pslist` results in:
Volatility Foundation Volatility Framework 2.6.1
Offset Name Pid PPid Uid Gid DTB Start Time
------------------ -------------------- --------------- --------------- --------------- ------ ------------------ ----------
0xffff880216141680 systemd 1 0 0 0 0x0000000215f6a000 2019-07-09 13:11:30 UTC+0000
0xffff880216140000 kthreadd 2 0 0 0 ------------------ 2019-07-09 13:11:30 UTC+0000
...
0xffff88021559da00 kworker/u8:2 6136 2 0 0 ------------------ 2019-07-09 14:34:25 UTC+0000
min: 0.0346 | max: 71.7584 | window: 71.7239 | smart: 0.0000 | total: 73.6950 | status: NON ATOMIC :-(
atomic: 0 | not-atomic: 85690 | unique : 156
[X-----------------------------------------------------------XX----X-XXX------X--]
The last three lines show several statistics about the atomicity of the structures and of the memory pages used by the `linux_pslist` plugin. In particular:
- min: the timestamp of the earliest acquired page used by the plugin
- max: the timestamp of the latest acquired page used by the plugin
- window: the difference between max and min
- smart: how many seconds the locality-based acquisition step took
- total: how many seconds the full acquisition took
- atomic, not-atomic, unique: respectively, the number of memory accesses that fall in locally-acquired pages, the number that fall in pages that were not locally acquired, and the number of unique pages accessed
- status: whether all the pages used by the plugin were acquired in the first step
- timeline: graphically shows how sparse the memory accesses were
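The sketch below shows, in a simplified form, how these statistics relate to each other. The inputs (`accessed`, `accesses`, the two durations) are hypothetical and would be derived from the `.times` file and from the pages touched by the plugin; this is not the actual Volatility code:

```python
# Simplified computation of the atomicity statistics printed by the modified
# Volatility. Hypothetical inputs:
#   accessed       - relative timestamps (seconds) of the pages used by the plugin
#   accesses       - list of (page, was_locally_acquired) pairs, one per memory access
#   smart_duration - duration of the locality-based step
#   total_duration - duration of the full acquisition

def atomicity_stats(accessed, accesses, smart_duration, total_duration):
    t_min, t_max = min(accessed), max(accessed)
    atomic = sum(1 for _page, local in accesses if local)
    not_atomic = len(accesses) - atomic
    unique = len({page for page, _local in accesses})
    status = "ATOMIC :-)" if not_atomic == 0 else "NON ATOMIC :-("
    print("min: %.4f | max: %.4f | window: %.4f | smart: %.4f | total: %.4f | status: %s"
          % (t_min, t_max, t_max - t_min, smart_duration, total_duration, status))
    print("atomic: %d | not-atomic: %d | unique : %d" % (atomic, not_atomic, unique))
```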
In the example above the entire memory was acquired sequentially. When the locality-based acquisition scheme is used instead, the output of the same plugin looks like the following:
Volatility Foundation Volatility Framework 2.6.1
Offset Name Pid PPid Uid Gid DTB Start Time
------------------ -------------------- --------------- --------------- --------------- ------ ------------------ ----------
0xffff880216141680 systemd 1 0 0 0 0x0000000215f6a000 2019-07-09 13:11:30 UTC+0000
0xffff880216140000 kthreadd 2 0 0 0 ------------------ 2019-07-09 13:11:30 UTC+0000
0xffff880216142d00 kworker/0:0 3 2 0 0 ------------------ 2019-07-09 13:11:30 UTC+0000
...
0xffff8802141e2d00 insmod 5700 5699 0 0 0x00000002156d1000 2019-07-09 13:26:30 UTC+0000
min: 0.0113 | max: 0.6715 | window: 0.6602 | smart: 0.6717 | total: 70.7332 | status: ATOMIC :-)
atomic: 86576 | not-atomic: 0 | unique : 461
[X-------------------------------------------------------------------------------]
As part of our experiments, we discovered a recurring type of inconsistency that affects how the memory mappings of a process are stored.
To detect these cases, we developed a new plugin, `linux_validate_vmas`, which can be found in the `plugins` folder. The plugin checks that the number of `vm_area_struct`s contained in the list rooted at `mm_struct.mmap` and in the red-black tree pointed to by `mm_struct.mm_rb` corresponds to the value of `mm_struct.map_count`.
In case a mismatch is detected, the plugin outputs the following:
[+] VMAs validation
ERR Web Content (7179) vma_list = 28 vma_rb = 46 vma_counter = 938
ERR firefox (7003) vma_list = 197 vma_rb = 21 vma_counter = 1019
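The core of the check can be summarized with the following sketch. This is not the actual plugin code: it only mirrors the kernel field names (`mmap`, `vm_next`, `mm_rb`, `rb_node`, `rb_left`, `rb_right`, `map_count`) and omits all the Volatility plugin boilerplate:

```python
# Simplified sketch of the consistency check performed by linux_validate_vmas.
# `task` stands for a Volatility task_struct object; attribute access mirrors
# the kernel field names, the plugin boilerplate is omitted.

def count_vma_list(mm):
    # Walk the singly-linked list of vm_area_structs rooted at mm.mmap.
    count, vma = 0, mm.mmap
    while vma:
        count += 1
        vma = vma.vm_next
    return count

def count_vma_rb(node):
    # Recursively count the nodes of the red-black tree rooted at mm.mm_rb.rb_node.
    if not node:
        return 0
    return 1 + count_vma_rb(node.rb_left) + count_vma_rb(node.rb_right)

def validate_vmas(task):
    mm = task.mm
    vma_list = count_vma_list(mm)
    vma_rb = count_vma_rb(mm.mm_rb.rb_node)
    vma_counter = mm.map_count
    if not (vma_list == vma_rb == vma_counter):
        print("ERR %s (%d) vma_list = %d vma_rb = %d vma_counter = %d"
              % (task.comm, task.pid, vma_list, vma_rb, vma_counter))
```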
You can find all the memory dumps and Volatility profiles used in our paper here.
[1] Pagani, Fabio, Oleksii Fedorov, and Davide Balzarotti. "Introducing the Temporal Dimension to Memory Forensics." ACM Transactions on Privacy and Security (TOPS) 22.2 (2019): 9.