Hiding Techniques
- File Slack (FAT, NTFS, EXT4)
- MFT Slack (NTFS)
- Additional Cluster Allocation (FAT, NTFS)
- Bad Cluster Allocation (FAT, NTFS)
- Reserved GDT Blocks (EXT4)
- Superblock Slack (EXT4, APFS)
- OSD2 (EXT4)
- obso_faddr (EXT4)
- Inode Padding (APFS)
- write_gen (APFS)
- Timestamp Hiding (APFS)
- Extended Field Padding (APFS)
- Additional Commands (fattools, metadata)
The fileslack subcommand provides functionality to read, write and clean the file slack of files in a filesystem. Available for these filesystem types:
- FAT
- NTFS
- EXT4
The smallest allocation unit of a filesystem is called a cluster (FAT, NTFS) or a block (EXT4). The size of these units is determined when the filesystem is created and can be calculated as sector size * sectors per cluster. If a file is smaller than the cluster size, unused space remains in that cluster; this space, usually called file slack, can be used to hide data.
File slack is composed of two components: RAM slack and drive slack. RAM slack spans from the end of the file to the end of the sector. Drive slack spans from the end of the RAM slack to the end of the cluster. Most FAT and NTFS filesystem implementations fill the RAM slack with zeroes - this has to be accounted for when trying to hide data in file slack.
EXT4 does not distinguish between RAM slack and drive slack and fills the entire potential slack area with zeroes. Detection of file slack hiding is therefore very likely.
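To illustrate the sizes involved, the following standalone Python sketch (not part of fishy; the function name and the 512-byte sector / 4-sectors-per-cluster defaults are just illustrative assumptions) computes the RAM slack, drive slack and total slack for a given file size. The result lines up with the -i example output shown further below.
def slack_sizes(file_size, sector_size=512, sectors_per_cluster=4):
    cluster_size = sector_size * sectors_per_cluster
    occupied = file_size % cluster_size          # bytes of the file in its last cluster
    if occupied == 0:
        return 0, 0, 0                           # file ends exactly on a cluster boundary
    ram_slack = (-occupied) % sector_size        # end of file to end of sector (zero-filled)
    file_slack = cluster_size - occupied         # end of file to end of cluster
    drive_slack = file_slack - ram_slack         # end of sector to end of cluster
    return ram_slack, drive_slack, file_slack

# a 4-byte file with 512-byte sectors and 2048-byte clusters:
print(slack_sizes(4))                            # (508, 1536, 2044) - compare the -i output below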
Capacity: High - Reliant on block size, cluster size and file sizes ((cluster size - block size) * stored files).
Detectability: Medium - Tests with fsck.fat, chkdsk (NTFS) and fsck.ext4 show no immediate clues. Manual detection or detection via specific tools is easiest on EXT4, since EXT4 slack is normally filled with zeroes.
Stability: Low - If the size of the original file changes, the hidden data may be overwritten.
# write into slack space
$ echo "TOP SECRET" | fishy -d testfs-fat12.dd fileslack -d myfile.txt -m metadata.
˓→json -w
# read from slack space
$ fishy -d testfs-fat12.dd fileslack -m metadata.json -r
TOP SECRET
# wipe slack space
$ fishy -d testfs-fat12.dd fileslack -m metadata.json -c
# show info about slack space of a file
$ fishy -d testfs-fat12.dd fileslack -m metadata.json -d myfile.txt -i
File: myfile.txt
Occupied in last cluster: 4
Ram Slack: 508
File Slack: 1536
The mftslack subcommand provides functionality to read, write and clean the slack of MFT entries in a filesystem. Available for these filesystem types:
- NTFS
The Master File Table (MFT) contains the metadata for every file and every folder of an NTFS partition. An MFT entry doesn't necessarily use all of its allocated space - the unused space may even still contain parts of previous entries, which makes the MFT entry slack an inconspicuous place to hide data.
NTFS uses fixup values to detect bad sectors and damaged data structures. Whenever an MFT entry is written to disk, the last two bytes of each of its sectors are used as a signature. It is important not to overwrite these bytes when hiding data, otherwise the MFT can become damaged. NTFS keeps a copy of at least the first four MFT entries ($MFT, $MFTMirr, $LogFile, $Volume) in a file named $MFTMirr for recovery purposes. To prevent detection of the hidden data via chkdsk, you have to hide a copy of the original hidden data in the corresponding entry in $MFTMirr.
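The per-entry capacity can be estimated as in the following sketch. It assumes the common defaults of 1024-byte MFT entries and 512-byte sectors; the helper name and the example entry size are illustrative, not fishy's implementation.
def mft_entry_slack(used_size, allocated_size=1024, sector_size=512):
    slack = allocated_size - used_size                      # unused tail of the MFT entry
    # the last two bytes of every sector are fixup values and must not be overwritten
    fixups_in_slack = allocated_size // sector_size - used_size // sector_size
    return max(slack - 2 * fixups_in_slack, 0)

print(mft_entry_slack(424))   # 1024-byte entry, 424 bytes used -> 596 hideable bytes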
Capacity: High - Dependent on the allocated MFT entry size and the actual entry size: (allocated size - actual size) minus 2 bytes per fixup value can be hidden.
Detectability: High - While domirr protects against detection by chkdsk, an error will still be detected in $MFTMirr.
Stability: Low - Hidden data could get overwritten when MFT entries change.
# write into slack space
$ echo "TOP SECRET" | fishy -d testfs-ntfs.dd mftslack -m metadata.json -w
# read from slack space
$ fishy -d testfs-ntfs.dd mftslack -m metadata.json -r
TOP SECRET
# wipe slack space
$ fishy -d testfs-ntfs.dd mftslack -m metadata.json -c
The addcluster subcommand provides methods to read, write and clean additional clusters for a file where data can be hidden. Available for these filesystem types:
- FAT
- NTFS
The additional clusters are either unallocated or assigned to a file. If you assign a cluster to a file, the filesystem will no longer try to use it, which gives you space to hide data.
An issue arises if the associated file grows: in that case the file might overwrite the hidden data.
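Conceptually, allocating an additional cluster means extending the file's cluster chain in the FAT and marking the new cluster as end-of-chain. The following simplified sketch works on a FAT represented as a Python list and uses the FAT12 end-of-chain marker; the helper name, constants and example table are purely illustrative, not fishy's implementation.
FREE = 0x000
EOC = 0xFFF                           # FAT12 end-of-chain marker, used here for illustration

def add_cluster(fat, last_cluster_of_file):
    new = fat.index(FREE, 2)          # first free cluster (entries 0 and 1 are reserved)
    fat[last_cluster_of_file] = new   # the old end of the chain now points to the new cluster
    fat[new] = EOC                    # the new cluster terminates the chain
    return new                        # data can be hidden in this cluster's data area

fat = [0xFF8, 0xFFF, 0x000, 0xFFF, 0x005, 0x006, 0x007, 0xFFF]
print(add_cluster(fat, 7))            # appends cluster 2 to the chain that ended at cluster 7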
Capacity: High - This technique uses entire clusters and is therefore only limited by count_of_clusters.
Detectability: Medium - chkdsk on NTFS can be tricked into not noticing the hidden data if the file attributes are adjusted. fsck.fat, however, will detect cluster chains that are longer than the file size recorded in the directory entry.
Stability: Low - Hidden data can be overwritten if associated files grow.
# Allocate additional clusters for a file and hide data in it
$ echo "TOP SECRET" | fishy -d testfs-fat12.dd addcluster -d myfile.txt -m metadata.
˓→json -w
# read hidden data from additionally allocated clusters
$ fishy -d testfs-fat12.dd addcluster -m metadata.json -r
TOP SECRET
# clean up additionally allocated clusters
$ fishy -d testfs-fat12.dd addcluster -m metadata.json -c
The badcluster subcommand provides methods to read, write and clean bad clusters, in which data can be hidden. Available for these filesystem types:
- FAT
- NTFS
If a sector or cluster is damaged, read and write operations return corrupted data. To prevent this, bad sectors or clusters are marked by the filesystem, which keeps a reference to them and no longer uses them. To hide data using this technique, you have to mark free and undamaged clusters as bad clusters.
In NTFS filesystems the damaged clusters are recorded in the metadata file $BadClus in the MFT; the areas listed there are ignored by the filesystem. FAT filesystems mark broken clusters directly in the file allocation table.
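A minimal sketch of the FAT side of this idea: mark a free cluster with the bad-cluster value so the filesystem stops using it, then hide data in the corresponding data area. The marker values are the standard FAT bad-cluster values; the helper and the in-memory list model are illustrative, not fishy's code.
FREE = 0x000
BAD = {"FAT12": 0xFF7, "FAT16": 0xFFF7, "FAT32": 0x0FFFFFF7}   # standard bad-cluster markers

def mark_bad(fat, fat_type="FAT12"):
    cluster = fat.index(FREE, 2)      # pick a free, undamaged cluster
    fat[cluster] = BAD[fat_type]      # the filesystem will no longer touch this cluster
    return cluster                    # data can now be hidden in its data area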
Capacity: High - This technique uses entire clusters and is therefore only limited by count_of_clusters.
Detectability: Medium - Bad cluster marking is rarely seen on modern drives, and fsck.fat will report clusters marked as bad.
Stability: High - Bad clusters will no longer be regarded by the filesystem, the hidden data is therefore unlikely to be overwritten.
# Allocate bad clusters and hide data in it
$ echo "TOP SECRET" | fishy -d testfs-fat12.dd badcluster -m metadata.json -w
# read hidden data from bad clusters
$ fishy -d testfs-fat12.dd badcluster -m metadata.json -r
TOP SECRET
# clean up bad clusters
$ fishy -d testfs-fat12.dd badcluster -m metadata.json -c
The reserved_gdt_blocks subcommand provides methods to read, write and clean the space reserved for the expansion of the GDT. Available for these filesystem types:
- EXT4
The Reserved GDT Blocks are only used if the filesystem is expanded and additional group descriptors are added. They are located directly behind the Group Descriptor Table and its backup copies. The number of reserved blocks is stored in the superblock at offset 0xCE. This hiding technique can hide up to count_reserved_gdt_blocks * count_of_blockgroups_with_copies * block_size bytes. The number of copies depends on the sparse_super flag, which places the copies of the Reserved GDT Blocks only in block groups whose group number is 0, 1 or a power of 3, 5 or 7.
In a 512 MB filesystem with a block size of 4096 bytes this technique can hide up to 64 * 2 * 4096 = 524288 bytes. Hiding that much would, however, be very easy to detect during forensic analysis. For this reason the primary reserved area and its first copies are skipped and the data is only hidden in the following Reserved GDT Blocks. This also hides the data from the fsck tool.
First the reserved block IDs have to be calculated. This is done using the number of blocks, the number of blocks per group, the filesystem architecture and the number of Reserved GDT Blocks, taking the sparse_super flag into account. All of this information is gathered from the superblock. After the calculation, the resulting block groups can be used to hide data.
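The block-group selection under sparse_super can be sketched as follows: copies of the superblock, GDT and Reserved GDT Blocks live in group 0, group 1 and groups whose number is a power of 3, 5 or 7. The function names below are illustrative assumptions, not fishy's code.
def is_power_of(n, base):
    while n % base == 0 and n > 1:
        n //= base
    return n == 1

def groups_with_copies(group_count):
    return [g for g in range(group_count)
            if g in (0, 1) or is_power_of(g, 3) or is_power_of(g, 5) or is_power_of(g, 7)]

print(groups_with_copies(16))   # [0, 1, 3, 5, 7, 9]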
Capacity: High - reserved_gdt_blocks * block_groups * block size bytes can be hidden.
Detectability: High - Several analysis tools, or a quick look with a hex editor, can reveal the hidden data.
Stability: Medium - The hidden data is safe as long as the filesystem isn't expanded. If an expansion happens the hidden data will be overwritten.
# write into reserved GDT Blocks
$ echo "TOP SECRET" | fishy -d testfs-ext4.dd reserved_gdt_blocks -m metadata.json -w
# read hidden data from reserved GDT Blocks
$ fishy -d testfs-ext4.dd reserved_gdt_blocks -m metadata.json -r
TOP SECRET
# clean up reserved GDT Blocks
$ fishy -d testfs-ext4.dd reserved_gdt_blocks -m metadata.json -c
The superblock_slack subcommand provides methods to read, write and clean the slack of superblocks in an ext4 filesystem. The APFS version of this technique uses both types of superblocks as well as the corresponding object maps. Available for these filesystem types:
- EXT4
- APFS
Depending on the block size there can be slack space after each copy of the superblock in every block group. If the block size is 1024 bytes there is no slack space, as the superblock itself has a size of 1024 bytes. The number of superblock copies depends on the sparse_super flag; if the flag is set there is significantly less potential hiding space.
The potential hiding space usually amounts to several KB. Every copy of the superblock can hide block_size - 1024 bytes of data. The primary superblock is an exception: because the boot sector also occupies the first block, only block_size - 2048 bytes are free.
This hiding technique first gathers all block IDs of superblock copies, taking the sparse_super flag into account. Afterwards, depending on the block size, the data is hidden in the slack spaces. The hidden data benefits from the characteristics of the superblock: since data is normally never written to the superblock slack, it is virtually impossible for the hidden data to be overwritten. Like other slack-space-based hiding techniques, however, it is easily discovered during forensic analysis.
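Assuming sparse_super is set, the total capacity follows from the block size and the number of superblock copies, as described above. A small illustrative calculation (the function name and the example numbers are assumptions, not fishy's code):
def superblock_slack_capacity(block_size, copy_count):
    if block_size <= 1024:
        return 0                                  # superblock fills the whole block
    primary = block_size - 2048                   # boot sector plus superblock
    copies = copy_count * (block_size - 1024)     # each copy only holds the superblock
    return primary + copies

print(superblock_slack_capacity(4096, copy_count=2))   # 2048 + 2 * 3072 = 8192 bytes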
Capacity: Medium - block size - 2048 + superblock copies * (block size - 1024) Bytes can be hidden.
Detectability: High - Similar to the GDT Reserved Blocks method, some forensic tools and manual analysis can find this hidden data easily.
Stability: High - Data in superblock slack will not be overwritten
While both versions of this technique exploit the slack space created by superblock structures, the APFS version differs somewhat from the EXT4 version. The EXT4 implementation uses a single block type, whereas the APFS version uses four block types, three of which offer distinct amounts of slack space.
The potential hiding space is dependent on the size of the container, as a larger container has more potential checkpoints and therefore more superblocks and object maps to write to. The sizes of potential hiding places differ for every block type:
- Container Superblocks can hide 2616 Bytes
- Container Object Maps and Volume Object Maps can hide 3984 Bytes
- Volume Superblocks can hide 3060 Bytes
All needed blocks are gathered and sorted by version number into separate lists by a complementary Checkpoint class. The technique then writes into the first entry of each list, only using older structures if there is more data to hide. Potential growth of the blocks has been taken into account when calculating the hiding space, so hidden data cannot be overwritten that way. An open issue for the stability of this technique is the checkpoint write and overwrite behaviour: at some point the Checkpoint Descriptor Area will be full and the oldest checkpoint will have to be overwritten. It is unknown whether this write process affects data written to the slack space.
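The per-checkpoint figure quoted below can be reproduced from the per-block capacities listed above; a quick illustrative check:
CONTAINER_SB = 2616    # per-block capacities as listed above
OBJECT_MAP = 3984      # container and volume object maps
VOLUME_SB = 3060

volumes = 4            # a standard checkpoint as described below
per_checkpoint = CONTAINER_SB + OBJECT_MAP + volumes * (VOLUME_SB + OBJECT_MAP)
print(per_checkpoint)  # 34776 bytes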
Capacity: High - One standard checkpoint (1 Container Superblock and Object Map as well as 4 Volume Superblocks and Object Maps) can potentially hide 34776 Bytes of Data. Furthermore, the Capacity is dependent on the size of the container.
Detectability: Medium - Some forensic tools may locate the data hidden here, but the usage of multiple block types obfuscates a manual investigation further.
Stability: High/Low - The slack space will not be overwritten by potentially growing blocks, but the Checkpoint writing method could potentially overwrite data. If that is the case, Stability would drop based on the size of the hidden data and time the system is used.
The command syntax is the same for both versions of the technique.
# write into Superblock Slack
$ echo "TOP SECRET" | fishy -d testfs-ext4.dd superblock_slack -m metadata.json -w
# read hidden data from Superblock Slack
$ fishy -d testfs-ext4.dd superblock_slack -m metadata.json -r
TOP SECRET
# clean up Superblock Slack
$ fishy -d testfs-ext4.dd superblock_slack -m metadata.json -c
The osd2 subcommand provides methods to read, write and clean the unused last two bytes of the inode field osd2. Available for these filesystem types:
- EXT4
The osd2 hiding technique uses the last two bytes of the 12-byte osd2 field, which is located at offset 0x74 in each inode. The field uses at most 10 bytes, regardless of whether it is tagged as linux2, hurd2 or masix2. This results in count_of_inodes * 2 bytes of hiding space, which is not much, but might be enough for small amounts of valuable data, because it is not easy to find. ext4 introduced checksums for all kinds of metadata, so hiding data here leads to invalid inode checksums unless they are recalculated. In a ~235 MB image with 60,000 inodes this technique could hide 120,000 bytes.
To hide data, the method writes directly to the two bytes of the osd2 field in each inode, whose address is taken from the inode table, until either no inode or no data is left. The method is currently limited to 4 MB of hidden data.
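A simplified sketch of the write loop: image is an open file object for the .dd image and inode_offsets is a hypothetical list of the inodes' byte offsets, derived from the inode tables. Helper and parameter names are assumptions; this is an illustration, not fishy's code.
OSD2_OFFSET = 0x74                    # osd2 field inside the inode
FIELD_OFFSET = OSD2_OFFSET + 10       # last 2 of the 12 osd2 bytes
WIDTH = 2

def hide_in_inode_field(image, inode_offsets, data, field_offset=FIELD_OFFSET, width=WIDTH):
    chunks = [data[i:i + width] for i in range(0, len(data), width)]
    if len(chunks) > len(inode_offsets):
        raise ValueError("not enough inodes to hide the data")
    for inode_offset, chunk in zip(inode_offsets, chunks):
        image.seek(inode_offset + field_offset)
        image.write(chunk)            # the inode checksum must be recalculated afterwards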
Capacity: Low - Every Inode can only hide 2 Bytes.
Detectability: Low/High - Analysis tools like fsck can recognize changed checksums, so the hidden data can be found. If the checksum is recalculated after the data is hidden, it is highly unlikely to be found by fsck or similar tools.
Stability: High - The 2 bytes used to hide data are not used and will therefore not be overwritten.
# write into osd2 inode field
$ echo "TOP SECRET" | fishy -d testfs-ext4.dd osd2 -m metadata.json -w
# read hidden data from osd2 inode field
$ fishy -d testfs-ext4.dd osd2 -m metadata.json -r
TOP SECRET
# clean up osd2 inode field
$ fishy -d testfs-ext4.dd osd2 -m metadata.json -c
The obso_faddr subcommand provides methods to read, write and clean the unused inode field obso_faddr. Available for these filesystem types:
- EXT4
The obso_faddr field, located at offset 0x70 in each inode, is an obsolete 32-bit fragment address field. This technique works analogously to the osd2 technique, but can hide twice as much data. Taking the ~235 MB example from above, this method could hide 240,000 bytes. Apart from that it has the same flaws and advantages.
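Reusing the hypothetical helper from the osd2 sketch above, only the field offset and width change:
# same hypothetical helper as in the osd2 sketch, 4 bytes per inode at offset 0x70
hide_in_inode_field(image, inode_offsets, data, field_offset=0x70, width=4)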
Capacity: Low - count_of_inodes * 4 bytes can be hidden
Detectability: Low/High - Analysis tools like fsck can recognize changed checksums, so the hidden data can be found. If the checksum is recalculated after the data is hidden, it is highly unlikely to be found by fsck or similar tools.
Stability: High - These inode fields are not being used and the data will therefore not be overwritten.
# write into obso_faddr inode field
$ echo "TOP SECRET" | fishy -d testfs-ext4.dd obso_faddr -m metadata.json -w
# read hidden data from obso_faddr inode field
$ fishy -d testfs-ext4.dd obso_faddr -m metadata.json -r
TOP SECRET
# clean up obso_faddr inode field
$ fishy -d testfs-ext4.dd obso_faddr -m metadata.json -c
Note: In its current form, this technique is not usable, as data hidden with it is detected by fsck_apfs. Specifically, two different errors are reported: one indicating that the first padding field may not contain data at all, and a second requiring an unknown and so far unreferenced has_uncompressed_size inode flag to be set.
The inode_padding subcommand provides methods to read, write and clean the unused padding fields in an inode. Available for these filesystem types:
- APFS
The inode_padding technique uses two unused padding fields with a combined size of 10 bytes. The padding fields are always located at the end of the regular value part of an inode and separate it from the inode's extended fields. As only 10 bytes of potential hiding space are available per inode, only small amounts of data can be hidden. The capacity therefore depends on the size of the container and how many files it contains.
This technique uses a complementary class named InodeTable to gather all inodes. The class traverses all Volume Object Maps and saves the node addresses and the corresponding inode value offsets in a list of tuples. The inode_padding technique then uses this list to calculate the offsets of the padding fields and writes the given data to them.
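A sketch of the write step on top of the InodeTable output described above. The helper name and the pad_offset parameter (the distance from the start of the inode value to the two padding fields) are assumptions for illustration only, not fishy's implementation.
PAD_SIZE = 10                         # pad1 (2 bytes) + pad2 (8 bytes)

def hide_in_inode_padding(image, inode_locations, data, pad_offset):
    # inode_locations: (node_address, inode_value_offset) tuples as produced by InodeTable
    chunks = [data[i:i + PAD_SIZE] for i in range(0, len(data), PAD_SIZE)]
    for (node_address, value_offset), chunk in zip(inode_locations, chunks):
        image.seek(node_address + value_offset + pad_offset)
        image.write(chunk)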
Capacity: Low - It is only possible to write to 10 bytes per Inode.
Detectability: High - As of right now, this technique is detected by fsck_apfs and should not be used to hide sensitive data.
Stability: High - The data can only be lost if the inode is deleted. Even in that case the data may be preserved in a snapshot.
# write into inode padding fields
$ echo "TOP SECRET" | fishy -d testfs-apfs.dd inode_padding -m metadata.json -w
# read hidden data from inode padding fields
$ fishy -d testfs-apfs.dd inode_padding -m metadata.json -r
TOP SECRET
# clean up inode padding field
$ fishy -d testfs-apfs.dd inode_padding -m metadata.json -c
The write_gen technique uses the inode field write_gen_counter to hide up to 4 bytes of data per inode. Available for the following filesystem types:
- APFS
The field write_gen_counter is a counter that increases whenever the inode or its data is modified, which leads to potentially low stability. While the capacity is also quite low, with only 4 bytes of hiding space per inode, the method should be very hard to detect: the field normally contains data anyway and is surrounded by fields that are usually not filled with zeroes.
The general structure of this technique resembles the previously mentioned inode_padding technique and it works in a very similar way. It uses the same complementary class, and its own structure matches inode_padding's; the only difference is the amount of data written, read or cleared per inode.
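Accordingly, a sketch differs from the inode_padding sketch above only in the chunk size and the target offset; the helper name and the counter_offset parameter are again hypothetical.
def hide_in_write_gen(image, inode_locations, data, counter_offset):
    # counter_offset: assumed distance from the inode value start to write_gen_counter
    chunks = [data[i:i + 4] for i in range(0, len(data), 4)]
    for (node_address, value_offset), chunk in zip(inode_locations, chunks):
        image.seek(node_address + value_offset + counter_offset)
        image.write(chunk)            # the counter changes on every modification, hence low stability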
Capacity: Low - There are only 4 bytes of hiding space per inode. Like other inode-dependent APFS techniques, the capacity rises proportionally to the amount of files in the system.
Detectability: Low - So far, fsck_apfs does not find any issues. External forensic tools should also not be able to detect hidden data here, as the field seems to have no limits (besides size) on the data it can contain. A manual investigation should also prove difficult.
Stability: Low - Whenever the inode or its data is changed, this counter increases which could lead to data being partially overwritten.
# write into inode write_gen_counter fields
$ echo "TOP SECRET" | fishy -d testfs-apfs.dd write_gen -m metadata.json -w
# read hidden data from inode write_gen_counter fields
$ fishy -d testfs-apfs.dd write_gen -m metadata.json -r
TOP SECRET
# clean up write_gen_counter fields
$ fishy -d testfs-apfs.dd write_gen -m metadata.json -c
The timestamp_hiding subcommand provides methods to write, read and clean the nanosecond part of 64 bit timestamps. Currently available for these filesystem types:
- APFS
APFS has multiple structures that include 64-bit nanosecond timestamps. This technique, however, focuses on the timestamps found in the system's inodes. As of right now, all four timestamps are used:
- Create Time - Time the record was created. Using only this timestamp would be the most stable version.
- Mod Time - Date of the last time the record was modified.
- Change Time - Date of the last time this records' attributes were modified.
- Access Time - Date of the last time this record was accessed.
While this lowers the overall stability, it increases the capacity of this technique. Changing the technique to use only certain timestamps is easily doable as well.
As of right now, this technique writes data to the first 4 bytes of each timestamp. This can cause slight complications, as the seconds part may be affected when all 4 bytes are written. It has been observed that the seconds value increases or decreases by a few seconds when data is written or removed. A workaround is to use only the first 3 bytes instead of the first 4. Another workaround would be to use exactly the number of bits occupied by the nanosecond part of the timestamp.
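The effect on the seconds part follows from the encoding: the timestamps are 64-bit nanosecond values, the low 32 bits span roughly 4.3 seconds, and the sub-second fraction only needs 30 bits. A quick back-of-the-envelope check:
# why overwriting the low 4 bytes of a nanosecond timestamp shifts the visible seconds
print(2**32 / 1e9)   # ~4.29 s  -> writing 4 bytes can move the timestamp by a few seconds
print(2**30 / 1e9)   # ~1.07 s  -> 30 bits are enough to cover the 10**9 possible nanosecond values
print(2**24 / 1e9)   # ~0.017 s -> using only 3 bytes leaves the seconds effectively untouched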
Capacity: Low - This version of the technique hides 16 bytes in one inode. This is the highest possible capacity for this technique unless non-inode timestamps such as the volume superblock timestamps are added.
Detectability: Medium - The currently implemented version also has a minuscule effect on the seconds part of the timestamp, which makes it slightly easier to detect. Implementing either workaround would lower the capacity but make the hidden data harder to detect.
Stability: Medium - This current version uses all 4 timestamps, 3 of which could change, which would lead to overwritten data. Changing the technique to get a higher stability while having a lower capacity is possible and requires minimal changes.
# write into inode nanosecond timestamps
$ echo "TOP SECRET" | fishy -d testfs-apfs.dd timestamp_hiding -m metadata.json -w
# read hidden data from inode nanosecond timestamps
$ fishy -d testfs-apfs.dd timestamp_hiding -m metadata.json -r
TOP SECRET
# clean up inode nanosecond timestamps
$ fishy -d testfs-apfs.dd timestamp_hiding -m metadata.json -c
The xfield_padding technique hides data in the padding generated by an inode's extended fields. The size of the padding varies and depends on the size of the preceding extended field. This technique is currently available for the following filesystem types:
- APFS
The inode record entry types have a unique feature called Extended Fields that follow the regular entry contents. Usually, these fields’ sizes are a multiple of 8 bytes. However, some of these fields have variable sizes based on their content. If a field is not the right size, it is enlarged to the next possible multiple of 8 bytes by adding a padding field.
While this technique is similar to the other APFS-specific techniques that hide data in the inodes, there are some additional steps for this technique. First, the table of contents for the extended field of an inode, which can be found at the end of the inode, has to be interpreted and the number of extended fields has to be extracted. Second, the sizes of the extended fields have to be extracted from the list of extended field headers. This list immediately follows the aforementioned table of contents. Third, the sizes have to be used to determine if there is any padding among this set of extended fields and if so, the size of the padding has to be calculated. Once all padding fields have been found and calculated, the data can be hidden.
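The per-field padding follows from the 8-byte alignment rule described above; a small illustrative calculation (the function name and the field sizes are made-up examples, not fishy's code):
def xfield_paddings(xfield_sizes):
    # padding needed to align each extended field to the next multiple of 8 bytes
    return [(-size) % 8 for size in xfield_sizes]

print(xfield_paddings([10, 8, 13]))   # [6, 0, 3] hideable bytes behind these example fields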
Capacity: Low - Although the capacity of this technique is rather low, it is not possible to make a general statement about how capable this technique will be. Besides depending on the amount of created files and the total size of a file system, there is also a reliance on the size of some extended fields’ content, such as file names.
Detectability: Low - The file system check does not find any inconsistencies and a manual investigation would be difficult due to the dynamic and irregular nature of the extended fields and their padding. The reconstruction of already found hidden data would be somewhat easier since the size of each extended field is known through its header.
Stability: Medium - While the dynamic and irregular nature of the extended fields is beneficial to the detectability of this technique, it is detrimental to its stability. Not all dynamic extended fields are known, but one of them is the file name. If the file name is changed, the size of the field may also change and could overwrite the hidden data in this extended field padding, possibly corrupting the entire set of hidden data.
# write into inode extended field padding
$ echo "TOP SECRET" | fishy -d testfs-apfs.dd xfield_padding -m metadata.json -w
# read hidden data from inode extended field padding
$ fishy -d testfs-apfs.dd xfield_padding -m metadata.json -r
TOP SECRET
# clean up inode extended field padding
$ fishy -d testfs-apfs.dd xfield_padding -m metadata.json -c
In addition to creating a test image (to see how that's done visit this page) and the hiding techniques, there are two more commands to help you use this toolkit.
To get information about a FAT filesystem you can use the fattools subcommand:
# Get some meta information about the FAT filesystem
$ fishy -d testfs-fat32.dd fattools -i
FAT Type: FAT32
Sector Size: 512
Sectors per Cluster: 8
Sectors per FAT: 3904
FAT Count: 2
Dataregion Start Byte: 4014080
Free Data Clusters (FS Info): 499075
Recently Allocated Data Cluster (FS Info): 8
Root Directory Cluster: 2
FAT Mirrored: False
Active FAT: 0
Sector of Bootsector Copy: 6
# List entries of the file allocation table
$ fishy -d testfs-fat12.dd fattools -f
0 last_cluster
1 last_cluster
2 free_cluster
3 last_cluster
4 5
5 6
6 7
7 last_cluster
[...]
# List files in a directory (use cluster_id from second column to list subdirectories)
$ fishy -d testfs-fat12.dd fattools -l 0
f 3 4 another
f 0 0 areallylongfilenamethatiwanttoreadcorrectly.txt
f 4 8001 long_file.txt
d 8 0 onedirectory
f 10 5 testfile.txt
Metadata files are created when writing information into the filesystem. They are required to restore the hidden data or to wipe it from the filesystem. To display the information stored in a metadata file, you can use the metadata subcommand.
# Show metadata information from a metadata file
$ fishy metadata -m metadata.json
Version: 2
Module Identifier: fat-file-slack
Stored Files:
File_ID: 0
Filename: 0
Associated File Metadata:
{'clusters': [[3, 512, 11]]}
Currently, fishy does not provide on-the-fly encryption and does not apply any data integrity methods to the hidden data. It is therefore left to the user to add this extra functionality before hiding the data. The following listing gives two examples of how to use pipes to easily add these features. To encrypt data with a password, one can use gnupg:
$ echo "TOP SECRET" | gpg2 --symmetric - | fishy -d testfs-fat12.dd badcluster -m metadata.json -w
To detect corruption of the hidden data, many possibilities and tools exist. The following listing gives a simple example of how to use gzip for this purpose.
$ echo "TOP SECRET" | gzip | fishy -d testfs-fat12.dd badcluster -m metadata.json -w