Date:   Tue, 14 Jan 2020 16:48:29 +0000
From:   David Howells <>
Subject: Problems with determining data presence by examining extents?

Again with regard to my rewrite of fscache and cachefiles:

I've got rid of my use of bmap()!  Hooray!

However, I'm informed that I can't trust the extent map of a backing file to
tell me accurately whether content exists in a file because:

 (a) Not-quite-contiguous extents may be joined by insertion of blocks of
     zeros by the filesystem optimising itself.  This would give me a false
     positive when trying to detect the presence of data.

 (b) Blocks of zeros that I write into the file may get punched out by
     filesystem optimisation, since a read back would be expected to return
     zeros there anyway, provided it's below the EOF.  This would give me a
     false negative when trying to detect the presence of data.

Is there some setting I can use to prevent these scenarios on a file - or can
one be added?

Without being able to trust the filesystem to tell me accurately what I've
written into it, I have to use some other mechanism.  Currently, I've switched
to storing a map in an xattr with 1 bit per 256k block, but that gets hard to
use if the file grows particularly large, and it also has integrity
consequences - though those are hopefully limited as I'm now using DIO to
store data into the cache.

If it helps, I'm downloading data in aligned 256k blocks and storing data in
those same aligned 256k blocks, so if that makes it easier...
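To make the xattr-map scheme concrete, the bit bookkeeping itself is trivial - a minimal sketch (helper names are mine, not the actual cachefiles code) of mapping file offsets to bits, one bit per 256k block:

```c
/* Sketch of the presence map kept in an xattr: one bit per 256k block.
 * These helpers only do the bit arithmetic; persisting the map to an
 * xattr and keeping it in step with DIO writes is the hard part.
 */
#include <stdbool.h>

#define CACHE_BLOCK_SHIFT 18		/* 256k == 1 << 18 */

/* Mark the block containing 'off' as present. */
static void map_set(unsigned char *map, long long off)
{
	unsigned long long blk = (unsigned long long)off >> CACHE_BLOCK_SHIFT;
	map[blk / 8] |= 1u << (blk % 8);
}

/* Does the map claim the block containing 'off' holds data? */
static bool map_test(const unsigned char *map, long long off)
{
	unsigned long long blk = (unsigned long long)off >> CACHE_BLOCK_SHIFT;
	return map[blk / 8] & (1u << (blk % 8));
}
```

The scaling pain is plain from the arithmetic: at 1 bit per 256k, a 1TiB file needs a 512KiB map, well past common per-xattr size limits - presumably the "grows particularly large" problem above.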

