Message-Id: <02D551EF-C975-4B91-86CA-356FA0FF515C@gmail.com>
Date:   Wed, 12 Aug 2020 10:15:22 -0400
From:   Chuck Lever <chucklever@...il.com>
To:     James Bottomley <James.Bottomley@...senPartnership.com>
Cc:     Mimi Zohar <zohar@...ux.ibm.com>, James Morris <jmorris@...ei.org>,
        Deven Bowers <deven.desai@...ux.microsoft.com>,
        Pavel Machek <pavel@....cz>, Sasha Levin <sashal@...nel.org>,
        snitzer@...hat.com, dm-devel@...hat.com,
        tyhicks@...ux.microsoft.com, agk@...hat.com,
        Paul Moore <paul@...l-moore.com>,
        Jonathan Corbet <corbet@....net>, nramas@...ux.microsoft.com,
        serge@...lyn.com, pasha.tatashin@...een.com,
        Jann Horn <jannh@...gle.com>, linux-block@...r.kernel.org,
        Al Viro <viro@...iv.linux.org.uk>,
        Jens Axboe <axboe@...nel.dk>, mdsakib@...rosoft.com,
        open list <linux-kernel@...r.kernel.org>, eparis@...hat.com,
        linux-security-module@...r.kernel.org, linux-audit@...hat.com,
        linux-fsdevel <linux-fsdevel@...r.kernel.org>,
        linux-integrity@...r.kernel.org,
        jaskarankhurana@...ux.microsoft.com
Subject: Re: [dm-devel] [RFC PATCH v5 00/11] Integrity Policy Enforcement LSM
 (IPE)



> On Aug 11, 2020, at 11:53 AM, James Bottomley <James.Bottomley@...senPartnership.com> wrote:
> 
> On Tue, 2020-08-11 at 10:48 -0400, Chuck Lever wrote:
>>> On Aug 11, 2020, at 1:43 AM, James Bottomley <James.Bottomley@...senPartnership.com> wrote:
>>> 
>>> On Mon, 2020-08-10 at 19:36 -0400, Chuck Lever wrote:
>>>>> On Aug 10, 2020, at 11:35 AM, James Bottomley
>>>>> <James.Bottomley@...senPartnership.com> wrote:
> [...]
>>>>> The first basic is that a merkle tree allows unit-at-a-time
>>>>> verification. First of all we should agree on the unit.  Since
>>>>> we always fault a page at a time, I think our merkle tree unit
>>>>> should be a page, not a block.
>>>> 
>>>> Remote filesystems will need to agree that the size of that unit
>>>> is the same everywhere, or the unit size could be stored in the
>>>> per-file metadata.
>>>> 
>>>> 
>>>>> Next, we should agree where the check gates for the per page
>>>>> accesses should be ... definitely somewhere in readpage, I
>>>>> suspect, and finally we should agree how the merkle tree is
>>>>> presented at the gate.  I think there are three ways:
>>>>> 
>>>>> 1. Ahead of time transfer:  The merkle tree is transferred and
>>>>>    verified at some time before the accesses begin, so we
>>>>>    already have a verified copy and can compare against the
>>>>>    lower leaf.
>>>>> 2. Async transfer:  We provide an async mechanism to transfer
>>>>>    the necessary components, so when presented with a unit, we
>>>>>    check the log n components required to get to the root.
>>>>> 3. The protocol actually provides the capability of 2 (like the
>>>>>    SCSI DIF/DIX), so to IMA all the pieces get presented instead
>>>>>    of IMA having to manage the tree.
>>>> 
>>>> A Merkle tree is potentially large enough that it cannot be
>>>> stored in an extended attribute. In addition, an extended
>>>> attribute is not a byte stream that you can seek into or read
>>>> small parts of; it is retrieved in a single shot.
>>> 
>>> Well, you wouldn't store the tree, would you, just the head
>>> hash.  The rest of the tree can be derived from the data.  You need
>>> to distinguish between what you *must* have to verify integrity
>>> (the head hash, possibly signed)
>> 
>> We're dealing with an untrusted storage device, and for a remote
>> filesystem, an untrusted network.
>> 
>> Mimi's earlier point is that any IMA metadata format that involves
>> unsigned digests is exposed to an alteration attack at rest or in
>> transit, and thus will not provide a robust end-to-end integrity
>> guarantee.
>> 
>> Therefore, tree root digests must be cryptographically signed to be
>> properly protected in these environments. Verifying that signature
>> should be done infrequently relative to reading a file's content.
> 
> I'm not disagreeing there has to be a way for the relying party to
> trust the root hash.
> 
>>> and what is nice to have to speed up the verification
>>> process.  The choice for the latter is cache or reconstruct
>>> depending on the resources available.  If the tree gets cached on
>>> the server, that would be a server implementation detail invisible
>>> to the client.
>> 
>> We assume that storage targets (for block or file) are not trusted.
>> Therefore storage clients cannot rely on intermediate results (e.g.,
>> middle nodes in a Merkle tree) unless those results are generated
>> within the client's trust envelope.
> 
> Yes, they can ... because supplied nodes can be verified.  That's the
> whole point of a merkle tree.  As long as I'm sure of the root hash I
> can verify all the rest even if supplied by an untrusted source.  If
> you consider a simple merkle tree covering 4 blocks:
> 
>       R
>     /   \
>  H11     H12
>  / \     / \
> H21 H22 H23 H24
> |    |   |   |
> B1   B2  B3  B4
> 
> Assume I have the verified root hash R.  If you supply B3, you also
> supply H24 and H11 as proof.  I verify by hashing B3 to produce H23,
> then hash H23 and H24 to produce H12; if H12 and your supplied H11
> hash to R, the tree is correct and the B3 you supplied must likewise
> be correct.

I'm not sure what you are proving here. Obviously this has to work
in order for a client to reconstruct the file's Merkle tree given
only R and the file content.

It's the construction of the tree and verification of the hashes that
are potentially expensive. The point of caching intermediate hashes
is so that the client verifies them as few times as possible.  I
don't see value in caching those hashes on an untrusted server --
the client will have to reverify them anyway, and there will be no
savings.

Cache once, as close as you can to where the data will be used.
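
To make the cost concrete, here's a rough sketch of the verification
walk you describe, in Python-style pseudocode (the helper names and the
proof layout are invented for illustration, not taken from any existing
implementation): hash the supplied unit, fold in the supplied sibling
hashes level by level, and compare the result against the trusted root
R.  The per-unit cost under discussion is exactly this log(n) chain of
hash operations.

import hashlib

def sha256(data):
    return hashlib.sha256(data).digest()

def verify_block(block, proof, trusted_root):
    # 'proof' is the list of sibling hashes from the leaf level up to
    # the root, each tagged with the side the sibling sits on.  For B3
    # in the 4-block example above: [(H24, 'right'), (H11, 'left')].
    node = sha256(block)                   # H23 in the example
    for sibling, side in proof:
        if side == 'left':
            node = sha256(sibling + node)  # parent = H(sibling || node)
        else:
            node = sha256(node + sibling)  # parent = H(node || sibling)
    return node == trusted_root            # must equal the signed root R

Whether the sibling hashes come from a local cache or from the server
makes no difference to correctness, only to how much of that chain has
to be recomputed and reverified.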


>> So: if the storage target is considered inside the client's trust
>> envelope, it can cache or store durably any intermediate parts of
>> the verification process. If not, the network and file storage are
>> considered untrusted, and the client has to rely on nothing but the
>> signed digest of the tree root.
>> 
>> We could build a scheme around, say, fscache, that might save the
>> intermediate results durably and locally.
> 
> I agree we want caching on the client, but we can always page in from
> the remote as long as we page enough to verify up to R, so we're always
> sure the remote supplied genuine information.

Agreed.


>>>> For this reason, the idea was to save only the signature of the
>>>> tree's root on durable storage. The client would retrieve that
>>>> signature possibly at open time, and reconstruct the tree at that
>>>> time.
>>> 
>>> Right that's the integrity data you must have.
>>> 
>>>> Or the tree could be partially constructed on-demand at the time
>>>> each unit is to be checked (say, as part of 2. above).
>>> 
>>> Whether it's reconstructed or cached can be an implementation
>>> detail. You clearly have to reconstruct once, but whether you have
>>> to do it again depends on the memory available for caching and all
>>> the other resource calls in the system.
>>> 
>>>> The client would have to reconstruct that tree again if memory
>>>> pressure caused some or all of the tree to be evicted, so perhaps
>>>> an on-demand mechanism is preferable.
>>> 
>>> Right, but I think that's implementation detail.  Probably what we
>>> need is a way to get the log(N) verification hashes from the server
>>> and it's up to the client whether it caches them or not.
>> 
>> Agreed, these are implementation details. But see above about the
>> trustworthiness of the intermediate hashes. If they are conveyed
>> on an untrusted network, then they can't be trusted either.
> 
> Yes, they can, provided enough of them are fetched to complete the
> verification.  If you look at the simple example above, suppose I have
> cached H11 and H12, but I've lost the entire H2X layer.  I want to
> verify B3, so I also ask you for your copy of H24.  Then I generate
> H23 from B3 and hash H23 and H24.  If this doesn't hash to H12, I know
> either you supplied me the wrong block or lied about H24.  However, if
> it all hashes correctly, I know you supplied me with both the correct
> B3 and the correct H24.

My point is there is a difference between a trusted cache and an
untrusted cache. I argue there is not much value in a cache where
the hashes have to be verified again.
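
To illustrate the difference, here's the same sketch extended with a
client-side cache (again invented names, purely illustrative): if an
intermediate hash was verified locally on an earlier access, the walk
can stop as soon as it reaches that node, whereas hashes cached on an
untrusted server still have to be rechecked all the way up to R.

def verify_block_cached(block, proof, trusted_root, locally_verified):
    # 'locally_verified' is a set of intermediate hashes this client
    # has itself verified up to the signed root on an earlier access.
    # (A real implementation would also key the cache by tree position.)
    node = sha256(block)
    for sibling, side in proof:
        if node in locally_verified:
            return True                    # reached an ancestor we already trust
        if side == 'left':
            node = sha256(sibling + node)
        else:
            node = sha256(node + sibling)
    return node == trusted_root

That early return is where the savings come from, and it only exists
when the cache lives inside the client's trust envelope.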


>>>>> There are also a load of minor things like how we get the head
>>>>> hash, which must be presented and verified ahead of time for
>>>>> each of the above 3.
>>>> 
>>>> Also, changes to a file's content and its tree signature are not
>>>> atomic. If a file is mutable, then there is the period between
>>>> when the file content has changed and when the signature is
>>>> updated. Some discussion of how a client is to behave in those
>>>> situations will be necessary.
>>> 
>>> For IMA, if you write to a checked file, it gets rechecked the next
>>> time the gate (open/exec/mmap) is triggered.  This means you must
>>> complete the update and have the new integrity data in place before
>>> triggering the check.  I think this could apply equally to a merkle
>>> tree based system.  It's a sort of "Doctor, Doctor, it hurts when I
>>> do this" situation.
>> 
>> I imagine it's a common situation where a "yum update" process is
>> modifying executables while clients are running them. To prevent
>> a read from pulling refreshed content before the new tree root is
>> available, it would have to block temporarily until the verification
>> process succeeds with the updated tree root.
> 
> No ... it's not.  Yum specifically worries about that today because if
> you update running binaries, it causes a crash.  Yum constructs the
> entire new file, then atomically links it into place and deletes the
> old inode to prevent these crashes.  It never allows you to get into
> the situation where you can execute something that will be modified.
> That's also why you have to restart stuff after a yum update: if you
> didn't, it would still be attached to the deleted inode.

Fair enough.
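
For reference, the pattern you describe looks roughly like this (a
sketch only; the actual rpm/yum code paths differ): build the complete
new file under a temporary name, then rename(2) it over the old path,
so an open/exec/mmap gate only ever sees either the old inode or the
fully written new one, never a half-updated file.

import os, tempfile

def atomic_replace(path, new_contents):
    # Write to a temp file in the same directory, fsync, then rename
    # over 'path'.  rename(2) is atomic, so readers see either the old
    # file or the complete new one; running programs stay attached to
    # the old (now unlinked) inode until they restart.
    dirname = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(new_contents)
            f.flush()
            os.fsync(f.fileno())
        os.rename(tmp, path)
    except BaseException:
        os.unlink(tmp)
        raise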

--
Chuck Lever
chucklever@...il.com


