Date:   Tue, 30 Jan 2018 15:46:37 +0200
From:   Igor Stoppa <igor.stoppa@...wei.com>
To:     Boris Lukashev <blukashev@...pervictus.com>
CC:     Jerome Glisse <jglisse@...hat.com>, Jann Horn <jannh@...gle.com>,
        "Kees Cook" <keescook@...omium.org>,
        Michal Hocko <mhocko@...nel.org>,
        Laura Abbott <labbott@...hat.com>,
        Christoph Hellwig <hch@...radead.org>,
        Matthew Wilcox <willy@...radead.org>,
        Christoph Lameter <cl@...ux.com>,
        linux-security-module <linux-security-module@...r.kernel.org>,
        Linux-MM <linux-mm@...ck.org>,
        kernel list <linux-kernel@...r.kernel.org>,
        "Kernel Hardening" <kernel-hardening@...ts.openwall.com>
Subject: Re: [kernel-hardening] [PATCH 4/6] Protectable Memory

On 26/01/18 18:36, Boris Lukashev wrote:
> I like the idea of making the verification call optional for consumers
> allowing for fast/slow+hard paths depending on their needs.
> Can't see any additional vectors for abuse (other than the original
> ones affecting out-of-band modification) introduced by having
> verify/normal callers, but I've not had enough coffee yet. Any access
> races or things like that come to mind for anyone?

Well, the devil is in the details.
In this case, the question is how to perform the verification in a way
that is sufficiently robust against races.

After thinking about it for a while, I doubt it can be done reliably.
It might work for some small data types, but the typical use case I
have found myself dealing with is protecting data structures.
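To make the race concern concrete, here is a rough userspace-style
sketch of what an optional verify-on-read path (the fast/slow split you
mention) could look like, and where the check-then-use window sits.
The hash and the helper names are invented for illustration only; this
is not the pmalloc API:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Toy FNV-1a, standing in for whatever digest would really be used. */
static uint64_t toy_hash(const void *data, size_t len)
{
	const unsigned char *p = data;
	uint64_t h = 0xcbf29ce484222325ULL;

	while (len--) {
		h ^= *p++;
		h *= 0x100000001b3ULL;
	}
	return h;
}

struct protected_alloc {
	void *data;
	size_t size;
	uint64_t digest;	/* recorded when the pool is sealed */
};

/* Fast path: trust the write protection, no extra cost. */
static inline void *fast_get(struct protected_alloc *a)
{
	return a->data;
}

/* Slow/hard path: recompute and compare before handing out the pointer. */
static void *verified_get(struct protected_alloc *a)
{
	if (toy_hash(a->data, a->size) != a->digest)
		return NULL;	/* tampering detected */
	/*
	 * Race window: between the comparison above and whatever the
	 * caller does with the returned pointer, the memory could still
	 * be modified out of band.  The check narrows the window, it
	 * does not close it.
	 */
	return a->data;
}

int main(void)
{
	static const char payload[] = "some protected configuration";
	struct protected_alloc a = {
		.data = (void *)payload,
		.size = sizeof(payload),
		.digest = 0,
	};

	a.digest = toy_hash(a.data, a.size);	/* "seal" time */
	printf("verified: %s\n", verified_get(&a) ? "ok" : "TAMPERED");
	return 0;
}

The race window comment is exactly the part I do not see how to close
reliably, for anything larger than a word-sized value.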

That also brings up a separate problem: what would be the size of the
data to hash? At one extreme there is a whole page, but that is
probably too much, so what is the correct granularity? It cannot be
smaller than a single allocation, but even per-allocation hashing
implies looking up the hash that covers the data being accessed, which
adds overhead.

And the data being accessed might be a single field in a struct, for
which there would be no dedicated hash.
There would be a hash only for the containing struct that was allocated ...
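To illustrate the kind of lookup I mean: if only whole allocations
carry a hash, then verifying an access to one field first requires
mapping the field's address back to its containing allocation. A rough
sketch, with data structures and names invented for illustration
(nothing to do with the real pmalloc internals):

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* One record per protected allocation; names are illustrative only. */
struct alloc_record {
	const void *base;
	size_t size;
	uint64_t digest;	/* hash of [base, base + size) at seal time */
};

/*
 * Map an arbitrary pointer (e.g. &obj->some_field) back to the record
 * of the allocation that contains it.  A linear scan keeps the sketch
 * short; a real implementation would need a tree or similar, and that
 * lookup is exactly the extra per-access cost discussed above.
 */
static const struct alloc_record *
find_containing(const struct alloc_record *tbl, size_t n, const void *ptr)
{
	uintptr_t p = (uintptr_t)ptr;

	for (size_t i = 0; i < n; i++) {
		uintptr_t start = (uintptr_t)tbl[i].base;

		if (p >= start && p - start < tbl[i].size)
			return &tbl[i];
	}
	return NULL;	/* pointer is not inside any protected allocation */
}

struct fw_config {
	int version;
	int flags;
};

int main(void)
{
	static const struct fw_config cfg = { .version = 2, .flags = 0x1 };
	struct alloc_record table[] = {
		/* digest computation omitted in this sketch */
		{ .base = &cfg, .size = sizeof(cfg), .digest = 0 },
	};
	const struct alloc_record *r = find_containing(table, 1, &cfg.flags);

	if (r)
		printf("field belongs to a %zu-byte allocation\n", r->size);
	else
		printf("field is not in any protected allocation\n");
	return 0;
}

Whatever replaces the linear scan, that lookup is a per-access cost
that plain reads of write-protected data do not have today.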


Overall, it seems like a good idea in theory, but when I think about
its implementation, the overhead looks so big that it would discourage
its use for almost any practical purpose.

If one really wants to be paranoid, one could, OTOH, have redundancy
in a different pool.
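For completeness, the redundancy idea could look roughly like this:
keep a second, independently protected copy and compare the two on the
paranoid path, so an attacker would have to corrupt both pools
consistently. Again just a sketch with made-up names, not a proposal
for the actual API:

#include <stddef.h>
#include <string.h>
#include <stdio.h>

/*
 * Hypothetical "redundant allocation": the same object kept in two
 * separately protected pools.  Names are invented for illustration.
 */
struct redundant_alloc {
	const void *primary;	/* copy in pool A */
	const void *shadow;	/* copy in pool B */
	size_t size;
};

/* Paranoid accessor: only hand out the data if both copies agree. */
static const void *paranoid_get(const struct redundant_alloc *r)
{
	if (memcmp(r->primary, r->shadow, r->size) != 0)
		return NULL;	/* copies diverged: tampering or corruption */
	return r->primary;
}

int main(void)
{
	static const char a[] = "protected settings";
	static const char b[] = "protected settings";	/* shadow copy */
	struct redundant_alloc r = { a, b, sizeof(a) };

	printf("%s\n", paranoid_get(&r) ? "copies match" : "MISMATCH");
	return 0;
}

The comparison cost grows with the object size, so this would only
make sense on an explicitly paranoid path, not on every access.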

--
igor
