Date:   Thu, 28 Jan 2021 14:01:06 +0100
From:   Michal Hocko <mhocko@...e.com>
To:     Mike Rapoport <rppt@...nel.org>
Cc:     David Hildenbrand <david@...hat.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Alexander Viro <viro@...iv.linux.org.uk>,
        Andy Lutomirski <luto@...nel.org>,
        Arnd Bergmann <arnd@...db.de>, Borislav Petkov <bp@...en8.de>,
        Catalin Marinas <catalin.marinas@....com>,
        Christopher Lameter <cl@...ux.com>,
        Dan Williams <dan.j.williams@...el.com>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Elena Reshetova <elena.reshetova@...el.com>,
        "H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...hat.com>,
        James Bottomley <jejb@...ux.ibm.com>,
        "Kirill A. Shutemov" <kirill@...temov.name>,
        Matthew Wilcox <willy@...radead.org>,
        Mark Rutland <mark.rutland@....com>,
        Mike Rapoport <rppt@...ux.ibm.com>,
        Michael Kerrisk <mtk.manpages@...il.com>,
        Palmer Dabbelt <palmer@...belt.com>,
        Paul Walmsley <paul.walmsley@...ive.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Rick Edgecombe <rick.p.edgecombe@...el.com>,
        Roman Gushchin <guro@...com>,
        Shakeel Butt <shakeelb@...gle.com>,
        Shuah Khan <shuah@...nel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Tycho Andersen <tycho@...ho.ws>, Will Deacon <will@...nel.org>,
        linux-api@...r.kernel.org, linux-arch@...r.kernel.org,
        linux-arm-kernel@...ts.infradead.org,
        linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, linux-kselftest@...r.kernel.org,
        linux-nvdimm@...ts.01.org, linux-riscv@...ts.infradead.org,
        x86@...nel.org, Hagen Paul Pfeifer <hagen@...u.net>,
        Palmer Dabbelt <palmerdabbelt@...gle.com>
Subject: Re: [PATCH v16 07/11] secretmem: use PMD-size pages to amortize
 direct map fragmentation

On Thu 28-01-21 11:22:59, Mike Rapoport wrote:
> On Tue, Jan 26, 2021 at 01:08:23PM +0100, Michal Hocko wrote:
> > On Tue 26-01-21 12:56:48, David Hildenbrand wrote:
> > > On 26.01.21 12:46, Michal Hocko wrote:
> > > > On Thu 21-01-21 14:27:19, Mike Rapoport wrote:
> > > > > From: Mike Rapoport <rppt@...ux.ibm.com>
> > > > > 
> > > > > Removing a PAGE_SIZE page from the direct map every time such a page is
> > > > > allocated for a secret memory mapping will cause severe fragmentation of
> > > > > the direct map. This fragmentation can be reduced by using PMD-size pages
> > > > > as a pool of small pages for secret memory mappings.
> > > > > 
> > > > > Add a gen_pool per secretmem inode and lazily populate this pool with
> > > > > PMD-size pages.
> > > > > 
> > > > > As pages allocated by secretmem become unmovable, use CMA to back large
> > > > > page caches so that the page allocator won't be surprised by a failing
> > > > > attempt to migrate these pages.
> > > > > 
> > > > > The CMA area used by secretmem is controlled by the "secretmem=" kernel
> > > > > parameter. This allows explicit control over the memory available for
> > > > > secretmem and provides an upper hard limit on secretmem consumption.
> > > > 
> > > > OK, so I have finally had a closer look at this and it is really not
> > > > acceptable. I have already mentioned this in a response to another patch,
> > > > but any task is able to deprive other tasks of access to secret memory
> > > > and trigger the OOM killer, from which the system wouldn't really ever
> > > > recover and could potentially panic. Now you could be less drastic and
> > > > only raise SIGBUS on the fault, but that would still be quite terrible.
> > > > There is a very good reason why hugetlb implements its non-trivial
> > > > reservation system to avoid exactly these problems.
> 
> So, if I understand your concerns correctly, this implementation has two
> issues:
> 1) allocation failure at page fault that causes unrecoverable OOM, and
> 2) the possibility for an unprivileged user to deplete the secretmem pool
> and cause (1) for others.
> 
> I'm not really familiar with OOM internals, but when I simulated an
> allocation failure in my testing, only the allocating process and its
> parent were OOM-killed, and then the system continued normally.

If you kill the allocating process then yes, it would work, but your
process might be the very last to be selected.
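
As an aside, for anybody who has not looked at the patch itself: the pooling
scheme described in the changelog quoted above boils down to roughly the
following. This is a simplified sketch with hypothetical names (it is not the
actual patch code), using the gen_pool and CMA interfaces as I understand them:

#include <linux/genalloc.h>
#include <linux/cma.h>
#include <linux/mm.h>

#define SECRETMEM_CHUNK_ORDER	(PMD_SHIFT - PAGE_SHIFT)

static struct cma *secretmem_cma;	/* set up from the "secretmem=" parameter */

/* Lazily add one PMD-size chunk from the CMA area to the inode's pool. */
static int secretmem_pool_refill(struct gen_pool *pool)
{
	struct page *page;

	page = cma_alloc(secretmem_cma, 1 << SECRETMEM_CHUNK_ORDER,
			 SECRETMEM_CHUNK_ORDER, false);
	if (!page)
		return -ENOMEM;

	return gen_pool_add(pool, (unsigned long)page_address(page),
			    PMD_SIZE, NUMA_NO_NODE);
}

/* Hand out 4K pages from the per-inode pool; refill it on demand. */
static struct page *secretmem_alloc_page(struct gen_pool *pool)
{
	unsigned long addr;

	addr = gen_pool_alloc(pool, PAGE_SIZE);
	if (!addr) {
		if (secretmem_pool_refill(pool))
			return NULL;	/* the failure this thread is about */
		addr = gen_pool_alloc(pool, PAGE_SIZE);
		if (!addr)
			return NULL;
	}

	return virt_to_page((void *)addr);
}

The failure path at the end is exactly where the fault handler is left with
nothing better than OOM or SIGBUS.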

> You are right, it would be better if we raised SIGBUS instead of OOM, but I
> don't agree that SIGBUS is terrible. Since we have started drawing parallels
> with hugetlbfs: even despite its complex reservation system, hugetlb_fault()
> may fail to allocate pages from CMA, and this will still cause SIGBUS.

This is an unexpected runtime error, unless you make it an integral part
of the API design.

> And hugetlb pools may also be depleted by anybody calling
> mmap(MAP_HUGETLB), and there is no limiting knob for this, while
> secretmem has RLIMIT_MEMLOCK.

Yes, it can fail. But it would fail at mmap() time, when the reservation
fails, not at #PF time, which can happen at any point.
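
To make the distinction concrete, a sketch (my_pool_reserve() and
my_pool_alloc() are hypothetical stand-ins, not actual hugetlbfs or
secretmem code):

#include <linux/mm.h>

/* Reservation model (hugetlbfs-style): the shortage surfaces at mmap(). */
static int reserving_mmap(struct file *file, struct vm_area_struct *vma)
{
	if (!my_pool_reserve(vma_pages(vma)))	/* hypothetical helper */
		return -ENOMEM;			/* userspace sees mmap() fail */
	return 0;
}

/* No reservation: the same shortage only shows up in the fault path. */
static vm_fault_t lazy_fault(struct vm_fault *vmf)
{
	struct page *page = my_pool_alloc(vmf->gfp_mask);	/* hypothetical */

	if (!page)
		return VM_FAULT_SIGBUS;	/* or VM_FAULT_OOM - either way it is late */

	vmf->page = page;
	return 0;
}

In the first case the application can handle ENOMEM gracefully; in the
second it gets killed or signalled in the middle of a memory access it had
every reason to believe would succeed.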

> That said, simply replacing VM_FAULT_OOM with VM_FAULT_SIGBUS makes
> secretmem at least as controllable and robust as hugetlbfs, even without
> complex reservation at mmap() time.

Still sucks huge!

> > > > So unless I am really misreading the code
> > > > Nacked-by: Michal Hocko <mhocko@...e.com>
> > > > 
> > > > That doesn't mean I reject the whole idea. There are some details to
> > > > sort out, as mentioned elsewhere, but you cannot really depend on a
> > > > pre-allocated pool which can fail at fault time like that.
> > > 
> > > So, to do it similarly to hugetlbfs (e.g., with CMA), there would have to be
> > > a mechanism to actually try pre-reserving (e.g., from the CMA area), at which
> > > point the pages would get moved to the secretmem pool, and a mechanism for
> > > mmap() etc. to "reserve" from this secretmem pool, such that there are
> > > guarantees at fault time?
> > 
> > Yes, reserve at mmap time and use during the fault. But this all sounds
> > like a self-inflicted problem to me. Sure, you can have a pre-allocated
> > or more dynamic pool to reduce direct map fragmentation, but you can
> > always fall back to regular allocations. In other words, have the pool
> > as an optimization rather than a hard requirement. With careful access
> > control this sounds like a manageable solution to me.
> 
> I really wish we had had this discussion for earlier spins of this series,
> but since that didn't happen, let's refresh the history a bit.

I am sorry, but I am really struggling to find the time to keep up with
all the moving targets...

> One of the major pushbacks on the first RFC [1] of the concept was about
> direct map fragmentation. I tried really hard to find data that shows what
> the performance difference is with different page sizes in the direct map,
> and I didn't find anything.
> 
> So, presuming that large pages do provide an advantage, the first
> implementation of secretmem used PMD_ORDER allocations to amortise the
> effect of direct map fragmentation and then handed out 4k pages at each
> fault. In addition, there was an option to reserve a finite pool at boot
> time and limit secretmem allocations to that pool only.
> 
> At some point David suggested using CMA to improve overall flexibility [3],
> so I switched secretmem to use CMA.
> 
> Now, with the data we have at hand (my benchmarks and Intel's report that
> David mentioned), I'm not even sure this whole pooling is required at all.

I would still like to understand whether that data is actually
representative, with some underlying reasoning rather than "I have run
these XYZ benchmarks and the numbers do not look terrible".

> I like the idea of having a pool as an optimization rather than a hard
> requirement, but I don't see why it would need careful access control. As
> direct map fragmentation does not necessarily degrade performance (sometimes
> it actually improves it), and even when it does the degradation is small,
> trying a PMD_ORDER allocation for the pool and then falling back to a 4K
> page may be just fine.

Well, as soon as this is a scarce resource, access control seems like the
first thing to think of. Maybe it is not really necessary, but then that
should be properly justified.
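
FWIW, the "pool as an optimization" variant could look roughly like this (a
sketch with hypothetical names; secretmem_pool_refill() is the same imaginary
helper as in the sketch earlier in this mail):

#include <linux/genalloc.h>
#include <linux/gfp.h>
#include <linux/mm.h>

static struct page *secretmem_alloc_opportunistic(struct gen_pool *pool,
						  gfp_t gfp)
{
	unsigned long addr;

	addr = gen_pool_alloc(pool, PAGE_SIZE);
	if (addr)
		return virt_to_page((void *)addr);

	/* Best effort: try to add a PMD-size chunk to limit fragmentation. */
	if (!secretmem_pool_refill(pool)) {	/* hypothetical, see above */
		addr = gen_pool_alloc(pool, PAGE_SIZE);
		if (addr)
			return virt_to_page((void *)addr);
	}

	/*
	 * Fall back to a plain 4K allocation rather than failing: this page
	 * will fragment the direct map once it is removed from it, which is
	 * the trade-off under discussion.
	 */
	return alloc_page(gfp);
}

With such a fallback a depleted pool no longer fails the fault; what remains
of the access control question is whether the extra direct map fragmentation
from the fallback path needs to be limited.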

I am also still not sure why this whole thing is not just a ramdisk/ramfs
which happens to unmap its pages from the direct map. Wouldn't that be a
much easier model to work with? You would get access control for free as
well.
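
Whatever the filesystem around it ends up looking like, the core would
presumably still be the same direct map manipulation secretmem already does.
Something along these lines, as a sketch (assuming the existing
set_direct_map_*_noflush() helpers; function names are made up):

#include <linux/mm.h>
#include <linux/set_memory.h>
#include <asm/tlbflush.h>

static struct page *secret_page_get(gfp_t gfp)
{
	struct page *page = alloc_page(gfp | __GFP_ZERO);
	unsigned long addr;

	if (!page)
		return NULL;

	addr = (unsigned long)page_address(page);

	/* Drop the page from the kernel direct map. */
	if (set_direct_map_invalid_noflush(page)) {
		__free_page(page);
		return NULL;
	}
	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);

	return page;
}

static void secret_page_put(struct page *page)
{
	/* Restore the direct map entry before the page goes back to the buddy. */
	set_direct_map_default_noflush(page);
	__free_page(page);
}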
-- 
Michal Hocko
SUSE Labs
