Message-ID: <6c8b30fb-1b6c-d446-0b09-255b79468f7c@redhat.com>
Date:   Wed, 19 Aug 2020 19:45:29 +0200
From:   David Hildenbrand <david@...hat.com>
To:     Mike Rapoport <rppt@...nel.org>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Alexander Viro <viro@...iv.linux.org.uk>,
        Andy Lutomirski <luto@...nel.org>,
        Arnd Bergmann <arnd@...db.de>, Borislav Petkov <bp@...en8.de>,
        Catalin Marinas <catalin.marinas@....com>,
        Christopher Lameter <cl@...ux.com>,
        Dan Williams <dan.j.williams@...el.com>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Elena Reshetova <elena.reshetova@...el.com>,
        "H. Peter Anvin" <hpa@...or.com>, Idan Yaniv <idan.yaniv@....com>,
        Ingo Molnar <mingo@...hat.com>,
        James Bottomley <jejb@...ux.ibm.com>,
        "Kirill A. Shutemov" <kirill@...temov.name>,
        Matthew Wilcox <willy@...radead.org>,
        Mark Rutland <mark.rutland@....com>,
        Mike Rapoport <rppt@...ux.ibm.com>,
        Michael Kerrisk <mtk.manpages@...il.com>,
        Palmer Dabbelt <palmer@...belt.com>,
        Paul Walmsley <paul.walmsley@...ive.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Tycho Andersen <tycho@...ho.ws>, Will Deacon <will@...nel.org>,
        linux-api@...r.kernel.org, linux-arch@...r.kernel.org,
        linux-arm-kernel@...ts.infradead.org,
        linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, linux-nvdimm@...ts.01.org,
        linux-riscv@...ts.infradead.org, x86@...nel.org
Subject: Re: [PATCH v4 6/6] mm: secretmem: add ability to reserve memory at
 boot

On 19.08.20 19:33, Mike Rapoport wrote:
> On Wed, Aug 19, 2020 at 02:10:43PM +0200, David Hildenbrand wrote:
>> On 19.08.20 13:53, Mike Rapoport wrote:
>>> On Wed, Aug 19, 2020 at 12:49:05PM +0200, David Hildenbrand wrote:
>>>> On 18.08.20 16:15, Mike Rapoport wrote:
>>>>> From: Mike Rapoport <rppt@...ux.ibm.com>
>>>>>
>>>>> Taking pages out of the direct map and bringing them back may cause
>>>>> undesired fragmentation and the use of smaller pages in the direct
>>>>> mapping of the physical memory.
>>>>>
>>>>> This can be avoided if a sufficiently large area of physical memory
>>>>> is reserved for secretmem purposes at boot time.
>>>>>
>>>>> Add the ability to reserve physical memory for secretmem at boot time
>>>>> using the "secretmem" kernel parameter, and then use that reserved
>>>>> memory as a global pool for secret memory needs.
>>>>
>>>> Wouldn't something like CMA be the better fit? Just wondering. Then, the
>>>> memory can actually be reused for something else while not needed.
>>>
>>> The memory allocated as secret is removed from the direct map, and the
>>> boot time reservation is intended to reduce direct map fragmentation
>>> and to avoid splitting 1G pages there. So with CMA I'd still need to
>>> allocate 1G chunks for this, and once a 1G page is dropped from the
>>> direct map it still cannot be reused for anything else until it is freed.
>>>
>>> I could use CMA to do the boot time reservation, but doing the
>>> reservation directly seemed simpler and more explicit to me.
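
A minimal sketch of that kind of direct boot-time reservation, assuming an
early_param() hook and memblock; the function and variable names below are
illustrative only, not the ones used in the actual patch:

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/memblock.h>
#include <linux/mm.h>

/* Illustrative globals for the reserved secretmem pool. */
static unsigned long secretmem_reserved_size __initdata;
static void *secretmem_reserved_start;

/* Parse "secretmem=<size>" from the kernel command line. */
static int __init secretmem_setup(char *str)
{
	secretmem_reserved_size = memparse(str, &str);
	return 0;
}
early_param("secretmem", secretmem_setup);

/* Assumed to be called from early init, once memblock is usable. */
void __init secretmem_reserve(void)
{
	if (!secretmem_reserved_size)
		return;

	secretmem_reserved_start = memblock_alloc(secretmem_reserved_size,
						  PAGE_SIZE);
	if (!secretmem_reserved_start)
		pr_err("secretmem: cannot reserve %lu bytes\n",
		       secretmem_reserved_size);
}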
>>
>> Well, using CMA would give you the possibility to let the memory be used
>> for other purposes until you decide it's the right time to take it +
>> remove the direct mapping etc.
> 
> I still can't say I follow you here. If I reserve a CMA area as a pool
> for secret memory 1G pages, it is still reserved and it still cannot be
> used for other purposes, right?

So, AFAIK, if you create a CMA pool it can be used for any MOVABLE
allocations (similar to ZONE_MOVABLE) until you actually allocate CMA
memory from that region. Other allocations in that area will then be
migrated away (using alloc_contig_range()).
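
To illustrate, a rough sketch of declaring such a CMA area at boot using
the existing cma_declare_contiguous() interface; the secretmem-side names
here are made up, and error handling is omitted:

#include <linux/cma.h>
#include <linux/init.h>
#include <linux/sizes.h>

/* Hypothetical CMA area dedicated to secretmem. */
static struct cma *secretmem_cma;

/* Assumed to be called from early boot code, while memblock is active. */
static int __init secretmem_cma_reserve(void)
{
	/*
	 * Carve out 1 GiB. Until cma_alloc() actually takes pages from
	 * this area, it remains available for MOVABLE allocations.
	 */
	return cma_declare_contiguous(0, SZ_1G, 0, 0, 0, false,
				      "secretmem", &secretmem_cma);
}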

For example, if you have a 1 GiB CMA area, you could allocate 4 MB chunks
from that CMA area on demand (removing the direct mapping, etc.), and free
them when no longer needed (re-instantiating the direct mapping). The free
memory in that area could be used for MOVABLE allocations.
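
Roughly something like the following sketch, assuming the per-page
set_direct_map_*_noflush() helpers; the secretmem-side function names are
made up, and TLB flushing / error handling are left out:

#include <linux/cma.h>
#include <linux/mm.h>
#include <linux/set_memory.h>

/* Take a chunk from the CMA area and drop it from the direct map. */
static struct page *secretmem_take_pages(struct cma *cma, unsigned int nr)
{
	struct page *page = cma_alloc(cma, nr, 0, false);
	unsigned int i;

	if (!page)
		return NULL;

	for (i = 0; i < nr; i++)
		set_direct_map_invalid_noflush(page + i);
	/* A real implementation would also flush the TLB here. */

	return page;
}

/* Restore the direct mapping and hand the chunk back to CMA. */
static void secretmem_return_pages(struct cma *cma, struct page *page,
				   unsigned int nr)
{
	unsigned int i;

	for (i = 0; i < nr; i++)
		set_direct_map_default_noflush(page + i);

	cma_release(cma, page, nr);
}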

Please let me know if I am missing something important.

-- 
Thanks,

David / dhildenb
