Message-ID: <b9452999-27a7-9b71-8496-de636a860be3@arm.com>
Date:   Thu, 20 May 2021 17:30:34 +0530
From:   Anshuman Khandual <anshuman.khandual@....com>
To:     Muchun Song <songmuchun@...edance.com>
Cc:     Will Deacon <will@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        David Hildenbrand <david@...hat.com>,
        "Bodeddula, Balasubramaniam" <bodeddub@...zon.com>,
        Oscar Salvador <osalvador@...e.de>,
        Mike Kravetz <mike.kravetz@...cle.com>,
        David Rientjes <rientjes@...gle.com>,
        linux-arm-kernel@...ts.infradead.org,
        LKML <linux-kernel@...r.kernel.org>,
        Linux Memory Management List <linux-mm@...ck.org>,
        Xiongchun duan <duanxiongchun@...edance.com>,
        fam.zheng@...edance.com, zhengqi.arch@...edance.com
Subject: Re: [External] Re: [PATCH] arm64: mm: hugetlb: add support for free
 vmemmap pages of HugeTLB



On 5/19/21 6:19 PM, Muchun Song wrote:
> On Wed, May 19, 2021 at 7:44 PM Anshuman Khandual
> <anshuman.khandual@....com> wrote:
>> On 5/18/21 2:48 PM, Muchun Song wrote:
>>> The preparations for freeing the vmemmap pages associated with each
>>> HugeTLB page are in place, so we can now enable this feature for arm64.
>>>
>>> Signed-off-by: Muchun Song <songmuchun@...edance.com>
>>> ---
>>>  arch/arm64/mm/mmu.c | 5 +++++
>>>  fs/Kconfig          | 2 +-
>>>  2 files changed, 6 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>>> index 5d37e461c41f..967b01ce468d 100644
>>> --- a/arch/arm64/mm/mmu.c
>>> +++ b/arch/arm64/mm/mmu.c
>>> @@ -23,6 +23,7 @@
>>>  #include <linux/mm.h>
>>>  #include <linux/vmalloc.h>
>>>  #include <linux/set_memory.h>
>>> +#include <linux/hugetlb.h>
>>>
>>>  #include <asm/barrier.h>
>>>  #include <asm/cputype.h>
>>> @@ -1134,6 +1135,10 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>>>       pmd_t *pmdp;
>>>
>>>       WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
>>> +
>>> +     if (is_hugetlb_free_vmemmap_enabled() && !altmap)
>>> +             return vmemmap_populate_basepages(start, end, node, altmap);
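For reference, the generic fallback maps the whole range with base
pages. Roughly, paraphrasing mm/sparse-vmemmap.c from memory (so treat
this as an approximation rather than the exact upstream code), it looks
like this:

int __meminit vmemmap_populate_basepages(unsigned long start,
					 unsigned long end, int node,
					 struct vmem_altmap *altmap)
{
	unsigned long addr;

	/* One PTE-level entry per PAGE_SIZE step, no PMD section maps. */
	for (addr = start; addr < end; addr += PAGE_SIZE) {
		pgd_t *pgd = vmemmap_pgd_populate(addr, node);
		p4d_t *p4d;
		pud_t *pud;
		pmd_t *pmd;
		pte_t *pte;

		if (!pgd)
			return -ENOMEM;
		p4d = vmemmap_p4d_populate(pgd, addr, node);
		if (!p4d)
			return -ENOMEM;
		pud = vmemmap_pud_populate(p4d, addr, node);
		if (!pud)
			return -ENOMEM;
		pmd = vmemmap_pmd_populate(pud, addr, node);
		if (!pmd)
			return -ENOMEM;
		pte = vmemmap_pte_populate(pmd, addr, node, altmap);
		if (!pte)
			return -ENOMEM;
	}

	return 0;
}

So with the feature enabled and no altmap supplied, every access to
these struct pages goes through a PTE-level entry instead of the PMD
section mappings arm64 would otherwise use, which is the performance
cost noted below.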
>>
>> Setting aside the fact that this will force the kernel to use only
>> base page size mappings for the vmemmap (unless an altmap is also
>> requested), which might reduce performance, it also allows the vmemmap
>> mapping to be torn down or built up at runtime, which could potentially
>> collide with other kernel page table walkers such as ptdump or a memory
>> hot-remove operation! How are those possible collisions protected
>> against right now?
> 
> For ptdump, there seems to be no problem.  The change of a PTE does
> not seem to affect ptdump, unless I am missing something.

The worst-case scenario for ptdump would be wrong information being
dumped. But as mentioned earlier, this must be protected against a
memory hot-remove operation, which could free up entries for vmemmap
areas in the kernel page table.
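To make the collision concrete, here is a hedged sketch of one way a
walker stays safe against hot-remove: hold the memory hotplug lock for
the duration of the walk, as the arm64 ptdump debugfs path does around
its walk via get_online_mems(). The function name and callback type
below are invented for illustration, and a runtime HugeTLB vmemmap
remap would run outside this lock, which is exactly the gap being
discussed:

#include <linux/memory_hotplug.h>	/* get_online_mems() */
#include <linux/pgtable.h>
#include <asm/memory.h>			/* VMEMMAP_START/END on arm64 */

/* Invented callback type, for illustration only. */
typedef void (*vmemmap_pte_visitor)(unsigned long addr, pte_t pte);

/* Hypothetical walker over the PTE-mapped parts of the vmemmap. */
static void walk_vmemmap_ptes(vmemmap_pte_visitor visit)
{
	unsigned long addr;

	get_online_mems();	/* pin out memory hot-remove for the walk */
	for (addr = VMEMMAP_START; addr < VMEMMAP_END; addr += PAGE_SIZE) {
		pgd_t *pgdp = pgd_offset_k(addr);
		p4d_t *p4dp;
		pud_t *pudp;
		pmd_t *pmdp;

		if (pgd_none(READ_ONCE(*pgdp)))
			continue;
		p4dp = p4d_offset(pgdp, addr);
		if (p4d_none(READ_ONCE(*p4dp)))
			continue;
		pudp = pud_offset(p4dp, addr);
		if (pud_none(READ_ONCE(*pudp)) || pud_leaf(READ_ONCE(*pudp)))
			continue;
		pmdp = pmd_offset(pudp, addr);
		/* Section mappings have no PTE level underneath. */
		if (pmd_none(READ_ONCE(*pmdp)) || pmd_leaf(READ_ONCE(*pmdp)))
			continue;
		visit(addr, READ_ONCE(*pte_offset_kernel(pmdp, addr)));
	}
	put_online_mems();	/* hot-remove may proceed again */
}

If the HugeTLB vmemmap remapping neither takes nor otherwise respects a
comparable barrier, a concurrent walker could dump stale entries at
best, and a concurrent hot-remove could free the very page-table pages
being dereferenced.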
