Date:   Fri, 13 Apr 2018 16:10:29 +0530
From:   Anshuman Khandual <khandual@...ux.vnet.ibm.com>
To:     Chintan Pandya <cpandya@...eaurora.org>,
        Anshuman Khandual <khandual@...ux.vnet.ibm.com>,
        vbabka@...e.cz, labbott@...hat.com, catalin.marinas@....com,
        hannes@...xchg.org, f.fainelli@...il.com, xieyisheng1@...wei.com,
        ard.biesheuvel@...aro.org, richard.weiyang@...il.com,
        byungchul.park@....com
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mm: vmalloc: Remove double execution of vunmap_page_range

On 04/13/2018 03:47 PM, Chintan Pandya wrote:
> 
> 
> On 4/13/2018 3:29 PM, Anshuman Khandual wrote:
>> On 04/13/2018 02:46 PM, Chintan Pandya wrote:
>>> The unmap paths call vunmap_page_range() irrespective of whether
>>> debug_pagealloc_enabled() returns true or not. So, remove the
>>> redundant check and the optional vunmap_page_range() routines.
>>
>> vunmap_page_range() tears down the page table entries but normally
>> does not flush the related TLB entries, unless page alloc debug is
>> enabled, in which case it wants to make sure no stale mapping is
>> still around for debug purposes. Deferring the TLB flush improves
>> performance. This patch will force a TLB flush during each page
>> table tear down and is hence not desirable.
>>
> Deferred TLB invalidation will surely improve performance. But a forced
> flush can help in detecting invalid accesses right then and there. I

Deferred TLB invalidation was a choice made some time ago with commit
db64fe02258f1507e ("mm: rewrite vmap layer"), as these vmalloc
mappings won't be used other than inside the kernel and the TLB gets
flushed before they are reused. This way the kernel still gets the
benefit of deferred TLB flushing without exposing itself to invalid
accesses.
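
For reference, the two paths look roughly like this in that era's
mm/vmalloc.c (a sketch from memory, not the exact code):

/* Debug-only path: unmap and flush immediately so a stale TLB entry
 * cannot hide an invalid access while page alloc debugging is on. */
static void vmap_debug_free_range(unsigned long start, unsigned long end)
{
	if (debug_pagealloc_enabled()) {
		vunmap_page_range(start, end);
		flush_tlb_kernel_range(start, end);
	}
}

/* Normal path: tear down the page tables right away, but leave the TLB
 * flush to the lazy purge, which batches one flush_tlb_kernel_range()
 * over many freed areas before they are reused. */
static void free_unmap_vmap_area(struct vmap_area *va)
{
	flush_cache_vunmap(va->va_start, va->va_end);
	unmap_vmap_area(va);		/* vunmap_page_range() runs here too */
	free_vmap_area_noflush(va);
}

With page alloc debugging enabled, both routines end up walking the same
page table range, which is the double execution the patch title refers to.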

> chose the latter. Maybe I should have cleaned up the vmap tear down
> code as well, where it actually does the TLB invalidation.
> 
> Or make the TLB invalidation in free_unmap_vmap_area() dependent upon
> debug_pagealloc_enabled().

Immediate TLB invalidation needs to be dependent on
debug_pagealloc_enabled() and should be done only for debugging
purposes. Doing it unconditionally is not desirable.
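
Something along these lines (just a sketch of the idea, not a tested
patch) would keep the lazy flush as the default and add the immediate
flush only for debugging:

static void free_unmap_vmap_area(struct vmap_area *va)
{
	flush_cache_vunmap(va->va_start, va->va_end);
	unmap_vmap_area(va);

	/* Immediate flush only for debugging; the normal case keeps
	 * relying on the deferred flush done by the lazy purge. */
	if (debug_pagealloc_enabled())
		flush_tlb_kernel_range(va->va_start, va->va_end);

	free_vmap_area_noflush(va);
}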
