Date:   Fri, 11 Jan 2019 15:28:37 -0800
From:   Mike Kravetz <mike.kravetz@...cle.com>
To:     "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Hugh Dickins <hughd@...gle.com>,
        Michal Hocko <mhocko@...nel.org>,
        Dan Williams <dan.j.williams@...el.com>,
        Matthew Wilcox <willy@...radead.org>,
        Toshi Kani <toshi.kani@....com>,
        Boaz Harrosh <boazh@...app.com>,
        Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [RFC PATCH] mm: align anon mmap for THP

On 1/11/19 1:55 PM, Kirill A. Shutemov wrote:
> On Fri, Jan 11, 2019 at 08:10:03PM +0000, Mike Kravetz wrote:
>> At LPC last year, Boaz Harrosh asked why he had to 'jump through hoops'
>> to get an address returned by mmap() suitably aligned for THP.  It seems
>> that if mmap() is asked for a mapping length greater than the huge page
>> size, it should align the returned address to the huge page size.
>>
>> THP alignment has already been added for DAX, shm and tmpfs.  However,
>> simple anon mappings do not take THP alignment into account.
> 
> In the general case, when no hint address is provided, anonymous memory
> requests tend to clump into a single bigger VMA, which gives you a better
> chance of getting THP even if a single allocation is too small.  This
> patch will *reduce* that effect, and I guess the net result will be
> negative.

Ah!  I forgot about combining like mappings into a single VMA.  Increasing
the alignment could well prevent this; the small test below shows the
merging in action.
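
For reference, a quick way to see that merging (just a sketch, not from the
patch; the 4k sizes and the /proc/self/maps dump are only for illustration,
and error checking is omitted):

/* Two small anonymous mmaps usually land adjacent and get merged into a
 * single VMA, so /proc/self/maps shows one 8k anon region rather than two. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void)
{
	void *a = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	void *b = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	printf("a=%p b=%p\n", a, b);

	/* If the two requests were merged, the maps output has a single
	 * anonymous entry covering both.  Forcing huge page alignment on
	 * each request would leave gaps and keep the VMAs separate. */
	system("cat /proc/self/maps");
	return 0;
}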

> The patch also effectively reduces the bits available for ASLR and
> increases address space fragmentation (more VMAs and therefore higher
> page fault cost).
> 
> I think any change in this direction has to be way more data driven.

Ok, I just wanted to ask the question.  I've seen application code doing
the 'mmap a sufficiently large area, then unmap' trick to get the desired
alignment (roughly the sketch below).  I was wondering if there was
something we could do to help.
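
The trick looks roughly like this (a sketch only; the 2MB huge page size
and the helper name mmap_thp_aligned() are mine, not from any particular
application):

/* Over-allocate by one huge page, then trim the unaligned head and the
 * unused tail so the remaining region starts on a 2MB boundary. */
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

#define HPAGE (2UL * 1024 * 1024)	/* assumed huge page size */

static void *mmap_thp_aligned(size_t len)
{
	size_t maplen = len + HPAGE;	/* extra room guarantees an aligned fit */
	char *raw = mmap(NULL, maplen, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (raw == MAP_FAILED)
		return MAP_FAILED;

	uintptr_t addr = (uintptr_t)raw;
	uintptr_t aligned = (addr + HPAGE - 1) & ~(HPAGE - 1);

	/* Unmap the unaligned head, if any, and the leftover tail. */
	if (aligned > addr)
		munmap(raw, aligned - addr);
	if (addr + maplen > aligned + len)
		munmap((char *)(aligned + len), addr + maplen - (aligned + len));

	return (void *)aligned;
}

The downside, as you point out, is that forcing the alignment this way
defeats merging with neighbouring anonymous VMAs and costs a couple of
extra munmap() calls.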

Thanks
-- 
Mike Kravetz
