Message-Id: <714E0B73-BE6C-408B-98A6-2A7C82E7BB11@oracle.com>
Date:   Thu, 17 May 2018 11:31:14 -0600
From:   William Kucharski <william.kucharski@...cle.com>
To:     Matthew Wilcox <willy@...radead.org>
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [RFC] mm, THP: Map read-only text segments using large THP pages



> On May 17, 2018, at 9:23 AM, Matthew Wilcox <willy@...radead.org> wrote:
> 
> I'm certain it is.  The other thing I believe is true that we should be
> able to share page tables (my motivation is thousands of processes each
> mapping the same ridiculously-sized file).  I was hoping this prototype
> would have code that would be stealable for that purpose, but you've
> gone in a different direction.  Which is fine for a prototype; you've
> produced useful numbers.

Definitely, and that's why I mentioned integration with the page cache
would be crucial. This prototype allocates pages for each invocation of
the executable, which would never fly on a real system.

> I think the first step is to get variable sized pages in the page cache
> working.  Then the map-around functionality can probably just notice if
> they're big enough to map with a PMD and make that happen.  I don't immediately
> see anything from this PoC that can be used, but it at least gives us a
> good point of comparison for any future work.

Yes, that's the first step toward code that is actually usable; this
prototype was intended only to get something working and to take a first
pass at some performance numbers.

I do think that adding code to map larger pages as a fault_around variant
is a good start, since that path is already prepared to map in up to
fault_around_bytes from the file to satisfy the fault. It makes sense
to extend that paradigm so we can tune when large pages are read in
and/or mapped using large pages already present in the page cache.

Filesystem support becomes more important once writing to large pages
is allowed.

> I think that really tells the story.  We almost entirely eliminate
> dTLB load misses (down to almost 0.1%) and iTLB load misses drop to 4%
> of what they were.  Does this test represent any kind of real world load,
> or is it designed to show the best possible improvement?

It's admittedly designed to thrash the caches pretty hard and doesn't
represent any type of actual workload I'm aware of. It basically calls
various routines within a huge text area while scribbling to automatic
arrays declared at the top of each routine. It wasn't designed as a worst
case scenario, but rather as one that would hopefully show some obvious
degree of difference when large text pages were supported.

Thanks for your comments.

    -- Bill
