Date:   Thu, 25 Aug 2016 14:55:43 -0500 (CDT)
From:   Christoph Lameter <cl@...ux.com>
To:     Mel Gorman <mgorman@...e.de>
cc:     Michal Hocko <mhocko@...nel.org>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Aruna Ramakrishna <aruna.ramakrishna@...cle.com>,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Mike Kravetz <mike.kravetz@...cle.com>,
        Pekka Enberg <penberg@...nel.org>,
        David Rientjes <rientjes@...gle.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Jiri Slaby <jslaby@...e.cz>
Subject: Re: what is the purpose of SLAB and SLUB (was: Re: [PATCH v3] mm/slab:
 Improve performance of gathering slabinfo) stats

On Thu, 25 Aug 2016, Mel Gorman wrote:

> Flipping the lid aside, there will always be a need for fast management
> of 4K pages. The primary use case is networking that sometimes uses
> high-order pages to avoid allocator overhead and amortise DMA setup.
> Userspace-mapped pages will always be 4K although fault-around may benefit
> from bulk allocating the pages. That is relatively low hanging fruit that
> would take a few weeks given a free schedule.
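[As a rough user-space illustration of that amortisation argument -- a sketch only; the 4K/8-slice sizing and the frag_cache name are made up, and this is not the kernel's own code -- handing out small slices from one larger allocation means the allocator is invoked once per chunk rather than once per slice:]

#include <stdio.h>
#include <stdlib.h>

#define PIECE   4096          /* pretend 4K "page" */
#define NPIECES 8             /* like an order-3 chunk: 8 x 4K */

struct frag_cache {
    char   *chunk;
    size_t  offset;
};

static void *frag_alloc(struct frag_cache *fc)
{
    if (!fc->chunk || fc->offset + PIECE > (size_t)PIECE * NPIECES) {
        /* Refill: one big allocation serves NPIECES consumers.
         * The old chunk is deliberately leaked here for brevity;
         * a real implementation would refcount it and free it
         * once the last slice is released. */
        fc->chunk = aligned_alloc(PIECE, (size_t)PIECE * NPIECES);
        if (!fc->chunk)
            return NULL;
        fc->offset = 0;
    }
    void *p = fc->chunk + fc->offset;
    fc->offset += PIECE;
    return p;
}

int main(void)
{
    struct frag_cache fc = { 0 };

    for (int i = 0; i < 20; i++)
        printf("slice %2d at %p\n", i, frag_alloc(&fc));
    return 0;
}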

Userspace-mapped pages can be huge pages as well as giant pages, and that
has been supported for a long time. Intermediate sizes would be useful too,
in order to avoid having to keep lists of 4K pages around and continually
scan them.
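
[For reference, a minimal user-space sketch of such a huge-page mapping,
assuming a kernel with hugetlbfs enabled and 2MB huge pages reserved
(e.g. vm.nr_hugepages > 0); the 2MB size is an assumption, not anything
mandated by the thread:]

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define HUGE_SZ (2UL * 1024 * 1024)   /* assumes 2MB huge pages are configured */

int main(void)
{
    /* MAP_HUGETLB requests a hugetlbfs-backed mapping; this fails
     * unless huge pages have been reserved on the system. */
    void *p = mmap(NULL, HUGE_SZ, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB)");
        return 1;
    }
    memset(p, 0, HUGE_SZ);            /* touch the whole 2MB region */
    munmap(p, HUGE_SZ);
    return 0;
}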

> Dirty tracking of pages on a 4K boundary will always be required to avoid IO
> multiplier effects that cannot be side-stepped by increasing the fundamental
> unit of allocation.

Huge pages cannot be dirtied? This is an issue of hardware support. On
x86 you only have one size. I am pretty sure that even Intel would
support other sizes if needed. The case has been made repeatedly that 64K
pages, for example, would be useful to have on x86.
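
[As an aside, which sizes a given system actually exposes can be checked
from user space; a small sketch, nothing x86-specific assumed, relying only
on sysconf() and the standard sysfs layout under /sys/kernel/mm/hugepages:]

#include <dirent.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Base (hardware) page size -- 4K on x86. */
    printf("base page size: %ld\n", sysconf(_SC_PAGESIZE));

    /* Each huge page size the kernel supports gets a
     * hugepages-<size>kB directory under this path. */
    DIR *d = opendir("/sys/kernel/mm/hugepages");
    if (!d)
        return 1;

    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] != '.')
            printf("huge page size: %s\n", e->d_name);
    }
    closedir(d);
    return 0;
}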


> Batching of tree_lock during reclaim for large files and swapping is also
> relatively low hanging fruit that also is doable in a week or two.

OK, these are good incremental improvements, but they do not address the
main issue going forward.

> A high-order per-cpu cache for SLUB to reduce zone->lock contention is
> also relatively low hanging fruit with the caveat it makes per_cpu_pages
> larger than a cache line.

Would be great to have.

> If you want to rework the VM to use a larger fundamental unit, track
> sub-units where required and deal with the internal fragmentation issues
> then by all means go ahead and deal with it.

Hmmm... the time problem is always there. I have tried various approaches
over the last decade; this could be a massive project. We would really need
a larger group of developers to do this effectively.
