Date:   Wed, 24 Aug 2016 23:01:43 -0500 (CDT)
From:   Christoph Lameter <cl@...ux.com>
To:     Mel Gorman <mgorman@...e.de>
cc:     Michal Hocko <mhocko@...nel.org>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        Aruna Ramakrishna <aruna.ramakrishna@...cle.com>,
        linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Mike Kravetz <mike.kravetz@...cle.com>,
        Pekka Enberg <penberg@...nel.org>,
        David Rientjes <rientjes@...gle.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Jiri Slaby <jslaby@...e.cz>
Subject: Re: what is the purpose of SLAB and SLUB (was: Re: [PATCH v3] mm/slab:
 Improve performance of gathering slabinfo stats)

On Wed, 24 Aug 2016, Mel Gorman wrote:
> If/when I get back to the page allocator, the priority would be a bulk
> API for faster allocs of batches of order-0 pages instead of allocating
> a large page and splitting.
>
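
[ Illustrative aside, not from either mail: the contrast here is between
  today's "allocate a high-order page and split it" pattern and a bulk
  interface. alloc_pages() and split_page() exist; the bulk prototype at
  the end is invented purely to show the shape of the idea. ]

	#include <linux/gfp.h>
	#include <linux/mm.h>

	/* Today: obtain a batch of order-0 pages by allocating one
	 * high-order page and splitting it into independent pages. */
	static struct page *batch_by_split(unsigned int order)
	{
		struct page *page = alloc_pages(GFP_KERNEL, order);

		if (page)
			split_page(page, order); /* now 2^order order-0 pages */
		return page;
	}

	/* Hypothetical bulk API: fill an array with nr order-0 pages in
	 * one pass through the allocator (name/signature made up). */
	unsigned int alloc_pages_bulk_sketch(gfp_t gfp, unsigned int nr,
					     struct page **pages);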

OMG. Do we really want to continue this? There are billions of Linux
devices out there that require a reboot at least once a week. This is now
standard with certain Android phones. In our company we reboot all
machines every week because fragmentation degrades performance
significantly. We need to finally face up to it and deal with the issue
instead of continuing to produce more half-assed solutions.

Managing memory in 4K chunks is not reasonable if you have
machines with terabytes of memory and thus billions of individual page
structs to manage. I/O devices are throttled because they cannot manage
that much metadata, and we end up with grotesque device designs.
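
(Back of the envelope, assuming 4 KiB pages and a 64-byte struct page:
1 TiB of RAM means 2^28, roughly 268 million, struct pages and ~16 GiB of
page metadata; a 4 TiB machine is already past a billion struct pages and
~64 GiB of metadata just to describe its memory.)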

The kernel needs an effective way to handle large contiguous memory. It
needs the ability to do effective defragmentation for that. And the way
forward has been clear for a while now: all objects must be either
movable or reclaimable, so that things can be moved out of the way and
contiguity can be restored.
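
[ Sketch of what "movable" already means to the page allocator;
  GFP_HIGHUSER_MOVABLE, __GFP_RECLAIMABLE and SLAB_RECLAIM_ACCOUNT are
  existing flags, the function is only an illustration. ]

	#include <linux/gfp.h>
	#include <linux/mm.h>

	/* Page cache / anonymous style allocation: the contents can be
	 * migrated, so the allocator may place the page in a movable
	 * pageblock (MIGRATE_MOVABLE) and move it out of the way later
	 * when a contiguous range has to be assembled. */
	static struct page *movable_page(void)
	{
		return alloc_pages(GFP_HIGHUSER_MOVABLE, 0);
	}

	/* Shrinkable caches (dentries, inodes, ...) mark their slabs
	 * with SLAB_RECLAIM_ACCOUNT, which maps to __GFP_RECLAIMABLE,
	 * so those pages can at least be freed under pressure. */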


We have support for that for the page cache and, interestingly enough, for
CMA now. So this is gradually developing because it is necessary. We need
to go with that and provide a full-fledged implementation in the kernel
that allows effective handling of large objects in the page allocator, and
we need general logic in the kernel for effective handling of large
chunks of memory.
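
[ One consumer that already depends on this: with CMA configured, a
  driver asking for a large physically contiguous buffer gets it by
  having movable pages migrated out of the reserved region.
  dma_alloc_coherent() is the existing interface; device and size here
  are made up. ]

	#include <linux/device.h>
	#include <linux/dma-mapping.h>
	#include <linux/sizes.h>

	/* On systems where CMA backs the coherent DMA allocator, this
	 * 16 MB request is satisfied by migrating movable pages out of
	 * the CMA area - i.e. exactly the movability-based
	 * defragmentation argued for above. */
	static void *grab_big_buffer(struct device *dev, dma_addr_t *handle)
	{
		return dma_alloc_coherent(dev, SZ_16M, handle, GFP_KERNEL);
	}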

Let's stop churning tiny 4k segments in a world where even our cell
phones have capacities measured in gigabytes, which already means millions
of 4k objects whose one-by-one management is a drag on performance and
makes operating system coding extremely complex. The core of Linux must
support this for a future in which we will see even larger memory
capacities.
