Message-ID: <20190820222035.GC4949@bombadil.infradead.org>
Date: Tue, 20 Aug 2019 15:20:35 -0700
From: Matthew Wilcox <willy@...radead.org>
To: Nitin Gupta <nigupta@...dia.com>
Cc: akpm@...ux-foundation.org, vbabka@...e.cz,
mgorman@...hsingularity.net, mhocko@...e.com,
dan.j.williams@...el.com, Yu Zhao <yuzhao@...gle.com>,
Qian Cai <cai@....pw>,
Andrey Ryabinin <aryabinin@...tuozzo.com>,
Roman Gushchin <guro@...com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Kees Cook <keescook@...omium.org>,
Jann Horn <jannh@...gle.com>,
Johannes Weiner <hannes@...xchg.org>,
Arun KS <arunks@...eaurora.org>,
Janne Huttunen <janne.huttunen@...ia.com>,
Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [RFC] mm: Proactive compaction
On Fri, Aug 16, 2019 at 02:43:30PM -0700, Nitin Gupta wrote:
> Testing done (on x86):
> - Set /sys/kernel/mm/compaction/order-9/extfrag_{low,high} = {25, 30}
> respectively.
> - Use a test program to fragment memory: the program allocates all memory
> and then for each 2M aligned section, frees 3/4 of base pages using
> munmap.
> - kcompactd0 detects fragmentation for order-9 > extfrag_high and starts
> compaction till extfrag < extfrag_low for order-9.
Your test program is a good idea, but I worry it may produce
unrealistically optimistic outcomes. Page cache is readily reclaimable,
so you're setting up a situation where 2MB pages can once again be
produced.
How about this:
One program which creates a file several times the size of memory (or
several files which total the same amount). Then read the file(s). Maybe
by mmap(), and just do nice easy sequential accesses.
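The reader could be as simple as the sketch below (my own illustration; the function name and the one-touch-per-page stride are assumptions). Touching one byte per page is enough to fault each page into the pagecache, and files several times the size of memory guarantee continuous reclaim:

```c
#include <assert.h>
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/*
 * Sequentially fault in every page of a file via mmap().  Returns a
 * byte sum so the reads cannot be optimized away, or -1 on error.
 */
static long sweep_file(const char *path)
{
	int fd = open(path, O_RDONLY);
	if (fd < 0)
		return -1;

	struct stat st;
	if (fstat(fd, &st) < 0 || st.st_size == 0) {
		close(fd);
		return -1;
	}

	unsigned char *p = mmap(NULL, st.st_size, PROT_READ,
				MAP_PRIVATE, fd, 0);
	close(fd);
	if (p == MAP_FAILED)
		return -1;

	long sum = 0;
	for (off_t i = 0; i < st.st_size; i += 4096)
		sum += p[i];	/* one touch per 4K page faults it in */

	munmap(p, st.st_size);
	return sum;
}
```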
A second program which causes slab allocations. eg
	int i;

	for (;;) {
		for (i = 0; i < n * 1000 * 1000; i++) {
			char fname[64];

			sprintf(fname, "/tmp/missing.%d", i);
			/* each failed lookup leaves a negative dentry */
			open(fname, O_RDWR);
		}
	}
The first program should thrash the pagecache, causing pages to
continuously be allocated, reclaimed and freed. The second will create
millions of dentries, causing the slab allocator to allocate a lot of
order-0 pages which are harder to free. If you really want to make it
work hard, mix in opening some files which actually exist, preventing
the pages which contain those dentries from being evicted.
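That variation might look like the following sketch (the helper name and layout are hypothetical): files that really exist are created and their fds held open, and a dentry for an open file cannot be evicted, so the slab pages containing it stay pinned.

```c
#include <assert.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Hypothetical helper: create n files under 'dir' and keep them open,
 * pinning their dentries (and the slab pages holding them).  Returns
 * how many files were pinned; fds[] receives the open descriptors.
 */
static int pin_dentries(const char *dir, int n, int *fds)
{
	int pinned = 0;

	for (int i = 0; i < n; i++) {
		char fname[128];

		snprintf(fname, sizeof(fname), "%s/pinned.%d", dir, i);
		int fd = open(fname, O_RDWR | O_CREAT, 0600);
		if (fd < 0)
			break;
		fds[pinned++] = fd;
	}
	return pinned;
}
```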
This feels like it's simulating a more normal workload than your test.
What do you think?