Message-Id: <20180711161030.b5ae2f5b1210150c13b1a832@linux-foundation.org>
Date: Wed, 11 Jul 2018 16:10:30 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: David Rientjes <rientjes@...gle.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Davidlohr Bueso <dave@...olabs.net>,
Alexey Dobriyan <adobriyan@...il.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [patch] mm, vmacache: hash addresses based on pmd
On Mon, 9 Jul 2018 18:37:37 -0700 (PDT) David Rientjes <rientjes@...gle.com> wrote:
> > Did you consider LRU-sorting the array instead?
> >
>
> It adds 40 bytes to struct task_struct,
What does? LRU sort? It's a 4-entry array, just do it in place, like
bh_lru_install(). Confused.
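For reference, bh_lru_install() in fs/buffer.c keeps a small fixed-size array
ordered by recency: the entry being installed goes into slot 0 and the
remaining entries shift down, so the ordering lives entirely in the existing
array and needs no extra per-task bookkeeping. Below is a minimal userspace
sketch of that in-place pattern applied to a hypothetical 4-entry VMA cache;
the names and types are made up for illustration and are not the kernel's.

/*
 * Userspace sketch only: a bh_lru_install()-style in-place LRU for a
 * 4-entry cache.  struct vma and the function names are illustrative
 * stand-ins, not the kernel's types or API.
 */
#include <string.h>

#define CACHE_SIZE 4

struct vma;				/* opaque stand-in for struct vm_area_struct */

struct vma_lru {
	struct vma *slot[CACHE_SIZE];	/* slot[0] is most recently used */
};

static void vma_lru_install(struct vma_lru *lru, struct vma *vma)
{
	struct vma *tmp[CACHE_SIZE] = { vma };	/* new entry goes first */
	int out = 1;

	for (int in = 0; in < CACHE_SIZE && out < CACHE_SIZE; in++) {
		struct vma *cur = lru->slot[in];

		if (cur && cur != vma)		/* drop the stale copy, keep order */
			tmp[out++] = cur;
	}
	memcpy(lru->slot, tmp, sizeof(tmp));
}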
> but I'm not sure the least
> recently used is the first preferred check. If I do
> madvise(MADV_DONTNEED) from a malloc implementation where I don't control
> what is free()'d and I'm constantly freeing back to the same hugepages,
> for example, I may always get first slot cache hits with this patch as
> opposed to the 25% chance that the current implementation has (and perhaps
> an lru would as well).
>
> I'm sure that I could construct a workload where LRU would be better and
> could show that the added footprint was worthwhile, but I could also
> construct a workload where the current implementation based on pfn would
> outperform all of these. It simply turns out that on the user-controlled
> workloads I was profiling, hashing based on pmd was the win.
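For reference, here is a rough userspace illustration of the two indexing
schemes being compared above: the current cache picks a slot from the pfn
(addr >> PAGE_SHIFT), while the patch picks it from the pmd
(addr >> PMD_SHIFT), so with the pmd hash every address inside one pmd-sized
region lands in the same slot. The shift values and 4-slot cache below are
assumed x86-64-style defaults chosen for illustration, not taken from the
patch itself.

/*
 * Rough userspace illustration of pfn-based vs pmd-based slot hashing
 * for a 4-entry cache.  Constants are illustrative x86-64 defaults.
 */
#include <stdio.h>

#define VMACACHE_SIZE	4
#define VMACACHE_MASK	(VMACACHE_SIZE - 1)

#define PAGE_SHIFT	12		/* 4KB pages */
#define PMD_SHIFT	21		/* 2MB pmd entries */

static unsigned int hash_pfn(unsigned long addr)
{
	return (addr >> PAGE_SHIFT) & VMACACHE_MASK;
}

static unsigned int hash_pmd(unsigned long addr)
{
	return (addr >> PMD_SHIFT) & VMACACHE_MASK;
}

int main(void)
{
	/*
	 * Four consecutive 4KB pages inside one 2MB region: the pfn hash
	 * spreads them over all four slots, the pmd hash keeps them in one.
	 */
	unsigned long base = 0x7f0000200000UL;

	for (int i = 0; i < 4; i++) {
		unsigned long addr = base + (unsigned long)i * 4096;

		printf("addr %#lx: pfn slot %u, pmd slot %u\n",
		       addr, hash_pfn(addr), hash_pmd(addr));
	}
	return 0;
}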
That leaves us nowhere to go. Zapping the WARN_ON seems a no-brainer
though?