Message-ID: <20161027072235.GB6454@dhcp22.suse.cz>
Date: Thu, 27 Oct 2016 09:22:35 +0200
From: Michal Hocko <mhocko@...nel.org>
To: "Leizhen (ThunderTown)" <thunder.leizhen@...wei.com>
Cc: Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will.deacon@....com>,
linux-arm-kernel <linux-arm-kernel@...ts.infradead.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
linux-mm <linux-mm@...ck.org>, Zefan Li <lizefan@...wei.com>,
Xinwei Hu <huxinwei@...wei.com>,
Hanjun Guo <guohanjun@...wei.com>
Subject: Re: [PATCH 1/2] mm/memblock: prepare a capability to support
memblock near alloc
On Thu 27-10-16 10:41:24, Leizhen (ThunderTown) wrote:
>
>
> On 2016/10/26 17:31, Michal Hocko wrote:
> > On Wed 26-10-16 11:10:44, Leizhen (ThunderTown) wrote:
> >>
> >>
> >> On 2016/10/25 21:23, Michal Hocko wrote:
> >>> On Tue 25-10-16 10:59:17, Zhen Lei wrote:
> >>>> If HAVE_MEMORYLESS_NODES is selected and some memoryless NUMA nodes
> >>>> actually exist, the percpu variable areas and NUMA control blocks of
> >>>> those memoryless nodes need to be allocated from the nearest available
> >>>> node to improve performance.
> >>>>
> >>>> Although memblock_alloc_try_nid and memblock_virt_alloc_try_nid try
> >>>> the specified nid on the first attempt, if that allocation fails they
> >>>> fall back directly to NUMA_NO_NODE. This means the second attempt may
> >>>> land on any node.
> >>>>
> >>>> To stay compatible with the old behaviour, I use a macro,
> >>>> node_distance_ready, to control this. By default the macro is not
> >>>> defined on any platform, so the functions mentioned above work
> >>>> exactly as before. Otherwise, they try the nearest node first.
> >>>
> >>> I am sorry but it is absolutely unclear to me _what_ the motivation
> >>> of the patch is. Is this a performance optimization, a correctness
> >>> issue, or something else? Could you please restate what the problem
> >>> is, why you think it has to be fixed at the memblock layer, and what
> >>> the actual fix is?
> >>
> >> This is a performance optimization.
> >
> > Do you have any numbers to back the improvements?
>
> I have not collected any performance data, but at least in theory it's
> beneficial and harmless, except that it makes the code look a bit
> ugly.
The whole memoryless-node area is cluttered with hacks because everybody
just adds pieces here and there to make their particular use case work,
IMHO. Adding more on top for performance reasons which are not even
measured to prove a clear win is a no-go. Please step back and think
about how this could be done with the existing infrastructure we have
(some cleanups while doing that would be hugely appreciated), and if
that is not possible then explain why, and why it is not feasible to fix
that, before you start adding a new API.
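
For what it's worth, the "nearest node" pick itself needs nothing new:
walking a node_distance()-style table and choosing the closest node that
actually has memory is a few lines. Below is a standalone sketch of that
idea only; the distance matrix and node layout are invented for
illustration, and this is not the proposed kernel change.

#include <limits.h>
#include <stdio.h>

#define MAX_NUMNODES	4

/* Pretend nodes 0 and 2 have memory; nodes 1 and 3 are memoryless. */
static int node_has_memory[MAX_NUMNODES] = { 1, 0, 1, 0 };

/* Hypothetical SLIT-style distances (self-distance 10 by convention). */
static int node_dist[MAX_NUMNODES][MAX_NUMNODES] = {
	{ 10, 20, 30, 40 },
	{ 20, 10, 20, 30 },
	{ 30, 20, 10, 20 },
	{ 40, 30, 20, 10 },
};

/* Return the node with memory that is closest to @nid. */
static int nearest_node_with_memory(int nid)
{
	int best = -1, best_dist = INT_MAX;

	for (int n = 0; n < MAX_NUMNODES; n++) {
		if (!node_has_memory[n])
			continue;
		if (node_dist[nid][n] < best_dist) {
			best_dist = node_dist[nid][n];
			best = n;
		}
	}
	return best;
}

int main(void)
{
	/* Memoryless node 1: nodes 0 and 2 tie at distance 20; 0 wins. */
	printf("nearest node with memory to node 1: %d\n",
	       nearest_node_with_memory(1));
	return 0;
}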
Thanks!
--
Michal Hocko
SUSE Labs