Message-ID: <CAKgT0UdtgaZv5sjAcSe8-UYsxoji4scbJTRvTECZDpt+TPM+FA@mail.gmail.com>
Date: Wed, 5 Sep 2018 13:35:11 -0700
From: Alexander Duyck <alexander.duyck@...il.com>
To: Pavel.Tatashin@...rosoft.com
Cc: mhocko@...nel.org, linux-mm <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
"Duyck, Alexander H" <alexander.h.duyck@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Ingo Molnar <mingo@...nel.org>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: Re: [PATCH 2/2] mm: Create non-atomic version of SetPageReserved for
init use

On Wed, Sep 5, 2018 at 1:22 PM Pasha Tatashin
<Pavel.Tatashin@...rosoft.com> wrote:
>
>
>
> On 9/5/18 4:18 PM, Alexander Duyck wrote:
> > On Tue, Sep 4, 2018 at 11:24 PM Michal Hocko <mhocko@...nel.org> wrote:
> >>
> >> On Tue 04-09-18 11:33:45, Alexander Duyck wrote:
> >>> From: Alexander Duyck <alexander.h.duyck@...el.com>
> >>>
> >>> It doesn't make much sense to use the atomic SetPageReserved at init time
> >>> when we are using memset to clear the memory and manipulating the page
> >>> flags via simple "&=" and "|=" operations in __init_single_page.
> >>>
> >>> This patch adds a non-atomic version __SetPageReserved that can be used
> >>> during page init and shows about a 10% improvement in initialization times
> >>> on the systems I have available for testing.
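
For context, the difference between the two helpers comes down to an atomic
vs. a plain bitop on page->flags. Roughly what the page-flags macros expand
to (simplified; the real helpers in include/linux/page-flags.h are generated
by the PAGEFLAG()/__SETPAGEFLAG() macros and also go through a compound-page
policy, so the exact form differs):

	/* atomic read-modify-write, LOCK-prefixed on x86 */
	static __always_inline void SetPageReserved(struct page *page)
	{
		set_bit(PG_reserved, &page->flags);
	}

	/* plain load/modify/store, only safe while the page is not yet
	 * visible to anyone else, e.g. during early init */
	static __always_inline void __SetPageReserved(struct page *page)
	{
		__set_bit(PG_reserved, &page->flags);
	}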
> >>
> >> I agree with Dave that a comment is due. I am also quite surprised that
> >> this leads to such a large improvement. Could you be more specific about
> >> your test and machines you were testing on?
> >
> > So my test case has been just initializing four 3TB blocks of persistent
> > memory, with a few trace_printk values added to track the total time
> > spent in move_pfn_range_to_zone.
> >
> > What I have been seeing is that the time needed for the call drops on
> > average from 35-36 seconds down to around 31-32.
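
(The timing above came from trace_printk; something along the lines of the
snippet below, though this is a reconstruction of that kind of debug
instrumentation rather than the actual debug patch, and it assumes the
4.18-era hotplug signature for move_pfn_range_to_zone:)

	u64 start, end;

	start = ktime_get_ns();
	move_pfn_range_to_zone(zone, start_pfn, nr_pages, altmap);
	end = ktime_get_ns();
	trace_printk("move_pfn_range_to_zone: %llu ns for %lu pages\n",
		     end - start, nr_pages);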
>
> Just curious, why is there variance? Boot time is usually pretty
> consistent, as there is only one thread and the system is in pretty much
> the same state.
>
> A dmesg output in the commit log would be helpful.
>
> Thank you,
> Pavel

The variance has to do with the fact that the memory is being added via
hot-plug. In this case the system boots, and then after 5 minutes it
goes about hot-plugging the memory. The memmap_init_zone call makes
regular calls into cond_resched(), so any other active threads can end
up impacting the timings and introducing a few hundred ms of variation
between runs.
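
The loop in question looks roughly like this (heavily simplified; the exact
placement and frequency of the reschedule point is from memory and varies
between kernel versions):

	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
		/* yield periodically so we don't hog the CPU; this is also
		 * where another runnable thread can sneak in and add noise
		 * to the measured time */
		if (!(pfn & (pageblock_nr_pages - 1)))
			cond_resched();

		page = pfn_to_page(pfn);
		__init_single_page(page, pfn, zone, nid);
		if (context == MEMMAP_HOTPLUG)
			SetPageReserved(page); /* __SetPageReserved with this patch */
	}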

In addition, NUMA locality also plays a role. I have seen values as low
as 25.5s pre-patch and 23.2s after, and values as high as 39.17s
pre-patch and 37.3s after. I am assuming the lowest values just happened
to luck into being node local, while the highest values ended up being
2 nodes away on the 4 node system I am testing. I'm planning to address
the NUMA issue using an approach similar to what deferred_init is
already doing: start a kernel thread on the correct node and then just
wait on that to complete outside of the hotplug lock, as sketched below.
The solution will probably end up being a hybrid between the work Dan
Williams submitted a couple months ago and the existing deferred_init
code, but I will be targeting that for 4.20 at the earliest.
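
Very roughly, the shape I have in mind is something like the following.
This is only a sketch; the struct, the worker, and the helper names are
placeholders, not actual code from either series:

	#include <linux/kthread.h>
	#include <linux/completion.h>

	struct memmap_init_args {
		unsigned long start_pfn;
		unsigned long nr_pages;
		int nid;
		struct completion done;
	};

	static int memmap_init_worker(void *data)
	{
		struct memmap_init_args *args = data;

		/* the struct page init runs here with node-local accesses,
		 * i.e. the memmap_init_zone()-style loop for the range */

		complete(&args->done);
		return 0;
	}

	static void memmap_init_on_node(struct memmap_init_args *args)
	{
		struct task_struct *t;

		init_completion(&args->done);
		t = kthread_create_on_node(memmap_init_worker, args, args->nid,
					   "memmap_init/%d", args->nid);
		if (IS_ERR(t)) {
			/* fall back to doing the work from the current CPU */
			memmap_init_worker(args);
			return;
		}
		wake_up_process(t);

		/* ideally this wait moves outside of the hotplug lock */
		wait_for_completion(&args->done);
	}
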
- Alex