[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <YH0XJYKlBTFQz/4v@yekko.fritz.box>
Date: Mon, 19 Apr 2021 15:37:41 +1000
From: David Gibson <david@...son.dropbear.id.au>
To: Leonardo Bras <leobras.c@...il.com>
Cc: Michael Ellerman <mpe@...erman.id.au>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Paul Mackerras <paulus@...ba.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Sandipan Das <sandipan@...ux.ibm.com>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.ibm.com>,
Logan Gunthorpe <logang@...tatee.com>,
Mike Rapoport <rppt@...nel.org>,
Bharata B Rao <bharata@...ux.ibm.com>,
Dan Williams <dan.j.williams@...el.com>,
Nicholas Piggin <npiggin@...il.com>,
Nathan Lynch <nathanl@...ux.ibm.com>,
David Hildenbrand <david@...hat.com>,
Laurent Dufour <ldufour@...ux.ibm.com>,
Scott Cheloha <cheloha@...ux.ibm.com>,
linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/3] powerpc/mm/hash: Avoid multiple HPT resize-downs on
memory hotunplug
On Fri, Apr 09, 2021 at 12:31:03AM -0300, Leonardo Bras wrote:
> Hello David, thanks for commenting.
>
> On Tue, 2021-03-23 at 10:45 +1100, David Gibson wrote:
> > > @@ -805,6 +808,10 @@ static int resize_hpt_for_hotplug(unsigned long new_mem_size, bool shrinking)
> > > if (shrinking) {
> > >
> > > + /* When batch removing entries, only resizes HPT at the end. */
> > > + if (atomic_read_acquire(&hpt_resize_disable))
> > > + return 0;
> > > +
> >
> > I'm not quite convinced by this locking. Couldn't hpt_resize_disable
> > be set after this point, but while you're still inside
> > resize_hpt_for_hotplug()? Probably better to use an explicit mutex
> > (and mutex_trylock()) to make the critical sections clearer.
>
> Sure, I can do that for v2.
>
> > Except... do we even need the fancy mechanics to suppress the resizes
> > in one place to do them elswhere. Couldn't we just replace the
> > existing resize calls with the batched ones?
>
> How do you think of having batched resizes-down in HPT?
I think it's a good idea. We still have to have the loop to resize
bigger if we can't fit everything into the smallest target size, but
that still only makes the worst case as bad at the always-case is
currently.
> Other than the current approach, I could only think of a way that would
> touch a lot of generic code, and/or duplicate some functions, as
> dlpar_add_lmb() does a lot of other stuff.
>
> > > +void hash_memory_batch_shrink_end(void)
> > > +{
> > > + unsigned long newsize;
> > > +
> > > + /* Re-enables HPT resize-down after hot-unplug */
> > > + atomic_set_release(&hpt_resize_disable, 0);
> > > +
> > > + newsize = memblock_phys_mem_size();
> > > + /* Resize to smallest SHIFT possible */
> > > + while (resize_hpt_for_hotplug(newsize, true) == -ENOSPC) {
> > > + newsize *= 2;
> >
> > As noted earlier, doing this without an explicit cap on the new hpt
> > size (of the existing size) this makes me nervous.
> >
>
> I can add a stop in v2.
>
> > Less so, but doing
> > the calculations on memory size, rather than explictly on HPT size /
> > HPT order also seems kinda clunky.
>
> Agree, but at this point, it would seem kind of a waste to find the
> shift from newsize, then calculate (1 << shift) for each retry of
> resize_hpt_for_hotplug() only to point that we are retrying the order
> value.
Yeah, I see your poiint.
>
> But sure, if you think it looks better, I can change that.
>
> > > +void memory_batch_shrink_begin(void)
> > > +{
> > > + if (!radix_enabled())
> > > + hash_memory_batch_shrink_begin();
> > > +}
> > > +
> > > +void memory_batch_shrink_end(void)
> > > +{
> > > + if (!radix_enabled())
> > > + hash_memory_batch_shrink_end();
> > > +}
> >
> > Again, these wrappers don't seem particularly useful to me.
>
> Options would be add 'if (!radix_enabled())' to hotplug-memory.c
> functions or to hash* functions, which look kind of wrong.
I think the if !radix_enabled in hotplug-memory.c isn't too bad, in
fact possibly helpful as a hint that this is HPT only logic.
>
> > > + memory_batch_shrink_end();
> >
> > remove_by_index only removes a single LMB, so there's no real point to
> > batching here.
>
> Sure, will be fixed for v2.
>
> > > @@ -700,6 +712,7 @@ static int dlpar_memory_add_by_count(u32 lmbs_to_add)
> > > if (lmbs_added != lmbs_to_add) {
> > > pr_err("Memory hot-add failed, removing any added LMBs\n");
> > >
> > > + memory_batch_shrink_begin();
> >
> >
> > The effect of these on the memory grow path is far from clear.
> >
>
> On hotplug, HPT is resized-up before adding LMBs.
> On hotunplug, HPT is resized-down after removing LMBs.
> And each one has it's own mechanism to batch HPT resizes...
>
> I can't understand exactly how using it on hotplug fail path can be any
> different than using it on hotunplug.
> >
>
> Can you please help me understanding this?
>
> Best regards,
> Leonardo Bras
>
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson
Download attachment "signature.asc" of type "application/pgp-signature" (834 bytes)
Powered by blists - more mailing lists