Message-ID: <88694240-1eea-4f4c-bb7b-80de25f252e7@paulmck-laptop>
Date: Sun, 3 Nov 2024 07:03:41 -0800
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Boqun Feng <boqun.feng@...il.com>
Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Vlastimil Babka <vbabka@...e.cz>, Marco Elver <elver@...gle.com>,
linux-next@...r.kernel.org, linux-kernel@...r.kernel.org,
kasan-dev@...glegroups.com, linux-mm@...ck.org,
sfr@...b.auug.org.au, longman@...hat.com, cl@...ux.com,
penberg@...nel.org, rientjes@...gle.com, iamjoonsoo.kim@....com,
akpm@...ux-foundation.org, Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH] scftorture: Use workqueue to free scf_check
On Sat, Nov 02, 2024 at 08:35:36PM -0700, Boqun Feng wrote:
> On Fri, Nov 01, 2024 at 04:35:28PM -0700, Paul E. McKenney wrote:
> > On Fri, Nov 01, 2024 at 12:54:38PM -0700, Boqun Feng wrote:
> > > Paul reported an invalid wait context issue in scftorture caught by
> > > lockdep. The cause is that scf_handler() may call kfree() to free
> > > the struct scf_check:
> > >
> > > 	static void scf_handler(void *scfc_in)
> > > 	{
> > > 		[...]
> > > 		} else {
> > > 			kfree(scfcp);
> > > 		}
> > > 	}
> > >
> > > (call chain analysis from Marco Elver)
> > >
> > > This is problematic because smp_call_function() runs its handlers in
> > > non-threaded interrupt context, and kfree() may acquire a local_lock,
> > > which is a sleepable lock on RT.
> > >
> > > The general rule is: do not allocate or free memory in non-threaded
> > > interrupt contexts.
> > >
> > > A quick fix is to use a workqueue to defer the kfree(). However, this
> > > is OK only because scftorture is test code. In general, users of
> > > interrupts should avoid giving interrupt handlers ownership of
> > > objects; that is, users should manage object lifetimes outside the
> > > handlers, and interrupt handlers should only hold references to
> > > objects.
> > >
> > > Reported-by: "Paul E. McKenney" <paulmck@...nel.org>
> > > Link: https://lore.kernel.org/lkml/41619255-cdc2-4573-a360-7794fc3614f7@paulmck-laptop/
> > > Signed-off-by: Boqun Feng <boqun.feng@...il.com>
> >
> > Thank you!
> >
> > I was worried that putting each kfree() into a separate workqueue handler
> > would result in freeing not keeping up with allocation for asynchronous
> > testing (for example, scftorture.weight_single=1), but it seems to be
> > doing fine in early testing.
>
> I shared the same worry, which is why I added the comment before
> queue_work() saying it is OK only because this is test code; it is
> certainly not something recommended for general use.
>
> But glad it turns out OK so far for scftorture ;-)
That said, I have tried only a couple of memory sizes at 64 CPUs: the
default (512M), which OOMs both with and without this fix, and 7G, which
is selected by torture.sh and avoids OOMing either way. It would be
interesting to vary the memory provided between those limits and see if
there is any difference in behavior.

At 16 CPUs, it avoids OOMing at the default 512M.

Ah, and I did not check throughput, which might have changed. A quick
test on my laptop says that it dropped by almost a factor of two, from
not quite 1M invocations/s to a bit more than 500K invocations/s. So
something more efficient does seem in order. ;-)
tools/testing/selftests/rcutorture/bin/kvm.sh --torture scf --allcpus --configs PREEMPT --duration 30 --bootargs "scftorture.weight_single=1" --trust-make
Thanx, Paul
> Regards,
> Boqun
>
> > So I have queued this in my -rcu tree for review and further testing.
> >
> > Thanx, Paul
> >
> > > ---
> > > kernel/scftorture.c | 14 +++++++++++++-
> > > 1 file changed, 13 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/kernel/scftorture.c b/kernel/scftorture.c
> > > index 44e83a646264..ab6dcc7c0116 100644
> > > --- a/kernel/scftorture.c
> > > +++ b/kernel/scftorture.c
> > > @@ -127,6 +127,7 @@ static unsigned long scf_sel_totweight;
> > >  
> > >  // Communicate between caller and handler.
> > >  struct scf_check {
> > > +	struct work_struct work;
> > >  	bool scfc_in;
> > >  	bool scfc_out;
> > >  	int scfc_cpu; // -1 for not _single().
> > > @@ -252,6 +253,13 @@ static struct scf_selector *scf_sel_rand(struct torture_random_state *trsp)
> > >  	return &scf_sel_array[0];
> > >  }
> > >  
> > > +static void kfree_scf_check_work(struct work_struct *w)
> > > +{
> > > +	struct scf_check *scfcp = container_of(w, struct scf_check, work);
> > > +
> > > +	kfree(scfcp);
> > > +}
> > > +
> > >  // Update statistics and occasionally burn up mass quantities of CPU time,
> > >  // if told to do so via scftorture.longwait.  Otherwise, occasionally burn
> > >  // a little bit.
> > > @@ -296,7 +304,10 @@ static void scf_handler(void *scfc_in)
> > >  		if (scfcp->scfc_rpc)
> > >  			complete(&scfcp->scfc_completion);
> > >  	} else {
> > > -		kfree(scfcp);
> > > +		// Cannot call kfree() directly; defer it to a workqueue.
> > > +		// This is OK only because this is test code; avoid it in
> > > +		// real-world usage.
> > > +		queue_work(system_wq, &scfcp->work);
> > >  	}
> > >  }
> > >  
> > > @@ -335,6 +346,7 @@ static void scftorture_invoke_one(struct scf_statistics *scfp, struct torture_ra
> > >  			scfcp->scfc_wait = scfsp->scfs_wait;
> > >  			scfcp->scfc_out = false;
> > >  			scfcp->scfc_rpc = false;
> > > +			INIT_WORK(&scfcp->work, kfree_scf_check_work);
> > >  		}
> > >  	}
> > >  	switch (scfsp->scfs_prim) {
> > > --
> > > 2.45.2
> > >