Message-ID: <20220712224106.GH1790663@paulmck-ThinkPad-P17-Gen-1>
Date: Tue, 12 Jul 2022 15:41:06 -0700
From: "Paul E. McKenney" <paulmck@...nel.org>
To: Joel Fernandes <joel@...lfernandes.org>
Cc: rcu <rcu@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>,
Rushikesh S Kadam <rushikesh.s.kadam@...el.com>,
"Uladzislau Rezki (Sony)" <urezki@...il.com>,
Neeraj upadhyay <neeraj.iitr10@...il.com>,
Frederic Weisbecker <frederic@...nel.org>,
Steven Rostedt <rostedt@...dmis.org>, vineeth@...byteword.org
Subject: Re: [PATCH v2 6/8] rcuscale: Add test for using call_rcu_lazy() to
emulate kfree_rcu()
On Tue, Jul 12, 2022 at 05:15:23PM -0400, Joel Fernandes wrote:
>
>
> On 7/12/2022 4:58 PM, Paul E. McKenney wrote:
> > On Tue, Jul 12, 2022 at 04:27:05PM -0400, Joel Fernandes wrote:
> >> Ah, with all the threads, I missed this one :(. Sorry about that.
> >
> > I know that feeling...
> >
> >> On Fri, Jul 8, 2022 at 7:06 PM Paul E. McKenney <paulmck@...nel.org> wrote:
> >>
> >>>> Currently I have added a test like the following, which adds a new torture
> >>>> type; my thought was to stress the new code to make sure nothing crashes or
> >>>> hangs the kernel. That is working well, except that I don't exactly understand
> >>>> why the total-gps print shows 0 while the other print shows 1188 GPs. I'll go
> >>>> dig into that tomorrow.. thanks!
> >>>>
> >>>> The print shows
> >>>> TREE11 ------- 1474 GPs (12.2833/s) [rcu_lazy: g0 f0x0 total-gps=0]
> >>>> TREE11 no success message, 7 successful version messages
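> >>>>
> >>>> For reference, the new scenario is wired up roughly like this (a sketch
> >>>> from memory of my work in progress, so the exact contents may differ):
> >>>>
> >>>> # tools/testing/selftests/rcutorture/configs/rcu/TREE11.boot
> >>>> rcutorture.torture_type=lazy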
> >>>
> >>> Nice!!! It is very good to see you using the rcu_torture_ops
> >>> facility correctly!
> >>>
> >>> And this could be good for your own testing, and I am happy to pull it
> >>> in for that purpose (given that it is fixed, has a good commit log,
> >>> and so on). After all, TREE10 is quite similar -- not part of CFLIST,
> >>> but useful for certain types of focused testing.
> >>>
> >>> However, it would be very good to get call_rcu_lazy() testing going
> >>> more generally, and in particular in TREE01, where offloading changes
> >>> dynamically. A good way to do this is to add a .call_lazy() component
> >>> to the rcu_torture_ops structure and check for it in a manner similar
> >>> to that done for the .deferred_free() component, including adding a
> >>> gp_normal_lazy module parameter. This would allow habitual testing
> >>> of a few scenarios and focused lazy testing on all of them via the
> >>> --bootargs parameter.
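> >>>
> >>> To make that concrete, something along these lines, though please treat
> >>> it as an untested sketch rather than the actual wiring:
> >>>
> >>> 	/* Sketch only: new module parameter, torture_param() as elsewhere. */
> >>> 	torture_param(bool, gp_normal_lazy, false,
> >>> 		      "Use call_rcu_lazy() in place of call_rcu()");
> >>>
> >>> 	static struct rcu_torture_ops rcu_ops = {
> >>> 		/* ... existing initializers ... */
> >>> 		.deferred_free	= rcu_torture_deferred_free,
> >>> 		.call_lazy	= call_rcu_lazy, /* NULL if flavor lacks laziness. */
> >>> 	};
> >>>
> >>> 	/* Then, wherever rcutorture currently invokes ->deferred_free(): */
> >>> 	if (gp_normal_lazy && cur_ops->call_lazy)
> >>> 		cur_ops->call_lazy(&rp->rtort_rcu, rcu_torture_cb);
> >>> 	else
> >>> 		cur_ops->deferred_free(rp);
> >>>
> >>> With that in place, focused lazy testing on any scenario would be just:
> >>>
> >>> 	tools/testing/selftests/rcutorture/bin/kvm.sh --configs TREE01 \
> >>> 		--bootargs "rcutorture.gp_normal_lazy=1"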
> >>
> >> OK, if you don't mind, I will make this particular enhancement to the
> >> torture test in a future patchset, since I have more or less decided to
> >> do v3 with just fixes to what I have, plus more testing. I am certainly
> >> happy to enhance these tests in a future version.
> >
> > No need to gate v3 on those tests.
> >
> >>> On the total-gps=0, the usual suspicion would be that the lazy callbacks
> >>> never got invoked. It looks like you were doing about a two-minute run,
> >>> so maybe a longer run? Though weren't they supposed to kick in at 15
> >>> seconds or so? Or did this value of zero come about because this run
> >>> used exactly 300 grace periods?
> >>
> >> It was zero because it required the RCU_FLAVOR torture type, whereas
> >> my torture type was lazy. Adding RCU_LAZY_FLAVOR to the list fixed it
> >> :)
> >
> > Heh! Then it didn't actually do any testing. Done that as well!
>
> Sorry for not being clear: I meant the switch-case list below, not the
> torture list in rcutorture.c! It was in the rcutorture.c list, so it was
> being tested, just reporting a zero gp_seq as I pointed out.
>
> /*
>  * Send along grace-period-related data for rcutorture diagnostics.
>  */
> void rcutorture_get_gp_data(enum rcutorture_type test_type, int *flags,
> 			    unsigned long *gp_seq)
> {
> 	switch (test_type) {
> 	case RCU_FLAVOR:
> 	case RCU_LAZY_FLAVOR:
> 		*flags = READ_ONCE(rcu_state.gp_flags);
> 		*gp_seq = rcu_seq_current(&rcu_state.gp_seq);
> 		break;
> 	default:
> 		break;
> 	}
> }
> EXPORT_SYMBOL_GPL(rcutorture_get_gp_data);
Ah, that would do it! Thank you for the clarification.
Thanx, Paul