Message-Id: <DC93C5EB-91A5-4291-A642-8A57179930E4@joelfernandes.org>
Date: Thu, 20 Oct 2022 17:33:37 -0400
From: Joel Fernandes <joel@...lfernandes.org>
To: paulmck@...nel.org
Cc: Zqiang <qiang1.zhang@...el.com>, frederic@...nel.org,
rcu@...r.kernel.org, linux-kernel@...r.kernel.org,
Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [PATCH] rcu: Make call_rcu() lazy only when CONFIG_RCU_LAZY is enabled
> On Oct 20, 2022, at 2:46 PM, Joel Fernandes <joel@...lfernandes.org> wrote:
>>>
>>>>> More comments below:
>>>>>>
>>>>>>>> Looks like I made everyone test the patch without having to enable the config option, though ;-). Hey, I’m a glass-half-full kind of guy, why do you ask?
>>>>>>>>
>>>>>>>> Paul, I’ll take a closer look once I’m at the desk, but would you prefer to squash a diff into the existing patch, or want a new patch altogether?
>>>>>>>
>>>>>>> On the other hand, what I’d want is to nuke the config option altogether or make it default y; we want to catch issues sooner rather than later.
>>>>>>
>>>>>> That might be what we do at some point, but one thing at a time. Let's
>>>>>> not penalize innocent bystanders, at least not just yet.
>>>>>
>>>>> It’s a trade-off; I thought that’s why we wanted the binary-search stuff. If no one reports an issue on linux-next, then that code won’t be put to use in the near future, at least.
>>>>
>>>> Well, not to put too fine a point on it, but we currently really are
>>>> exposing -next to lazy call_rcu(). ;-)
>>>
>>> This is true. I think I assumed nobody would enable a default-off config option, but I probably meant that only a smaller percentage would.
>>>
>>>>>> I do very strongly encourage the ChromeOS and Android folks to test this
>>>>>> very severely, however.
>>>>>
>>>>> Agreed. Yes, that will happen, though I have to make a note for Android folks other than Vlad to backport these (and enable the config option) carefully, especially on pre-5.15 kernels. Luckily, I had to do this (not-so-trivial) exercise myself.
>>>>
>>>> And this is another situation in which the binary search stuff may prove
>>>> extremely useful.
>>>
>>> Agreed. Thanks. At the very least, I owe that code a per-rdp splitting of the hashtable. Steven and I talked today about how the hashtable could probably go into the rcu_segcblist itself and be protected by the nocb lock.
>>
>> I have to ask...
>>
>> How does this fit in with CPU-hotplug and callback migration?
>
> Yes, it will require changes, and I already thought of that: the hashtable has to be updated on all such events.
>
>> More to the point, what events would cause us to decide that this is
>> required? For example, shouldn't we give your current binary-search
>> code at least a few chances to save the day?
>
> Totally; if you’re taking the patch as is, I would be very happy, and I’ll continue to improve it as described above. But I was not yet sure whether you’re taking it.
>
> I think it’s worthwhile to take it into mainline in its current state, and I’ll also add more data about callbacks to it in the future (queuing time of the callback, etc.): basically all the stuff I wanted to add to rcu_head.
>
> One reason for the above proposal is that I also want to keep it turned on in production, which the current solution cannot be due to the global locking. But it is still a worthwhile addition for debug kernels, IMO.
I realized while talking to Steve that the hashtable has to be per-CPU if we are to store more than a lazy flag, such as queuing timestamps. This is because you can have multiple callbacks with the same function pointer queued on multiple CPUs, so there are multiple timestamps to store. The same holds if we stored automata. It’s per callback instance, not per callback function.
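To make that concrete, here is a rough, purely illustrative sketch of one node per queued callback instance, hashed by the callback function pointer, in a per-CPU table. None of these names (cb_track_node, cb_tracker, cb_track_queue) are from the actual patch; this is just an assumption-laden sketch using a private raw spinlock, whereas the real thing could instead hang off the rcu_segcblist and reuse the nocb lock as discussed above:

#include <linux/hashtable.h>
#include <linux/percpu.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/jiffies.h>
#include <linux/rcupdate.h>

/* One entry per queued callback instance (hypothetical). */
struct cb_track_node {
	struct hlist_node node;
	rcu_callback_t func;		/* hash key: the callback function */
	struct rcu_head *head;		/* the specific queued instance */
	unsigned long queue_jiffies;	/* when this instance was queued */
};

/*
 * Per-CPU tracking state (hypothetical).  hash_init() and
 * raw_spin_lock_init() would have to be run for each CPU at boot.
 */
struct cb_tracker {
	DECLARE_HASHTABLE(ht, 8);	/* 256 buckets per CPU */
	raw_spinlock_t lock;		/* or reuse the per-rdp nocb lock */
};

static DEFINE_PER_CPU(struct cb_tracker, cb_tracker);

/* Record one callback instance on the current CPU at queuing time. */
static void cb_track_queue(struct rcu_head *head, rcu_callback_t func)
{
	struct cb_tracker *t = this_cpu_ptr(&cb_tracker);
	struct cb_track_node *n = kmalloc(sizeof(*n), GFP_ATOMIC);
	unsigned long flags;

	if (!n)
		return;
	n->func = func;
	n->head = head;
	n->queue_jiffies = jiffies;
	raw_spin_lock_irqsave(&t->lock, flags);
	hash_add(t->ht, &n->node, (unsigned long)func);
	raw_spin_unlock_irqrestore(&t->lock, flags);
}

On CPU-hotplug callback migration, the entries for the outgoing CPU would have to be removed and re-added on the CPU that adopts its callbacks, which is the hashtable update I mentioned above.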
Thanks,
- Joel
>
> Thanks,
>
> - Joel
>
>
>> Thanx, Paul
>>
>>>>>>>>> +}
>>>>>>>>> +EXPORT_SYMBOL_GPL(call_rcu);
>>>>>>>>> +#endif
>>>>>>>>>
>>>>>>>>> /* Maximum number of jiffies to wait before draining a batch. */
>>>>>>>>> #define KFREE_DRAIN_JIFFIES (5 * HZ)
>>>>>>>>> --
>>>>>>>>> 2.25.1
>>>>>>>>>