Message-ID: <20230107163142.GC4028633@paulmck-ThinkPad-P17-Gen-1>
Date:   Sat, 7 Jan 2023 08:31:42 -0800
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     Frederic Weisbecker <frederic@...nel.org>
Cc:     "Zhang, Qiang1" <qiang1.zhang@...el.com>,
        "quic_neeraju@...cinc.com" <quic_neeraju@...cinc.com>,
        "joel@...lfernandes.org" <joel@...lfernandes.org>,
        "rcu@...r.kernel.org" <rcu@...r.kernel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2] rcu: Rework tick dependency setting into rcu_exp_handler()

On Fri, Jan 06, 2023 at 11:45:46PM +0100, Frederic Weisbecker wrote:
> On Fri, Jan 06, 2023 at 02:42:59AM +0000, Zhang, Qiang1 wrote:
> > On Thu, Jan 05, 2023 at 11:40:00AM +0800, Zqiang wrote:
> > > Currently, when the expedited grace period is first found to be
> > > still in progress after a timeout, the tick dependency is set for
> > > CPUs that have not yet reported a quiescent state in the rcu_node
> > > structure's ->expmask. But to eliminate races between setting and
> > > clearing the tick dependency, setting a CPU's tick dependency
> > > requires holding the rcu_node structure's ->lock.
> > > 
> > > This commit therefore moves the tick dependency setting into
> > > rcu_exp_handler(), setting the tick dependency when a quiescent
> > > state cannot be reported immediately and clearing it in
> > > rcu_report_exp_rdp(). [From a suggestion by Frederic Weisbecker.]
> > > 
> > > Signed-off-by: Zqiang <qiang1.zhang@...el.com>
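> > > 
> > > A minimal sketch of the shape described above (the helper names are
> > > hypothetical, for illustration only; the actual patch edits
> > > rcu_exp_handler() and rcu_report_exp_rdp() in kernel/rcu/tree_exp.h
> > > directly):
> > > 
> > > 	#include <linux/tick.h>	/* tick_dep_set_cpu() and friends */
> > > 
> > > 	/* In rcu_exp_handler(): the QS cannot be reported right away,
> > > 	 * so keep the tick alive until this CPU reports one. */
> > > 	static void exp_defer_qs_report(void)	/* hypothetical name */
> > > 	{
> > > 		tick_dep_set_cpu(smp_processor_id(), TICK_DEP_BIT_RCU_EXP);
> > > 	}
> > > 
> > > 	/* In rcu_report_exp_rdp(): the QS has been reported, so the
> > > 	 * tick dependency can be dropped again. */
> > > 	static void exp_qs_reported(int cpu)	/* hypothetical name */
> > > 	{
> > > 		tick_dep_clear_cpu(cpu, TICK_DEP_BIT_RCU_EXP);
> > > 	}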
> > >
> > >First, a big "thank you" to you an Frederic for investigating this approach!
> > >
> > >So which is better, this patch or the one that I already have queued?
> > >
> > >The advantage of the patch of yours that I already have queued is that CPUs that respond in some other way within a millisecond do not get hit with an additional scheduling-clock interrupt.
> > >
> > >On the other hand, if the CPU goes through a quiescent state before the next scheduling-clock interrupt arrives, rcu_report_exp_cpu_mult() will shut down the tick before it happens.  Plus if the CPU waits a full tick before reaching a quiescent state, then the tick_dep_set_cpu() called from
> > >synchronize_rcu_expedited_wait() is going to send along an IPI anyway.
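> > >
> > >For reference, the queued patch's approach looks roughly like this
> > >(paraphrased from memory of synchronize_rcu_expedited_wait() in
> > >mainline kernel/rcu/tree_exp.h, not quoted verbatim):
> > >
> > >	/* Give nohz_full CPUs one jiffy to report a QS first. */
> > >	if (synchronize_rcu_expedited_wait_once(1))
> > >		return;
> > >	/* Timeout: force the tick on for still-holdout CPUs. */
> > >	rcu_for_each_leaf_node(rnp) {
> > >		mask = READ_ONCE(rnp->expmask);
> > >		for_each_leaf_node_cpu_mask(rnp, cpu, mask) {
> > >			rdp = per_cpu_ptr(&rcu_data, cpu);
> > >			if (rdp->rcu_forced_tick_exp)
> > >				continue;	/* Already forced on. */
> > >			WRITE_ONCE(rdp->rcu_forced_tick_exp, true);
> > >			preempt_disable();
> > >			if (cpu_online(cpu))
> > >				tick_dep_set_cpu(cpu, TICK_DEP_BIT_RCU_EXP);
> > >			preempt_enable();
> > >		}
> > >	}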
> > 
> > Agreed, this new patch sets the tick dependency immediately when we
> > cannot report a quiescent state in rcu_exp_handler(), which seems a
> > little too aggressive.
> > 
> > 
> > >
> > >Except that invoking tick_dep_set_cpu() on the CPU itself will also do an IPI from tick_dep_set_cpu() because of IRQ_WORK_INIT_HARD(), right?
> > >Which means that the patch below gets us an extra self-IPI, right?
> > >Or am I misreading the code?
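> > >
> > >(The path in question, paraphrased from memory of
> > >kernel/time/tick-sched.c, not quoted verbatim:)
> > >
> > >	void tick_dep_set_cpu(int cpu, enum tick_dep_bits bit)
> > >	{
> > >		struct tick_sched *ts = per_cpu_ptr(&tick_cpu_sched, cpu);
> > >
> > >		if (!tick_nohz_full_cpu(cpu))
> > >			return;
> > >
> > >		if (!atomic_fetch_or(BIT(bit), &ts->tick_dep_mask)) {
> > >			preempt_disable();
> > >			/* Both branches kick via irq_work declared with
> > >			 * IRQ_WORK_INIT_HARD(), so the local case is a
> > >			 * self-IPI. */
> > >			if (cpu == smp_processor_id())
> > >				tick_nohz_full_kick();
> > >			else
> > >				tick_nohz_full_kick_cpu(cpu);
> > >			preempt_enable();
> > >		}
> > >	}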
> > 
> > Yes, this looks like it will trigger an additional self-IPI.
> > 
> > >
> > >In addition, doesn't tick_dep_clear_cpu() just clear a bit?  Won't that mean that the next scheduling-clock interrupt will happen, just that the one after that won't?  (Give or take kernel-to-user or kernel-to-idle transitions that might happen in the meantime.)
> > 
> > Yes, tick_dep_clear_cpu() only clears a bit; the next scheduling-clock interrupt will still happen.
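> > 
> > (Paraphrased from memory of kernel/time/tick-sched.c, not verbatim:)
> > 
> > 	void tick_dep_clear_cpu(int cpu, enum tick_dep_bits bit)
> > 	{
> > 		struct tick_sched *ts = per_cpu_ptr(&tick_cpu_sched, cpu);
> > 
> > 		/* Just drop the dependency bit: no kick, so the
> > 		 * already-armed next tick still fires and only the
> > 		 * ones after it get stopped. */
> > 		atomic_andnot(BIT(bit), &ts->tick_dep_mask);
> > 	}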
> > 
> > So I also want to know which one is better? 😊
> 
> Right, I may have misled you with this change. I missed the fact that
> nohz_full CPUs are given one jiffy's grace to report a QS before the
> tick is forced on them.
> 
> Sorry about that. Your first patch is still a good fix though!

And I have it queued, with Frederic's Reviewed-by.

And hey, if you don't miss a thing or two once in a while, you are not
reviewing sufficiently challenging code.  ;-)

So I repeat my earlier "thank you" to both of you for looking into this!

							Thanx, Paul
