Message-ID: <ec3f99af-902d-4a12-f533-07de97dab310@lge.com>
Date:   Wed, 30 May 2018 22:06:52 +0900
From:   Byungchul Park <byungchul.park@....com>
To:     paulmck@...ux.vnet.ibm.com
Cc:     jiangshanlai@...il.com, josh@...htriplett.org, rostedt@...dmis.org,
        mathieu.desnoyers@...icios.com, linux-kernel@...r.kernel.org,
        kernel-team@....com, joel@...lfernandes.org
Subject: Re: [RFC] rcu: Check the range of jiffies_till_xxx_fqs on setting them



On 2018-05-29 21:01, Paul E. McKenney wrote:
> On Tue, May 29, 2018 at 04:23:36PM +0900, Byungchul Park wrote:
>> Hello Paul and folks,
>>
>> I've been thinking the code should look like the patch below, since
>> checking the ranges of jiffies_till_first_fqs and jiffies_till_next_fqs
>> on every iteration of the rcu_gp_kthread loop is unnecessary. However,
>> it's OK even if you don't think it's worth doing.
> 
> Nice!
> 
>> Secondly, I also think jiffies_till_first_fqs = 0 is meaningless, so I
>> added code to check and adjust it, as is done for jiffies_till_next_fqs.
>> Thoughts?
> 
> Actually, jiffies_till_first_fqs == 0 is very useful for cases where
> at least one CPU is expected to be idle and grace-period latency is
> important.  In this case, doing the first scan immediately gets the
> dyntick-idle state recorded immediately, getting the idle CPUs out of
> the way of the grace period immediately.

Hi Paul~

You might want to handle that case through sysfs. Otherwise, we could
do it with force_quiescent_state(), IMHO.

> So why not do this scan as part of grace-period initialization?  Because
> doing so consumes extra CPU and results in extra cache misses, which is
> the opposite of what you want on a completely busy system, especially
> one where the CPUs are context switching quickly.  Thus no scan during
> grace-period initialization.

I am sorry, but I don't understand this paragraph. :(

> But I can see the desire to share code.
> 
> One approach would be to embed the kernel_param_ops structure inside
> another structure containing the limits, then just have two structures.
> Perhaps something like this already exists?  I don't see it right off,
> but then again, I am not exactly an expert on module_param.

It would be much nicer if we could share the code as you said. I will
look into it.
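
Something like this, perhaps? This is only an untested sketch of the
embedding you describe; the wrapper type, callback name, and limits
below are my own placeholders, not existing kernel code:

#include <linux/kernel.h>
#include <linux/moduleparam.h>

/* Wrapper embedding the ops together with the allowed range. */
struct clamped_ulong_param_ops {
	struct kernel_param_ops ops;	/* must stay the first member */
	unsigned long min;
	unsigned long max;
};

static int param_set_clamped_ulong(const char *val,
				   const struct kernel_param *kp)
{
	const struct clamped_ulong_param_ops *cops =
		container_of(kp->ops, struct clamped_ulong_param_ops, ops);
	unsigned long num;
	int ret;

	ret = kstrtoul(val, 0, &num);
	if (ret)
		return ret;

	/* Clamp once at set time instead of on every rcu_gp_kthread pass. */
	*(unsigned long *)kp->arg = clamp(num, cops->min, cops->max);
	return 0;
}

/*
 * One instance per parameter, differing only in the limits.  This would
 * live in kernel/rcu/tree.c next to the jiffies_till_first_fqs
 * definition; the bounds here are placeholders for illustration.
 */
static const struct clamped_ulong_param_ops first_fqs_ops = {
	.ops = { .set = param_set_clamped_ulong, .get = param_get_ulong },
	.min = 0,
	.max = HZ,
};

module_param_cb(jiffies_till_first_fqs, &first_fqs_ops.ops,
		&jiffies_till_first_fqs, 0644);

Then jiffies_till_next_fqs would simply get its own instance with
different limits, both parameters would share the set function, and no
range checks would remain in the rcu_gp_kthread loop.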

Thanks a lot Paul.

-- 
Thanks,
Byungchul
