Date:	Wed, 27 Feb 2013 09:28:26 +0100
From:	Vincent Guittot <vincent.guittot@...aro.org>
To:	Frederic Weisbecker <fweisbec@...il.com>
Cc:	linux-kernel@...r.kernel.org, linaro-dev@...ts.linaro.org,
	peterz@...radead.org, mingo@...nel.org, rostedt@...dmis.org,
	efault@....de
Subject: Re: [PATCH v4] sched: fix init NOHZ_IDLE flag

On 26 February 2013 18:43, Frederic Weisbecker <fweisbec@...il.com> wrote:
> 2013/2/26 Vincent Guittot <vincent.guittot@...aro.org>:
>> On 26 February 2013 14:16, Frederic Weisbecker <fweisbec@...il.com> wrote:
>>> 2013/2/22 Vincent Guittot <vincent.guittot@...aro.org>:
>>>> I wanted to avoid having to use the sd pointer for testing the
>>>> NOHZ_IDLE flag, because that test occurs each time we go idle, but
>>>> it doesn't seem easily feasible.
>>>> Another solution could be to add a synchronization step between
>>>> rcu_assign_pointer(dom1, NULL) and creating the new domain, to
>>>> ensure that all pending accesses to the old sd values have
>>>> finished, but this would imply a potential delay in the rebuild of
>>>> the sched_domain and I'm not sure that's acceptable.
>
> Ah, I see what you meant there: doing a synchronize_rcu() after
> setting the dom to NULL, on top of which we could work on preventing
> any concurrent nohz_flag modification. But cpu hotplug seems to be
> becoming a bit of a performance-sensitive path these days.

That was also my concern.

>
> Ok, I don't like having a per-cpu state in struct sched_domain, but
> for now I can't find anything better. So my suggestion is that we do
> this and describe the race well: define the issue in the changelog
> and code comments, and explain how we are solving it. That way at
> least the issue is identified and known. Then later, on review or
> after the patch is upstream, if somebody with good taste comes up
> with a better idea, we can consider it.
>
> What do you think?

I don't have a better solution than adding this state in the
sched_domain, if we want to keep the exact same behavior. It will
waste a bit of memory, because we don't need to update it at every
sched_domain level (the 1st level is enough).

Vincent
