Date:   Wed, 15 Apr 2020 11:36:17 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Steven Rostedt <rostedt@...dmis.org>
Cc:     Qais Yousef <qais.yousef@....com>, Ingo Molnar <mingo@...hat.com>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Yury Norov <yury.norov@...il.com>,
        Paul Turner <pjt@...gle.com>,
        Alexey Dobriyan <adobriyan@...il.com>,
        Josh Don <joshdon@...gle.com>,
        Pavan Kondeti <pkondeti@...eaurora.org>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/4] cpumask: Make cpumask_any() truly random

On Tue, Apr 14, 2020 at 12:19:56PM -0400, Steven Rostedt wrote:

> > +/**
> > + * cpumask_any - pick a "random" cpu from *srcp
> > + * @srcp: the input cpumask
> > + *
> > + * Returns >= nr_cpu_ids if no cpus set.
> > + */
> > +int cpumask_any(const struct cpumask *srcp)
> > +{
> > +	int next, prev;
> > +
> > +	/* NOTE: our first selection will skip 0. */
> > +	prev = __this_cpu_read(distribute_cpu_mask_prev);
> > +
> > +	next = cpumask_next(prev, srcp);
> > +	if (next >= nr_cpu_ids)
> > +		next = cpumask_first(srcp);
> > +
> > +	if (next < nr_cpu_ids)
> > +		__this_cpu_write(distribute_cpu_mask_prev, next);
> 
> Do we care if this gets preempted and migrated to a new CPU where we read
> "prev" from one distribute_cpu_mask_prev on one CPU and write it to another
> CPU?

I don't think we do; that just adds to the randomness ;-). But you do
raise a good point: the __this_cpu_*() ops assume preemption is
already disabled, which is true of the one existing
cpumask_any_and_distribute() caller, but is no longer true after patch
1, and this patch repeats the mistake.

So either we need to disable preemption across the function or
transition to this_cpu_*() ops.
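For reference, the selection logic in the quoted hunk is just a remembered
round-robin walk over the set bits. A minimal userspace sketch, assuming an
8-bit mask stands in for struct cpumask and a plain variable stands in for the
per-CPU distribute_cpu_mask_prev (all names here are hypothetical, and the
per-CPU/preemption aspect under discussion is deliberately left out):

```c
#define NR_CPU_IDS 8

/* stand-in for the per-CPU distribute_cpu_mask_prev variable */
static int prev_pick;

/* first set bit strictly above prev, or NR_CPU_IDS if none (cpumask_next) */
static int mask_next(int prev, unsigned mask)
{
	for (int cpu = prev + 1; cpu < NR_CPU_IDS; cpu++)
		if (mask & (1u << cpu))
			return cpu;
	return NR_CPU_IDS;
}

/* first set bit overall (cpumask_first) */
static int mask_first(unsigned mask)
{
	return mask_next(-1, mask);
}

/* mirrors cpumask_any(): continue after the last pick, wrap, remember */
int pick_any(unsigned mask)
{
	int next = mask_next(prev_pick, mask);

	if (next >= NR_CPU_IDS)
		next = mask_first(mask);
	if (next < NR_CPU_IDS)
		prev_pick = next;
	return next;
}
```

Since prev_pick starts at 0 and mask_next() scans strictly above it, the
very first call can never return CPU 0 even when bit 0 is set, which is the
"first selection will skip 0" note in the patch.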
