Message-ID: <20200420154317.klwoztvdybmvykwe@e107158-lin.cambridge.arm.com>
Date:   Mon, 20 Apr 2020 16:43:18 +0100
From:   Qais Yousef <qais.yousef@....com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Steven Rostedt <rostedt@...dmis.org>,
        Ingo Molnar <mingo@...hat.com>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Yury Norov <yury.norov@...il.com>,
        Paul Turner <pjt@...gle.com>,
        Alexey Dobriyan <adobriyan@...il.com>,
        Josh Don <joshdon@...gle.com>,
        Pavan Kondeti <pkondeti@...eaurora.org>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/4] cpumask: Make cpumask_any() truly random

On 04/15/20 11:36, Peter Zijlstra wrote:
> On Tue, Apr 14, 2020 at 12:19:56PM -0400, Steven Rostedt wrote:
> 
> > > +/**
> > > + * cpumask_any - pick a "random" cpu from *srcp
> > > + * @srcp: the input cpumask
> > > + *
> > > + * Returns >= nr_cpu_ids if no cpus set.
> > > + */
> > > +int cpumask_any(const struct cpumask *srcp)
> > > +{
> > > +	int next, prev;
> > > +
> > > +	/* NOTE: our first selection will skip 0. */
> > > +	prev = __this_cpu_read(distribute_cpu_mask_prev);
> > > +
> > > +	next = cpumask_next(prev, srcp);
> > > +	if (next >= nr_cpu_ids)
> > > +		next = cpumask_first(srcp);
> > > +
> > > +	if (next < nr_cpu_ids)
> > > +		__this_cpu_write(distribute_cpu_mask_prev, next);
> > 
> > Do we care if this gets preempted and migrated to a new CPU where we read
> > "prev" from one distribute_cpu_mask_prev on one CPU and write it to another
> > CPU?
> 
> I don't think we do; that just adds to the randomness ;-), but you do

Yep, we don't care, and it should enhance the randomness.

> raise a good point in that __this_cpu_*() ops assume preemption is
> already disabled, which is true of the one exiting
> cpumask_any_and_distribute() caller, but is no longer true after patch
> 1, and this patch repeats the mistake.
> 
> So either we need to disable preemption across the function or
> transition to this_cpu_*() ops.

Sorry, I wasn't aware of the preemption check in __this_cpu_write().

Transitioning to this_cpu_write() makes sense, unless Josh comes back saying
it'll break something he noticed.
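
For illustration, a rough sketch of what that could look like with the
preempt-safe accessors (same names as the quoted patch; the return statement
isn't visible in the quoted hunk, so it's assumed here):

/*
 * Sketch only, not the posted patch: same logic as above, but using
 * this_cpu_read()/this_cpu_write(), which are safe to call with
 * preemption enabled. If we migrate between the read and the write we
 * just advance a different CPU's 'prev', which only adds randomness.
 */
int cpumask_any(const struct cpumask *srcp)
{
	int next, prev;

	/* NOTE: our first selection will skip 0. */
	prev = this_cpu_read(distribute_cpu_mask_prev);

	next = cpumask_next(prev, srcp);
	if (next >= nr_cpu_ids)
		next = cpumask_first(srcp);

	if (next < nr_cpu_ids)
		this_cpu_write(distribute_cpu_mask_prev, next);

	return next;	/* assumed; truncated in the quoted hunk */
}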

Thanks

--
Qais Yousef
