Message-ID: <20220331231027.GZ4285@paulmck-ThinkPad-P17-Gen-1>
Date:   Thu, 31 Mar 2022 16:10:27 -0700
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     Eric Dumazet <edumazet@...gle.com>
Cc:     LKML <linux-kernel@...r.kernel.org>
Subject: Re: [BUG] rcu-tasks : should take care of sparse cpu masks

On Thu, Mar 31, 2022 at 03:54:02PM -0700, Eric Dumazet wrote:
> On Thu, Mar 31, 2022 at 3:42 PM Paul E. McKenney <paulmck@...nel.org> wrote:
> >
> > On Thu, Mar 31, 2022 at 02:45:25PM -0700, Eric Dumazet wrote:
> > > Hi Paul
> > >
> > > It seems you assume per-cpu ptrs for arbitrary indexes (< nr_cpu_ids) are valid.
> >
> > Gah!  I knew I was forgetting something...
> >
> > But just to check, is this a theoretical problem or something you hit
> > on real hardware?  (For the rest of this email, I am assuming the latter.)
> 
> Code review really...

OK, not yet an emergency, then.  ;-)

If it does become an emergency, one workaround is to set the
rcupdate.rcu_task_enqueue_lim kernel boot parameter to the number of CPUs.
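For example, on a 64-CPU system, booting with something like:

	rcupdate.rcu_task_enqueue_lim=64

on the kernel command line gives each CPU its own callback list, so the
per-CPU index is always the running CPU rather than a possibly absent one.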

> > > What do you think of the (untested) following patch ?
> >
> > One issue with this patch is that the contention could be unpredictable,
> > or worse, vary among CPUs, especially if the cpu_possible_mask was oddly
> > distributed.
> >
> > So might it be better to restrict this to all on CPU 0 on the one hand
> > and completely per-CPU on the other?  (Or all on the boot CPU, in case
> > I am forgetting some misbegotten architecture that can run without a
> > CPU 0.)
> 
> If I understand correctly, cblist_init_generic() could set up
> percpu_enqueue_shift to something smaller than order_base_2(nr_cpu_ids)
> 
> Meaning that we could reach a nonzero idx in
> (smp_processor_id() >> percpu_enqueue_shift)
> 
> So even if CPU0 is always present (I am not sure this is guaranteed,
> but this seems reasonable),
> we could still attempt a per_cpu_ptr(PTR,  not_present_cpu), and get garbage.
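
To make that concrete, here is a purely illustrative userspace sketch of
the index computation with made-up numbers (not kernel code):

	#include <stdio.h>

	/*
	 * Illustration only: suppose nr_cpu_ids is 8 but only CPUs 0, 2,
	 * 4, and 6 are possible (a sparse mask), and percpu_enqueue_shift
	 * is 2, which is smaller than order_base_2(8) == 3.
	 */
	int main(void)
	{
		unsigned int percpu_enqueue_shift = 2;
		unsigned int this_cpu = 6;	/* CPU we happen to run on */
		unsigned int idx = this_cpu >> percpu_enqueue_shift;

		/* idx == 1, but CPU 1 is not a possible CPU, so
		 * per_cpu_ptr() on it would return garbage. */
		printf("idx = %u\n", idx);
		return 0;
	}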

My thought is to replace the shift with a boolean.  One setting just
always uses CPU 0's cblist (or that of the boot CPU, if there is some
architecture that is completely innocent of CPU 0), and the other setting
always uses that of the currently running CPU.  This does rule out the
current possibility of using (say) 16 cblists on a 64-CPU system, but
I suspect that this would be just fine.
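
In (hypothetical) pseudo-code, with an invented field name standing in for
the shift, the enqueue side might then look something like this:

	/* Hypothetical sketch only, not a real patch. */
	if (READ_ONCE(rtp->percpu_enqueue_percpu))
		rtpcp = per_cpu_ptr(rtp->rtpcpu, smp_processor_id());
	else
		rtpcp = per_cpu_ptr(rtp->rtpcpu, 0);	/* or the boot CPU */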

Neeraj's and my proposed memory-diet patches for SRCU do something like
this.  These are on branch srcu.2022.03.29a on -rcu.

Thoughts?

							Thanx, Paul

> > > Thanks.
> > >
> > > diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
> > > index 99cf3a13954cfb17828fbbeeb884f11614a526a9..df3785be4022e903d9682dd403464aa9927aa5c2 100644
> > > --- a/kernel/rcu/tasks.h
> > > +++ b/kernel/rcu/tasks.h
> > > @@ -273,13 +273,17 @@ static void call_rcu_tasks_generic(struct rcu_head *rhp, rcu_callback_t func,
> > >         bool needadjust = false;
> > >         bool needwake;
> > >         struct rcu_tasks_percpu *rtpcp;
> > > +       int ideal_cpu, chosen_cpu;
> > >
> > >         rhp->next = NULL;
> > >         rhp->func = func;
> > >         local_irq_save(flags);
> > >         rcu_read_lock();
> > > -       rtpcp = per_cpu_ptr(rtp->rtpcpu,
> > > -                           smp_processor_id() >> READ_ONCE(rtp->percpu_enqueue_shift));
> > > +
> > > +       ideal_cpu = smp_processor_id() >> READ_ONCE(rtp->percpu_enqueue_shift);
> > > +       chosen_cpu = cpumask_next(ideal_cpu - 1, cpu_online_mask);
> > > +
> > > +       rtpcp = per_cpu_ptr(rtp->rtpcpu, chosen_cpu);
> > >         if (!raw_spin_trylock_rcu_node(rtpcp)) { // irqs already disabled.
> > >                 raw_spin_lock_rcu_node(rtpcp); // irqs already disabled.
> > >                 j = jiffies;
