Message-ID: <CANn89iJj_VR0L7g3-0=aZpKbXfVo7=BG0tsb8rhiTBc4zi_EtQ@mail.gmail.com>
Date: Mon, 4 Sep 2023 13:29:57 +0200
From: Eric Dumazet <edumazet@...gle.com>
To: Hillf Danton <hdanton@...a.com>
Cc: Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>, Netdev <netdev@...r.kernel.org>,
"Paul E. McKenney" <paulmck@...nel.org>, Linus Torvalds <torvalds@...ux-foundation.org>,
Naresh Kamboju <naresh.kamboju@...aro.org>, LKML <linux-kernel@...r.kernel.org>
Subject: Re: selftests: net: pmtu.sh: Unable to handle kernel paging request at virtual address
On Sun, Sep 3, 2023 at 5:57 AM Hillf Danton <hdanton@...a.com> wrote:
>
> On Thu, 31 Aug 2023 15:12:30 +0200 Eric Dumazet <edumazet@...gle.com>
> > On Thu, Aug 31, 2023 at 2:17 PM Hillf Danton <hdanton@...a.com>
> > > On Wed, 30 Aug 2023 21:44:57 +0900 Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
> > > >On 2023/08/30 20:26, Hillf Danton wrote:
> > > >>> <4>[ 399.014716] Call trace:
> > > >>> <4>[ 399.015702] percpu_counter_add_batch+0x28/0xd0
> > > >>> <4>[ 399.016399] dst_destroy+0x44/0x1e4
> > > >>> <4>[ 399.016681] dst_destroy_rcu+0x14/0x20
> > > >>> <4>[ 399.017009] rcu_core+0x2d0/0x5e0
> > > >>> <4>[ 399.017311] rcu_core_si+0x10/0x1c
> > > >>> <4>[ 399.017609] __do_softirq+0xd4/0x23c
> > > >>> <4>[ 399.017991] ____do_softirq+0x10/0x1c
> > > >>> <4>[ 399.018320] call_on_irq_stack+0x24/0x4c
> > > >>> <4>[ 399.018723] do_softirq_own_stack+0x1c/0x28
> > > >>> <4>[ 399.022639] __irq_exit_rcu+0x6c/0xcc
> > > >>> <4>[ 399.023434] irq_exit_rcu+0x10/0x1c
> > > >>> <4>[ 399.023962] el1_interrupt+0x8c/0xc0
> > > >>> <4>[ 399.024810] el1h_64_irq_handler+0x18/0x24
> > > >>> <4>[ 399.025324] el1h_64_irq+0x64/0x68
> > > >>> <4>[ 399.025612] _raw_spin_lock_bh+0x0/0x6c
> > > >>> <4>[ 399.026102] cleanup_net+0x280/0x45c
> > > >>> <4>[ 399.026403] process_one_work+0x1d4/0x310
> > > >>> <4>[ 399.027140] worker_thread+0x248/0x470
> > > >>> <4>[ 399.027621] kthread+0xfc/0x184
> > > >>> <4>[ 399.028068] ret_from_fork+0x10/0x20
> > > >>
> > > >> static void cleanup_net(struct work_struct *work)
> > > >> {
> > > >>         ...
> > > >>
> > > >>         synchronize_rcu();
> > > >>
> > > >>         /* Run all of the network namespace exit methods */
> > > >>         list_for_each_entry_reverse(ops, &pernet_list, list)
> > > >>                 ops_exit_list(ops, &net_exit_list);
> > > >>         ...
> > > >>
> > > >> Why did the RCU sync above fail to work in this report, Eric?
> > > >
> > > > Why do you assume that synchronize_rcu() failed to work?
> > >
> > > In the ipv6 pernet_operations [1] for instance, dst_entries_destroy() is
> > > invoked after RCU sync to ensure that nobody is using the exiting net,
> > > but this report shows that protection falls apart.
> >
> > Because synchronize_rcu() is not the same as rcu_barrier().
> >
> > The dst_entries_add() / percpu_counter_add_batch() call should not
> > happen after an RCU grace period.
>
>     cpu2                      cpu3
>     ====                      ====
>     cleanup_net()             rcu_read_lock();
>                               it is safe to use either netns or dst
>                               rcu_read_unlock();
>     synchronize_rcu();
>     unsafe to access anyone now
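
A minimal sketch of the distinction being drawn here (kernel-style C with
made-up helper names, not code from this thread, and not the fix quoted
below): synchronize_rcu() only waits for rcu_read_lock() readers, while the
dst itself is freed from an RCU callback (dst_destroy_rcu() in the trace
above), and only rcu_barrier() waits for callbacks that are already queued.

/* Sketch only: simplified stand-ins, not the code under discussion. */
#include <linux/rcupdate.h>
#include <net/dst.h>

static void sketch_destroy_rcu(struct rcu_head *head) /* plays dst_destroy_rcu() */
{
        /* Runs from rcu_core() whenever the callback machinery gets to it,
         * possibly after cleanup_net() has returned from synchronize_rcu()
         * and ops_exit_list() has destroyed the per-netns dst counter;
         * touching that counter here is the crash in the trace above.
         */
}

static void sketch_last_release(struct dst_entry *dst)
{
        /* final reference dropped: the free is deferred to an RCU callback */
        call_rcu(&dst->rcu_head, sketch_destroy_rcu);
}

static void sketch_teardown(void)
{
        synchronize_rcu();      /* waits only for rcu_read_lock() readers */
        rcu_barrier();          /* this, by contrast, waits for queued callbacks */
}
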
> >
> > Something like this (untested) patch
> >
> > diff --git a/net/core/dst.c b/net/core/dst.c
> > index 980e2fd2f013b3e50cc47ed0666ee5f24f50444b..f02fdd1da6066a4d56c2a0aa8038eca76d62f8bd 100644
> > --- a/net/core/dst.c
> > +++ b/net/core/dst.c
> > @@ -163,8 +163,13 @@ EXPORT_SYMBOL(dst_dev_put);
> >
> >  void dst_release(struct dst_entry *dst)
> >  {
> > -        if (dst && rcuref_put(&dst->__rcuref))
> > +        if (dst && rcuref_put(&dst->__rcuref)) {
> > +                if (!(dst->flags & DST_NOCOUNT)) {
> > +                        dst->flags |= DST_NOCOUNT;
> > +                        dst_entries_add(dst->ops, -1);
>
> Could this add happen after the rcu sync above?
>
I do not think so. All dst_release() calls should happen before netns removal.
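
Roughly, the ordering that makes this work (a hand-written sketch of what the
full function might look like; the hunk quoted above is cut off, so treat
this as an illustration rather than the actual patch): the counter decrement
happens synchronously in dst_release(), before the netns exit methods run,
and because dst_destroy() already skips the counter when DST_NOCOUNT is set,
the deferred dst_destroy_rcu() no longer touches it at all.

/* Illustration only: assumed shape, not the real/complete patch. */
#include <linux/rcuref.h>
#include <net/dst.h>
#include <net/dst_ops.h>

static void sketch_dst_release(struct dst_entry *dst)
{
        if (dst && rcuref_put(&dst->__rcuref)) {
                if (!(dst->flags & DST_NOCOUNT)) {
                        /* Decrement while the netns is still alive: every
                         * dst_release() for a netns happens before
                         * cleanup_net() runs its exit methods. */
                        dst->flags |= DST_NOCOUNT;
                        dst_entries_add(dst->ops, -1);
                }
                /* ...then defer the actual free to dst_destroy_rcu() exactly
                 * as dst_release() already does; with DST_NOCOUNT set, the
                 * later dst_destroy() skips the percpu counter entirely. */
        }
}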