Date:   Thu, 21 Jul 2022 16:04:57 +0200
From:   Eric Dumazet <edumazet@...gle.com>
To:     Taehee Yoo <ap420073@...il.com>
Cc:     David Miller <davem@...emloft.net>,
        Jakub Kicinski <kuba@...nel.org>,
        Paolo Abeni <pabeni@...hat.com>,
        Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
        David Ahern <dsahern@...nel.org>,
        netdev <netdev@...r.kernel.org>
Subject: Re: [PATCH net] net: mld: do not use system_wq in the mld

On Thu, Jul 21, 2022 at 2:03 PM Taehee Yoo <ap420073@...il.com> wrote:
>
> MLD work items are supposed to be executed in mld_wq.
> But mld_{query | report}_work() calls schedule_delayed_work(), which
> internally uses system_wq.
> So this would cause a reference count leak.

I do not think the changelog is accurate.
At least I do not understand it yet.

We cannot unload the ipv6 module, so destroy_workqueue(mld_wq) is never called.
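
For reference, schedule_delayed_work() is only a thin wrapper that queues
the work on system_wq (roughly this, from include/linux/workqueue.h):

    static inline bool schedule_delayed_work(struct delayed_work *dwork,
                                             unsigned long delay)
    {
            return queue_delayed_work(system_wq, dwork, delay);
    }

So moving the calls over to mld_wq changes which queue the work runs on,
but not the reference counting itself.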



>
> splat looks like:
>  unregister_netdevice: waiting for br1 to become free. Usage count = 2
>  leaked reference.
>   ipv6_add_dev+0x3a5/0x1070
>   addrconf_notify+0x4f3/0x1760
>   notifier_call_chain+0x9e/0x180
>   register_netdevice+0xd10/0x11e0
>   br_dev_newlink+0x27/0x100 [bridge]
>   __rtnl_newlink+0xd85/0x14e0
>   rtnl_newlink+0x5f/0x90
>   rtnetlink_rcv_msg+0x335/0x9a0
>   netlink_rcv_skb+0x121/0x350
>   netlink_unicast+0x439/0x710
>   netlink_sendmsg+0x75f/0xc00
>   ____sys_sendmsg+0x694/0x860
>   ___sys_sendmsg+0xe9/0x160
>   __sys_sendmsg+0xbe/0x150
>   do_syscall_64+0x3b/0x90
>   entry_SYSCALL_64_after_hwframe+0x63/0xcd
>
> Fixes: f185de28d9ae ("mld: add new workqueues for process mld events")
> Signed-off-by: Taehee Yoo <ap420073@...il.com>
> ---
>  net/ipv6/mcast.c | 14 ++++++++------
>  1 file changed, 8 insertions(+), 6 deletions(-)
>
> diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
> index 7f695c39d9a8..87c699d57b36 100644
> --- a/net/ipv6/mcast.c
> +++ b/net/ipv6/mcast.c
> @@ -1522,7 +1522,6 @@ static void mld_query_work(struct work_struct *work)
>
>                 if (++cnt >= MLD_MAX_QUEUE) {
>                         rework = true;
> -                       schedule_delayed_work(&idev->mc_query_work, 0);
>                         break;
>                 }
>         }
> @@ -1533,8 +1532,10 @@ static void mld_query_work(struct work_struct *work)
>                 __mld_query_work(skb);
>         mutex_unlock(&idev->mc_lock);
>
> -       if (!rework)
> -               in6_dev_put(idev);
> +       if (rework && queue_delayed_work(mld_wq, &idev->mc_query_work, 0))

It seems the 'real issue' was that
schedule_delayed_work(&idev->mc_query_work, 0) could be a NOP
because the work item was already queued?
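
If that is the case, the relevant detail is that queue_delayed_work()
returns false when the work was already pending, so the patch keeps the
reference only when a new run was really queued and drops it otherwise.
A toy userspace model of that put-unless-requeued pattern (dev_ref,
requeue() etc. are made-up names for illustration, not kernel APIs):

    #include <stdbool.h>
    #include <stdio.h>

    static int dev_ref = 1;        /* ref held by the running work item */
    static bool work_pending;      /* models the delayed_work being pending */

    /* like queue_delayed_work(): false if the work was already pending */
    static bool requeue(void)
    {
            if (work_pending)
                    return false;
            work_pending = true;
            return true;
    }

    static void work_fn(bool rework)
    {
            work_pending = false;

            /* keep the ref only if another run was really queued,
             * otherwise drop it so the device can be released */
            if (rework && requeue())
                    return;

            dev_ref--;
            printf("ref after work: %d\n", dev_ref);
    }

    int main(void)
    {
            work_fn(true);   /* requeued -> ref kept for the next run */
            work_fn(false);  /* no rework -> ref dropped, prints 0 */
            return 0;
    }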



> +               return;
> +
> +       in6_dev_put(idev);
>  }
>
>  /* called with rcu_read_lock() */
> @@ -1624,7 +1625,6 @@ static void mld_report_work(struct work_struct *work)
>
>                 if (++cnt >= MLD_MAX_QUEUE) {
>                         rework = true;
> -                       schedule_delayed_work(&idev->mc_report_work, 0);
>                         break;
>                 }
>         }
> @@ -1635,8 +1635,10 @@ static void mld_report_work(struct work_struct *work)
>                 __mld_report_work(skb);
>         mutex_unlock(&idev->mc_lock);
>
> -       if (!rework)
> -               in6_dev_put(idev);
> +       if (rework && queue_delayed_work(mld_wq, &idev->mc_report_work, 0))
> +               return;
> +
> +       in6_dev_put(idev);
>  }
>
>  static bool is_in(struct ifmcaddr6 *pmc, struct ip6_sf_list *psf, int type,
> --
> 2.17.1
>
