Message-ID: <bac390f1-ef6d-317f-a5e1-1c0c5e4e4535@gmail.com>
Date: Thu, 18 Jan 2024 16:24:52 +0900
From: Taehee Yoo <ap420073@...il.com>
To: Hangbin Liu <liuhangbin@...il.com>,
 Nikita Zhandarovich <n.zhandarovich@...tech.ru>
Cc: "David S. Miller" <davem@...emloft.net>, David Ahern
 <dsahern@...nel.org>, Eric Dumazet <edumazet@...gle.com>,
 Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
 netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
 syzbot+a9400cabb1d784e49abf@...kaller.appspotmail.com
Subject: Re: [PATCH net] ipv6: mcast: fix data-race in ipv6_mc_down /
 mld_ifc_work



On 1/18/24 11:45, Hangbin Liu wrote:

Hi Hangbin,

 > On Wed, Jan 17, 2024 at 09:21:02AM -0800, Nikita Zhandarovich wrote:
 >> idev->mc_ifc_count can be written over without proper locking.
 >>
 >> Originally found by syzbot [1], fix this issue by encapsulating calls
 >> to mld_ifc_stop_work() (and mld_gq_stop_work() for good measure) with
 >> mutex_lock() and mutex_unlock() accordingly as these functions
 >> should only be called with mc_lock per their declarations.
 >>
 >> [1]
 >> BUG: KCSAN: data-race in ipv6_mc_down / mld_ifc_work
 >>
 >> write to 0xffff88813a80c832 of 1 bytes by task 3771 on cpu 0:
 >> mld_ifc_stop_work net/ipv6/mcast.c:1080 [inline]
 >> ipv6_mc_down+0x10a/0x280 net/ipv6/mcast.c:2725
 >> addrconf_ifdown+0xe32/0xf10 net/ipv6/addrconf.c:3949
 >> addrconf_notify+0x310/0x980
 >> notifier_call_chain kernel/notifier.c:93 [inline]
 >> raw_notifier_call_chain+0x6b/0x1c0 kernel/notifier.c:461
 >> __dev_notify_flags+0x205/0x3d0
 >> dev_change_flags+0xab/0xd0 net/core/dev.c:8685
 >> do_setlink+0x9f6/0x2430 net/core/rtnetlink.c:2916
 >> rtnl_group_changelink net/core/rtnetlink.c:3458 [inline]
 >> __rtnl_newlink net/core/rtnetlink.c:3717 [inline]
 >> rtnl_newlink+0xbb3/0x1670 net/core/rtnetlink.c:3754
 >> rtnetlink_rcv_msg+0x807/0x8c0 net/core/rtnetlink.c:6558
 >> netlink_rcv_skb+0x126/0x220 net/netlink/af_netlink.c:2545
 >> rtnetlink_rcv+0x1c/0x20 net/core/rtnetlink.c:6576
 >> netlink_unicast_kernel net/netlink/af_netlink.c:1342 [inline]
 >> netlink_unicast+0x589/0x650 net/netlink/af_netlink.c:1368
 >> netlink_sendmsg+0x66e/0x770 net/netlink/af_netlink.c:1910
 >> ...
 >>
 >> write to 0xffff88813a80c832 of 1 bytes by task 22 on cpu 1:
 >> mld_ifc_work+0x54c/0x7b0 net/ipv6/mcast.c:2653
 >> process_one_work kernel/workqueue.c:2627 [inline]
 >> process_scheduled_works+0x5b8/0xa30 kernel/workqueue.c:2700
 >> worker_thread+0x525/0x730 kernel/workqueue.c:2781
 >> ...
 >>
 >> Fixes: 2d9a93b4902b ("mld: convert from timer to delayed work")
 >> Reported-by: syzbot+a9400cabb1d784e49abf@...kaller.appspotmail.com
 >> Link: https://lore.kernel.org/all/000000000000994e09060ebcdffb@google.com/
 >> Signed-off-by: Nikita Zhandarovich <n.zhandarovich@...tech.ru>
 >> ---
 >> net/ipv6/mcast.c | 4 ++++
 >> 1 file changed, 4 insertions(+)
 >>
 >> diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
 >> index b75d3c9d41bb..bc6e0a0bad3c 100644
 >> --- a/net/ipv6/mcast.c
 >> +++ b/net/ipv6/mcast.c
 >> @@ -2722,8 +2722,12 @@ void ipv6_mc_down(struct inet6_dev *idev)
 >>  	synchronize_net();
 >>  	mld_query_stop_work(idev);
 >>  	mld_report_stop_work(idev);
 >> +
 >> +	mutex_lock(&idev->mc_lock);
 >>  	mld_ifc_stop_work(idev);
 >>  	mld_gq_stop_work(idev);
 >> +	mutex_unlock(&idev->mc_lock);
 >> +
 >>  	mld_dad_stop_work(idev);
 >>  }
 >>
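(For context on "per their declarations": in net/ipv6/mcast.c both stop helpers
carry a "called with mc_lock" comment, roughly as below, so the added locking
matches their documented requirement. Quoted from memory, not verbatim.)

/* called with mc_lock */
static void mld_gq_stop_work(struct inet6_dev *idev)
{
	idev->mc_gq_running = 0;
	if (cancel_delayed_work(&idev->mc_gq_work))
		__in6_dev_put(idev);
}

/* called with mc_lock */
static void mld_ifc_stop_work(struct inet6_dev *idev)
{
	idev->mc_ifc_count = 0;
	if (cancel_delayed_work(&idev->mc_ifc_work))
		__in6_dev_put(idev);
}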
 >
 > I saw mld_process_v1() also cancels these works when changing to v1 mode.
 > Should we also add a lock there?

I think mld_process_v1() doesn't have a problem, because it is always
called under mc_lock by mld_query_work():

mld_query_work()
    mutex_lock(&idev->mc_lock);
    __mld_query_work();
        mld_process_v1();
    mutex_unlock(&idev->mc_lock);
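
For reference, the relevant part of mld_query_work() looks roughly like this
(paraphrased from net/ipv6/mcast.c, not verbatim): the queued queries are
processed entirely inside the mc_lock critical section, so every path into
mld_process_v1() already holds the mutex.

	/* inside mld_query_work(), roughly: */
	mutex_lock(&idev->mc_lock);
	while ((skb = __skb_dequeue(&q)))
		__mld_query_work(skb);	/* ends up calling mld_process_v1() */
	mutex_unlock(&idev->mc_lock);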

 >
 > Thanks
 > Hangbin

Thanks a lot,
Taehee Yoo
