Message-Id: <20171106105113.20476-1-fw@strlen.de>
Date: Mon, 6 Nov 2017 11:51:05 +0100
From: Florian Westphal <fw@...len.de>
To: <netdev@...r.kernel.org>
Cc: peterz@...radead.org
Subject: rtnetlink: fix dump+module unload races, take 2
Peter Zijlstra reported:
----------
I just ran across commit:
019a316992ee ("rtnetlink: add reference counting to prevent module unload while dump is in progress")
And that commit is _completely_ broken.
1) it is not in fact a refcount, so using refcount_t is silly
2) there is a distinct lack of memory barriers, so we can easily
observe the decrement while the msg_handler is still in progress.
3) waiting with a schedule()/yield() loop is complete crap and subject
to live-locks; imagine doing that rtnl_unregister_all() from an RT task.
----------
Moreover, the commit doesn't work.
The problem is that netlink keeps the dump function address in the
control block so it can resume dumps: when netlink_dump_start()
returns, the dump may not be finished yet; it can be restarted on the
next recv.
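For illustration, a condensed sketch of that resume path (paraphrased
from net/netlink/af_netlink.c, not verbatim):

    /* initial request: netlink stashes the dump function address */
    cb = &nlk->cb;
    cb->dump = control->dump;
    cb->done = control->done;

    /* later recv: netlink_dump() resumes via the stored pointer */
    len = cb->dump(skb, cb);

So the module providing ->dump has to stay loaded until the dump has
finished, regardless of what rtnl_register()/rtnl_unregister() do.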
netlink already has a mechanism to prevent module unload while
dumps are in progress: the netlink_dump_control struct contains a
pointer to struct module, and netlink takes care of module_get/put
as needed if that pointer is set.
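A caller opts in by setting the .module field when starting the dump;
a minimal sketch (my_dump_fn is a stand-in for a real dump callback):

    struct netlink_dump_control c = {
            .dump   = my_dump_fn,
            .module = THIS_MODULE, /* netlink does try_module_get() */
    };
    err = netlink_dump_start(ssk, skb, nlh, &c);

With .module set, the netlink core pins the module for as long as the
dump can still be resumed and drops the reference when it is done.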
To make use of this, however, we need to store a pointer to the module
that registered the particular dump function. This series does that.
The first patch reverts the broken commit to start from scratch.
The second patch adds a new rtnl_register_module() that takes a
pointer to the owning module; a sketch of the interface follows below.
The remaining patches convert the affected modules.
IPv6 is not converted, as the ipv6 module cannot be unloaded.
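For reference, the new helper is expected to look roughly like this
(a sketch of the intended interface, see patch 2 for the real thing;
br_dump_fn is a made-up handler name):

    int rtnl_register_module(struct module *owner, int protocol,
                             int msgtype, rtnl_doit_func doit,
                             rtnl_dumpit_func dumpit,
                             unsigned int flags);

    /* a converted module then registers like: */
    rtnl_register_module(THIS_MODULE, PF_BRIDGE, RTM_GETLINK,
                         NULL, br_dump_fn, 0);

The owner pointer is what ends up in netlink_dump_control->module.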
One alternative to this series would be to decrement the
rtnl_msg_handlers_ref count via netlink_dump_control->done(), but that
doesn't address any of Peter's points.
I am submitting this for -next for two reasons:
1. This problem has existed for so long that it's evidently not a big deal.
2. I think it's a bad idea to cram this in at the last second of the
   current development cycle.
ps: I will be travelling, replies will be delayed.