Message-ID: <CAD=FV=W0E2_2PXHedyiTeXbK323gDiSSJyx9n6CMX5D9oR7CNg@mail.gmail.com>
Date: Fri, 2 Oct 2020 11:04:46 -0700
From: Doug Anderson <dianders@...omium.org>
To: Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>
Cc: David Miller <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Bjorn Andersson <bjorn.andersson@...aro.org>,
netdev <netdev@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>,
Alex Elder <elder@...aro.org>
Subject: Re: [PATCH v2] net: qrtr: ns: Fix the incorrect usage of rcu_read_lock()
Hi,
On Fri, Oct 2, 2020 at 10:06 AM Manivannan Sadhasivam
<manivannan.sadhasivam@...aro.org> wrote:
>
> The kernel_sendmsg() API must not be called under rcu_read_lock(),
> since qrtr_sendmsg() takes lock_sock(), which can sleep. Hence, fix it
> by not holding the RCU read lock across the kernel_sendmsg() call.
>
> While at it, let's also use radix_tree_deref_retry() to validate the
> pointer returned by radix_tree_deref_slot(), and use
> radix_tree_iter_resume() to pause the iteration properly before
> releasing the lock, as suggested by Doug.
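For anyone following along, the resulting pattern is roughly the below.
This is just a sketch with placeholder names ("tree", "item", and
"might_sleep_op()" aren't from the patch):

	rcu_read_lock();
	radix_tree_for_each_slot(slot, &tree, &iter, 0) {
		item = radix_tree_deref_slot(slot);
		if (!item)
			continue;
		if (radix_tree_deref_retry(item)) {
			/* Raced with a tree modification; retry this index */
			slot = radix_tree_iter_retry(&iter);
			continue;
		}
		/* Pause the iteration so the lock can be dropped safely */
		slot = radix_tree_iter_resume(slot, &iter);
		rcu_read_unlock();

		might_sleep_op(item);	/* e.g. kernel_sendmsg() via qrtr_sendmsg() */

		rcu_read_lock();
	}
	rcu_read_unlock();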
>
> Fixes: a7809ff90ce6 ("net: qrtr: ns: Protect radix_tree_deref_slot() using rcu read locks")
> Reported-by: Doug Anderson <dianders@...omium.org>
> Tested-by: Alex Elder <elder@...aro.org>
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@...aro.org>
> ---
>
> Changes in v2:
>
> * Used radix_tree_deref_retry() and radix_tree_iter_resume() as
> suggested by Doug.
>
> net/qrtr/ns.c | 63 ++++++++++++++++++++++++++++++++++++++++++++++-----
> 1 file changed, 57 insertions(+), 6 deletions(-)
>
> diff --git a/net/qrtr/ns.c b/net/qrtr/ns.c
> index 934999b56d60..dadbe2885be2 100644
> --- a/net/qrtr/ns.c
> +++ b/net/qrtr/ns.c
> @@ -203,15 +203,24 @@ static int announce_servers(struct sockaddr_qrtr *sq)
> /* Announce the list of servers registered in this node */
> radix_tree_for_each_slot(slot, &node->servers, &iter, 0) {
> srv = radix_tree_deref_slot(slot);
> + if (!srv)
> + continue;
> + if (radix_tree_deref_retry(srv)) {
> + slot = radix_tree_iter_retry(&iter);
> + continue;
> + }
> + slot = radix_tree_iter_resume(slot, &iter);
> + rcu_read_unlock();
>
> ret = service_announce_new(sq, srv);
> if (ret < 0) {
> pr_err("failed to announce new service\n");
> - goto err_out;
> + return ret;
> }
> +
> + rcu_read_lock();
> }
>
> -err_out:
> rcu_read_unlock();
>
> return ret;
nit: you can go back to "return 0" and get rid of the "ret = 0" init at
the beginning of the function. The need to "return ret" and the 0 init
was introduced by your previous change because of the "goto err_out",
which you no longer have.
...this is true for all your functions, I believe.
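For illustration, the loop in announce_servers() could then end up
looking something like this (a rough sketch reconstructed from the hunk
above, assuming the rest of the function is unchanged):

	rcu_read_lock();
	/* Announce the list of servers registered in this node */
	radix_tree_for_each_slot(slot, &node->servers, &iter, 0) {
		srv = radix_tree_deref_slot(slot);
		if (!srv)
			continue;
		if (radix_tree_deref_retry(srv)) {
			slot = radix_tree_iter_retry(&iter);
			continue;
		}
		slot = radix_tree_iter_resume(slot, &iter);
		rcu_read_unlock();

		ret = service_announce_new(sq, srv);
		if (ret < 0) {
			pr_err("failed to announce new service\n");
			return ret;	/* direct return replaces "goto err_out" */
		}

		rcu_read_lock();
	}
	rcu_read_unlock();

	return 0;	/* "ret" no longer needs to be initialized to 0 */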
> @@ -571,16 +605,33 @@ static int ctrl_cmd_new_lookup(struct sockaddr_qrtr *from,
> rcu_read_lock();
> radix_tree_for_each_slot(node_slot, &nodes, &node_iter, 0) {
> node = radix_tree_deref_slot(node_slot);
> + if (!node)
> + continue;
> + if (radix_tree_deref_retry(node)) {
> + node_slot = radix_tree_iter_retry(&node_iter);
> + continue;
> + }
> + node_slot = radix_tree_iter_resume(node_slot, &node_iter);
>
> radix_tree_for_each_slot(srv_slot, &node->servers,
> &srv_iter, 0) {
> struct qrtr_server *srv;
>
> srv = radix_tree_deref_slot(srv_slot);
> + if (!srv)
> + continue;
> + if (radix_tree_deref_retry(srv)) {
> + srv_slot = radix_tree_iter_retry(&srv_iter);
> + continue;
> + }
> + srv_slot = radix_tree_iter_resume(srv_slot, &srv_iter);
> +
> if (!server_match(srv, &filter))
> continue;
>
nit: move the "srv_slot = radix_tree_iter_resume(srv_slot,
&srv_iter);" line here (after the !server_match() test) so you only
call it if you're doing the unlock?
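Concretely, something like this (just a sketch of the reordering):

		srv = radix_tree_deref_slot(srv_slot);
		if (!srv)
			continue;
		if (radix_tree_deref_retry(srv)) {
			srv_slot = radix_tree_iter_retry(&srv_iter);
			continue;
		}

		if (!server_match(srv, &filter))
			continue;

		/* Only pause the iterator when we actually drop the lock */
		srv_slot = radix_tree_iter_resume(srv_slot, &srv_iter);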
I'm not too worried about the nits, though it'd be nice to fix them.
Thus, I'll add:
Reviewed-by: Douglas Anderson <dianders@...omium.org>
...though I'll remind you that I'm a self-professed clueless person
about RCU and radix trees.
I haven't stress tested anything, but at least I no longer get any
warnings at bootup and my WiFi and modem still probe, so I guess:
Tested-by: Douglas Anderson <dianders@...omium.org>