Message-Id: <fa80bc9f24e40e1a7a7fa1452330b7f0b7d6e1fe.1528194606.git.pabeni@redhat.com>
Date: Tue, 5 Jun 2018 12:32:33 +0200
From: Paolo Abeni <pabeni@...hat.com>
To: netdev@...r.kernel.org
Cc: "David S. Miller" <davem@...emloft.net>,
Tom Herbert <tom@...ntonium.net>,
Kirill Tkhai <ktkhai@...tuozzo.com>
Subject: [RFC PATCH] kcm: hold rx mux lock when updating the receive queue.
Currently kcm holds both the RX mux lock and the socket lock when
updating the sk receive queue, except in some notable cases:
- kcm_rfree holds only the RX mux lock
- kcm_recvmsg holds only the socket lock
As a result, there are possible races which can cause receive queue
corruption, as reported by syzbot.
Since we can't acquire the socket lock in kcm_rfree, let's use
the RX mux lock to protect the receive queue update in kcm_recvmsg,
too. Also, let's add a comment noting which locking schema is in use.
Fixes: ab7ac4eb9832 ("kcm: Kernel Connection Multiplexor module")
Reported-and-tested-by: syzbot+278279efdd2730dd14bf@...kaller.appspotmail.com
Signed-off-by: Paolo Abeni <pabeni@...hat.com>
---
This is an RFC, since I'm really new to this area; anyway, syzbot
reported success when testing the proposed fix.
This is very likely a scenario where the hopefully upcoming
skb->{next,prev} to list_head conversion would have helped a lot, thanks
to list poisoning and list debugging.
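For illustration only, below is a minimal userspace model of the locking
schema described above. The names (mux_rx_lock, sock_lock, the *_path()
helpers) and the lock ordering are assumptions of the sketch, not the
kernel code: the point is only that every receive queue update happens
under the RX-mux-lock stand-in, while the rfree-like path, which cannot
take the socket lock, takes just that lock. Build with -pthread; dropping
the mux_rx_lock lock/unlock pair in recvmsg_path() models the reported
race.

/*
 * Userspace model of the locking schema above; hypothetical names,
 * not kernel code. "mux_rx_lock" stands in for the RX mux lock and
 * "sock_lock" for the socket lock. The invariant is that every queue
 * update happens under mux_rx_lock, so the rfree-like path (which can
 * only take that lock) cannot race with the recvmsg-like path.
 */
#include <pthread.h>
#include <stdio.h>

struct node {
	struct node *next, *prev;
};

static struct node queue = { &queue, &queue };	/* models sk_receive_queue */
static pthread_mutex_t mux_rx_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t sock_lock = PTHREAD_MUTEX_INITIALIZER;

static void enqueue_tail(struct node *n)
{
	n->prev = queue.prev;
	n->next = &queue;
	queue.prev->next = n;
	queue.prev = n;
}

static void unlink_node(struct node *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
	n->next = n->prev = NULL;
}

/* Like kcm_queue_rcv_skb(): both locks held around the queue update. */
static void queue_rcv_path(struct node *n)
{
	pthread_mutex_lock(&sock_lock);
	pthread_mutex_lock(&mux_rx_lock);
	enqueue_tail(n);
	pthread_mutex_unlock(&mux_rx_lock);
	pthread_mutex_unlock(&sock_lock);
}

/* Like kcm_rfree(): the socket lock is unavailable, only the mux lock. */
static void rfree_path(struct node *n)
{
	pthread_mutex_lock(&mux_rx_lock);
	unlink_node(n);
	pthread_mutex_unlock(&mux_rx_lock);
}

/*
 * Like kcm_recvmsg() after this patch: the unlink is also done under the
 * mux lock. Removing the mux_rx_lock lock/unlock pair here models the
 * original race against rfree_path().
 */
static void recvmsg_path(struct node *n)
{
	pthread_mutex_lock(&sock_lock);
	pthread_mutex_lock(&mux_rx_lock);
	unlink_node(n);
	pthread_mutex_unlock(&mux_rx_lock);
	pthread_mutex_unlock(&sock_lock);
}

int main(void)
{
	struct node a, b;

	queue_rcv_path(&a);
	queue_rcv_path(&b);
	recvmsg_path(&a);
	rfree_path(&b);
	printf("queue empty: %d\n",
	       queue.next == &queue && queue.prev == &queue);
	return 0;
}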
---
net/kcm/kcmsock.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
index d3601d421571..95e1d95ab24a 100644
--- a/net/kcm/kcmsock.c
+++ b/net/kcm/kcmsock.c
@@ -188,6 +188,7 @@ static void kcm_rfree(struct sk_buff *skb)
}
}
+/* RX mux lock held */
static int kcm_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
{
struct sk_buff_head *list = &sk->sk_receive_queue;
@@ -1157,7 +1158,9 @@ static int kcm_recvmsg(struct socket *sock, struct msghdr *msg,
/* Finished with message */
msg->msg_flags |= MSG_EOR;
KCM_STATS_INCR(kcm->stats.rx_msgs);
+ spin_lock_bh(&kcm->mux->rx_lock);
skb_unlink(skb, &sk->sk_receive_queue);
+ spin_unlock_bh(&kcm->mux->rx_lock);
kfree_skb(skb);
}
}
--
2.17.1