Message-ID: <20231206105251.GA7219@ubuntu>
Date: Wed, 6 Dec 2023 02:52:51 -0800
From: Hyunwoo Kim <v4bel@...ori.io>
To: Eric Dumazet <edumazet@...gle.com>
Cc: ralf@...ux-mips.org, imv4bel@...il.com, davem@...emloft.net,
kuba@...nel.org, pabeni@...hat.com, linux-hams@...r.kernel.org,
netdev@...r.kernel.org, v4bel@...ori.io
Subject: Re: [PATCH v2] net/rose: Fix Use-After-Free in rose_ioctl
Dear Eric,
On Wed, Dec 06, 2023 at 11:33:15AM +0100, Eric Dumazet wrote:
> On Wed, Dec 6, 2023 at 5:13 AM Hyunwoo Kim <v4bel@...ori.io> wrote:
> >
> > Because rose_ioctl() accesses sk->sk_receive_queue
> > without holding sk->sk_receive_queue.lock, it can
> > race with rose_accept().
> > A use-after-free on the skb occurs through the following race:
> > ```
> > rose_ioctl() -> skb_peek()
> > rose_accept() -> skb_dequeue() -> kfree_skb()
> > ```
> > Add sk->sk_receive_queue.lock to rose_ioctl() to fix this issue.
> >
>
> Please add a Fixes: tag
>
> > Signed-off-by: Hyunwoo Kim <v4bel@...ori.io>
> > ---
> > v1 -> v2: Use sk->sk_receive_queue.lock instead of lock_sock.
> > ---
> > net/rose/af_rose.c | 2 ++
> > 1 file changed, 2 insertions(+)
> >
> > diff --git a/net/rose/af_rose.c b/net/rose/af_rose.c
> > index 0cc5a4e19900..841c238de222 100644
> > --- a/net/rose/af_rose.c
> > +++ b/net/rose/af_rose.c
> > @@ -1316,8 +1316,10 @@ static int rose_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
> >          struct sk_buff *skb;
> >          long amount = 0L;
> >          /* These two are safe on a single CPU system as only user tasks fiddle here */
> > +        spin_lock(&sk->sk_receive_queue.lock);
>
> You need interrupt safety here.
>
> sk_receive_queue can be fed from interrupt context; that would potentially deadlock.
If I understand correctly, since sk_receive_queue can be filled from
interrupt (softirq) context, taking the lock with a plain spin_lock()
from process context could deadlock if that interrupt runs on the same
CPU while the lock is held. I would like to change spin_lock() to
spin_lock_irqsave(); is this okay?
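
For reference, an untested sketch of how the hunk would look with the
irqsave variant (the `flags` local is new; `skb` and `amount` are the
existing locals in rose_ioctl()):
```
unsigned long flags;

/* Disable local interrupts while holding the queue lock, so a
 * softirq feeding sk_receive_queue cannot deadlock against us. */
spin_lock_irqsave(&sk->sk_receive_queue.lock, flags);
if ((skb = skb_peek(&sk->sk_receive_queue)) != NULL)
	amount = skb->len;
spin_unlock_irqrestore(&sk->sk_receive_queue.lock, flags);
```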
Regards,
Hyunwoo Kim
>
> >          if ((skb = skb_peek(&sk->sk_receive_queue)) != NULL)
> >              amount = skb->len;
> > +        spin_unlock(&sk->sk_receive_queue.lock);
> >          return put_user(amount, (unsigned int __user *) argp);
> >      }
> >
> > --
> > 2.25.1
> >