Message-Id: <20220822051907.104443-1-yin31149@gmail.com>
Date:   Mon, 22 Aug 2022 13:19:07 +0800
From:   Hawkins Jiawei <yin31149@...il.com>
To:     khalid.masum.92@...il.com
Cc:     davem@...emloft.net, dhowells@...hat.com, edumazet@...gle.com,
        kuba@...nel.org, linux-afs@...ts.infradead.org,
        linux-kernel-mentees@...ts.linuxfoundation.org,
        linux-kernel@...r.kernel.org, marc.dionne@...istor.com,
        netdev@...r.kernel.org, pabeni@...hat.com, paskripkin@...il.com,
        syzbot+7f0483225d0c94cb3441@...kaller.appspotmail.com,
        syzkaller-bugs@...glegroups.com, yin31149@...il.com
Subject: Re: [PATCH] rxrpc: fix bad unlock balance in rxrpc_do_sendmsg

On Mon, 22 Aug 2022 at 00:42, Khalid Masum <khalid.masum.92@...il.com> wrote:
>
> On Sun, Aug 21, 2022 at 9:58 PM Khalid Masum <khalid.masum.92@...il.com> wrote:
> >
> > On Sun, Aug 21, 2022 at 6:58 PM Hawkins Jiawei <yin31149@...il.com> wrote:
> > >
> > The interruptible version fails to acquire the lock. So why is it okay to
> > force it to acquire the mutex_lock since we are in the interrupt context?
>
> Sorry, I mean: won't the function lose its ability to be interruptible,
> since we are forcing it to acquire the lock?
> > >                         return sock_intr_errno(*timeo);
> > > +               }
> > >         }
> > >  }
> >
> > thanks,
> >   -- Khalid Masum
Hi Khalid,

In my opinion, the _intr suffix in rxrpc_wait_for_tx_window_intr() seems
to mean that the loop in this function should be interrupted when a
signal arrives (please correct me if I am wrong):
> /*
>  * Wait for space to appear in the Tx queue or a signal to occur.
>  */
> static int rxrpc_wait_for_tx_window_intr(struct rxrpc_sock *rx,
> 					 struct rxrpc_call *call,
> 					 long *timeo)
> {
> 	for (;;) {
> 		set_current_state(TASK_INTERRUPTIBLE);
> 		if (rxrpc_check_tx_space(call, NULL))
> 			return 0;
> 
> 		if (call->state >= RXRPC_CALL_COMPLETE)
> 			return call->error;
> 
> 		if (signal_pending(current))
> 			return sock_intr_errno(*timeo);
> 
> 		trace_rxrpc_transmit(call, rxrpc_transmit_wait);
> 		mutex_unlock(&call->user_mutex);
> 		*timeo = schedule_timeout(*timeo);
> 		if (mutex_lock_interruptible(&call->user_mutex) < 0)
> 			return sock_intr_errno(*timeo);
> 	}
> }

To be more specific, when a signal arrives,
rxrpc_wait_for_tx_window_intr() should notice it while executing
mutex_lock_interruptible(), which will return a non-zero value.
rxrpc_wait_for_tx_window_intr() is then interrupted, meaning the
function returns.
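
For reference, mutex_lock_interruptible() returns 0 once the lock is
held and -EINTR if a signal is delivered while the task sleeps on the
lock. A minimal, generic usage sketch (example_mutex and example_wait()
are illustrative names, not rxrpc code):

	#include <linux/mutex.h>

	static DEFINE_MUTEX(example_mutex);	/* illustrative mutex */

	static int example_wait(void)
	{
		/* Returns -EINTR if interrupted by a signal while
		 * waiting; in that case the lock is NOT held and the
		 * caller must not unlock it.
		 */
		if (mutex_lock_interruptible(&example_mutex) < 0)
			return -EINTR;

		/* ... critical section ... */
		mutex_unlock(&example_mutex);
		return 0;
	}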

So I think re-acquiring the lock with mutex_lock() would not affect the
function's ability to be interrupted (please correct me if I am wrong).

What's more, when the kernel returns from
rxrpc_wait_for_tx_window_intr(), it only handles the error case before
unlocking call->user_mutex, which won't take long. So I think it seems
OK to acquire call->user_mutex when rxrpc_wait_for_tx_window_intr() is
interrupted by a signal.
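
For clarity, the change I am proposing looks roughly like this (a
sketch reconstructed from the fragment quoted above, so treat it as
illustrative; the point is re-acquiring the mutex on the signal path so
the caller's unlock stays balanced):

		trace_rxrpc_transmit(call, rxrpc_transmit_wait);
		mutex_unlock(&call->user_mutex);
		*timeo = schedule_timeout(*timeo);
		if (mutex_lock_interruptible(&call->user_mutex) < 0) {
			/* Interrupted by a signal: re-acquire the mutex
			 * unconditionally so that rxrpc_do_sendmsg() can
			 * still unlock it on the way out, then report
			 * the signal to the caller.
			 */
			mutex_lock(&call->user_mutex);
			return sock_intr_errno(*timeo);
		}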


On Mon, 22 Aug 2022 at 03:18, Khalid Masum <khalid.masum.92@...il.com> wrote:
>
> Maybe we do not need to lock since no other timer_schedule needs
> it.
>
> Test if this fixes the issue.
> ---
> diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c
> index 1d38e279e2ef..640e2ab2cc35 100644
> --- a/net/rxrpc/sendmsg.c
> +++ b/net/rxrpc/sendmsg.c
> @@ -51,10 +51,8 @@ static int rxrpc_wait_for_tx_window_intr(struct rxrpc_sock *rx,
>                         return sock_intr_errno(*timeo);
>
>                 trace_rxrpc_transmit(call, rxrpc_transmit_wait);
> -               mutex_unlock(&call->user_mutex);
>                 *timeo = schedule_timeout(*timeo);
> -               if (mutex_lock_interruptible(&call->user_mutex) < 0)
> -                       return sock_intr_errno(*timeo);
> +               return sock_intr_errno(*timeo);
>         }
>  }
>
> --
> 2.37.1
>

If it is still improper to fix this bug by acquiring call->user_mutex,
I wonder whether it would be better to check, in rxrpc_do_sendmsg(),
whether the lock is still held before unlocking it, because the kernel
always unlocks call->user_mutex at the end of rxrpc_do_sendmsg():
> int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len)
> 	__releases(&rx->sk.sk_lock.slock)
> 	__releases(&call->user_mutex)
> {
> 	...
> out_put_unlock:
> 	mutex_unlock(&call->user_mutex);
> error_put:
> 	rxrpc_put_call(call, rxrpc_call_put);
> 	_leave(" = %d", ret);
> 	return ret;
> 
> error_release_sock:
> 	release_sock(&rx->sk);
> 	return ret;
> }
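
One way to do that check (a hypothetical sketch; the dropped_lock flag
is illustrative and not an existing parameter) would be to have the
wait helper report whether it gave up the mutex, and only unlock in
rxrpc_do_sendmsg() when it did not:

	/* Hypothetical sketch: thread a flag out of the wait helper. */
	static int rxrpc_wait_for_tx_window_intr(struct rxrpc_sock *rx,
						 struct rxrpc_call *call,
						 long *timeo,
						 bool *dropped_lock)
	{
		for (;;) {
			/* ... same checks as today ... */
			mutex_unlock(&call->user_mutex);
			*timeo = schedule_timeout(*timeo);
			if (mutex_lock_interruptible(&call->user_mutex) < 0) {
				*dropped_lock = true; /* caller must not unlock */
				return sock_intr_errno(*timeo);
			}
		}
	}

	/* ... and at the end of rxrpc_do_sendmsg(): */
	out_put_unlock:
		if (!dropped_lock)
			mutex_unlock(&call->user_mutex);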
