Message-ID: <7846613b-f7c7-430e-a453-e0023a1f5667@grimberg.me>
Date: Sun, 21 Jul 2024 14:46:34 +0300
From: Sagi Grimberg <sagi@...mberg.me>
To: Hannes Reinecke <hare@...e.de>, Hannes Reinecke <hare@...nel.org>,
 Christoph Hellwig <hch@....de>, netdev@...r.kernel.org
Cc: Keith Busch <kbusch@...nel.org>, linux-nvme@...ts.infradead.org
Subject: Re: [PATCH 6/8] nvme-tcp: reduce callback lock contention

On 18/07/2024 9:42, Hannes Reinecke wrote:
> On 7/17/24 23:19, Sagi Grimberg wrote:
>>
>>
>> On 16/07/2024 10:36, Hannes Reinecke wrote:
>>> From: Hannes Reinecke <hare@...e.de>
>>>
>>> We have heavily queued tx and rx flows, so callbacks might fire
>>> concurrently. As the callbacks drive the state machine, we really
>>> should remove the lock contention here to avoid impacting I/O
>>> performance.
>>>
>>> Signed-off-by: Hannes Reinecke <hare@...nel.org>
>>> ---
>>>   drivers/nvme/host/tcp.c | 14 ++++++++------
>>>   1 file changed, 8 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
>>> index a758fbb3f9bb..9634c16d7bc0 100644
>>> --- a/drivers/nvme/host/tcp.c
>>> +++ b/drivers/nvme/host/tcp.c
>>> @@ -1153,28 +1153,28 @@ static void nvme_tcp_data_ready(struct sock *sk)
>>>       trace_sk_data_ready(sk);
>>> -    read_lock_bh(&sk->sk_callback_lock);
>>> -    queue = sk->sk_user_data;
>>> +    rcu_read_lock();
>>> +    queue = rcu_dereference_sk_user_data(sk);
>>>       if (likely(queue && queue->rd_enabled) &&
>>>           !test_bit(NVME_TCP_Q_POLLING, &queue->flags)) {
>>>           queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
>>>           queue->data_ready_cnt++;
>>>       }
>>> -    read_unlock_bh(&sk->sk_callback_lock);
>>> +    rcu_read_unlock();
>>
>> Umm, this looks dangerous...
>>
>> Please give a concrete (numeric) justification for this change, and
>> preferably a big fat comment on why it is safe to do (for either
>> .data_ready or .write_space).
>>
>> Is there any precedent of another tcp ulp doing this? I'd like the
>> netdev folks to review this change. CC'ing netdev.
>>
> The reasoning here is that the queue itself (and with it, the work
> item) will _not_ be deleted once we set 'sk_user_data' to NULL.
>
> The shutdown sequence is:
>
>         kernel_sock_shutdown(queue->sock, SHUT_RDWR);
>         nvme_tcp_restore_sock_ops(queue);
>         cancel_work_sync(&queue->io_work);
>
> So first we shut down the socket (which cancels all I/O calls in
> io_work), then we restore the socket callbacks.
> As the callbacks are rcu protected, I'm calling synchronize_rcu() to
> ensure all of them have left the rcu read-side critical section on
> exit.
> As a final step we cancel all work, i.e. we ensure that any action
> triggered by the callbacks has completed.
>
> But sure, a comment is fine.
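
If I read this correctly, the ordering you are relying on is roughly
the following (a sketch of my understanding of your description;
exactly where the synchronize_rcu() sits relative to
nvme_tcp_restore_sock_ops() is my assumption):

        /* 1. shut down the socket, failing I/O in flight in io_work */
        kernel_sock_shutdown(queue->sock, SHUT_RDWR);

        /* 2. restore the original callbacks and clear sk_user_data,
         * then wait for any callback still inside its rcu read-side
         * critical section to drain
         */
        nvme_tcp_restore_sock_ops(queue);
        synchronize_rcu();

        /* 3. any work the callbacks queued before the grace period
         * has either completed or is flushed here
         */
        cancel_work_sync(&queue->io_work);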

I suggest that you audit all the accessors of this lock in the
networking subsystem before determining that it can be safely converted
to an RCU read-side critical section. I suspect that the underlying
network stack assumes this lock is held when the callback is invoked.

$ grep -rIn sk_callback_lock net/ | wc -l
122
$ grep -rIn sk_callback_lock kernel/  | wc -l
15
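
One concrete example off the top of my head (quoting include/net/sock.h
from memory, so double-check me): sock_orphan() takes the write side of
this lock when the socket is detached, and the read_lock_bh() in the
callbacks is what excludes it:

        static inline void sock_orphan(struct sock *sk)
        {
                write_lock_bh(&sk->sk_callback_lock);
                sock_set_flag(sk, SOCK_DEAD);
                sk_set_socket(sk, NULL);
                sk->sk_wq = NULL;
                write_unlock_bh(&sk->sk_callback_lock);
        }

With a plain rcu_read_lock() in .data_ready, nothing excludes these
stores anymore, so a callback could observe a half-orphaned socket
unless something else guarantees the ordering.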
