Message-ID: <a5473f69-5404-4c38-85d9-ca91c5160361@suse.de>
Date: Thu, 18 Jul 2024 08:42:31 +0200
From: Hannes Reinecke <hare@...e.de>
To: Sagi Grimberg <sagi@...mberg.me>, Hannes Reinecke <hare@...nel.org>,
 Christoph Hellwig <hch@....de>, netdev@...r.kernel.org
Cc: Keith Busch <kbusch@...nel.org>, linux-nvme@...ts.infradead.org
Subject: Re: [PATCH 6/8] nvme-tcp: reduce callback lock contention

On 7/17/24 23:19, Sagi Grimberg wrote:
> 
> 
> On 16/07/2024 10:36, Hannes Reinecke wrote:
>> From: Hannes Reinecke <hare@...e.de>
>>
>> We have heavily queued tx and rx flows, so callbacks might happen
>> at the same time. As the callbacks influence the state machine we
>> really should remove contention here to not impact I/O performance.
>>
>> Signed-off-by: Hannes Reinecke <hare@...nel.org>
>> ---
>>   drivers/nvme/host/tcp.c | 14 ++++++++------
>>   1 file changed, 8 insertions(+), 6 deletions(-)
>>
>> diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
>> index a758fbb3f9bb..9634c16d7bc0 100644
>> --- a/drivers/nvme/host/tcp.c
>> +++ b/drivers/nvme/host/tcp.c
>> @@ -1153,28 +1153,28 @@ static void nvme_tcp_data_ready(struct sock *sk)
>>       trace_sk_data_ready(sk);
>> -    read_lock_bh(&sk->sk_callback_lock);
>> -    queue = sk->sk_user_data;
>> +    rcu_read_lock();
>> +    queue = rcu_dereference_sk_user_data(sk);
>>       if (likely(queue && queue->rd_enabled) &&
>>           !test_bit(NVME_TCP_Q_POLLING, &queue->flags)) {
>>           queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
>>           queue->data_ready_cnt++;
>>       }
>> -    read_unlock_bh(&sk->sk_callback_lock);
>> +    rcu_read_unlock();
> 
> Umm, this looks dangerous...
> 
> Please give a concrete (numeric) justification for this change, and 
> preferably a big fat comment
> on why it is safe to do (for either .data_ready or .write_space).
> 
> Is there any precedence of another tcp ulp that does this? I'd like to 
> have the netdev folks review this change. CC'ing netdev.
> 
The reasoning here is that the queue itself (and, with that, the workqueue
element) will _not_ be deleted once we set 'sk_user_data' to NULL.

The shutdown sequence is:

         kernel_sock_shutdown(queue->sock, SHUT_RDWR);
         nvme_tcp_restore_sock_ops(queue);
         cancel_work_sync(&queue->io_work);

So first we shut down the socket (which cancels all I/O calls in
io_work), then we restore the socket callbacks.
As these are RCU-protected, I'm calling synchronize_rcu() to
ensure all callbacks have left the RCU read-side critical section
on exit.
As a final step we cancel all work, i.e. we ensure that any action
triggered by the callbacks has completed.

But sure, comment is fine.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@...e.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich

