Message-ID: <07f7e770-f78c-b272-f077-c238f1dc030b@huawei.com>
Date: Wed, 6 Nov 2024 15:05:02 +0800
From: Li Qiang <liqiang64@...wei.com>
To: <dust.li@...ux.alibaba.com>, <wenjia@...ux.ibm.com>, <jaka@...ux.ibm.com>,
	<alibuda@...ux.alibaba.com>, <tonylu@...ux.alibaba.com>,
	<guwen@...ux.alibaba.com>, <kuba@...nel.org>
CC: <linux-s390@...r.kernel.org>, <netdev@...r.kernel.org>,
	<linux-kernel@...r.kernel.org>, <luanjianhai@...wei.com>,
	<zhangxuzhou4@...wei.com>, <dengguangxing@...wei.com>, <gaochao24@...wei.com>
Subject: Re: [PATCH v2 net-next] net/smc: Optimize the search method of reused
 buf_desc



On 2024/11/5 22:44, Dust Li wrote:
> On 2024-11-05 11:19:38, liqiang wrote:
>> [...]
>> I tested the time-consuming comparison of this function
>> under multiple connections based on redis-benchmark
>> (test in smc loopback-ism mode):
>> The function 'smc_buf_get_slot' takes less time when a
>> new SMC link is established:
>> 1. 5us->100ns (when there are 200 active links);
>> 2. 30us->100ns (when there are 1000 active links).
>> [...]
>> -/* try to reuse a sndbuf or rmb description slot for a certain
>> - * buffer size; if not available, return NULL
>> - */
>> -static struct smc_buf_desc *smc_buf_get_slot(int compressed_bufsize,
>> -					     struct rw_semaphore *lock,
>> -					     struct list_head *buf_list)
>> +/* use lock less list to save and find reuse buf desc */
>> +static struct smc_buf_desc *smc_buf_get_slot_free(struct llist_head *buf_llist)
>> {
>> -	struct smc_buf_desc *buf_slot;
>> +	struct smc_buf_desc *buf_free;
>> +	struct llist_node *llnode;
>>
>> -	down_read(lock);
>> -	list_for_each_entry(buf_slot, buf_list, list) {
>> -		if (cmpxchg(&buf_slot->used, 0, 1) == 0) {
>> -			up_read(lock);
>> -			return buf_slot;
>> -		}
>> -	}
>> -	up_read(lock);
>> -	return NULL;
>> +	/* lock-less link list don't need an lock */
>> +	llnode = llist_del_first(buf_llist);
>> +	if (!llnode)
>> +		return NULL;
>> +	buf_free = llist_entry(llnode, struct smc_buf_desc, llist);
>> +	WRITE_ONCE(buf_free->used, 1);
>> +	return buf_free;
> 
> Sorry for the late reply.
> 
> It looks like this is not right here.
> 
> The rw_semaphore here is not used to protect against adding/deleting
> the buf_list since we don't even add/remove elements on the buf_list.
> The cmpxchg already makes sure only one will get an unused smc_buf_desc.

I first came up with this optimization because smc_buf_get_slot() has to
traverse all rmbs/sndbufs, which cover all active links, so the traversal is
both time-consuming and unnecessary.

Switching to an llist-based implementation guarantees that a free buf_slot
(one whose 'used' flag is 0) can be obtained directly every time, without
having to walk the rmbs/sndbufs list from its first element.
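
For the release side, the idea would be that when a connection gives its
buffer back, the descriptor is pushed onto the same lock-less list. A rough
sketch of that path (the helper name and the llist field name are
illustrative, not necessarily what the patch uses):

```
/* sketch: return a released buf_desc to the lock-less free list so the
 * next smc_buf_get_slot_free() can take it without walking rmbs/sndbufs
 */
static void smc_buf_put_slot_free(struct smc_buf_desc *buf_desc,
                                  struct llist_head *buf_llist)
{
        WRITE_ONCE(buf_desc->used, 0);
        llist_add(&buf_desc->llist, buf_llist);
}
```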

> 
> Removing the down_read()/up_read() would cause mapping/unmapping a link
> on the link group to race against the buf_slot alloc/free here. For example,
> _smcr_buf_map_lgr() takes the write lock of the rw_semaphore.

From my reading of the relevant code, the down_read()/up_read() section here
only looks up a buf_slot and flips its 'used' flag from 0 to 1, while the
'used' flag is read inside other down_write()/up_write() sections.
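
For context, here is a simplified paraphrase of the write-lock pattern you
mention (e.g. what _smcr_buf_map_lgr() does); this is only a sketch from my
reading, not the exact upstream code:

```
        down_write(lock);
        list_for_each_entry_safe(buf_desc, bf, buf_list, list) {
                if (!buf_desc->used)
                        continue;       /* only care about slots in use */
                /* ... map buf_desc to the new link ... */
        }
        up_write(lock);
```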

So I have two questions:

1. Is the read lock of the rw_semaphore really necessary here? (The read lock
   is mutually exclusive with the write lock taken elsewhere; what it should
   guarantee is that, inside the write-lock critical section, none of the
   'smc_buf_desc->used' states in rmbs/sndbufs change.)
2. If it is necessary, can we add it to the new implementation in this patch,
   like this?

```
{
        struct smc_buf_desc *buf_free;
        struct llist_node *llnode;

        /* the lock-less list itself does not need a lock */
        llnode = llist_del_first(buf_llist);
        if (!llnode)
                return NULL;
        buf_free = llist_entry(llnode, struct smc_buf_desc, llist);
        /* take the read lock only while marking the slot as used */
        down_read(lock);
        WRITE_ONCE(buf_free->used, 1);
        up_read(lock);
        return buf_free;
}
```
I think this still ensures that all 'used' flags stay unchanged while the
write lock is held.
In any case, using an llist to manage the free rmbs/sndbufs is a better choice
than traversing, and it does not conflict with either keeping or dropping the
rw_semaphore. :)

> 
> But I agree the lgr->rmbs_lock/sndbufs_lock should be improved. Would
> you like to dig into it and improve the usage of the lock here?
> 

Maybe I can try it; I will need to spend some time reading in detail how this
lock is used in every place it is taken.

Thanks for taking the time to read this email. ;-)

-- 
Best regards,
Li Qiang
