Date:   Tue, 1 Mar 2022 18:53:32 +0800
From:   "dust.li" <dust.li@...ux.alibaba.com>
To:     Leon Romanovsky <leon@...nel.org>
Cc:     Karsten Graul <kgraul@...ux.ibm.com>,
        Tony Lu <tonylu@...ux.alibaba.com>,
        Guangguan Wang <guangguan.wang@...ux.alibaba.com>,
        davem@...emloft.net, kuba@...nel.org, netdev@...r.kernel.org,
        linux-s390@...r.kernel.org, linux-rdma@...r.kernel.org
Subject: Re: [PATCH net-next 6/7] net/smc: don't req_notify until all CQEs
 drained

On Tue, Mar 01, 2022 at 12:14:15PM +0200, Leon Romanovsky wrote:
>On Tue, Mar 01, 2022 at 05:44:01PM +0800, Dust Li wrote:
>> When we are handling a softirq workload, an enabled hardirq may
>> interrupt the current softirq routine and then try to raise the
>> softirq again. This only wastes CPU cycles and brings no real
>> gain.
>> 
>> Since IB_CQ_REPORT_MISSED_EVENTS already makes sure that when
>> ib_req_notify_cq() returns 0 it is safe to wait for the next
>> event, there is no need to poll the CQ again in this case.
>> 
>> This patch disables the hardirq during softirq processing and
>> re-arms the CQ after the softirq is done, somewhat like NAPI.
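The NAPI-like pattern being described is roughly the following (a
minimal sketch with placeholder names such as my_cq, MY_BUDGET and
my_process_cqe; the actual change is in the diff below):

	static void my_cq_tasklet_fn(struct tasklet_struct *t)
	{
		struct ib_wc wc[MY_BUDGET];
		int i, rc;

	again:
		/* Poll with completion interrupts still disarmed. */
		do {
			rc = ib_poll_cq(my_cq, MY_BUDGET, wc);
			for (i = 0; i < rc; i++)
				my_process_cqe(&wc[i]);
		} while (rc == MY_BUDGET); /* a short poll means the CQ is drained */

		/* Re-arm only after the CQ is drained.  With
		 * IB_CQ_REPORT_MISSED_EVENTS, a return value > 0 means
		 * completions slipped in between the last poll and the
		 * re-arm, so poll again; a return of 0 means it is safe
		 * to wait for the next event.
		 */
		if (ib_req_notify_cq(my_cq, IB_CQ_NEXT_COMP |
					    IB_CQ_REPORT_MISSED_EVENTS) > 0)
			goto again;
	}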
>> 
>> Co-developed-by: Guangguan Wang <guangguan.wang@...ux.alibaba.com>
>> Signed-off-by: Guangguan Wang <guangguan.wang@...ux.alibaba.com>
>> Signed-off-by: Dust Li <dust.li@...ux.alibaba.com>
>> ---
>>  net/smc/smc_wr.c | 49 +++++++++++++++++++++++++++---------------------
>>  1 file changed, 28 insertions(+), 21 deletions(-)
>> 
>> diff --git a/net/smc/smc_wr.c b/net/smc/smc_wr.c
>> index 24be1d03fef9..34d616406d51 100644
>> --- a/net/smc/smc_wr.c
>> +++ b/net/smc/smc_wr.c
>> @@ -137,25 +137,28 @@ static void smc_wr_tx_tasklet_fn(struct tasklet_struct *t)
>>  {
>>  	struct smc_ib_device *dev = from_tasklet(dev, t, send_tasklet);
>>  	struct ib_wc wc[SMC_WR_MAX_POLL_CQE];
>> -	int i = 0, rc;
>> -	int polled = 0;
>> +	int i, rc;
>>  
>>  again:
>> -	polled++;
>>  	do {
>>  		memset(&wc, 0, sizeof(wc));
>>  		rc = ib_poll_cq(dev->roce_cq_send, SMC_WR_MAX_POLL_CQE, wc);
>> -		if (polled == 1) {
>> -			ib_req_notify_cq(dev->roce_cq_send,
>> -					 IB_CQ_NEXT_COMP |
>> -					 IB_CQ_REPORT_MISSED_EVENTS);
>> -		}
>> -		if (!rc)
>> -			break;
>>  		for (i = 0; i < rc; i++)
>>  			smc_wr_tx_process_cqe(&wc[i]);
>> +		if (rc < SMC_WR_MAX_POLL_CQE)
>> +			/* If < SMC_WR_MAX_POLL_CQE, the CQ should have been
>> +			 * drained, no need to poll again. --Guangguan Wang
>
>1. Please remove "--Guangguan Wang".
>2. We already discussed that. SMC should be changed to use the RDMA CQ
>pool API in drivers/infiniband/core/cq.c.
>ib_poll_handler() has a much better implementation (tracing, IRQ
>rescheduling, proper error handling) than this SMC variant.
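
For reference, the consumer side of the core CQ API Leon points at
looks roughly like this (a minimal sketch with placeholder names such
as my_tx_ctx and my_tx_done; not an actual SMC conversion):

	#include <rdma/ib_verbs.h>

	struct my_tx_ctx {
		struct ib_cqe cqe;	/* embedded completion entry */
		/* ... per-WR state ... */
	};

	static void my_tx_done(struct ib_cq *cq, struct ib_wc *wc)
	{
		struct my_tx_ctx *ctx = container_of(wc->wr_cqe,
						     struct my_tx_ctx, cqe);

		/* Handle this completion only; the core's
		 * ib_poll_handler() already does the budgeted polling,
		 * re-arming and rescheduling.
		 */
	}

	static int my_setup(struct ib_device *ibdev, int nr_cqe)
	{
		struct ib_cq *cq;

		/* IB_POLL_SOFTIRQ keeps the softirq processing model SMC
		 * uses today, but with the core's budgeting and re-arming.
		 */
		cq = ib_alloc_cq(ibdev, NULL, nr_cqe, 0, IB_POLL_SOFTIRQ);
		if (IS_ERR(cq))
			return PTR_ERR(cq);

		/* Each posted send then sets wr.wr_cqe = &ctx->cqe with
		 * ctx->cqe.done = my_tx_done.
		 */
		return 0;
	}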

OK, I'll remove this patch in the next version.

Thanks
