Message-Id: <F2FB1034-BFB6-45ED-878E-6FADD756157D@oracle.com>
Date: Tue, 20 Feb 2018 16:47:31 -0500
From: Chuck Lever <chuck.lever@...cle.com>
To: Bart Van Assche <Bart.VanAssche@....com>
Cc: "jgg@...pe.ca" <jgg@...pe.ca>, "arnd@...db.de" <arnd@...db.de>,
"dledford@...hat.com" <dledford@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"leonro@...lanox.com" <leonro@...lanox.com>,
"linux-rdma@...r.kernel.org" <linux-rdma@...r.kernel.org>,
"sagi@...mberg.me" <sagi@...mberg.me>
Subject: Re: [PATCH] RDMA/core: reduce IB_POLL_BATCH constant
> On Feb 20, 2018, at 4:14 PM, Bart Van Assche <Bart.VanAssche@....com> wrote:
>
> On Tue, 2018-02-20 at 21:59 +0100, Arnd Bergmann wrote:
>> /* # of WCs to poll for with a single call to ib_poll_cq */
>> -#define IB_POLL_BATCH 16
>> +#define IB_POLL_BATCH 8
>
> The purpose of batch polling is to minimize contention on the cq spinlock.
> Reducing the IB_POLL_BATCH constant may affect performance negatively. Has
> the performance impact of this change been verified for all affected drivers
> (ib_srp, ib_srpt, ib_iser, ib_isert, NVMeOF, NVMeOF target, SMB Direct, NFS
> over RDMA, ...)?
Only users of the IB_POLL_DIRECT polling method use an on-stack
array of ib_wc's, and today that is just the SRP drivers.

The other two modes (IB_POLL_SOFTIRQ and IB_POLL_WORKQUEUE) use a
dynamically allocated array of ib_wc's that hangs off the ib_cq.
These shouldn't need any reduction in the size of that array, and
they are the common case.
IMO a better solution would be to change ib_process_cq_direct
to use a smaller on-stack array, and leave IB_POLL_BATCH alone.
--
Chuck Lever