Message-ID: <20151118140057.GC18820@lst.de>
Date:	Wed, 18 Nov 2015 15:00:57 +0100
From:	Christoph Hellwig <hch@....de>
To:	Bart Van Assche <bart.vanassche@...disk.com>
Cc:	linux-rdma@...r.kernel.org, sagig@....mellanox.co.il, axboe@...com,
	linux-scsi@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/9] IB: add a proper completion queue abstraction

On Tue, Nov 17, 2015 at 09:52:58AM -0800, Bart Van Assche wrote:
> On 11/13/2015 05:46 AM, Christoph Hellwig wrote:
>> + * context and does not ask from completion interrupts from the HCA.
>                                ^^^^
> Should this perhaps be changed into "for" ?

Yes.

>
>> + */
>> +void ib_process_cq_direct(struct ib_cq *cq)
>> +{
>> +	WARN_ON_ONCE(cq->poll_ctx != IB_POLL_DIRECT);
>> +
>> +	__ib_process_cq(cq, INT_MAX);
>> +}
>> +EXPORT_SYMBOL(ib_process_cq_direct);
>
> My proposal is to drop this function and to export __ib_process_cq() 
> instead (with or without renaming). That will allow callers of this 
> function to compare the poll budget with the number of completions that 
> have been processed and use that information to decide whether or not to 
> call this function again.

I'd like to keep the WARN_ON, but we can export a function with the same signature.

Then again, my preference would be to remove the direct mode entirely.
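
(To illustrate the point above: a rough sketch, not the actual patch code,
assuming __ib_process_cq() gets exported and returns the number of completions
it processed.  The caller name and the budget value are made up for the
example.)

	/* Hypothetical caller: keep polling while the CQ still has work. */
	static void example_drain_cq(struct ib_cq *cq)
	{
		const int budget = 64;	/* arbitrary per-pass budget */
		int completed;

		do {
			/* Process at most 'budget' completions in one pass. */
			completed = __ib_process_cq(cq, budget);
			/*
			 * Fewer completions than the budget means the CQ was
			 * drained in this pass, so stop polling.
			 */
		} while (completed >= budget);
	}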

>> +static void ib_cq_completion_workqueue(struct ib_cq *cq, void *private)
>> +{
>> +	queue_work(ib_comp_wq, &cq->work);
>> +}
>
> The above code will cause all polling to occur on the context of the CPU 
> that received the completion interrupt. This approach is not powerful 
> enough. For certain workloads throughput is higher if work completions are 
> processed by another CPU core on the same CPU socket. Has it been 
> considered to make the CPU core on which work completions are processed 
> configurable ?

It's an unbound workqueue, so it's not tied to a specific CPU.  However,
we only run the work_struct once at a time, so it's still tied to a single
CPU at any given moment, but that's no different from the previous kthread
use.
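
(For reference, a minimal sketch of what allocating such an unbound
completion workqueue could look like; the exact name and flags used in the
patch may differ.)

	/*
	 * Sketch: WQ_UNBOUND means queued work is not bound to the CPU that
	 * queued it, so the scheduler may run it on any allowed CPU.
	 */
	ib_comp_wq = alloc_workqueue("ib-comp-wq",
			WQ_UNBOUND | WQ_HIGHPRI | WQ_MEM_RECLAIM, 0);
	if (!ib_comp_wq)
		return -ENOMEM;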