Message-ID: <79413407294645f0e1252112c3435a29@codeaurora.org>
Date:   Mon, 17 Jul 2017 19:07:00 -0400
From:   okaya@...eaurora.org
To:     Keith Busch <keith.busch@...el.com>
Cc:     linux-nvme@...ts.infradead.org, timur@...eaurora.org,
        linux-arm-msm@...r.kernel.org,
        linux-arm-kernel@...ts.infradead.org, Jens Axboe <axboe@...com>,
        Christoph Hellwig <hch@....de>,
        Sagi Grimberg <sagi@...mberg.me>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] nvme: Acknowledge completion queue on each iteration

On 2017-07-17 18:56, Keith Busch wrote:
> On Mon, Jul 17, 2017 at 06:46:11PM -0400, Sinan Kaya wrote:
>> Hi Keith,
>> 
>> On 7/17/2017 6:45 PM, Keith Busch wrote:
>> > On Mon, Jul 17, 2017 at 06:36:23PM -0400, Sinan Kaya wrote:
>> >> The code currently rings the completion queue doorbell only after
>> >> processing all completed events and sending callbacks to the block
>> >> layer on each iteration.
>> >>
>> >> This causes a performance drop when a lot of jobs are queued towards
>> >> the HW. Ring the completion queue doorbell on each loop iteration
>> >> instead, allowing new jobs to be queued by the HW.
>> >
>> > That doesn't make sense. Aggregating doorbell writes should be much more
>> > efficient for high depth workloads.
>> >
>> 
>> The problem is that the code is throttling the HW: the HW cannot queue
>> more completions until SW gets a chance to clear them.
>> 
>> As an example:
>> 
>> for (i = 0; i < N; i++)
>> 	blk_layer();
>> ring_doorbell();
>> 
>> The HW cannot queue a new job until N blk_layer() operations are
>> processed and queue element ownership is passed back to the HW after
>> the loop. The HW just sits idle if no queue entries are available.
> 
> If no completion queue entries are available, then there can't possibly
> be any submission queue entries for the HW to work on either.

Maybe I need to understand the design better. I was curious why the
completion and submission queues were protected by a single lock, which
causes lock contention.

I was treating each queue independently. I have seen slightly better
performance with an early doorbell; that was my explanation.
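
For what it's worth, here is a minimal sketch of what I mean by an early
doorbell. This is not the actual driver code; struct nvme_queue and the
helpers nvme_cqe_valid()/nvme_handle_cqe() are stand-ins modeled on
drivers/nvme/host/pci.c:

#include <linux/io.h>	/* writel() */

/* Sketch only: field and helper names here are assumptions. */
static void __nvme_process_cq(struct nvme_queue *nvmeq)
{
	u16 head = nvmeq->cq_head;
	u16 phase = nvmeq->cq_phase;

	while (nvme_cqe_valid(nvmeq, head, phase)) {
		struct nvme_completion cqe = nvmeq->cqes[head];

		/* Advance head, wrapping and flipping the phase bit. */
		if (++head == nvmeq->q_depth) {
			head = 0;
			phase = !phase;
		}

		/*
		 * Early doorbell: return this CQ entry to the HW before
		 * calling up into the block layer, so the device can
		 * reuse the slot while SW is still busy.
		 */
		writel(head, nvmeq->q_db + nvmeq->dev->db_stride);

		nvme_handle_cqe(nvmeq, &cqe);
	}

	nvmeq->cq_head = head;
	nvmeq->cq_phase = phase;
}

The only change from the batched version is that the writel() moves
inside the loop, so the HW sees the updated head before SW runs the
block layer callback for each entry.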
