Message-ID: <637ca8a0-d116-dbea-5949-2462502df4bb@kernel.dk>
Date: Mon, 23 Mar 2020 20:31:36 -0600
From: Jens Axboe <axboe@...nel.dk>
To: Pavel Begunkov <asml.silence@...il.com>, io-uring@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] io-wq: handle hashed writes in chains
On 3/23/20 1:57 PM, Pavel Begunkov wrote:
> We always punt async buffered writes to an io-wq helper, as the core
> kernel does not have IOCB_NOWAIT support for that. Most buffered async
> writes complete very quickly, as it's just a copy operation. This means
> that doing multiple locking roundtrips on the shared wqe lock for each
> buffered write is wasteful. Additionally, buffered writes are hashed
> work items, which means that any buffered write to a given file is
> serialized.
>
> Keep identically hashed work items contiguously in @wqe->work_list, and
> track a tail for each hash bucket. On dequeue of a hashed item, splice
> all of the same hash in one go using the tracked tail. Until the batch
> is done, the caller doesn't have to synchronize with the wqe or worker
> locks again.
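
For readers outside io-wq, the batching the patch describes can be sketched in plain C. This is a simplified, illustrative model, not the actual io-wq code: the names (work_item, work_list, wq_insert, wq_dequeue_chain, NR_HASH) are invented for the sketch, and locking is omitted. Insertion keeps same-hash items contiguous by appending to a tracked per-bucket tail; dequeue of a hashed item splices off the whole same-hash run in one go, so the caller can process the chain without touching the shared list again.

```c
/* Hypothetical sketch of hashed-work batching: same-hash items are kept
 * contiguous via a per-bucket tail pointer, and dequeued as one chain.
 * Names are illustrative, not io-wq's real API.
 */
#include <assert.h>
#include <stddef.h>

#define NR_HASH 4

struct work_item {
	int hash;		/* hash bucket, or -1 for unhashed work */
	struct work_item *next;
};

struct work_list {
	struct work_item *head, *tail;
	struct work_item *hash_tail[NR_HASH];	/* tail of each hash run */
};

/* Insert @work; same-hash items stay contiguous by appending to the run. */
static void wq_insert(struct work_list *wl, struct work_item *work)
{
	if (work->hash >= 0) {
		struct work_item *tail = wl->hash_tail[work->hash];

		wl->hash_tail[work->hash] = work;
		if (tail) {
			/* splice in right after the existing run's tail */
			work->next = tail->next;
			tail->next = work;
			if (wl->tail == tail)
				wl->tail = work;
			return;
		}
	}
	/* no existing run (or unhashed work): append at the list tail */
	work->next = NULL;
	if (wl->tail)
		wl->tail->next = work;
	else
		wl->head = work;
	wl->tail = work;
}

/* Dequeue the head; if it is hashed, detach the whole same-hash run and
 * return it as a private chain the caller can drain without the list lock. */
static struct work_item *wq_dequeue_chain(struct work_list *wl)
{
	struct work_item *work = wl->head;

	if (!work)
		return NULL;
	if (work->hash >= 0) {
		struct work_item *tail = wl->hash_tail[work->hash];

		wl->head = tail->next;
		tail->next = NULL;	/* detach the run */
		wl->hash_tail[work->hash] = NULL;
	} else {
		wl->head = work->next;
		work->next = NULL;
	}
	if (!wl->head)
		wl->tail = NULL;
	return work;
}
```

With this shape, a second buffered write to the same file lands directly behind the first in the list, and a worker that dequeues the first one takes the whole chain, which is the "until the batch is done, no more locking roundtrips" property the commit message claims.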
Looks good to me, and also passes testing. I've applied this for 5.7.
Next we can start looking into cases where it'd be an improvement to
kick off another worker - say, if we still have work left after grabbing
a chain, that would probably not be a bad idea.
--
Jens Axboe