Message-ID: <bc774cca-00ae-1a31-1f34-d223f45455e8@gmail.com>
Date:   Wed, 18 Dec 2019 12:25:15 +0300
From:   Pavel Begunkov <asml.silence@...il.com>
To:     Jens Axboe <axboe@...nel.dk>, io-uring@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] io_uring: batch getting pcpu references

On 12/18/2019 2:31 AM, Jens Axboe wrote:
> On 12/17/19 4:21 PM, Jens Axboe wrote:
>> On 12/17/19 3:28 PM, Pavel Begunkov wrote:
>>> percpu_ref_tryget() has its own overhead. Instead of getting a reference
>>> for each request, grab a bunch once per io_submit_sqes().
>>>
>>> A basic benchmark submitting and waiting on 128 non-linked nops showed a
>>> ~5% performance gain (7044 vs 7423 KIOPS).
>>>
>>> Signed-off-by: Pavel Begunkov <asml.silence@...il.com>
>>> ---
>>>
>>> Note: this could be done without the @extra_refs variable, but it
>>> looked too tangled because of the gotos.
>>>
>>>
>>>  fs/io_uring.c | 11 ++++++++---
>>>  1 file changed, 8 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>>> index cf4138f0e504..6c85dfc62224 100644
>>> --- a/fs/io_uring.c
>>> +++ b/fs/io_uring.c
>>> @@ -845,9 +845,6 @@ static struct io_kiocb *io_get_req(struct io_ring_ctx *ctx,
>>>  	gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;
>>>  	struct io_kiocb *req;
>>>  
>>> -	if (!percpu_ref_tryget(&ctx->refs))
>>> -		return NULL;
>>> -
>>>  	if (!state) {
>>>  		req = kmem_cache_alloc(req_cachep, gfp);
>>>  		if (unlikely(!req))
>>> @@ -3929,6 +3926,7 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
>>>  	struct io_submit_state state, *statep = NULL;
>>>  	struct io_kiocb *link = NULL;
>>>  	int i, submitted = 0;
>>> +	unsigned int extra_refs;
>>>  	bool mm_fault = false;
>>>  
>>>  	/* if we have a backlog and couldn't flush it all, return BUSY */
>>> @@ -3941,6 +3939,10 @@ static int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr,
>>>  		statep = &state;
>>>  	}
>>>  
>>> +	if (!percpu_ref_tryget_many(&ctx->refs, nr))
>>> +		return -EAGAIN;
>>> +	extra_refs = nr;
>>> +
>>>  	for (i = 0; i < nr; i++) {
>>>  		struct io_kiocb *req = io_get_req(ctx, statep);
>>>  
> 
> This also needs to come before the submit_state_start().
> 
Good catch, I forgot to handle this. Thanks.
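
For concreteness, the reordered prologue would presumably look something
like the sketch below (an illustration only, not a reposted patch; the
exact io_submit_state_start() arguments and the final error handling are
assumptions here):

	/* take all refs up front, before any submit state is set up */
	if (!percpu_ref_tryget_many(&ctx->refs, nr))
		return -EAGAIN;
	extra_refs = nr;

	if (nr > IO_PLUG_THRESHOLD) {
		io_submit_state_start(&state, nr);
		statep = &state;
	}

That way an early -EAGAIN cannot leave a started submit state (and its
block plug) behind without a matching io_submit_state_end().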

> I forgot to mention that I really like the idea; there's no point in NOT
> batching the refs when we know exactly how many we need.
> 
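
The overall shape of the batched pattern is roughly the following (a
sketch of the idea rather than the literal patch; the quoted hunks do not
show the error paths, so the put-back at the end is an assumption based
on the percpu_ref API):

	if (!percpu_ref_tryget_many(&ctx->refs, nr))
		return -EAGAIN;
	extra_refs = nr;

	for (i = 0; i < nr; i++) {
		struct io_kiocb *req = io_get_req(ctx, statep);

		if (unlikely(!req))
			break;
		/* the request now owns one of the pre-taken refs */
		extra_refs--;
		/* ... prep and submit the request ... */
	}

	/* return whatever was not handed out to requests */
	if (extra_refs)
		percpu_ref_put_many(&ctx->refs, extra_refs);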

-- 
Pavel Begunkov
