Date:   Fri, 31 Mar 2023 21:04:48 -0300
From:   Gabriel Krisman Bertazi <krisman@...e.de>
To:     Pavel Begunkov <asml.silence@...il.com>
Cc:     io-uring@...r.kernel.org, Jens Axboe <axboe@...nel.dk>,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 10/11] io_uring/rsrc: cache struct io_rsrc_node

Pavel Begunkov <asml.silence@...il.com> writes:

> On 3/31/23 15:09, Gabriel Krisman Bertazi wrote:
>> Pavel Begunkov <asml.silence@...il.com> writes:
>> 
>>> Add an allocation cache for struct io_rsrc_node; it's always allocated
>>> and put under ->uring_lock, so it doesn't need any extra synchronisation
>>> around caches.
>> Hi Pavel,
>> I'm curious whether you considered using kmem_cache instead of the custom
>> cache for this case.  I'm wondering whether it makes a visible difference
>> in performance in your benchmark.
>
> I didn't try it, but kmem_cache vs kmalloc, IIRC, doesn't bring us
> much, definitely doesn't spare us from locking, and the overhead
> definitely wasn't satisfactory for requests before.

There are no locks in the fast path of slub, as far as I know.  It has a
per-cpu cache that is refilled once empty, quite similar to the fast path
of this cache.  I imagine the performance hit in slub comes from the
barriers and atomic operations?
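
For reference, what I had in mind is just the stock slab API, roughly like
the following.  This is an untested sketch; "rsrc_node_cachep" and the init
helper are made-up names for illustration, not anything from this series:

	#include <linux/slab.h>

	/* created once, e.g. at io_uring init time */
	static struct kmem_cache *rsrc_node_cachep;

	static int __init rsrc_node_cache_init(void)
	{
		rsrc_node_cachep = kmem_cache_create("io_rsrc_node",
						     sizeof(struct io_rsrc_node),
						     0, SLAB_HWCACHE_ALIGN, NULL);
		return rsrc_node_cachep ? 0 : -ENOMEM;
	}

	/* alloc/free usually hit slub's per-cpu freelist, no ->uring_lock needed */
	node = kmem_cache_alloc(rsrc_node_cachep, GFP_KERNEL);
	...
	kmem_cache_free(rsrc_node_cachep, node);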

kmem_cache works fine for most hot paths of the kernel.  I think this
custom cache makes sense for the request cache, where objects are
allocated at an incredibly high rate.  But is that level of allocation
frequency really the case here?
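
Just to spell out the comparison I'm making: my understanding is that the
custom cache amounts to a plain LIFO freelist that relies entirely on
->uring_lock, so its fast path has no atomics or barriers at all.  A
simplified illustration (not the actual code from this series; the struct
and helper names are invented, and I'm pretending the node embeds a
list_head for caching):

	#include <linux/list.h>
	#include <linux/slab.h>

	/* every call happens under ->uring_lock, so no atomics needed */
	struct node_cache {
		struct list_head	free;	/* cached, ready-to-reuse nodes */
	};

	static struct io_rsrc_node *node_cache_get(struct node_cache *c)
	{
		struct io_rsrc_node *node;

		if (list_empty(&c->free))
			return kmalloc(sizeof(*node), GFP_KERNEL);

		node = list_first_entry(&c->free, struct io_rsrc_node, cache_link);
		list_del(&node->cache_link);
		return node;
	}

	static void node_cache_put(struct node_cache *c, struct io_rsrc_node *node)
	{
		list_add(&node->cache_link, &c->free);
	}

So the question is really whether avoiding slub's cmpxchg-based per-cpu
fast path is measurable for this particular object.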

If it is indeed a significant performance improvement, I guess it is
fine to have another user of the cache.  But I'd be curious to know how
much of the improvement you mentioned in the cover letter comes from
this patch alone!

-- 
Gabriel Krisman Bertazi
