lists.openwall.net
|
Open Source and information security mailing list archives
|
Message-ID: <aS9px07mgnNjSu8e@devvm11784.nha0.facebook.com>
Date: Tue, 2 Dec 2025 14:35:51 -0800
From: Bobby Eshleman <bobbyeshleman@...il.com>
To: "David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
	Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
	Simon Horman <horms@...nel.org>
Cc: netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
	Mina Almasry <almasrymina@...gle.com>, Stanislav Fomichev <sdf@...ichev.me>,
	asml.silence@...il.com, Bobby Eshleman <bobbyeshleman@...a.com>
Subject: Re: [PATCH net-next v2] net: devmem: convert binding refcount to percpu_ref

On Tue, Dec 02, 2025 at 11:34:17AM -0800, Bobby Eshleman wrote:
> From: Bobby Eshleman <bobbyeshleman@...a.com>
> 
> Convert the net_devmem_dmabuf_binding refcount from refcount_t to
> percpu_ref to optimize common-case reference counting on the hot path.
> 
> The typical devmem workflow involves binding a dmabuf to a queue
> (acquiring the initial reference on binding->ref), followed by
> high-volume traffic where every skb fragment acquires a reference.
> Eventually traffic stops and the unbind operation releases the initial
> reference. Additionally, the high-traffic hot path is often multi-core.
> This access pattern is ideal for percpu_ref, as the first and last
> references taken at bind/unbind normally book-end activity on the hot
> path.
> 
> __net_devmem_dmabuf_binding_free becomes the percpu_ref callback
> invoked when the last reference is dropped.
> 

My apologies for sending this out after net-next closed. Won't happen
again.

Best,
Bobby