Message-ID: <CAHS8izNZ1pJAFqa-3FPiUdMWEPE_md2vP1-6t-KPT6CPbO03+g@mail.gmail.com>
Date: Thu, 17 Aug 2023 15:18:35 -0700
From: Mina Almasry <almasrymina@...gle.com>
To: Pavel Begunkov <asml.silence@...il.com>
Cc: David Ahern <dsahern@...nel.org>, netdev@...r.kernel.org,
	linux-media@...r.kernel.org, dri-devel@...ts.freedesktop.org,
	"David S. Miller" <davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
	Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>,
	Jesper Dangaard Brouer <hawk@...nel.org>,
	Ilias Apalodimas <ilias.apalodimas@...aro.org>, Arnd Bergmann <arnd@...db.de>,
	Willem de Bruijn <willemdebruijn.kernel@...il.com>,
	Sumit Semwal <sumit.semwal@...aro.org>, Christian König <christian.koenig@....com>,
	Jason Gunthorpe <jgg@...pe.ca>, Hari Ramakrishnan <rharix@...gle.com>,
	Dan Williams <dan.j.williams@...el.com>, Andy Lutomirski <luto@...nel.org>,
	stephen@...workplumber.org, sdf@...gle.com, David Wei <dw@...idwei.uk>
Subject: Re: [RFC PATCH v2 00/11] Device Memory TCP

On Thu, Aug 17, 2023 at 11:04 AM Pavel Begunkov <asml.silence@...il.com> wrote:
>
> On 8/14/23 02:12, David Ahern wrote:
> > On 8/9/23 7:57 PM, Mina Almasry wrote:
> >> Changes in RFC v2:
> >> ------------------
> ...
> >> ** Test Setup
> >>
> >> Kernel: net-next with this RFC and memory provider API cherry-picked
> >> locally.
> >>
> >> Hardware: Google Cloud A3 VMs.
> >>
> >> NIC: GVE with header split & RSS & flow steering support.
> >
> > This set seems to depend on Jakub's memory provider patches and a netdev
> > driver change which is not included. For the testing mentioned here, you
> > must have a tree + branch with all of the patches. Is it publicly available?
> >
> > It would be interesting to see how well (easy) this integrates with
> > io_uring. Besides avoiding all of the syscalls for receiving the iov and
> > releasing the buffers back to the pool, io_uring also brings in the
> > ability to seed a page_pool with registered buffers, which provides a
> > means to get simpler Rx ZC for host memory.
>
> The patchset sounds pretty interesting. I've been working with David Wei
> (CC'ing) on io_uring zc rx (prototype polishing stage), which is all based
> on a similar approach of allocating an rx queue. It targets host memory,
> with device memory as an extra feature; the uapi is different, and
> lifetimes are managed/bound to io_uring. Completions/buffers are returned
> to the user via a separate queue instead of cmsg, and pushed back
> granularly to the kernel via another queue. I'll leave it to David to
> elaborate.
>
> It sounds like we have space for collaboration here, if not merging then
> reusing internals as much as we can, but we'd need to look into the
> details deeper.
>

I'm happy to look at your implementation and collaborate on something
that works for both use cases. Feel free to share the unpolished
prototype so I can start forming a general idea, if possible.

> > Overall I like the intent and possibilities for extensions, but a lot of
> > details are missing - perhaps some are answered by seeing an end-to-end
> > implementation.
>
> --
> Pavel Begunkov

-- 
Thanks,
Mina
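
[Editor's sketch] As a rough illustration of the cmsg-based receive/return flow
David Ahern contrasts with io_uring above (one recvmsg() to receive the frag
descriptors, one setsockopt() to release the buffers back to the page_pool),
here is a minimal userspace sketch. The flag/option names (MSG_SOCK_DEVMEM,
SO_DEVMEM_DONTNEED) and the struct layouts are assumptions modelled on later
revisions of the series; they are not uapi defined by this message.

/*
 * Hedged sketch: receive devmem frag descriptors via cmsg, then return
 * each frag's token to the kernel so the page_pool can reuse the buffer.
 */
#include <sys/socket.h>
#include <sys/uio.h>
#include <linux/types.h>

#ifndef MSG_SOCK_DEVMEM
#define MSG_SOCK_DEVMEM    0x2000000	/* assumed value */
#endif
#ifndef SO_DEVMEM_DONTNEED
#define SO_DEVMEM_DONTNEED 80		/* assumed value */
#endif

struct devmem_frag {		/* assumed layout of one cmsg payload */
	__u64 frag_offset;	/* offset of the frag inside the bound dmabuf */
	__u32 frag_size;
	__u32 frag_token;	/* handed back to the kernel when consumed */
};

struct devmem_token {		/* assumed layout for SO_DEVMEM_DONTNEED */
	__u32 token_start;
	__u32 token_count;
};

static void devmem_rx_once(int fd)
{
	char data[128];		/* linear part (headers) lands in host memory */
	char ctrl[CMSG_SPACE(sizeof(struct devmem_frag)) * 16];
	struct iovec iov = { .iov_base = data, .iov_len = sizeof(data) };
	struct msghdr msg = {
		.msg_iov = &iov, .msg_iovlen = 1,
		.msg_control = ctrl, .msg_controllen = sizeof(ctrl),
	};

	/* syscall #1: receive frag descriptors instead of copied payload */
	if (recvmsg(fd, &msg, MSG_SOCK_DEVMEM) < 0)
		return;

	for (struct cmsghdr *cm = CMSG_FIRSTHDR(&msg); cm;
	     cm = CMSG_NXTHDR(&msg, cm)) {
		struct devmem_frag *frag = (void *)CMSG_DATA(cm);
		struct devmem_token tok = {
			.token_start = frag->frag_token,
			.token_count = 1,
		};

		/* ... consume frag->frag_offset / frag->frag_size here ... */

		/* syscall #2: release the frag back to the pool */
		setsockopt(fd, SOL_SOCKET, SO_DEVMEM_DONTNEED,
			   &tok, sizeof(tok));
	}
}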
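
[Editor's sketch] The queue-based model Pavel describes (completed buffers
posted to user space on one shared ring, recycled back to the kernel on
another, instead of per-frag cmsgs and setsockopts) could look roughly like
the following. This is a generic single-producer/single-consumer refill ring
for illustration only; the names and layout are assumptions, not the actual
io_uring zc rx prototype uapi.

/*
 * Hedged sketch: a shared-memory refill ring.  User space is the producer
 * (it posts buffer ids it has finished with); the kernel is the consumer
 * (it pulls ids to repopulate the rx queue) -- no syscall on the fast path.
 */
#include <stdatomic.h>
#include <stdint.h>

struct zcrx_refill_ring {
	_Atomic uint32_t head;	/* advanced by the consumer (kernel) */
	_Atomic uint32_t tail;	/* advanced by the producer (user space) */
	uint32_t mask;		/* entries - 1, entries is a power of two */
	uint32_t entries[];	/* buffer ids within the registered area */
};

/* user space: hand a consumed buffer back to the kernel */
static int zcrx_refill(struct zcrx_refill_ring *rq, uint32_t buf_id)
{
	uint32_t head = atomic_load_explicit(&rq->head, memory_order_acquire);
	uint32_t tail = atomic_load_explicit(&rq->tail, memory_order_relaxed);

	if (tail - head > rq->mask)	/* ring full, try again later */
		return -1;

	rq->entries[tail & rq->mask] = buf_id;
	atomic_store_explicit(&rq->tail, tail + 1, memory_order_release);
	return 0;
}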