Message-ID: <20230719105711.448f8cad@hermes.local>
Date:   Wed, 19 Jul 2023 10:57:11 -0700
From:   Stephen Hemminger <stephen@...workplumber.org>
To:     Mina Almasry <almasrymina@...gle.com>
Cc:     Jakub Kicinski <kuba@...nel.org>, David Ahern <dsahern@...nel.org>,
        Jason Gunthorpe <jgg@...pe.ca>,
        Andy Lutomirski <luto@...nel.org>,
        linux-kernel@...r.kernel.org, linux-media@...r.kernel.org,
        dri-devel@...ts.freedesktop.org, linaro-mm-sig@...ts.linaro.org,
        netdev@...r.kernel.org, linux-arch@...r.kernel.org,
        linux-kselftest@...r.kernel.org,
        Sumit Semwal <sumit.semwal@...aro.org>,
        Christian König <christian.koenig@....com>,
        "David S. Miller" <davem@...emloft.net>,
        Eric Dumazet <edumazet@...gle.com>,
        Paolo Abeni <pabeni@...hat.com>,
        Jesper Dangaard Brouer <hawk@...nel.org>,
        Ilias Apalodimas <ilias.apalodimas@...aro.org>,
        Arnd Bergmann <arnd@...db.de>,
        Willem de Bruijn <willemdebruijn.kernel@...il.com>,
        Shuah Khan <shuah@...nel.org>
Subject: Re: [RFC PATCH 00/10] Device Memory TCP

On Wed, 19 Jul 2023 08:10:58 -0700
Mina Almasry <almasrymina@...gle.com> wrote:

> On Tue, Jul 18, 2023 at 3:45 PM Jakub Kicinski <kuba@...nel.org> wrote:
> >
> > On Tue, 18 Jul 2023 16:35:17 -0600 David Ahern wrote:  
> > > I do not see how 1 RSS context (or more specifically a h/w Rx queue) can
> > > be used properly with memory from different processes (or dma-buf
> > > references).  
> 
> Right, my experience with dma-bufs from GPUs is that they're
> allocated from userspace and owned by the process that allocated
> the backing GPU memory and generated the dma-buf from it. I.e., we're
> limited to 1 dma-buf per RX queue. If we enable binding multiple
> dma-bufs to the same RX queue, we have a problem, because AFAIU the
> NIC can't decide which dma-buf to put the packet into (it hasn't
> parsed the packet's destination yet).
> 
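For context on the export step described above: with the CUDA driver API
(CUDA 11.7+ and a GPU driver with dma-buf support), a process can export
its own GPU allocation as a dma-buf fd. The sketch below is illustrative
only; the helper name gpu_mem_to_dmabuf_fd is made up here, and error
handling is omitted.

#include <cuda.h>
#include <stddef.h>

/*
 * Allocate GPU memory in this process and export it as a dma-buf fd.
 * The fd is owned by the allocating process, matching the ownership
 * model described above. Error handling omitted for brevity.
 */
static int gpu_mem_to_dmabuf_fd(size_t size)
{
        CUdevice dev;
        CUcontext ctx;
        CUdeviceptr dptr;
        int dmabuf_fd = -1;

        cuInit(0);
        cuDeviceGet(&dev, 0);
        cuCtxCreate(&ctx, 0, dev);

        /* Backing GPU memory; keep size page-aligned for the export. */
        cuMemAlloc(&dptr, size);

        /* Export the address range as a dma-buf file descriptor. */
        cuMemGetHandleForAddressRange(&dmabuf_fd, dptr, size,
                                      CU_MEM_RANGE_HANDLE_TYPE_DMA_BUF_FD, 0);
        return dmabuf_fd;
}

The resulting fd would then be bound to a single RX queue through whatever
uAPI this series adds; that binding step is the part under discussion here.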
> > > When the process dies, that memory needs to be flushed from
> > > the H/W queues. Queues with interlaced submissions make that more
> > > complicated.  
> >  
> 
> When the process dies, do we really want to flush the memory from the
> hardware queues? The drivers I looked at don't seem to have a function
> to flush the rx queues alone; they usually do an entire driver reset
> to achieve that. Not sure if that's just convenience or whether there
> is some technical limitation there. Do we really want to trigger a
> driver reset in the event that a userspace process crashes?

Naive idea:
would it be possible for a process to use mmap() on the GPU memory and
then do zero-copy TCP receive somehow? Or is this what is being proposed?
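For host memory there is already an mmap()-based zero-copy receive path:
mmap() the TCP socket, then use the TCP_ZEROCOPY_RECEIVE getsockopt to have
the kernel map received payload pages into that range (see
tools/testing/selftests/net/tcp_mmap.c). A rough sketch of that existing
flow is below, with error handling and the copy fallback for unaligned data
omitted; whether the same shape can work when the pages come from a GPU
dma-buf rather than host memory is the open question.

#include <linux/tcp.h>          /* TCP_ZEROCOPY_RECEIVE, struct tcp_zerocopy_receive */
#include <netinet/in.h>         /* IPPROTO_TCP */
#include <string.h>
#include <sys/mman.h>
#include <sys/socket.h>
#include <unistd.h>

/* Receive up to 'chunk' bytes (multiple of page size) without copying to user buffers. */
static void zerocopy_rx_once(int sock, size_t chunk)
{
        struct tcp_zerocopy_receive zc;
        socklen_t zc_len = sizeof(zc);
        void *addr;

        /* Reserve address space on the socket for the kernel to map pages into. */
        addr = mmap(NULL, chunk, PROT_READ, MAP_SHARED, sock, 0);

        memset(&zc, 0, sizeof(zc));
        zc.address = (__u64)(unsigned long)addr;
        zc.length  = chunk;

        /* Kernel maps received payload pages into [addr, addr + zc.length). */
        getsockopt(sock, IPPROTO_TCP, TCP_ZEROCOPY_RECEIVE, &zc, &zc_len);

        /* ... consume zc.length bytes at addr ... */

        munmap(addr, chunk);
}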
