Date:   Thu, 24 Jun 2021 07:22:06 -0700
From:   John Fastabend <john.fastabend@...il.com>
To:     David Ahern <dsahern@...il.com>,
        John Fastabend <john.fastabend@...il.com>,
        Lorenzo Bianconi <lorenzo@...nel.org>, bpf@...r.kernel.org,
        netdev@...r.kernel.org
Cc:     lorenzo.bianconi@...hat.com, davem@...emloft.net, kuba@...nel.org,
        ast@...nel.org, daniel@...earbox.net, shayagr@...zon.com,
        sameehj@...zon.com, dsahern@...nel.org, brouer@...hat.com,
        echaudro@...hat.com, jasowang@...hat.com,
        alexander.duyck@...il.com, saeed@...nel.org,
        maciej.fijalkowski@...el.com, magnus.karlsson@...el.com,
        tirthendu.sarkar@...el.com
Subject: Re: [PATCH v9 bpf-next 00/14] mvneta: introduce XDP multi-buffer
 support

David Ahern wrote:
> On 6/22/21 11:48 PM, John Fastabend wrote:
> > David Ahern wrote:
> >> On 6/22/21 5:18 PM, John Fastabend wrote:
> >>> At this point I don't think we can have a partial implementation. At
> >>> the moment we have packet capture applications and protocol parsers
> >>> running in production. If we allow this to go in staged we are going
> >>> to break those applications that make the fundamental assumption they
> >>> have access to all the data in the packet.
> >>
> >> What about cases like netgpu where headers are accessible but data is
> >> not (e.g., gpu memory)? If the API indicates limited buffer access, is
> >> that sufficient?
> > 
> > I never considered netgpus, and I guess I don't understand the
> > architecture well enough to say. But I would argue that an XDP API
> > should allow XDP to reach into the payload of these GPU packets as well.
> > Of course it might be slow.
> 
> AIUI S/W on the host cannot access GPU memory, so that is not a
> possibility at all.

Interesting.

> 
> Another use case is DDP and ZC. Mellanox has a proposal for NVMe (with
> intentions to extend it to iSCSI) to do direct data placement. This is
> really just an example of zerocopy (and netgpu has morphed into zctap,
> with the current prototype working for host memory), which will become
> more prominent. XDP programs accessing memory already mapped to user
> space will be racy.

It's racy in the sense that, if the application reads data before the
driver flips the bit telling it new data is available, XDP could write
old data or read application-changed data? I think it would still
"work", same as AF_XDP? If you allow DDP then you lose the ability to
do L7 security as far as I can tell. But that's a general comment, not
specific to XDP.
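
For concreteness, here is a rough sketch (mine, not anything from the
series) of the access pattern that would race: ordinary direct packet
access reaching past the headers into the payload, which is exactly what
the packet capture and parser programs mentioned earlier rely on.

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_touch_payload(struct xdp_md *ctx)
{
        void *data     = (void *)(long)ctx->data;
        void *data_end = (void *)(long)ctx->data_end;
        unsigned char *payload = data + sizeof(struct ethhdr);

        /* verifier-mandated bounds check before touching packet bytes */
        if ((void *)(payload + 1) > data_end)
                return XDP_PASS;

        /* a read or rewrite of payload bytes here is what would race
         * with a user-space (DDP/ZC) mapping of the same buffer */
        if (payload[0] == 0xff)
                payload[0] = 0x00;

        return XDP_PASS;
}

char _license[] SEC("license") = "GPL";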

> 
> To me these proposals suggest a trend and one that XDP APIs should be
> ready to handle - like indicating limited access or specifying length
> that can be accessed.

I still think the only such case is net-gpu, which we don't have in the
kernel at the moment, right? I think a bit or a size or ... would make
sense if we had that hardware. And then for the other DDP/ZC case, the
system owner needs to know what they are doing when they turn on DDP or
whatever.
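
To spell out what I mean by "a bit or size", roughly the sketch below.
The frame_len / accessible_len fields are purely made up for
illustration; nothing like them exists in struct xdp_md today.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_limited_access(struct xdp_md *ctx)
{
        /* hypothetical fields: total frame length vs. bytes XDP may
         * actually touch; neither exists, illustration only */
        __u32 frame_len      = ctx->frame_len;
        __u32 accessible_len = ctx->accessible_len;

        if (accessible_len < frame_len) {
                /* headers only (payload in GPU memory, or pages already
                 * handed to user space): parse what we can, but don't
                 * pretend we saw the whole packet */
                return XDP_PASS;
        }

        /* full packet visible, normal processing */
        return XDP_PASS;
}

char _license[] SEC("license") = "GPL";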

.John
