Message-ID: <c6c6ca98-8793-5510-ad24-583e25403e35@redhat.com>
Date:   Thu, 28 Nov 2019 12:18:15 +0800
From:   Jason Wang <jasowang@...hat.com>
To:     Alexei Starovoitov <alexei.starovoitov@...il.com>,
        Jakub Kicinski <jakub.kicinski@...ronome.com>
Cc:     Song Liu <songliubraving@...com>,
        Jesper Dangaard Brouer <hawk@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        "Michael S . Tsirkin" <mst@...hat.com>, qemu-devel@...gnu.org,
        netdev@...r.kernel.org, John Fastabend <john.fastabend@...il.com>,
        Alexei Starovoitov <ast@...nel.org>,
        Martin KaFai Lau <kafai@...com>,
        Prashant Bhole <prashantbhole.linux@...il.com>,
        kvm@...r.kernel.org, Yonghong Song <yhs@...com>,
        Andrii Nakryiko <andriin@...com>,
        "David S . Miller" <davem@...emloft.net>
Subject: Re: [RFC net-next 00/18] virtio_net XDP offload


On 2019/11/28 11:32 AM, Alexei Starovoitov wrote:
> On Tue, Nov 26, 2019 at 12:35:14PM -0800, Jakub Kicinski wrote:
>> I'd appreciate it if others could chime in.
> The performance improvements are quite appealing.
> In general, offloading from higher layers into lower layers is necessary long term.
>
> But the approach taken by patches 15 and 17 is a dead end. I don't see how it
> can ever catch up with the pace of bpf development.


This applies to any hardware offloading feature, doesn't it?


>   As presented, this approach
> works for the most basic programs and simple maps. No line info, no BTF, no
> debuggability. There are no tail_calls either.


If I understand correctly, none of the above were implemented in NFP. We 
can collaborate to find solutions for all of them.


>   I don't think I've seen a single
> production XDP program that doesn't use tail calls.


It looks to me that we can manage to add this support.
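
For reference, a minimal sketch of what tail call support has to
cover: a BPF_MAP_TYPE_PROG_ARRAY map plus the bpf_tail_call() helper.
The program below is illustrative only, using the 2019-era
bpf_map_def/SEC() conventions from bpf_helpers.h:

/* Illustrative sketch: offload would need to translate both the
 * PROG_ARRAY map and the tail-call jump semantics to the device.
 */
#include <linux/bpf.h>
#include "bpf_helpers.h"

struct bpf_map_def SEC("maps") jmp_table = {
	.type        = BPF_MAP_TYPE_PROG_ARRAY,
	.key_size    = sizeof(__u32),
	.value_size  = sizeof(__u32),
	.max_entries = 8,
};

SEC("xdp")
int xdp_dispatch(struct xdp_md *ctx)
{
	/* Jump to the program in slot 0 of the prog array. */
	bpf_tail_call(ctx, &jmp_table, 0);
	/* Falls through here when the slot is empty or the call fails. */
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";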


> Static and dynamic linking
> is coming. Wrapping one bpf feature at a time with virtio api is never going to
> be complete.


It's a common problem for any hardware that wants to implement eBPF 
offloading, not a virtio-specific one.
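
For context, the per-feature shape being objected to looks roughly
like the struct below. This is hypothetical; the field names are
illustrative and not from the posted series:

/* Hypothetical: a per-feature virtio control command for program
 * load.  Every new BPF capability (BTF, line info, linking, ...)
 * would need yet another command or field like this.
 */
#include <linux/types.h>

struct virtio_net_ctrl_bpf_prog_load {
	__le32 prog_type;    /* e.g. XDP */
	__le32 insn_cnt;     /* number of eBPF instructions */
	__le64 insns_addr;   /* guest address of the instruction buffer */
	__le32 map_cnt;      /* number of map fds to relocate */
	__le64 map_fds_addr; /* guest address of the map fd array */
};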


> How are FDs going to be passed back? OBJ_GET_INFO_BY_FD?
> OBJ_PIN/GET? Where is bpffs going to live?


If we want pinning to work in the virtualization case, it should 
probably live in both the host and the guest.
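
For reference, pinning goes through the bpf() syscall with a
pathname, which is exactly where the ambiguity sits. This is the
standard BPF_OBJ_PIN usage; the open question is which bpffs mount
the path resolves in:

/* Standard user-space pinning via the raw bpf() syscall. */
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

static int bpf_obj_pin(int fd, const char *path)
{
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.pathname = (__u64)(unsigned long)path;
	attr.bpf_fd   = fd;

	return syscall(__NR_bpf, BPF_OBJ_PIN, &attr, sizeof(attr));
}

E.g. bpf_obj_pin(prog_fd, "/sys/fs/bpf/xdp_prog"): in the virt case
that path could name the guest's bpffs, the host's, or a mirrored
pair of entries.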


>   Any realistic XDP application will
> be using a lot more than a single self-contained XDP prog with hash and array
> maps.


It's possible if we want to use XDP offloading to accelerate VNFs, 
which often have simple logic.
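
For illustration, the kind of self-contained program this covers: one
map, simple counting logic. A sketch only, again using the 2019-era
bpf_map_def/SEC() conventions:

/* Illustrative sketch of a simple VNF-style XDP program: count
 * received packets in a hash map (a fixed key keeps the sketch
 * short; a real VNF might key by protocol or flow).
 */
#include <linux/bpf.h>
#include "bpf_helpers.h"

struct bpf_map_def SEC("maps") rx_cnt = {
	.type        = BPF_MAP_TYPE_HASH,
	.key_size    = sizeof(__u32),
	.value_size  = sizeof(__u64),
	.max_entries = 256,
};

SEC("xdp")
int xdp_count(struct xdp_md *ctx)
{
	__u32 key = 0;
	__u64 *val;

	val = bpf_map_lookup_elem(&rx_cnt, &key);
	if (val)
		__sync_fetch_and_add(val, 1);
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";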


> It feels that the whole sys_bpf needs to be forwarded as a whole from
> guest into host. In case of true hw offload the host is managing HW. So it
> doesn't forward syscalls into the driver. The offload from guest into host is
> different. BPF can be seen as a resource that the host provides, and guest kernel
> plus qemu would be forwarding requests between guest user space and host
> kernel. Like sys_bpf(BPF_MAP_CREATE) can pass through into the host directly.
> The FD that the host sees would need a corresponding mirror FD in the guest. There
> are still questions about bpffs paths, but the main issue of
> one-feature-at-a-time will be addressed by such an approach.


We're trying to follow what NFP did by starting with a fraction of the 
full eBPF feature set. It would be very hard to have all eBPF features 
implemented from the start. It would be helpful to clarify the minimal 
set of features you want to see from the start.
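
A rough sketch of what the syscall-passthrough model could look like
on the guest side. The function names here are hypothetical, made up
just to show the shape, and are not existing APIs or part of the
posted series:

/* Hypothetical sketch of BPF_MAP_CREATE passthrough: forward the
 * whole attr to the host, then install a guest-side mirror FD that
 * proxies later map operations to the host FD.
 * virtio_bpf_forward() and bpf_mirror_fd_install() are made-up names.
 */
static int guest_bpf_map_create(union bpf_attr *attr)
{
	int host_fd, guest_fd;

	/* Host runs sys_bpf(BPF_MAP_CREATE, ...) and replies with
	 * the FD it allocated. */
	host_fd = virtio_bpf_forward(BPF_MAP_CREATE, attr, sizeof(*attr));
	if (host_fd < 0)
		return host_fd;

	/* The mirror FD's file ops forward lookup/update/delete on
	 * this map back to host_fd over the same transport. */
	guest_fd = bpf_mirror_fd_install(host_fd);
	return guest_fd;
}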


> There could be other
> solutions, of course.

Suggestions are welcome.

Thanks
