Message-ID: <7a63ca2a-7814-5dce-ce8b-4954326bb661@redhat.com>
Date: Wed, 16 Jun 2021 18:21:56 +0800
From: Jason Wang <jasowang@...hat.com>
To: Willem de Bruijn <willemdebruijn.kernel@...il.com>
Cc: Alexei Starovoitov <alexei.starovoitov@...il.com>,
Tanner Love <tannerlove.kernel@...il.com>,
Network Development <netdev@...r.kernel.org>,
David Miller <davem@...emloft.net>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>,
Eric Dumazet <edumazet@...gle.com>,
Petar Penkov <ppenkov@...gle.com>,
Jakub Kicinski <kuba@...nel.org>,
"Michael S . Tsirkin" <mst@...hat.com>,
Tanner Love <tannerlove@...gle.com>
Subject: Re: [PATCH net-next v4 2/3] virtio_net: add optional flow dissection
in virtio_net_hdr_to_skb
On 2021/6/15 10:47 PM, Willem de Bruijn wrote:
>>>> Isn't virtio_net_hdr virtio-net specific metadata?
>>> I don't think it is, ever since it was also adopted for tun, tap,
>>> pf_packet and uml. Basically, all callers of virtio_net_hdr_to_skb.
>>
>> For tun/tap it was used to serve the backend of the virtio-net datapath
>> which is still virtio-net specific.
>>
>> For pf_packet and uml, they look more like a byproduct or shortcut of
>> the virtio-net backend (vhost has code to use af_packet, but I'm not sure
>> whether it still works), so still kind of.
>>
>> But the most important thing is that the semantics of the vnet header
>> are defined by the virtio spec.
> The packet socket and tuntap interfaces have use cases unrelated
> to virtio. virtio_net_hdr is just a way to egress packets with offloads.
>
> So yes and no. The current definition of struct virtio_net_hdr is part of
> the Linux ABI as is, and used outside virtio.
>
> If virtio changes its spec, let's say to add SO_TXTIME to give an
> example, we may or may not bother to add a pf_packet setsockopt
> to optionally support that.
Right.
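To make that interface concrete, a minimal sketch of egressing one packet
with offload metadata through a packet socket; PACKET_VNET_HDR and struct
virtio_net_hdr are the existing uapi, while "eth0", the sizes and the frame
contents are just placeholders:

/* Minimal sketch: egress one GSO frame with offload metadata through
 * an AF_PACKET socket.  "eth0" and the frame contents are placeholders. */
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <net/if.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>
#include <linux/virtio_net.h>

int main(void)
{
	int one = 1;
	int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
	struct sockaddr_ll ll = {
		.sll_family   = AF_PACKET,
		.sll_protocol = htons(ETH_P_ALL),
		.sll_ifindex  = if_nametoindex("eth0"),   /* placeholder */
	};
	unsigned char buf[sizeof(struct virtio_net_hdr) + 2048] = {};
	struct virtio_net_hdr *vh = (struct virtio_net_hdr *)buf;

	if (fd < 0 || bind(fd, (struct sockaddr *)&ll, sizeof(ll)))
		return 1;
	/* Ask the kernel to expect a virtio_net_hdr in front of each frame. */
	if (setsockopt(fd, SOL_PACKET, PACKET_VNET_HDR, &one, sizeof(one)))
		return 1;

	vh->flags       = VIRTIO_NET_HDR_F_NEEDS_CSUM;
	vh->gso_type    = VIRTIO_NET_HDR_GSO_TCPV4;
	vh->hdr_len     = 14 + 20 + 20;   /* eth + ipv4 + tcp */
	vh->gso_size    = 1448;
	vh->csum_start  = 14 + 20;        /* checksum starts at the tcp header */
	vh->csum_offset = 16;             /* offset of the tcp checksum field */
	/* ... an actual Ethernet/IPv4/TCP frame would be built after vh ... */

	send(fd, buf, sizeof(*vh) + vh->hdr_len, 0);
	close(fd);
	return 0;
}

All of these fields are converted in virtio_net_hdr_to_skb(), which is why
that helper is where this series puts the optional flow dissection hook.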
>
>>>>> The only metadata that can be passed with tuntap, pf_packet et al is
>>>>> virtio_net_hdr.
>>>>>
>>>> I don't quite understand where xen-netfront et al come in.
>>>> The problem is what kind of issue you want to solve. If you want to
>>>> solve a virtio-specific issue, why do you need to do that in the general
>>>> flow dissector?
>>>>
>>>> If you want to solve the general issue of packet metadata validation
>>>> from untrusted sources, you need to validate not only virtio-net but all
>>>> the others. Netfront is another dodgy source, and there are a lot of
>>>> implied untrusted sources in virtualization environments.
>>> Ah understood.
>>>
>>> Yes, the goal is to protect the kernel from untrusted packets coming
>>> from userspace in general. There are only a handful such injection
>>> APIs.
>> Userspace is not necessarily the source: it could come from a device or
>> BPF.
>>
>> Theoretically, in the case of virtualization/smart NICs, there could be
>> other types of untrusted sources. Do we need to validate/mitigate the issues
> Adding packets coming from NICs or BPF programs greatly expands
> the scope.
>
> The purpose of this patch series is to protect the kernel against packets
> inserted from userspace, by adding validation at the entry point.
>
> Agreed that BPF programs can do unspeakable things to packets, but
> that is a different issue (with a different trust model), and out of scope.
Ok, I think I understand your point, so basically we have two types of
untrusted sources for virtio_net_hdr_to_skb():
1) packets injected from userspace: tun, tap, packet
2) packets received from a NIC: virtio-net, uml
If I understand correctly, you only care about case 1), but the method
could also be used for case 2).
For 1) the proposal should work; we only need to care about csum/gso
stuff instead of virtio-specific attributes like num_buffers.
And 2) is the one where I want to make sure it works as expected, since
it requires more context to validate and we have other untrusted NICs.
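As a rough illustration of what I mean by csum/gso checks for case 1), a
simplified userspace-style sketch (not the actual virtio_net_hdr_to_skb()
code; byte-order handling of the __virtio16 fields is omitted):

#include <stdbool.h>
#include <linux/virtio_net.h>

/* pkt_len is the length of the frame that follows the header. */
static bool vnet_hdr_sane(const struct virtio_net_hdr *h, unsigned int pkt_len)
{
	/* A hdr_len pointing past the end of the packet is the kind of
	 * truncation problem mentioned elsewhere in this thread. */
	if (h->hdr_len > pkt_len)
		return false;
	if (h->flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) {
		/* The checksum field the stack will later write must lie
		 * inside the packet. */
		if ((unsigned int)h->csum_start + h->csum_offset + 2 > pkt_len)
			return false;
	}
	if (h->gso_type != VIRTIO_NET_HDR_GSO_NONE) {
		/* gso_size ends up as a divisor when segmenting, so zero
		 * from an untrusted source must be rejected. */
		if (h->gso_size == 0)
			return false;
	}
	return true;
}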
>
>> 1) per source/device, like what this patch did, which looks like a lot of work
>> 2) validate via the general eBPF facility like XDP, which requires
>> multi-buffer and gso/csum support (which we know will happen for sure)
>>
>> ?
> Are you thinking of the XDP support in virtio-net, which is specific
> to that device and does not capture these other use cases of
> virtio_net_hdr_to_skb?
Just to clarify, currently when using XDP, virtio-net will treat the
vnet header as invalid, since there's no way to access the vnet header
and XDP doesn't support s/g, csum and offloads. But I believe in the
future it will get all of that support, and then XDP should have a way
to access device-specific packet headers like the vnet header. Assuming
all of these are ready, virtio_net_hdr_to_skb() is not necessarily called in
the RX path. This means we may fail to track some packets if they
go via XDP, which bypasses the validation hooks in virtio_net_hdr_to_skb().
>
> Or are you okay with the validation hook, but suggesting using
> an XDP program type instead of a flow dissector program type?
Basically, I think virtio_net_hdr_to_skb() is probably not the best
place (see above).
I wonder whether XDP is the best place to do that, but XDP has its own
issues, e.g. it can't work for packet sockets and macvtap.
Or maybe we can simply tweak this patch to make sure the hooks work
even if XDP is used.
>
>>> I have not had to deal with netfront as much as the virtio_net_hdr
>>> users. I'll take a look at that and netvsc.
>>
>> For xen, it's interesting that dodgy was set by netfront but not netback.
>>
>>
>>> I cannot recall seeing as
>>> many syzkaller reports about those, but that may not necessarily imply
>>> that they are more robust -- it could just be that syzkaller has no
>>> easy way to exercise them, like with packet sockets.
>>
>> Yes.
>>
>>
>>>>>> That's why I suggest using something more
>>>>>> generic like XDP from the start. Yes, GSO stuff is disabled by
>>>>>> virtio-net when XDP is used, but it's not something that cannot be
>>>>>> fixed. If the GSO and s/g support cannot be done in a short time
>>>>> An alternative interface does not address that we already have this
>>>>> interface and it is already causing problems.
>>>> What problems did you mean here?
>>> The long list of syzkaller bugs that required fixes either directly in
>>> virtio_net_hdr_to_skb or deeper in the stack, e.g.,
>>>
>>> 924a9bc362a5 net: check if protocol extracted by
>>> virtio_net_hdr_set_proto is correct
>>> 6dd912f82680 net: check untrusted gso_size at kernel entry
>>> 9274124f023b net: stricter validation of untrusted gso packets
>>> d2aa125d6290 net: Don't set transport offset to invalid value
>>> d5be7f632bad net: validate untrusted gso packets without csum offload
>>> 9d2f67e43b73 net/packet: fix packet drop as of virtio gso
>>>
>>> 7c68d1a6b4db net: qdisc_pkt_len_init() should be more robust
>>> cb9f1b783850 ip: validate header length on virtual device xmit
>>> 418e897e0716 gso: validate gso_type on ipip style tunnels
>>> 121d57af308d gso: validate gso_type in GSO handlers
>>>
>>> This is not necessarily an exhaustive list.
>>>
>>> And others not directly due to gso/csum metadata, but malformed
>>> packets from userspace nonetheless, such as
>>>
>>> 76c0ddd8c3a6 ip6_tunnel: be careful when accessing the inner header
>>
>> I see. So if I understand correctly, introducing SKB_GSO_DODGY is a
>> balance between security and performance. That is to say, defer the
>> gso/header checks until just before they are used. But a lot of code
>> was not written under this assumption (it forgets to deal with dodgy
>> packets), which breaks things.
> DODGY and requiring robust kernel stack code is part of it, but
>
> Making the core kernel stack robust against bad packets incurs cost on
> every packet for the relatively rare case of these packets coming from
> outside the stack. Ideally, this cost is incurred on such packets
> alone. That's why I prefer validation at the source.
>
> This patch series made validation at the source *optional*, to
> address your concerns about performance regressions. I.e., if you know
> that the path packets will take is robust, such as a simple bridging
> case between VMs, it is perhaps fine to forgo the checks. But egress
> over a physical NIC or even through the kernel egress path (or
> loopback to ingress, which has a different set of expectations) has
> higher integrity requirements.
>
> Not all concerns are GSO and thus captured by DODGY; a truncated hdrlen, for example.
Yes, my understanding is that the major use case is mitigation, and we
still need to make sure the stack is robust.
>
>> So the problem is that the NIC driver is written under the assumption
>> that the gso metadata is correct. I remember a kernel CVE in a driver
>> several years ago (a divide by zero) caused by gso_size or other such
>> fields.
> Right, and more: both NICs and the software stack itself have various
> expectations on input. Such as whether protocol headers are in the
> linear section, or that the csum offset is within bounds.
>
>>> In the case where they are used along with some ABI through which data
>>> can be inserted into the kernel,
>>
>> CPUMAP is another way that data can be inserted into the kernel.
>>
>>
>>> such as AF_XDP, it is relevant. And I
>>> agree that then the XDP program can do the validation directly.
>>>
>>> That just does not address validation of the legacy interfaces. Those
>>> interfaces generally require CAP_NET_RAW to set up, but they are often
>>> used after setup to accept untrusted contents, so I don't think they
>>> should be considered "root, so anything goes".
>>
>> I'm not sure I understand the meaning of "legacy" interface here.
> the existing interfaces to insert packets from userspace:
> tuntap/uml/pf_packet/..
Ok.
>
>>
>>>> Thanks
>>> Thanks for the discussion. As said, we'll start by looking at the
>>> other similar netfront interfaces.
>>
>> Thanks, I'm not objecting to the goal, but I want to figure out whether
>> the flow dissector is the best choice for doing that.
> Understood. My main question at this point is whether you are
> suggesting a different bpf program type at the same hooks, or are
> arguing that an XDP-based interface will make all this obsolete.
Ideally, I think XDP is probably the best fit. It is designed to do
exactly this kind of early per-device packet dropping, but it misses
some important features, which means it can't work today.
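(As context for why XDP looks attractive: the kind of early, per-device
filtering it is built for is sketched below. This is generic example code,
unrelated to this series; note the program only sees the linear packet
data, with no vnet header or offload metadata, which is the missing piece.)

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int drop_runts(struct xdp_md *ctx)
{
	void *data     = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;

	/* Verifier-mandated bounds check before touching headers; drop
	 * anything too short to carry an Ethernet + IPv4 header. */
	if (data + sizeof(struct ethhdr) + sizeof(struct iphdr) > data_end)
		return XDP_DROP;
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";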
The method proposed in this patch should work for userspace-injected
packets, but it may not work very well in the case of XDP in the future.
Using another bpf program type may only solve the issue of the flow
dissector being coupled to the vnet header.
I wonder whether or not we can do something in the middle:
1) make sure the flow dissector works even for the case of XDP (note
that tun supports XDP)
2) use some general fields instead of virtio-net specific fields, e.g.
using a device header instead of the vnet header in the flow keys
structure (a skeleton of today's flow dissector program interface is
sketched below)
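For reference, roughly what a BPF flow dissector program looks like today,
reduced to IPv4/TCP for brevity (section naming per libbpf conventions;
struct bpf_flow_keys currently carries no device/vnet header fields, which
is the gap 2) refers to):

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/tcp.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("flow_dissector")
int dissect(struct __sk_buff *skb)
{
	struct bpf_flow_keys *keys = skb->flow_keys;
	struct iphdr iph;
	struct tcphdr tcph;

	/* nhoff and n_proto are filled in by the kernel before the
	 * program runs. */
	if (keys->n_proto != bpf_htons(ETH_P_IP))
		return BPF_DROP;
	if (bpf_skb_load_bytes(skb, keys->nhoff, &iph, sizeof(iph)))
		return BPF_DROP;
	if (iph.ihl < 5 || iph.protocol != IPPROTO_TCP)
		return BPF_DROP;

	keys->addr_proto = ETH_P_IP;
	keys->ipv4_src   = iph.saddr;
	keys->ipv4_dst   = iph.daddr;
	keys->ip_proto   = iph.protocol;
	keys->thoff      = keys->nhoff + iph.ihl * 4;

	if (bpf_skb_load_bytes(skb, keys->thoff, &tcph, sizeof(tcph)))
		return BPF_DROP;
	keys->sport = tcph.source;
	keys->dport = tcph.dest;
	return BPF_OK;
}

char _license[] SEC("license") = "GPL";

It is attached per network namespace (bpf_prog_attach() with
BPF_FLOW_DISSECTOR), so making it see XDP-delivered packets or any
device/vnet header would need exactly the kind of tweaks discussed above.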
Or if the maintainers are happy with the current method, I don't want to
block it.
Thanks
>