Message-ID: <ZDuYbUatimaNsELh@bullseye>
Date: Sun, 16 Apr 2023 06:40:45 +0000
From: Bobby Eshleman <bobbyeshleman@...il.com>
To: Stefan Hajnoczi <stefanha@...hat.com>
Cc: Cong Wang <xiyou.wangcong@...il.com>, Cong Wang <cong.wang@...edance.com>,
	Bobby Eshleman <bobby.eshleman@...edance.com>, kvm@...r.kernel.org,
	netdev@...r.kernel.org, virtualization@...ts.linux-foundation.org
Subject: Re: [Patch net] vsock: improve tap delivery accuracy

On Wed, May 03, 2023 at 09:39:13AM -0400, Stefan Hajnoczi wrote:
> On Sun, Apr 16, 2023 at 04:49:00AM +0000, Bobby Eshleman wrote:
> > On Tue, May 02, 2023 at 04:14:18PM -0400, Stefan Hajnoczi wrote:
> > > On Tue, May 02, 2023 at 10:44:04AM -0700, Cong Wang wrote:
> > > > From: Cong Wang <cong.wang@...edance.com>
> > > > 
> > > > When virtqueue_add_sgs() fails, the skb is put back to send queue,
> > > > we should not deliver the copy to tap device in this case. So we
> > > > need to move virtio_transport_deliver_tap_pkt() down after all
> > > > possible failures.
> > > > 
> > > > Fixes: 82dfb540aeb2 ("VSOCK: Add virtio vsock vsockmon hooks")
> > > > Cc: Stefan Hajnoczi <stefanha@...hat.com>
> > > > Cc: Stefano Garzarella <sgarzare@...hat.com>
> > > > Cc: Bobby Eshleman <bobby.eshleman@...edance.com>
> > > > Signed-off-by: Cong Wang <cong.wang@...edance.com>
> > > > ---
> > > >  net/vmw_vsock/virtio_transport.c | 5 ++---
> > > >  1 file changed, 2 insertions(+), 3 deletions(-)
> > > > 
> > > > diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
> > > > index e95df847176b..055678628c07 100644
> > > > --- a/net/vmw_vsock/virtio_transport.c
> > > > +++ b/net/vmw_vsock/virtio_transport.c
> > > > @@ -109,9 +109,6 @@ virtio_transport_send_pkt_work(struct work_struct *work)
> > > >  		if (!skb)
> > > >  			break;
> > > > 
> > > > -		virtio_transport_deliver_tap_pkt(skb);
> > > > -		reply = virtio_vsock_skb_reply(skb);
> > > > -
> > > >  		sg_init_one(&hdr, virtio_vsock_hdr(skb), sizeof(*virtio_vsock_hdr(skb)));
> > > >  		sgs[out_sg++] = &hdr;
> > > >  		if (skb->len > 0) {
> > > > @@ -128,6 +125,8 @@ virtio_transport_send_pkt_work(struct work_struct *work)
> > > >  			break;
> > > >  		}
> > > > 
> > > > +		virtio_transport_deliver_tap_pkt(skb);
> > > > +		reply = virtio_vsock_skb_reply(skb);
> > > 
> > > I don't remember the reason for the ordering, but I'm pretty sure it was
> > > deliberate. Probably because the payload buffers could be freed as soon
> > > as virtqueue_add_sgs() is called.
> > > 
> > > If that's no longer true with Bobby's skbuff code, then maybe it's safe
> > > to monitor packets after they have been sent.
> > > 
> > > Stefan
> > 
> > Hey Stefan,
> > 
> > Unfortunately, skbuff doesn't change that behavior.
> > 
> > If I understand correctly, the problem flow you are describing
> > would be something like this:
> > 
> > Thread 0                                     Thread 1
> > guest:virtqueue_add_sgs()[@send_pkt_work]
> >                                              host:vhost_vq_get_desc()[@handle_tx_kick]
> >                                              host:vhost_add_used()
> >                                              host:vhost_signal()
> >                                              guest:virtqueue_get_buf()[@tx_work]
> >                                              guest:consume_skb()
> > guest:deliver_tap_pkt()[@send_pkt_work]
> > ^ use-after-free
> > 
> > Which I guess is possible because the receiver can consume the new
> > scatterlist during the processing kicked off for a previous batch?
> > (doesn't have to wait for the subsequent kick)
> 
> Yes, drivers must assume that the device completes requests before
> virtqueue_add_sgs() returns. For example, the device is allowed to poll
> the virtqueue memory and may see the new descriptors immediately.
> I haven't audited the current vsock code path to determine whether it's
> possible to reach consume_skb() before deliver_tap_pkt() returns, so I
> can't say whether it's safe or not.

I see, thanks for the clarification.

Best,
Bobby
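The interleaving Bobby drew above can be modeled in a few dozen lines of plain userspace C. The sketch below is only an illustration of the ordering being discussed, not the kernel code: every fake_* name is a hypothetical stand-in (fake_add_sgs for virtqueue_add_sgs(), fake_deliver_tap_pkt for virtio_transport_deliver_tap_pkt(), the device thread for the host side plus the guest tx_work), and the demo merely flags the buffer as consumed where the real tx_work would call consume_skb() and free it.

	/*
	 * Userspace model of the race window: once the descriptor is posted,
	 * the "device" may complete it before the sender gets around to
	 * mirroring the packet to the tap.  Not kernel code; names are made up.
	 */
	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>
	#include <string.h>

	struct fake_skb {
		char payload[64];
	};

	static struct fake_skb pkt;     /* buffer being transmitted */
	static atomic_int posted;       /* descriptor visible to the device */
	static atomic_int consumed;     /* device done; tx_work would free it */

	/* Stand-in for virtqueue_add_sgs(): publish the buffer to the device. */
	static void fake_add_sgs(void)
	{
		atomic_store(&posted, 1);
	}

	/* Stand-in for the device plus tx_work: may complete immediately. */
	static void *device_thread(void *arg)
	{
		(void)arg;
		while (!atomic_load(&posted))
			;               /* the device may poll the queue */
		/* The real guest tx_work would call consume_skb() here. */
		atomic_store(&consumed, 1);
		return NULL;
	}

	/* Stand-in for virtio_transport_deliver_tap_pkt(). */
	static void fake_deliver_tap_pkt(void)
	{
		if (atomic_load(&consumed)) {
			printf("buffer already completed: touching it now would be a use-after-free\n");
			return;
		}
		printf("tap saw: %s\n", pkt.payload);
	}

	int main(void)
	{
		pthread_t dev;

		strcpy(pkt.payload, "vsock packet");
		pthread_create(&dev, NULL, device_thread, NULL);

		fake_add_sgs();             /* device owns the buffer from here on */
		fake_deliver_tap_pkt();     /* patched ordering: after posting */

		pthread_join(dev, NULL);
		return 0;
	}

Built with "cc -pthread", either outcome can be observed depending on scheduling, which is exactly why the safety of delivering the tap copy after virtqueue_add_sgs() hinges on when the skb is freed, the open question at the end of the thread.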