Message-ID: <20220415161602.GB47428@anparri>
Date: Fri, 15 Apr 2022 18:16:02 +0200
From: Andrea Parri <parri.andrea@...il.com>
To: "Michael Kelley (LINUX)" <mikelley@...rosoft.com>
Cc: KY Srinivasan <kys@...rosoft.com>,
	Haiyang Zhang <haiyangz@...rosoft.com>,
	Stephen Hemminger <sthemmin@...rosoft.com>,
	Wei Liu <wei.liu@...nel.org>,
	Dexuan Cui <decui@...rosoft.com>,
	Stefano Garzarella <sgarzare@...hat.com>,
	David Miller <davem@...emloft.net>,
	Jakub Kicinski <kuba@...nel.org>,
	Paolo Abeni <pabeni@...hat.com>,
	"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
	"virtualization@...ts.linux-foundation.org" <virtualization@...ts.linux-foundation.org>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [RFC PATCH 4/6] hv_sock: Initialize send_buf in hvs_stream_enqueue()

> > > All fields are explicitly initialized, and in the data array, only
> > > the populated bytes are copied to the ring buffer.  There should
> > > not be any uninitialized values sent to the host.  Zeroing the
> > > memory ahead of time certainly provides an extra protection
> > > (particularly against padding bytes, but there can't be any since
> > > the layout of the data is part of the protocol with Hyper-V).
> >
> > Rather than keep checking that...
>
> The extra protection might be obtained by just zeroing the header
> (i.e., the bytes up to the 16 Kbyte data array).  I don't have a
> strong preference either way, so up to you.

A main reason behind this RFC is that I don't have one either.  IIUC,
you're suggesting something like the following (compile-tested only):

diff --git a/net/vmw_vsock/hyperv_transport.c b/net/vmw_vsock/hyperv_transport.c
index 092cadc2c866d..200f12c432863 100644
--- a/net/vmw_vsock/hyperv_transport.c
+++ b/net/vmw_vsock/hyperv_transport.c
@@ -234,7 +234,8 @@ static int __hvs_send_data(struct vmbus_channel *chan,
 {
 	hdr->pkt_type = 1;
 	hdr->data_size = to_write;
-	return vmbus_sendpacket(chan, hdr, sizeof(*hdr) + to_write,
+	return vmbus_sendpacket(chan, hdr,
+			       offsetof(struct hvs_send_buf, data) + to_write,
 			       0, VM_PKT_DATA_INBAND, 0);
 }
 
@@ -658,6 +659,7 @@ static ssize_t hvs_stream_enqueue(struct vsock_sock *vsk, struct msghdr *msg,
 	send_buf = kmalloc(sizeof(*send_buf), GFP_KERNEL);
 	if (!send_buf)
 		return -ENOMEM;
+	memset(send_buf, 0, offsetof(struct hvs_send_buf, data));
 
 	/* Reader(s) could be draining data from the channel as we write.
 	 * Maximize bandwidth, by iterating until the channel is found to be
--

Let me queue this for further testing/review...

Thanks,
  Andrea
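[Editor's note: for readers unfamiliar with the offsetof() idiom in the
patch above, the sketch below mimics the hvs_send_buf layout in plain
userspace C to show which bytes memset(send_buf, 0,
offsetof(struct hvs_send_buf, data)) actually covers.  The field names
follow the structs discussed in the thread, but the 16 KiB data-array
size and the uint32_t header fields are assumptions for illustration,
not quotes of the kernel source.]

/* Userspace sketch, not kernel code: demonstrates zeroing only the
 * header bytes that precede the data array, as proposed in the patch.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Assumed simplification of the kernel's vmpipe_proto_header. */
struct vmpipe_proto_header {
	uint32_t pkt_type;
	uint32_t data_size;
};

/* Assumed simplification of the kernel's hvs_send_buf. */
struct hvs_send_buf {
	struct vmpipe_proto_header hdr;	/* covered by the memset() */
	uint8_t data[16 * 1024];	/* only populated bytes are sent */
};

int main(void)
{
	struct hvs_send_buf *send_buf = malloc(sizeof(*send_buf));

	if (!send_buf)
		return 1;

	/* Zero just the header, i.e. everything before the data array.
	 * Using offsetof() rather than sizeof(send_buf->hdr) would also
	 * cover any padding between hdr and data (none is expected, as
	 * the layout is part of the protocol with Hyper-V).
	 */
	memset(send_buf, 0, offsetof(struct hvs_send_buf, data));

	printf("header bytes zeroed: %zu of %zu total\n",
	       offsetof(struct hvs_send_buf, data), sizeof(*send_buf));

	free(send_buf);
	return 0;
}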