Date: Mon, 20 Jan 2020 16:43:53 +0800
From: Jason Wang <jasowang@...hat.com>
To: Shahaf Shuler <shahafs@...lanox.com>, Rob Miller <rob.miller@...adcom.com>
Cc: "Michael S. Tsirkin" <mst@...hat.com>, "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>, "kvm@...r.kernel.org" <kvm@...r.kernel.org>, "virtualization@...ts.linux-foundation.org" <virtualization@...ts.linux-foundation.org>, Netdev <netdev@...r.kernel.org>, "Bie, Tiwei" <tiwei.bie@...el.com>, Jason Gunthorpe <jgg@...lanox.com>, "maxime.coquelin@...hat.com" <maxime.coquelin@...hat.com>, "Liang, Cunming" <cunming.liang@...el.com>, "Wang, Zhihong" <zhihong.wang@...el.com>, "Wang, Xiao W" <xiao.w.wang@...el.com>, "haotian.wang@...ive.com" <haotian.wang@...ive.com>, "Zhu, Lingshan" <lingshan.zhu@...el.com>, "eperezma@...hat.com" <eperezma@...hat.com>, "lulu@...hat.com" <lulu@...hat.com>, Parav Pandit <parav@...lanox.com>, "Tian, Kevin" <kevin.tian@...el.com>, "stefanha@...hat.com" <stefanha@...hat.com>, "rdunlap@...radead.org" <rdunlap@...radead.org>, "hch@...radead.org" <hch@...radead.org>, Ariel Adam <aadam@...hat.com>, "jakub.kicinski@...ronome.com" <jakub.kicinski@...ronome.com>, Jiri Pirko <jiri@...lanox.com>, "hanand@...inx.com" <hanand@...inx.com>, "mhabets@...arflare.com" <mhabets@...arflare.com>
Subject: Re: [PATCH 3/5] vDPA: introduce vDPA bus

On 2020/1/19 5:07 PM, Shahaf Shuler wrote:
> Friday, January 17, 2020 4:13 PM, Rob Miller:
> Subject: Re: [PATCH 3/5] vDPA: introduce vDPA bus
>>> On 2020/1/17 8:13 PM, Michael S. Tsirkin wrote:
>>>> On Thu, Jan 16, 2020 at 08:42:29PM +0800, Jason Wang wrote:
> [...]
>
>>>> + * @set_map:    Set device memory mapping, optional
>>>> + *              and only needed for devices that use
>>>> + *              device-specific DMA translation
>>>> + *              (on-chip IOMMU)
>>>> + *              @vdev: vdpa device
>>>> + *              @iotlb: vhost memory mapping to be
>>>> + *              used by the vDPA
>>>> + *              Returns integer: success (0) or error (< 0)
>>> OK so any change just swaps in a completely new mapping?
>>> Wouldn't this make minor changes such as memory hotplug
>>> quite expensive?
> What is the concern? Traversing the rb tree, or fully replacing the on-chip IOMMU translations?
> If the latter, then I think we can do such an optimization at the driver level (i.e. update only the diff between the two mappings).

This is similar to the design of the platform IOMMU part of vhost-vdpa, where we decided to send only the diffs to the platform IOMMU. If it's OK to do that in the driver, we can replace set_map with an incremental API like map()/unmap(). The driver would then need to maintain the rbtree itself.

> If the former, then I think memory hotplug is a heavy flow regardless. Do you think the extra cycles for the tree traversal will be visible in any way?

I think if the driver can pause DMA while the new mapping is being set up, it should be fine.
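A rough sketch of that full-rebuild flavor, assuming a vhost_iotlb-style interval tree with first/next iteration helpers (as in the vhost iotlb code); the my_vdpa_dev structure and the my_hw_*() helpers are purely hypothetical placeholders, not part of this series:

/*
 * Sketch only: one possible shape of a vendor set_map that rebuilds the
 * on-chip IOMMU from the full iotlb, pausing DMA around the update.
 * my_vdpa_dev and the my_hw_*() helpers are hypothetical.
 */
#include <linux/vdpa.h>
#include <linux/vhost_iotlb.h>

struct my_vdpa_dev {
	struct vdpa_device vdpa;	/* embedded vDPA device */
	/* vendor-specific state ... */
};

/* Hypothetical vendor hardware helpers. */
void my_hw_dma_pause(struct my_vdpa_dev *dev);
void my_hw_dma_resume(struct my_vdpa_dev *dev);
void my_hw_iommu_reset(struct my_vdpa_dev *dev);
int my_hw_iommu_map(struct my_vdpa_dev *dev, u64 iova, u64 size,
		    u64 pa, u32 perm);

static int my_vdpa_set_map(struct vdpa_device *vdev, struct vhost_iotlb *iotlb)
{
	struct my_vdpa_dev *dev = container_of(vdev, struct my_vdpa_dev, vdpa);
	struct vhost_iotlb_map *map;
	int ret;

	/* Quiesce the device, then program the whole tree from scratch. */
	my_hw_dma_pause(dev);
	my_hw_iommu_reset(dev);

	for (map = vhost_iotlb_itree_first(iotlb, 0, ULLONG_MAX); map;
	     map = vhost_iotlb_itree_next(map, 0, ULLONG_MAX)) {
		ret = my_hw_iommu_map(dev, map->start,
				      map->last - map->start + 1,
				      map->addr, map->perm);
		if (ret)
			return ret;	/* DMA stays paused; caller recovers */
	}

	my_hw_dma_resume(dev);
	return 0;
}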
>
>
>> My understanding is that the incremental updating of the on-chip IOMMU
>> may degrade performance, so vendor vDPA drivers may want to know
>> all the mappings at once.
> Yes, exactly. For the Mellanox case, for instance, many optimizations can be performed on a given memory layout.
>
>> Technically, we can keep the incremental API
>> here and let the vendor vDPA drivers record the full mapping
>> internally, which may slightly increase the complexity of the vendor driver.
> What will be the trigger for the driver to know it has received the last mapping of the series and can now push it to the on-chip IOMMU?

For GPA->HVA(HPA) mappings, we can have a flag for this. But for GIOVA->HVA(HPA) mappings, which can be changed by the guest, it looks to me like there's no concept of a "last mapping". I guess in that case the mappings need to be set up from scratch. This could be expensive, but considering that most applications use static mappings (e.g. DPDK in the guest), it should be OK.

Thanks

>
>> We need more inputs from vendors here.
>>
>> Thanks
>
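For comparison, if the incremental route were taken instead, the ops could look roughly like the following sketch (made-up names, not something proposed in this series); the vendor driver would then keep its own interval tree to rebuild the full layout whenever the hardware wants it in one shot:

/* Sketch of per-range callbacks as an alternative to set_map.
 * Names are illustrative only.
 */
struct vdpa_incremental_ops_sketch {
	/* Map [iova, iova + size) to host address pa with permissions perm. */
	int (*dma_map)(struct vdpa_device *vdev, u64 iova, u64 size,
		       u64 pa, u32 perm);
	/* Tear down a previously mapped range. */
	int (*dma_unmap)(struct vdpa_device *vdev, u64 iova, u64 size);
};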