Message-ID: <20181123080242.GK157308@Turing-Arch-b>
Date: Fri, 23 Nov 2018 16:02:42 +0800
From: Kenneth Lee <liguozhu@...ilicon.com>
To: Jason Gunthorpe <jgg@...pe.ca>
CC: Leon Romanovsky <leon@...nel.org>,
Kenneth Lee <nek.in.cn@...il.com>,
"Tim Sell" <timothy.sell@...sys.com>, <linux-doc@...r.kernel.org>,
"Alexander Shishkin" <alexander.shishkin@...ux.intel.com>,
Zaibo Xu <xuzaibo@...wei.com>, <zhangfei.gao@...mail.com>,
<linuxarm@...wei.com>, <haojian.zhuang@...aro.org>,
Christoph Lameter <cl@...ux.com>,
Hao Fang <fanghao11@...wei.com>,
Gavin Schenk <g.schenk@...elmann.de>,
"RDMA mailing list" <linux-rdma@...r.kernel.org>,
Zhou Wang <wangzhou1@...ilicon.com>,
"Doug Ledford" <dledford@...hat.com>,
Uwe Kleine-König
<u.kleine-koenig@...gutronix.de>,
David Kershner <david.kershner@...sys.com>,
Johan Hovold <johan@...nel.org>,
Cyrille Pitchen <cyrille.pitchen@...e-electrons.com>,
Sagar Dharia <sdharia@...eaurora.org>,
Jens Axboe <axboe@...nel.dk>, <guodong.xu@...aro.org>,
linux-netdev <netdev@...r.kernel.org>,
Randy Dunlap <rdunlap@...radead.org>,
<linux-kernel@...r.kernel.org>, Vinod Koul <vkoul@...nel.org>,
<linux-crypto@...r.kernel.org>,
Philippe Ombredanne <pombredanne@...b.com>,
Sanyog Kale <sanyog.r.kale@...el.com>,
"David S. Miller" <davem@...emloft.net>,
<linux-accelerators@...ts.ozlabs.org>
Subject: Re: [RFCv3 PATCH 1/6] uacce: Add documents for WarpDrive/uacce

On Wed, Nov 21, 2018 at 07:58:40PM -0700, Jason Gunthorpe wrote:
>
> On Wed, Nov 21, 2018 at 02:08:05PM +0800, Kenneth Lee wrote:
>
> > > But considering Jean's SVA stuff seems based on mmu notifiers, I have
> > > a hard time believing that it has any different behavior from RDMA's
> > > ODP, and if it does have different behavior, then it is probably just
> > > a bug in the ODP implementation.
> >
> > As Jean has explained, his solution is based on page table sharing. I think ODP
> > should also consider this new feature.
>
> Shared page tables would require the HW to walk the page table format
> of the CPU directly, not sure how that would be possible for ODP?
>
> Presumably the implementation for ARM relies on the IOMMU hardware
> doing this?
Yes, that is the idea. And since Jean is merging the AMD and Intel solutions
together, I assume they can do the same. This is also the reason I want to solve
my problem on top of the IOMMU directly. But anyway, let me try to see if I can
merge the logic with ODP.
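
For reference, here is roughly what "attaching the mm" means on the driver
side, a minimal sketch following the bind interface proposed in Jean's SVA
series (the exact signature and flags are still under discussion, and
struct uacce_queue is only a placeholder of mine):

#include <linux/iommu.h>
#include <linux/sched.h>

int uacce_queue_bind(struct uacce_queue *q, struct device *dev)
{
	int pasid, ret;

	/*
	 * Share current->mm with the device: the IOMMU (the SMMU on
	 * ARM) walks the CPU page table format directly, so nothing
	 * is pinned and no ODP-style re-mapping is needed.
	 */
	ret = iommu_sva_bind_device(dev, current->mm, &pasid, 0, q);
	if (ret)
		return ret;

	q->pasid = pasid;
	return 0;
}
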
>
> > > > > If all your driver needs is to mmap some PCI bar space, route
> > > > > interrupts and do DMA mapping then mediated VFIO is probably a good
> > > > > choice.
> > > >
> > > > Yes. That is what is done in our RFCv1/v2. But we accepted Jerome's opinion and
> > > > tried not to add complexity to the mm subsystem.
> > >
> > > Why would a mediated VFIO driver touch the mm subsystem? Sounds like
> > > you don't have a VFIO driver if it needs to do stuff like that...
> >
> > VFIO has no ODP-like solution, and if we want to solve the fork problem, we have
> > to make some changes to the iommu and the fork procedure. Further, VFIO treats every
> > queue as an independent device. This creates a lot of trouble for resource
> > management. For example, you need a manager process to withdraw unused
> > devices, and you need to let the user process know the PASID of the queue, and
> > so on.
>
> Well, I would think you'd add SVA support to the VFIO driver as a
> generic capability - it seems pretty useful for any VFIO user as it
> avoids all the kernel upcalls to do memory pinning and DMA address
> translation.
It is already part of Jean's patchset, and that's why I built my solution on
VFIO in the first place. But I think the concept of SVA and PASID is not
compatible with the original VFIO concept space. You would not share your whole
address space with a device in a virtual machine manager, would you? And
if you can manage to have a separate mdev for your virtual machine, why bother
setting a PASID on it? The answer to those problems, I think, will be Intel's
Scalable IO Virtualization. For an accelerator, the requirement is simple: get
a handle to the device, attach the process's mm to the handle by sharing the
process's page table with the IOMMU, indexed by PASID, and start the
communication...
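
From the process's point of view, the whole flow would look something like
this (user-space sketch only; the device node name and the ioctl are
illustrative, not a settled ABI):

#include <err.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

#define UACCE_CMD_START_Q _IO('W', 0)	/* placeholder value */

int main(void)
{
	int fd = open("/dev/uacce-0", O_RDWR);	/* get a handle */
	if (fd < 0)
		err(1, "open");

	/*
	 * Bind this process's mm to the queue: the kernel shares the
	 * page table with the IOMMU under a PASID, so the device sees
	 * the same virtual addresses the CPU does.
	 */
	if (ioctl(fd, UACCE_CMD_START_Q) < 0)
		err(1, "start queue");

	/* Map the doorbell/queue region and start the communication. */
	void *db = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			MAP_SHARED, fd, 0);
	if (db == MAP_FAILED)
		err(1, "mmap");

	/* Plain malloc()'d memory can be handed to the device. */
	char *buf = malloc(4096);
	strcpy(buf, "hello");
	/* ... post buf's address to the queue through db ... */
	return 0;
}
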
>
> Once the VFIO driver knows about this as a generic capability then the
> device it exposes to userspace would use CPU addresses instead of DMA
> addresses.
>
> The question is if your driver needs much more than the device
> agnostic generic services VFIO provides.
>
> I'm not sure what you have in mind with resource management.. It is
> hard to revoke resources from userspace, unless you are doing
> kernel syscalls, but then why do all this?
Say I have 1024 queues in my accelerator. I can get one by opening the device,
and it is attached to the fd. If the process exits by any means, the queue is
returned when the fd is released. But if it is an mdev, it will still be there,
and someone has to tell the allocator that it is available again. This is not
easy to design in user space.
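
This is what the char device model gives for free: tie the queue's lifetime
to the fd. A sketch (uacce_queue_alloc()/uacce_queue_free() are made-up
helpers standing in for whatever the allocator ends up being):

#include <linux/fs.h>
#include <linux/module.h>

static int uacce_open(struct inode *inode, struct file *filep)
{
	struct uacce_queue *q = uacce_queue_alloc();	/* one of the 1024 */

	if (!q)
		return -EBUSY;
	filep->private_data = q;
	return 0;
}

static int uacce_release(struct inode *inode, struct file *filep)
{
	/*
	 * Runs when the last reference to the fd is dropped, even if
	 * the process crashed: the queue goes back to the allocator
	 * with no user-space manager involved.
	 */
	uacce_queue_free(filep->private_data);
	return 0;
}

static const struct file_operations uacce_fops = {
	.owner   = THIS_MODULE,
	.open    = uacce_open,
	.release = uacce_release,
};
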
>
> Jason