Message-ID: <0868d209d7424942a46d1238674cf75d@hisilicon.com>
Date: Tue, 9 Feb 2021 03:01:42 +0000
From: "Song Bao Hua (Barry Song)" <song.bao.hua@...ilicon.com>
To: Jason Gunthorpe <jgg@...pe.ca>
CC: David Hildenbrand <david@...hat.com>,
"Wangzhou (B)" <wangzhou1@...ilicon.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"linux-api@...r.kernel.org" <linux-api@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Alexander Viro <viro@...iv.linux.org.uk>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
"kevin.tian@...el.com" <kevin.tian@...el.com>,
"jean-philippe@...aro.org" <jean-philippe@...aro.org>,
"eric.auger@...hat.com" <eric.auger@...hat.com>,
"Liguozhu (Kenneth)" <liguozhu@...ilicon.com>,
"zhangfei.gao@...aro.org" <zhangfei.gao@...aro.org>,
"chensihang (A)" <chensihang1@...ilicon.com>
Subject: RE: [RFC PATCH v3 1/2] mempinfd: Add new syscall to provide memory
pin
> -----Original Message-----
> From: Jason Gunthorpe [mailto:jgg@...pe.ca]
> Sent: Tuesday, February 9, 2021 10:30 AM
> To: Song Bao Hua (Barry Song) <song.bao.hua@...ilicon.com>
> Cc: David Hildenbrand <david@...hat.com>; Wangzhou (B)
> <wangzhou1@...ilicon.com>; linux-kernel@...r.kernel.org;
> iommu@...ts.linux-foundation.org; linux-mm@...ck.org;
> linux-arm-kernel@...ts.infradead.org; linux-api@...r.kernel.org; Andrew
> Morton <akpm@...ux-foundation.org>; Alexander Viro <viro@...iv.linux.org.uk>;
> gregkh@...uxfoundation.org; kevin.tian@...el.com; jean-philippe@...aro.org;
> eric.auger@...hat.com; Liguozhu (Kenneth) <liguozhu@...ilicon.com>;
> zhangfei.gao@...aro.org; chensihang (A) <chensihang1@...ilicon.com>
> Subject: Re: [RFC PATCH v3 1/2] mempinfd: Add new syscall to provide memory
> pin
>
> On Mon, Feb 08, 2021 at 08:35:31PM +0000, Song Bao Hua (Barry Song) wrote:
> >
> >
> > > From: Jason Gunthorpe [mailto:jgg@...pe.ca]
> > > Sent: Tuesday, February 9, 2021 7:34 AM
> > > To: David Hildenbrand <david@...hat.com>
> > > Cc: Wangzhou (B) <wangzhou1@...ilicon.com>; linux-kernel@...r.kernel.org;
> > > iommu@...ts.linux-foundation.org; linux-mm@...ck.org;
> > > linux-arm-kernel@...ts.infradead.org; linux-api@...r.kernel.org; Andrew
> > > Morton <akpm@...ux-foundation.org>; Alexander Viro
> > > <viro@...iv.linux.org.uk>;
> > > gregkh@...uxfoundation.org; Song Bao Hua (Barry Song)
> > > <song.bao.hua@...ilicon.com>; kevin.tian@...el.com;
> > > jean-philippe@...aro.org; eric.auger@...hat.com; Liguozhu (Kenneth)
> > > <liguozhu@...ilicon.com>; zhangfei.gao@...aro.org; chensihang (A)
> > > <chensihang1@...ilicon.com>
> > > Subject: Re: [RFC PATCH v3 1/2] mempinfd: Add new syscall to provide memory
> > > pin
> > >
> > > On Mon, Feb 08, 2021 at 09:14:28AM +0100, David Hildenbrand wrote:
> > >
> > > > People are constantly struggling with the effects of long term pinnings
> > > > under user space control, like we already have with vfio and RDMA.
> > > >
> > > > And here we are, adding yet another, easier way to mess with core MM
> > > > in the same way. This feels like a step backwards to me.
> > >
> > > Yes, this seems like a very poor candidate to be a system call in this
> > > format. Much too narrow, poorly specified, and with possible security
> > > implications in allowing any process whatsoever to pin memory.
> > >
> > > I keep encouraging people to explore a standard shared SVA interface
> > > that can cover all these topics (and no, uaccel is not that
> > > interface), that seems much more natural.
> > >
> > > I still haven't seen an explanation of why DMA is so special here;
> > > migration and so forth jitter the CPU too, and environments that care
> > > about jitter have to turn this stuff off.
> >
> > This paper has a good explanation:
> > https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7482091
> >
> > mainly because a page fault can go directly to the CPU, and we have
> > many CPUs. But an IO page fault takes a different path and thus has
> > much higher latency, 3-80x slower than a CPU page fault:
> > events in hardware queue -> interrupts -> CPU processing page fault
> > -> return events to iommu/device -> continue I/O.
>
> The justification for this was migration scenarios, and migration is
> short. Only if you take a fault on what you are migrating does it
> slow down the CPU.
I agree this can slow down the CPU, but not as much as an IO page
fault does. On the other hand, isn't it precisely the benefit of
hardware accelerators that they offer lower and more stable latency
for zip/encryption than the CPU?
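
To make the comparison concrete, here is a minimal user-space sketch
(illustrative only, not from the patch) that measures the CPU-side
minor-fault cost. An IOPF pays for the same fault resolution plus the
event-queue -> interrupt -> response-queue round trip quoted above:

#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
        size_t pg = sysconf(_SC_PAGESIZE);
        size_t n = 100000, len = pg * n;
        struct timespec t0, t1;

        /* Fresh anonymous mapping: every first touch below takes one
         * minor fault straight into the local CPU's fault handler. */
        volatile char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
                return 1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t i = 0; i < len; i += pg)
                buf[i] = 1;             /* one minor fault per page */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9
                    + (t1.tv_nsec - t0.tv_nsec);
        printf("~%.0f ns per CPU minor fault\n", ns / n);

        munmap((void *)buf, len);
        return 0;
}
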
>
> Are you also working with HW where the IOMMU becomes invalidated after
> a migration and doesn't reload?
>
> ie not true SVA but the sort of emulated SVA we see in a lot of
> places?
Yes. It is true SVA, not emulated SVA.
>
> It would be much better to work improve that to have closer sync with the
> CPU page table than to use pinning.
Absolutely, I agree that improving IOPF and bringing its performance
up to that of CPU page faults is the best way. But it will take a
long time to optimize both the hardware and the software. While
waiting for them to mature, some mechanism that minimizes IOPF should
probably be used to fill the gap.
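
As a pure user-space stopgap today, an application can at least
prefault and mlock() the buffer before handing it to the device.
A minimal sketch (illustrative only; this is not the proposed
mempinfd interface):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
        size_t len = 64 << 20;          /* e.g. a 64 MiB staging buffer */

        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        /* mlock() keeps the pages resident, so the device will not
         * take an IOPF for a swapped-out page. */
        if (mlock(buf, len)) {
                perror("mlock");
                return 1;
        }

        /* Write-touch every page so the device does not hit
         * copy-on-write faults either. */
        memset(buf, 0, len);

        /* ... submit buf to the SVA-capable accelerator here ...
         * Note: mlock() does NOT prevent migration/compaction from
         * moving these pages, which still triggers IOPF - exactly the
         * hole the pinning discussion in this thread is about. */

        munlock(buf, len);
        munmap(buf, len);
        return 0;
}
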
>
> Jason
Thanks
Barry