Message-ID: <b4e2acc237e44ffe916135e96ad3ef20@hisilicon.com>
Date: Mon, 8 Feb 2021 02:27:06 +0000
From: "Song Bao Hua (Barry Song)" <song.bao.hua@...ilicon.com>
To: Matthew Wilcox <willy@...radead.org>
CC: "Wangzhou (B)" <wangzhou1@...ilicon.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"linux-api@...r.kernel.org" <linux-api@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Alexander Viro <viro@...iv.linux.org.uk>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
"jgg@...pe.ca" <jgg@...pe.ca>,
"kevin.tian@...el.com" <kevin.tian@...el.com>,
"jean-philippe@...aro.org" <jean-philippe@...aro.org>,
"eric.auger@...hat.com" <eric.auger@...hat.com>,
"Liguozhu (Kenneth)" <liguozhu@...ilicon.com>,
"zhangfei.gao@...aro.org" <zhangfei.gao@...aro.org>,
"chensihang (A)" <chensihang1@...ilicon.com>
Subject: RE: [RFC PATCH v3 1/2] mempinfd: Add new syscall to provide memory
pin
> -----Original Message-----
> From: owner-linux-mm@...ck.org [mailto:owner-linux-mm@...ck.org] On Behalf Of
> Matthew Wilcox
> Sent: Monday, February 8, 2021 2:31 PM
> To: Song Bao Hua (Barry Song) <song.bao.hua@...ilicon.com>
> Cc: Wangzhou (B) <wangzhou1@...ilicon.com>; linux-kernel@...r.kernel.org;
> iommu@...ts.linux-foundation.org; linux-mm@...ck.org;
> linux-arm-kernel@...ts.infradead.org; linux-api@...r.kernel.org; Andrew
> Morton <akpm@...ux-foundation.org>; Alexander Viro <viro@...iv.linux.org.uk>;
> gregkh@...uxfoundation.org; jgg@...pe.ca; kevin.tian@...el.com;
> jean-philippe@...aro.org; eric.auger@...hat.com; Liguozhu (Kenneth)
> <liguozhu@...ilicon.com>; zhangfei.gao@...aro.org; chensihang (A)
> <chensihang1@...ilicon.com>
> Subject: Re: [RFC PATCH v3 1/2] mempinfd: Add new syscall to provide memory
> pin
>
> On Sun, Feb 07, 2021 at 10:24:28PM +0000, Song Bao Hua (Barry Song) wrote:
> > > > In high-performance I/O cases, accelerators might want to perform
> > > > I/O on memory without IO page faults, which can result in dramatically
> > > > increased latency. Current memory-related APIs cannot meet this
> > > > requirement; e.g. mlock can only prevent memory from being swapped
> > > > out to a backing device, while page migration can still trigger IO
> > > > page faults.
> > >
> > > Well ... we have two requirements. The application wants to not take
> > > page faults. The system wants to move the application to a different
> > > NUMA node in order to optimise overall performance. Why should the
> > > application's desires take precedence over the kernel's desires? And why
> > > should it be done this way rather than by the sysadmin using numactl to
> > > lock the application to a particular node?
> >
> > The NUMA balancer is just one of many causes of page migration. Even a
> > single simple alloc_pages() call can cause memory migration within one
> > NUMA node or on a UMA system.
> >
> > The other reasons for page migration include but are not limited to:
> > * memory move due to CMA
> > * memory move due to huge pages creation
> >
> > We can hardly ask users to disable compaction, CMA, and huge pages in
> > the whole system.
>
> You're dodging the question. Should the CMA allocation fail because
> another application is using SVA?
>
> I would say no.
I would say no as well.

While the IOMMU is enabled, CMA has almost only one user: the IOMMU
driver, since other drivers depend on the IOMMU to use non-contiguous
memory even though they still call dma_alloc_coherent().

In the IOMMU driver, dma_alloc_coherent() is called during
initialization and there are no new allocations afterwards, so it
wouldn't have any runtime impact on SVA performance. Even if there are
new allocations, CMA will fall back to the general alloc_pages(), and
IOMMU drivers mostly allocate small chunks of memory for command
queues.
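
As a rough illustration, the fallback looks like this (a minimal
sketch in the spirit of the kernel's contiguous-DMA path, not the
actual kernel code; try_cma_then_buddy() is a made-up helper name):

#include <linux/cma.h>
#include <linux/gfp.h>

/*
 * Sketch only: first try the CMA area, then fall back to the buddy
 * allocator via alloc_pages() when CMA cannot satisfy the request.
 */
static struct page *try_cma_then_buddy(struct cma *cma, size_t count,
				       unsigned int align, gfp_t gfp)
{
	struct page *page = NULL;

	if (cma)
		page = cma_alloc(cma, count, align, false);
	if (!page)	/* CMA failed or absent: buddy fallback */
		page = alloc_pages(gfp, get_order(count << PAGE_SHIFT));
	return page;
}
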
So I would say that general compound pages, huge pages, and especially
transparent huge pages, are bigger concerns than CMA for internal page
migration within one NUMA node.

Unlike CMA, the general alloc_pages() can get memory by moving pages
other than the pinned ones.

And there is no guarantee that we can always bind the memory of SVA
applications to one single NUMA node, so NUMA balancing is still a
concern.
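
For completeness, binding a buffer the way Matthew suggested would
look roughly like the sketch below (a hedged userspace example using
the real mbind(2) API; the buffer size and node number are arbitrary).
Even when this succeeds, compaction and CMA can still move pages
within the node, so it does not eliminate IO page faults:

/* Userspace sketch: bind an example buffer to NUMA node 0 with
 * mbind(2). Needs the libnuma header numaif.h; link with -lnuma. */
#include <numaif.h>
#include <sys/mman.h>
#include <stdio.h>

int main(void)
{
	size_t len = 64UL << 20;		/* 64MB example buffer */
	unsigned long nodemask = 1UL << 0;	/* allow node 0 only */
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return 1;
	/* MPOL_BIND restricts allocation to the nodes in nodemask */
	if (mbind(buf, len, MPOL_BIND, &nodemask,
		  sizeof(nodemask) * 8, 0))
		perror("mbind");
	munmap(buf, len);
	return 0;
}
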
But I agree that we need a way to make CMA succeed while userspace
pages are pinned. Since pinning has gone viral in many drivers, I
assume there is a way to handle this. Otherwise, APIs like
V4L2_MEMORY_USERPTR[1] could make CMA fail, as there is no guarantee
that userspace will allocate unmovable memory, and no guarantee that
the fallback path, alloc_pages(), can succeed when allocating large
memory.

Will investigate more.
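
One existing mechanism in this direction is FOLL_LONGTERM: as far as I
can tell, pin_user_pages_fast() with FOLL_LONGTERM migrates pages out
of the CMA area before taking the long-term pin, so the pin does not
block later CMA allocations. A minimal driver-side sketch (the wrapper
name pin_user_buffer() is made up):

#include <linux/mm.h>

/*
 * Sketch only: take a long-term pin on a user buffer.
 * FOLL_LONGTERM asks GUP to move pages out of CMA before pinning.
 */
static int pin_user_buffer(unsigned long uaddr, int nr_pages,
			   struct page **pages)
{
	int pinned = pin_user_pages_fast(uaddr, nr_pages,
					 FOLL_WRITE | FOLL_LONGTERM,
					 pages);

	if (pinned < 0)
		return pinned;		/* hard error from GUP */
	if (pinned != nr_pages) {	/* partial pin: roll back */
		unpin_user_pages(pages, pinned);
		return -EFAULT;
	}
	return 0;
}
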
> The application using SVA should take the one-time
> performance hit from having its memory moved around.
Sometimes I also feel that SVA is doomed to suffer a performance
impact from page migration. But we are still trying to extend its use
cases to high-performance I/O.
[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/media/v4l2-core/videobuf-dma-sg.c
Thanks
Barry