Message-ID: <f4b2d7db8a1047d9952cbbfaf9e27824@hisilicon.com>
Date: Sun, 7 Feb 2021 22:24:28 +0000
From: "Song Bao Hua (Barry Song)" <song.bao.hua@...ilicon.com>
To: Matthew Wilcox <willy@...radead.org>,
"Wangzhou (B)" <wangzhou1@...ilicon.com>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"linux-api@...r.kernel.org" <linux-api@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Alexander Viro <viro@...iv.linux.org.uk>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
"jgg@...pe.ca" <jgg@...pe.ca>,
"kevin.tian@...el.com" <kevin.tian@...el.com>,
"jean-philippe@...aro.org" <jean-philippe@...aro.org>,
"eric.auger@...hat.com" <eric.auger@...hat.com>,
"Liguozhu (Kenneth)" <liguozhu@...ilicon.com>,
"zhangfei.gao@...aro.org" <zhangfei.gao@...aro.org>,
"chensihang (A)" <chensihang1@...ilicon.com>
Subject: RE: [RFC PATCH v3 1/2] mempinfd: Add new syscall to provide memory
pin
> -----Original Message-----
> From: Matthew Wilcox [mailto:willy@...radead.org]
> Sent: Monday, February 8, 2021 10:34 AM
> To: Wangzhou (B) <wangzhou1@...ilicon.com>
> Cc: linux-kernel@...r.kernel.org; iommu@...ts.linux-foundation.org;
> linux-mm@...ck.org; linux-arm-kernel@...ts.infradead.org;
> linux-api@...r.kernel.org; Andrew Morton <akpm@...ux-foundation.org>;
> Alexander Viro <viro@...iv.linux.org.uk>; gregkh@...uxfoundation.org; Song
> Bao Hua (Barry Song) <song.bao.hua@...ilicon.com>; jgg@...pe.ca;
> kevin.tian@...el.com; jean-philippe@...aro.org; eric.auger@...hat.com;
> Liguozhu (Kenneth) <liguozhu@...ilicon.com>; zhangfei.gao@...aro.org;
> chensihang (A) <chensihang1@...ilicon.com>
> Subject: Re: [RFC PATCH v3 1/2] mempinfd: Add new syscall to provide memory
> pin
>
> On Sun, Feb 07, 2021 at 04:18:03PM +0800, Zhou Wang wrote:
> > SVA (shared virtual addressing) offers a way for a device to safely share
> > a process's virtual address space, which makes writing user-space device
> > drivers more convenient. However, I/O page faults may happen during DMA
> > operations, and since the latency of an I/O page fault is relatively high,
> > DMA performance suffers severely whenever such faults occur. In the long
> > run, DMA performance will not be stable.
> >
> > In high-performance I/O cases, accelerators may want to perform I/O on
> > memory without I/O page faults, which can otherwise increase latency
> > dramatically. Current memory-related APIs cannot meet this requirement:
> > mlock() only prevents memory from being swapped out to a backing device,
> > and page migration can still trigger I/O page faults.
>
> Well ... we have two requirements. The application wants to not take
> page faults. The system wants to move the application to a different
> NUMA node in order to optimise overall performance. Why should the
> application's desires take precedence over the kernel's desires? And why
> should it be done this way rather than by the sysadmin using numactl to
> lock the application to a particular node?

The NUMA balancer is just one of many causes of page migration. Even a
plain alloc_pages() can trigger page migration, via compaction, within a
single NUMA node or on a UMA system.
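
As a quick illustration (not from this patch set), the userspace sketch
below reads a page's PFN from /proc/self/pagemap before and after forcing
compaction: even an mlock()ed page may end up with a new PFN. It assumes
root (PFNs read as 0 otherwise) and CONFIG_COMPACTION, and error handling
is omitted for brevity:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* PFN of the page backing addr, from /proc/self/pagemap (bits 0-54 of
 * each 64-bit entry; reads as 0 without CAP_SYS_ADMIN). */
static uint64_t pfn_of(void *addr)
{
	uint64_t entry = 0;
	long psize = sysconf(_SC_PAGESIZE);
	int fd = open("/proc/self/pagemap", O_RDONLY);

	pread(fd, &entry, sizeof(entry),
	      (off_t)((uintptr_t)addr / psize) * sizeof(entry));
	close(fd);
	return entry & ((1ULL << 55) - 1);
}

int main(void)
{
	size_t sz = sysconf(_SC_PAGESIZE);
	char *buf = mmap(NULL, sz, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	memset(buf, 0x5a, sz);		/* fault the page in */
	mlock(buf, sz);			/* mlock does NOT pin the physical page */
	printf("PFN before compaction: 0x%llx\n",
	       (unsigned long long)pfn_of(buf));

	/* Ask the kernel to compact all zones; this may migrate our page. */
	system("echo 1 > /proc/sys/vm/compact_memory");

	printf("PFN after compaction:  0x%llx\n",
	       (unsigned long long)pfn_of(buf));
	return 0;
}

A PFN change here is exactly the kind of migration that turns into an I/O
page fault once a device is accessing the buffer through SVA.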

Other causes of page migration include, but are not limited to:
* page moves due to CMA allocations
* page moves due to huge page creation
We can hardly ask users to disable compaction, CMA and huge pages for the
whole system; see the pinning sketch below for the alternative.
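
For reference, the kernel side of such a pin would look roughly like the
sketch below. The helper name is made up and error paths are trimmed, but
pin_user_pages_fast() with FOLL_LONGTERM is the existing mechanism that
also migrates CMA pages away before taking the long-term reference:

#include <linux/mm.h>
#include <linux/slab.h>

/*
 * Illustrative only: take a long-term pin on a user buffer so that
 * compaction, CMA allocation and NUMA balancing can no longer move
 * its pages out from under a device.
 */
static int pin_user_buffer(unsigned long uaddr, size_t len,
			   struct page ***pages_out, int *npages_out)
{
	int npages = DIV_ROUND_UP(offset_in_page(uaddr) + len, PAGE_SIZE);
	struct page **pages;
	int pinned;

	pages = kvmalloc_array(npages, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return -ENOMEM;

	pinned = pin_user_pages_fast(uaddr, npages,
				     FOLL_WRITE | FOLL_LONGTERM, pages);
	if (pinned != npages) {
		if (pinned > 0)
			unpin_user_pages(pages, pinned);
		kvfree(pages);
		return pinned < 0 ? pinned : -EFAULT;
	}

	*pages_out = pages;
	*npages_out = npages;
	return 0;
}

/* Release later with unpin_user_pages(pages, npages) and kvfree(pages). */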

On the other hand, numactl does not always bind memory to a single NUMA
node: when an application needs many CPUs, it may be bound to more than
one memory node (e.g. numactl --cpunodebind=0,1 --membind=0,1 <app>), and
pages can still migrate between the bound nodes.
Thanks
Barry