Message-ID: <2527b4ac8df14fa1b427bef65dace719@hisilicon.com>
Date: Tue, 9 Feb 2021 22:22:47 +0000
From: "Song Bao Hua (Barry Song)" <song.bao.hua@...ilicon.com>
To: Jason Gunthorpe <jgg@...pe.ca>
CC: David Hildenbrand <david@...hat.com>,
"Wangzhou (B)" <wangzhou1@...ilicon.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"iommu@...ts.linux-foundation.org" <iommu@...ts.linux-foundation.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
"linux-api@...r.kernel.org" <linux-api@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Alexander Viro <viro@...iv.linux.org.uk>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
"kevin.tian@...el.com" <kevin.tian@...el.com>,
"jean-philippe@...aro.org" <jean-philippe@...aro.org>,
"eric.auger@...hat.com" <eric.auger@...hat.com>,
"Liguozhu (Kenneth)" <liguozhu@...ilicon.com>,
"zhangfei.gao@...aro.org" <zhangfei.gao@...aro.org>,
"chensihang (A)" <chensihang1@...ilicon.com>
Subject: RE: [RFC PATCH v3 1/2] mempinfd: Add new syscall to provide memory
pin
> -----Original Message-----
> From: Jason Gunthorpe [mailto:jgg@...pe.ca]
> Sent: Wednesday, February 10, 2021 2:54 AM
> To: Song Bao Hua (Barry Song) <song.bao.hua@...ilicon.com>
> Cc: David Hildenbrand <david@...hat.com>; Wangzhou (B)
> <wangzhou1@...ilicon.com>; linux-kernel@...r.kernel.org;
> iommu@...ts.linux-foundation.org; linux-mm@...ck.org;
> linux-arm-kernel@...ts.infradead.org; linux-api@...r.kernel.org; Andrew
> Morton <akpm@...ux-foundation.org>; Alexander Viro <viro@...iv.linux.org.uk>;
> gregkh@...uxfoundation.org; kevin.tian@...el.com; jean-philippe@...aro.org;
> eric.auger@...hat.com; Liguozhu (Kenneth) <liguozhu@...ilicon.com>;
> zhangfei.gao@...aro.org; chensihang (A) <chensihang1@...ilicon.com>
> Subject: Re: [RFC PATCH v3 1/2] mempinfd: Add new syscall to provide memory
> pin
>
> On Tue, Feb 09, 2021 at 03:01:42AM +0000, Song Bao Hua (Barry Song) wrote:
>
> > On the other hand, wouldn't it be the benefit of hardware accelerators
> > to have a lower and more stable latency zip/encryption than CPU?
>
> No, I don't think so.

Fortunately or unfortunately, I think my team does have this target:
achieving lower-latency and more stable zip/encryption by using
accelerators. Otherwise, they would simply use the CPU directly, since
the accelerators would offer no advantage.
>
> If this is an important problem then it should apply equally to CPU
> and IO jitter.
>
> Honestly I find the idea that occasional migration jitters CPU and DMA
> to not be very compelling. Such specialized applications should
> allocate special pages to avoid this, not adding an API to be able to
> lock down any page

That is exactly what we have done: provide a hugeTLB pool so that
applications can allocate memory from this pool (a userspace sketch of
using the pool follows the diagram).
+--------------------------------------------+
|                                            |
|      applications using accelerators       |
+--------------------------------------------+
    alloc from pool          free to pool
           |                       |
           |                       |
           |                       |
           |                       |
+----------+-----------------------+---------+
|                                            |
|            HugeTLB memory pool             |
|                                            |
+--------------------------------------------+
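
To make the picture above concrete, here is a minimal userspace sketch
of taking buffers from the hugeTLB pool (not part of the patch; the 2MB
huge page size, the buffer length and the assumption that huge pages
have been reserved via /proc/sys/vm/nr_hugepages are mine):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

/* 8 x 2MB huge pages -- sizes are only for the example */
#define BUF_LEN (8 * 2 * 1024 * 1024UL)

int main(void)
{
	/* "alloc from pool": backed by pre-reserved huge pages */
	void *buf = mmap(NULL, BUF_LEN, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB,
			 -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap(MAP_HUGETLB)");
		return 1;
	}

	/* ... hand buf to the accelerator via SVA ... */

	/* "free to pool" */
	munmap(buf, BUF_LEN);
	return 0;
}
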
The problem is that SVA declares we can use any memory of a process
to do I/O, and in real scenarios we are unable to modify most
applications to make them use the pool. So we are looking for a
generic extension for applications such as Nginx and Ceph.
I am also thinking about leveraging vm.compact_unevictable_allowed,
which David suggested, and extending it, for example to permit users
to disable compaction and NUMA balancing on the unevictable pages of
an SVA process, which might be a smaller change. A sketch of that
direction follows below.
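
As an illustration of how the existing pieces could already be combined
from userspace (the NUMA balancing part above is only a proposal, not
something this snippet implements), the application can mlock() its I/O
buffers so they sit on the unevictable LRU, and the admin can set
vm.compact_unevictable_allowed to 0 so compaction skips them:

/*
 * admin side (existing sysctl):
 *   echo 0 > /proc/sys/vm/compact_unevictable_allowed
 */
#include <stdio.h>
#include <sys/mman.h>

static int keep_sva_buffer_stable(void *buf, size_t len)
{
	/*
	 * mlock() faults the pages in and puts them on the unevictable
	 * LRU; with compact_unevictable_allowed == 0, compaction then
	 * skips them, so it no longer migrates the physical pages
	 * behind buf (other migration sources, e.g. NUMA balancing,
	 * are what the proposed extension is about).
	 */
	if (mlock(buf, len)) {
		perror("mlock");
		return -1;
	}
	return 0;
}
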
>
> Jason
Thanks
Barry