Message-ID: <20220110222725.paug7n5oznicceck@oracle.com>
Date: Mon, 10 Jan 2022 17:27:25 -0500
From: Daniel Jordan <daniel.m.jordan@...cle.com>
To: Jason Gunthorpe <jgg@...dia.com>
Cc: Alexander Duyck <alexanderduyck@...com>,
Alex Williamson <alex.williamson@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Ben Segall <bsegall@...gle.com>,
Cornelia Huck <cohuck@...hat.com>,
Dan Williams <dan.j.williams@...el.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Herbert Xu <herbert@...dor.apana.org.au>,
Ingo Molnar <mingo@...hat.com>,
Johannes Weiner <hannes@...xchg.org>,
Josh Triplett <josh@...htriplett.org>,
Michal Hocko <mhocko@...e.com>, Nico Pache <npache@...hat.com>,
Pasha Tatashin <pasha.tatashin@...een.com>,
Peter Zijlstra <peterz@...radead.org>,
Steffen Klassert <steffen.klassert@...unet.com>,
Steve Sistare <steven.sistare@...cle.com>,
Tejun Heo <tj@...nel.org>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
linux-mm@...ck.org, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-crypto@...r.kernel.org
Subject: Re: [RFC 00/16] padata, vfio, sched: Multithreaded VFIO page pinning
On Fri, Jan 07, 2022 at 01:12:48PM -0400, Jason Gunthorpe wrote:
> > The cuts aren't arbitrary, padata controls where they happen.
>
> Well, they are, you picked a PMD alignment if I recall.
>
> If hugetlbfs is using PUD pages then this is the wrong alignment,
> right?
>
> I suppose it could compute the cuts differently to try to maximize
> alignment at the cutpoints..
Yes, this is what I was suggesting: increase the alignment.
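Something like this is what I had in mind (rough sketch only; the
variable names are made up, and PUD_SIZE as the target alignment is an
assumption rather than what padata does today):

	/*
	 * Round each worker's chunk up to the largest page size we
	 * might see, so e.g. a PUD-sized hugetlbfs page is never
	 * split across two threads.
	 */
	unsigned long chunk_size = ALIGN(DIV_ROUND_UP(total_len, nr_threads),
					 PUD_SIZE);
	unsigned long chunk_end  = min(chunk_start + chunk_size, range_end);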
> > size. If cuts in per-thread ranges are an issue, I *think* userspace
> > has the same problem?
>
> Userspace should know what it has done, if it is using hugetlbfs it
> knows how big the pages are.
Right, what I mean is that both user and kernel threads can end up
splitting a physically contiguous range of pages, however large the
page size.
> > Pinning itself, the only thing being optimized, improves 8.5x in that
> > experiment, bringing the time from 1.8 seconds to .2 seconds. That's a
> > significant savings IMHO
>
> And here is where I suspect we'd get similar results from folio's
> based on the unpin performance uplift we already saw.
>
> As long as PUP doesn't have to COW its work is largely proportional to
> the number of struct pages it processes, so we should be expecting an
> upper limit of 512x gains on the PUP alone with foliation.
>
> This is in line with what we saw with the prior unpin work.
"in line with what we saw" Not following. The unpin work had two
optimizations, I think, 4.5x and 3.5x which together give 16x. Why is
that in line with the potential gains from pup?
Overall I see what you're saying, just curious what you meant here.
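(For what it's worth, I read the 512x upper bound as the per-PMD page
count with 4K base pages, i.e.

	2M PMD / 4K base page = 512 struct pages per folio

assuming no PUD-sized folios in the mix.)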
> The other optimization that would help a lot here is to use
> pin_user_pages_fast(), something like:
>
>         if (current->mm != remote_mm)
>                 mmap_lock()
>                 pin_user_pages_remote(..)
>                 mmap_unlock()
>         else
>                 pin_user_pages_fast(..)
>
> But you can't get that gain with kernel-side parallelization, right?
>
> (I haven't dug into if gup_fast relies on current due to IPIs or not,
> maybe pin_user_pages_remote_fast can exist?)
Yeah, not sure. I'll have a look.
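In case it helps, here's roughly how I'd expect that dispatch to look
against the current signatures (just a sketch; mm/vaddr/npages/pages
are stand-ins from the vfio pinning path and the FOLL_* flags are
illustrative):

	if (mm != current->mm) {
		/* slow path: another process's mm, take its mmap lock */
		mmap_read_lock(mm);
		ret = pin_user_pages_remote(mm, vaddr, npages,
					    FOLL_WRITE | FOLL_LONGTERM,
					    pages, NULL, NULL);
		mmap_read_unlock(mm);
	} else {
		/* fast path: our own mm, no mmap lock needed */
		ret = pin_user_pages_fast(vaddr, npages,
					  FOLL_WRITE | FOLL_LONGTERM, pages);
	}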
> > But, I'm skeptical that singlethreaded optimization alone will remove
> > the bottleneck with the enormous memory sizes we use.
>
> I think you can get the 1.2x at least.
>
> > scaling up the times from the unpin results with both optimizations (the
> > IB specific one too, which would need to be done for vfio),
>
> Oh, I did the IB one already in iommufd...
Ahead of the curve!
> > a 1T guest would still take almost 2 seconds to pin/unpin.
>
> Single threaded?
Yes.
> Isn't that excellent
Depends on who you ask, I guess.
> and completely dwarfed by the populate overhead?
Well yes, but here we all are optimizing gup anyway :-)
> > If people feel strongly that we should try optimizing other ways first,
> > ok, but I think these are complementary approaches. I'm coming at this
> > problem this way because this is fundamentally a memory-intensive
> > operation where more bandwidth can help, and there are other kernel
> > paths we and others want this infrastructure for.
>
> At least here I would like to see an apples to apples at least before
> we have this complexity. Full user threading vs kernel auto threading.
>
> Saying multithreaded kernel gets 8x over single threaded userspace is
> nice, but sort of irrelevant because we can have multithreaded
> userspace, right?
One of my assumptions was that doing this in the kernel would benefit
all vfio users, avoiding duplicating the same sort of multithreading
logic across applications, including ones that don't prefault. Calling
it irrelevant seems a bit strong. Parallelizing in either layer has its
upsides and downsides.
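Just so we're talking about the same thing, the userspace threading I
have in mind is basically the below, repeated in every application
that wants it (illustrative only; the slicing and thread count are
arbitrary):

	#include <pthread.h>
	#include <stddef.h>

	/* One slice of the to-be-pinned region for one prefault thread. */
	struct slice {
		char *base;
		size_t len;
		size_t page_size;
	};

	static void *prefault_slice(void *arg)
	{
		struct slice *s = arg;

		/*
		 * Touch each page with a write so it's faulted in
		 * before the VFIO_IOMMU_MAP_DMA ioctl pins it.
		 */
		for (size_t off = 0; off < s->len; off += s->page_size)
			((volatile char *)s->base)[off] = s->base[off];
		return NULL;
	}

The caller splits the region into N slices, pthread_create()s one
prefault_slice() per slice, and joins them before mapping. That
per-application copy of the logic is the duplication I was trying to
avoid.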
My assumption going into this series was that multithreading VFIO page
pinning in the kernel was a viable way forward, given the positive
feedback I got from the VFIO maintainer the last time I posted this
(admittedly a while ago). Since then I've been focused on the other
parts of this series rather than on what's been happening in the mm.
Anyway, your arguments are reasonable, so I'll go take a look at some of
these optimizations and see where I get.
Daniel