Message-ID: <CAKgT0Ud==Z1BAF-ja-ZtGR5Dxj+7dE3YEpB-D-Wk4A9U1Yooew@mail.gmail.com>
Date: Tue, 11 Feb 2020 17:19:15 -0800
From: Alexander Duyck <alexander.duyck@...il.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Alexander Duyck <alexander.h.duyck@...ux.intel.com>,
kvm list <kvm@...r.kernel.org>,
David Hildenbrand <david@...hat.com>,
"Michael S. Tsirkin" <mst@...hat.com>,
LKML <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>,
Mel Gorman <mgorman@...hsingularity.net>,
Yang Zhang <yang.zhang.wz@...il.com>,
Pankaj Gupta <pagupta@...hat.com>,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
Nitesh Narayan Lal <nitesh@...hat.com>,
Rik van Riel <riel@...riel.com>,
Matthew Wilcox <willy@...radead.org>, lcapitulino@...hat.com,
Dave Hansen <dave.hansen@...el.com>,
"Wang, Wei W" <wei.w.wang@...el.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Dan Williams <dan.j.williams@...el.com>,
Michal Hocko <mhocko@...nel.org>,
Vlastimil Babka <vbabka@...e.cz>,
Oscar Salvador <osalvador@...e.de>
Subject: Re: [PATCH v17 0/9] mm / virtio: Provide support for free page reporting
On Tue, Feb 11, 2020 at 4:19 PM Andrew Morton <akpm@...ux-foundation.org> wrote:
>
> On Tue, 11 Feb 2020 15:55:31 -0800 Alexander Duyck <alexander.h.duyck@...ux.intel.com> wrote:
>
> > On the host I just have to monitor /proc/meminfo and I can see the
> > difference. I get the following results on the host; in the enabled case
> > it takes about 30 seconds to settle into the final state since I
> > only report pages a bit at a time:
> > Baseline/Applied
> > MemTotal: 131963012 kB
> > MemFree: 95189740 kB
> >
> > Enabled:
> > MemTotal: 131963012 kB
> > MemFree: 126459472 kB
> >
> > This is what I was referring to with the comment above. Back around the
> > first RFC I was running a test that consisted of bringing up enough VMs
> > to create a bit of memory overcommit and then having the VMs in turn
> > run memhog. As I recall, the difference between the two cases was a
> > couple of minutes to run through all the VMs: memhog would take 40+
> > seconds in a VM that had to pull from swap, versus only 5 to 7 seconds
> > in the VMs when they were all running the page hinting.
> >
> > I had referenced it here in the RFC:
> > https://lore.kernel.org/lkml/20190204181118.12095.38300.stgit@localhost.localdomain/
> >
> > I have been verifying that the memory is actually getting freed, but I
> > didn't feel the test added much value, so I haven't included it in the
> > cover page for a while. The time can vary widely and depends on things
> > like the disk type used for the host swap: my SSD is likely faster than
> > spinning rust, but may not be as fast as other SSDs on the market.
> > Since the disk speed can play such a huge role, I wasn't comfortable
> > posting numbers when the benefits could vary so widely.
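For reference, the "settling" mentioned above can be watched with a small script along these lines. This is purely illustrative; the poll interval and the stability window are arbitrary choices of mine and are not part of the patch set:

```python
# Poll MemFree in /proc/meminfo until it stops changing between
# consecutive polls, to watch free page reporting settle on the host.
import time

def read_memfree_kb(path="/proc/meminfo"):
    """Return the MemFree value (in kB) parsed from a meminfo-format file."""
    with open(path) as f:
        for line in f:
            if line.startswith("MemFree:"):
                # Line looks like: "MemFree:        95189740 kB"
                return int(line.split()[1])
    raise RuntimeError("MemFree not found in " + path)

def wait_until_settled(interval=5, stable_polls=3):
    """Block until MemFree is unchanged for `stable_polls` polls in a row."""
    last, stable = read_memfree_kb(), 0
    while stable < stable_polls:
        time.sleep(interval)
        cur = read_memfree_kb()
        stable = stable + 1 if cur == last else 0
        last = cur
    return last
```

Run on the host, wait_until_settled() simply returns once MemFree has stopped moving, which is roughly the point the numbers above were taken at.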
>
> OK, thanks. I'll add the patches to the mm pile. The new
> mm/page_reporting.c is unreviewed afaict, so I guess you own that for
> now ;)
I will see what I can do to get some additional review of those
patches. There has been some review, but I rewrote that block based on
the suggestions, splitting it out over several patches to account for
the gains from the changes in patches 7 and 8.
> It would be very nice to get some feedback from testers asserting "yes,
> this really helped my workload" but I understand this sort of testing
> is hard to obtain at this stage.
Without the QEMU patches applied there isn't much this patch set can
do on its own, so that is another piece I have to work on. That is yet
another reason to make sure it does no harm when it is not enabled.
So far the thing that surprised me the most is that somebody from
Huawei is already working to add device pass-through support on top of
it:
https://lore.kernel.org/lkml/1578408399-20092-1-git-send-email-weiqi4@huawei.com/
Thanks.
- Alex