Message-ID: <YEEC4EjxKB5+zl6t@google.com>
Date:   Thu, 4 Mar 2021 07:55:12 -0800
From:   Minchan Kim <minchan@...nel.org>
To:     David Hildenbrand <david@...hat.com>
Cc:     Michal Hocko <mhocko@...e.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        linux-mm <linux-mm@...ck.org>,
        LKML <linux-kernel@...r.kernel.org>, joaodias@...gle.com,
        surenb@...gle.com, cgoldswo@...eaurora.org, willy@...radead.org,
        vbabka@...e.cz, linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH 1/2] mm: disable LRU pagevec during the migration
 temporarily

On Thu, Mar 04, 2021 at 09:07:28AM +0100, David Hildenbrand wrote:
> On 03.03.21 21:23, Minchan Kim wrote:
> > On Wed, Mar 03, 2021 at 01:49:36PM +0100, Michal Hocko wrote:
> > > On Tue 02-03-21 13:09:48, Minchan Kim wrote:
> > > > The LRU pagevec holds a refcount on pages until the pagevec is
> > > > drained. It can prevent migration since the refcount of the page
> > > > is greater than what the migration logic expects. To mitigate the
> > > > issue, callers of migrate_pages drain the LRU pagevec via
> > > > migrate_prep or lru_add_drain_all before calling migrate_pages.
> > > > 
> > > > However, that is not enough because pages can enter the pagevec
> > > > after the draining call and stay there, continuing to prevent page
> > > > migration. Since some callers of migrate_pages have retry logic
> > > > with LRU draining, the page would migrate on the next trial, but
> > > > this is still fragile in that it doesn't close the fundamental race
> > > > between pages entering the pagevec and the migration, so the
> > > > migration failure can ultimately cause contiguous memory allocation
> > > > failure.
> > > > 
> > > > To close the race, this patch disables the LRU caches (i.e., the
> > > > pagevec) while migration is ongoing, until migration is done.
> > > > 
> > > > Since the failure is really hard to reproduce, I measured how many
> > > > times migrate_pages retried in force mode with the debug code below.
> > > > 
> > > > int migrate_pages(struct list_head *from, new_page_t get_new_page,
> > > > 			..
> > > > 			..
> > > > 
> > > > if (rc && reason == MR_CONTIG_RANGE && pass > 2) {
> > > >         printk(KERN_ERR "pfn 0x%lx reason %d\n", page_to_pfn(page), rc);
> > > >         dump_page(page, "fail to migrate");
> > > > }
> > > > 
> > > > The test repeatedly launched Android apps with CMA allocation
> > > > running in the background every five seconds. The total CMA
> > > > allocation count was about 500 during the test. With this patch,
> > > > the dump_page count was reduced from 400 to 30.
> > > 
> > > Have you seen any improvement on the CMA allocation success rate?
> > 
> > Unfortunately, the CMA allocation failure rate is really hard to
> > reproduce with a reasonable margin of error under a real workload.
> > That's why I measured the soft metric instead of direct CMA failures
> > under the real workload (I don't want to build some ad-hoc artificial
> > benchmark and keep tuning system knobs until it shows an extremely
> > exaggerated result just to demonstrate the patch's effect).
> > 
> > Please say so if you believe this work is pointless without stable
> > data from a reproducible scenario. I am happy to drop it.
> 
> Do you have *some* application that triggers such a high retry count?

I have no idea which specific application could trigger the high retry
count, since the LRUs (the VM LRU pagevec and the buffer_head LRU) are
common paths that anything can use and any process can trigger.
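To make that concrete, here is a simplified sketch (not the exact kernel
code) of how a page gets parked in a per-CPU LRU pagevec with an extra
reference, which is the reference that migration later trips over:

/* Simplified sketch only; details differ from the real lru_cache_add(). */
void lru_cache_add(struct page *page)
{
	struct pagevec *pvec;

	get_page(page);                       /* extra ref held by the pagevec */
	local_lock(&lru_pvecs.lock);
	pvec = this_cpu_ptr(&lru_pvecs.lru_add);
	if (!pagevec_add(pvec, page))         /* flushed only when full ...    */
		__pagevec_lru_add(pvec);      /* ... or by lru_add_drain_all() */
	local_unlock(&lru_pvecs.lock);
}

Until that per-CPU flush happens, migrate_pages() sees an elevated
page_count() and fails to migrate the page.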

> 
> I'd love to run it along with virtio-mem and report the actual allocation
> success rate / necessary retries. That could give an indication of how
> helpful your work would be.

If it could give a stable report, that would be very helpful.

> 
> Anything that improves the reliability of alloc_contig_range() is of high
> interest to me. If it doesn't increase the reliability but merely makes some
> internal improvements (fewer retries), it might still be valuable, but not
> that important.

Fewer retries are good, but I'd like to put more effort into closing the
race I mentioned completely, since CMA allocation failures are critical
to the user experience in our use cases.
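
For reference, the direction of the patch looks roughly like the following
(an illustrative sketch only; the names and details are not the final
implementation): migration callers bump a global disable count, drain the
pagevecs once, and while the count is non-zero new pages bypass the
per-CPU pagevec entirely, so no extra reference gets parked there.

/* Illustrative sketch only; names and details are not the final patch. */
static atomic_t lru_disable_count = ATOMIC_INIT(0);

void lru_cache_disable(void)
{
	atomic_inc(&lru_disable_count);
	lru_add_drain_all();            /* flush pages already in the pagevecs */
}

void lru_cache_enable(void)
{
	atomic_dec(&lru_disable_count);
}

void lru_cache_add(struct page *page)
{
	if (atomic_read(&lru_disable_count)) {
		/* bypass the per-CPU pagevec: no extra reference is parked */
		__lru_cache_add_direct(page);   /* hypothetical helper */
		return;
	}
	__lru_cache_add_pagevec(page);          /* the normal pagevec path */
}

The migration callers (migrate_prep / the alloc_contig_range path) would
then call lru_cache_disable() before migrating and lru_cache_enable()
afterwards, so pages that show up while migration is running go straight
to the LRU and can still be isolated.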
