Message-ID: <20191023061511.GA754@dhcp22.suse.cz>
Date:   Wed, 23 Oct 2019 08:15:11 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     Andrew Morton <akpm@...ux-foundation.org>,
        Vlastimil Babka <vbabka@...e.cz>, Mel Gorman <mgorman@...e.de>
Cc:     Waiman Long <longman@...hat.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, Johannes Weiner <hannes@...xchg.org>,
        Roman Gushchin <guro@...com>,
        Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
        Jann Horn <jannh@...gle.com>, Song Liu <songliubraving@...com>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Rafael Aquini <aquini@...hat.com>
Subject: Re: [PATCH] mm/vmstat: Reduce zone lock hold time when reading
 /proc/pagetypeinfo

On Tue 22-10-19 14:59:02, Andrew Morton wrote:
> On Tue, 22 Oct 2019 12:21:56 -0400 Waiman Long <longman@...hat.com> wrote:
[...]
> > -	for (mtype = 0; mtype < MIGRATE_TYPES; mtype++) {
> > -		seq_printf(m, "Node %4d, zone %8s, type %12s ",
> > -					pgdat->node_id,
> > -					zone->name,
> > -					migratetype_names[mtype]);
> > -		for (order = 0; order < MAX_ORDER; ++order) {
> > +	lockdep_assert_held(&zone->lock);
> > +	lockdep_assert_irqs_disabled();
> > +
> > +	/*
> > +	 * MIGRATE_MOVABLE is usually the largest one in large memory
> > +	 * systems. We skip iterating that list. Instead, we compute it by
> > +	 * subtracting the total of the rest from free_area->nr_free.
> > +	 */
> > +	for (order = 0; order < MAX_ORDER; ++order) {
> > +		unsigned long nr_total = 0;
> > +		struct free_area *area = &(zone->free_area[order]);
> > +
> > +		for (mtype = 0; mtype < MIGRATE_TYPES; mtype++) {
> >  			unsigned long freecount = 0;
> > -			struct free_area *area;
> >  			struct list_head *curr;
> >  
> > -			area = &(zone->free_area[order]);
> > -
> > +			if (mtype == MIGRATE_MOVABLE)
> > +				continue;
> >  			list_for_each(curr, &area->free_list[mtype])
> >  				freecount++;
> > -			seq_printf(m, "%6lu ", freecount);
> > +			nfree[order][mtype] = freecount;
> > +			nr_total += freecount;
> >  		}
> > +		nfree[order][MIGRATE_MOVABLE] = area->nr_free - nr_total;
> > +
> > +		/*
> > +		 * If we have already iterated more than 64k of list
> > +	 * entries, we might have held the zone lock for too long.
> > +		 * Temporarily release the lock and reschedule before
> > +		 * continuing so that other lock waiters have a chance
> > +		 * to run.
> > +		 */
> > +		if (nr_total > (1 << 16)) {
> > +			spin_unlock_irq(&zone->lock);
> > +			cond_resched();
> > +			spin_lock_irq(&zone->lock);
> > +		}
> > +	}
> > +
> > +	for (mtype = 0; mtype < MIGRATE_TYPES; mtype++) {
> > +		seq_printf(m, "Node %4d, zone %8s, type %12s ",
> > +					pgdat->node_id,
> > +					zone->name,
> > +					migratetype_names[mtype]);
> > +		for (order = 0; order < MAX_ORDER; ++order)
> > +			seq_printf(m, "%6lu ", nfree[order][mtype]);
> >  		seq_putc(m, '\n');
> 
> This is not exactly a thing of beauty :( Presumably there might still
> be situations where the irq-off times remain excessive.

Yes. It is the list_for_each over the free_list that needs the lock, and
that is the actual problem here. Those lists can be really large on a
machine with a _lot_ of memory. That is why I objected to the patch: it
doesn't really address the underlying problem. I would like to hear from
Mel and Vlastimil how they would feel about making free_list fully
migratetype aware (including nr_free).
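
For the record, what I have in mind is something along these lines - per
migratetype counters maintained whenever pages are added to or removed
from the free lists, so that readers never have to walk them at all.
Just a sketch of the idea; the nr_free_mt field and the helper shown
here are made up for illustration:

	struct free_area {
		struct list_head	free_list[MIGRATE_TYPES];
		unsigned long		nr_free;
		/* per-migratetype counts, updated under zone->lock */
		unsigned long		nr_free_mt[MIGRATE_TYPES];
	};

	/* callers already hold zone->lock */
	static inline void add_to_free_area(struct page *page,
			struct free_area *area, int migratetype)
	{
		list_add(&page->lru, &area->free_list[migratetype]);
		area->nr_free++;
		area->nr_free_mt[migratetype]++;
	}

With that, pagetypeinfo_showfree_print() could simply print
area->nr_free_mt[mtype] and the O(nr_free) list walk under the lock
would go away entirely.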

> Why are we actually holding zone->lock so much?  Can we get away with
> holding it across the list_for_each() loop and nothing else?  If so,
> this still isn't a bulletproof fix.  Maybe just terminate the list
> walk if freecount reaches 1024.  Would anyone really care?
> 
> Sigh.  I wonder if anyone really uses this thing for anything
> important.  Can we just remove it all?
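
Terminating the walk would bound the zone->lock hold time, at least.
Something like this (untested, and 1024 is as arbitrary a cut-off as
any other):

	list_for_each(curr, &area->free_list[mtype]) {
		/*
		 * The exact count stops being meaningful past a
		 * certain point; do not let a huge free list pin
		 * the zone lock.
		 */
		if (++freecount >= 1024)
			break;
	}

But as you say, that is not a bulletproof fix either.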

Vlastimil would know much better, but I have seen this being used for
fragmentation related debugging. That should imply that restricting the
file to 0400 would be sufficient, and it would be a quick and easily
backportable fix for the most pressing immediate problem.
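
Concretely, the 0400 part should be a one-liner in init_mm_internals()
(assuming the file keeps its current seq_file setup in mm/vmstat.c):

-	proc_create_seq("pagetypeinfo", 0444, NULL, &pagetypeinfo_op);
+	proc_create_seq("pagetypeinfo", 0400, NULL, &pagetypeinfo_op);
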
-- 
Michal Hocko
SUSE Labs
