Message-ID: <20191025125222.GC4596@optiplex-lnx>
Date: Fri, 25 Oct 2019 08:52:22 -0400
From: Rafael Aquini <aquini@...hat.com>
To: Michal Hocko <mhocko@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...e.de>, Waiman Long <longman@...hat.com>,
Johannes Weiner <hannes@...xchg.org>,
Roman Gushchin <guro@...com>, Vlastimil Babka <vbabka@...e.cz>,
Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
Jann Horn <jannh@...gle.com>, Song Liu <songliubraving@...com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
linux-mm@...ck.org, LKML <linux-kernel@...r.kernel.org>,
Michal Hocko <mhocko@...e.com>
Subject: Re: [PATCH 2/2] mm, vmstat: reduce zone->lock holding time by
/proc/pagetypeinfo
On Fri, Oct 25, 2019 at 09:26:10AM +0200, Michal Hocko wrote:
> From: Michal Hocko <mhocko@...e.com>
>
> pagetypeinfo_showfree_print is called with zone->lock held in irq mode.
> This is not really nice because it blocks both interrupts on that
> cpu and the page allocator. On large machines this might even trigger
> the hard lockup detector.
>
> Considering that pagetypeinfo is a debugging tool, we do not really
> need exact numbers here. The primary reason to look at the output is
> to see how pageblocks are spread among the different migratetypes,
> and a low number of pages is the interesting case, so putting a bound
> on the number of pages counted per free_list is a reasonable tradeoff.
>
> The new output will simply show
> [...]
> Node 6, zone Normal, type Movable >100000 >100000 >100000 >100000 41019 31560 23996 10054 3229 983 648
>
> instead of
> Node 6, zone Normal, type Movable 399568 294127 221558 102119 41019 31560 23996 10054 3229 983 648
>
> The limit has been chosen arbitrarily and can be changed in the
> future should the need arise.
>
> While we are at it, also drop the zone lock after each free_list
> iteration, which helps IRQ and page allocator responsiveness even
> further, as the zone->lock hold time is now always bounded by those
> 100k pages.
>
> Suggested-by: Andrew Morton <akpm@...ux-foundation.org>
> Reviewed-by: Waiman Long <longman@...hat.com>
> Signed-off-by: Michal Hocko <mhocko@...e.com>
> ---
> mm/vmstat.c | 23 ++++++++++++++++++++---
> 1 file changed, 20 insertions(+), 3 deletions(-)
>
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 4e885ecd44d1..ddb89f4e0486 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -1383,12 +1383,29 @@ static void pagetypeinfo_showfree_print(struct seq_file *m,
> unsigned long freecount = 0;
> struct free_area *area;
> struct list_head *curr;
> + bool overflow = false;
>
> area = &(zone->free_area[order]);
>
> - list_for_each(curr, &area->free_list[mtype])
> - freecount++;
> - seq_printf(m, "%6lu ", freecount);
> + list_for_each(curr, &area->free_list[mtype]) {
> + /*
> + * Cap the free_list iteration because it might
> + * be really large and we are under a spinlock
> + * so a long time spent here could trigger a
> + * hard lockup detector. Anyway this is a
> + * debugging tool so knowing there is a handful
> + * of pages in this order should be more than
> + * sufficient
> + */
> + if (++freecount >= 100000) {
> + overflow = true;
> + break;
> + }
> + }
> + seq_printf(m, "%s%6lu ", overflow ? ">" : "", freecount);
> + spin_unlock_irq(&zone->lock);
> + cond_resched();
> + spin_lock_irq(&zone->lock);
> }
> seq_putc(m, '\n');
> }
> --
> 2.20.1
>
Acked-by: Rafael Aquini <aquini@...hat.com>