Message-ID: <CACVXFVMr=JMNHFe1GO=di99eB-6-=_pBkP3QH4x_qtKhdRZMFw@mail.gmail.com>
Date: Tue, 16 Oct 2012 21:47:03 +0800
From: Ming Lei <ming.lei@...onical.com>
To: Minchan Kim <minchan@...nel.org>
Cc: linux-kernel@...r.kernel.org,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
linux-usb@...r.kernel.org, linux-pm@...r.kernel.org,
Alan Stern <stern@...land.harvard.edu>,
Oliver Neukum <oneukum@...e.de>,
Jiri Kosina <jiri.kosina@...e.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mel@....ul.ie>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Michal Hocko <mhocko@...e.cz>, Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
"Rafael J. Wysocki" <rjw@...k.pl>, linux-mm <linux-mm@...ck.org>
Subject: Re: [RFC PATCH 1/3] mm: teach mm by current context info to not do
I/O during memory allocation
On Tue, Oct 16, 2012 at 9:09 PM, Minchan Kim <minchan@...nel.org> wrote:
>
> Good point. You can check it in __zone_reclaim and change gfp_mask of scan_control
> because it's never hot path.
>
>>
>> So could you make sure it is safe to move the branch into
>> __alloc_pages_slowpath()? If so, I will add the check into
>> gfp_to_alloc_flags().
>
> How about this?
It is quite a smart change, :-)
Considering that the other part (sched.h) of the patch needs
updating, I will merge your change into -v1 for further review, with
your Signed-off-by, if you have no objection.
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index d976957..b3607fa 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2614,10 +2614,16 @@ retry_cpuset:
> page = get_page_from_freelist(gfp_mask|__GFP_HARDWALL, nodemask, order,
> zonelist, high_zoneidx, alloc_flags,
> preferred_zone, migratetype);
> - if (unlikely(!page))
> + if (unlikely(!page)) {
> + /*
> + * The resume path can deadlock because the
> + * block device isn't active yet.
> + */
Not only the resume path: the I/O transfer path, or its error handling
path, may deadlock too.
> + if (unlikely(tsk_memalloc_no_io(current)))
> + gfp_mask &= ~GFP_IOFS;
> page = __alloc_pages_slowpath(gfp_mask, order,
> zonelist, high_zoneidx, nodemask,
> preferred_zone, migratetype);
> + }
>
> trace_mm_page_alloc(page, order, gfp_mask, migratetype);
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index b5e45f4..6c2ccdd 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -3290,6 +3290,16 @@ static int __zone_reclaim(struct zone *zone, gfp_t gfp_mask, unsigned int order)
> };
> unsigned long nr_slab_pages0, nr_slab_pages1;
>
> + if (unlikely(tsk_memalloc_no_io(current))) {
> + sc.gfp_mask &= ~GFP_IOFS;
> + shrink.gfp_mask = sc.gfp_mask;
> + /*
> + * We allow reclaiming only clean pages.
> + * It can affect RECLAIM_SWAP and RECLAIM_WRITE modes,
> + * but this is a really rare event and the allocator can
> + * fall back to other zones.
> + */
> + sc.may_writepage = 0;
> + sc.may_swap = 0;
> + }
> +
> cond_resched();
> /*
> * We need to be able to allocate from the reserves for RECLAIM_SWAP
>
Thanks,
--
Ming Lei