Message-ID: <528A7D36.5020500@sr71.net>
Date: Mon, 18 Nov 2013 12:48:54 -0800
From: Dave Hansen <dave@...1.net>
To: Naoya Horiguchi <n-horiguchi@...jp.nec.com>
CC: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
dave.jiang@...el.com, akpm@...ux-foundation.org, dhillf@...il.com,
Mel Gorman <mgorman@...e.de>
Subject: Re: [PATCH] mm: call cond_resched() per MAX_ORDER_NR_PAGES pages copy
On 11/18/2013 12:20 PM, Naoya Horiguchi wrote:
>> > Really, though, a lot of things seem to have MAX_ORDER set up so that
>> > it's at 256MB or 512MB. That's an awful lot to do between rescheds.
> Yes.
>
> BTW, I found that we have the same problem in other functions like
> copy_user_gigantic_page, copy_user_huge_page, and maybe clear_gigantic_page.
> So we had better handle them too.
Is there a problem you're trying to solve here? The common case of the
cond_resched() call boils down to a read of a percpu variable, which will
surely be in the L1 cache after the first run around the loop. In other
words, it's about as cheap an operation as we're going to get.
Why bother trying to "optimize" it?
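
For context, the pattern under discussion looks roughly like the sketch
below (an illustrative, simplified stand-in, not the actual mm/memory.c
code; copy_user_highpage() and cond_resched() are real kernel helpers,
but copy_huge_page_sketch() is a made-up name and the loop glosses over
the mem_map walking needed for gigantic pages):

#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/sched.h>

/*
 * Illustrative sketch of the copy loop being discussed, simplified from
 * the kind of code in copy_user_huge_page()/copy_user_gigantic_page().
 */
static void copy_huge_page_sketch(struct page *dst, struct page *src,
				  unsigned long addr,
				  struct vm_area_struct *vma,
				  unsigned int pages_per_huge_page)
{
	unsigned int i;

	for (i = 0; i < pages_per_huge_page; i++) {
		/*
		 * Calling cond_resched() on every iteration is the point
		 * made above: in the common case it is only a check of
		 * per-cpu/per-thread scheduler state, which stays hot in
		 * the L1 cache after the first pass, so rate-limiting it
		 * to once per MAX_ORDER_NR_PAGES buys essentially nothing.
		 */
		cond_resched();
		copy_user_highpage(dst + i, src + i,
				   addr + i * PAGE_SIZE, vma);
	}
}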