Message-ID: <53B50791.50208@lge.com>
Date: Thu, 03 Jul 2014 16:34:41 +0900
From: Gioh Kim <gioh.kim@....com>
To: Andrew Morton <akpm@...ux-foundation.org>,
Laura Abbott <lauraa@...eaurora.org>
CC: Michal Nazarewicz <mina86@...a86.com>,
Marek Szyprowski <m.szyprowski@...sung.com>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Mel Gorman <mgorman@...e.de>,
이건호 <gunho.lee@....com>,
Hugh Dickins <hughd@...gle.com>
Subject: Re: [RFC] CMA page migration failure due to buffers on bh_lru
Hi, Laura,
I have replaced evict_bh_lrus(bh) with invalidate_bh_lrus() and it is working fine.
How about submitting a new patch that uses invalidate_bh_lrus()?
I would appreciate it.
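
As a rough sketch of that direction, assuming the call is placed just before the CMA
allocation attempt (the helper name cma_alloc_range and the exact call site are
illustrative only, not the submitted patch):

#include <linux/buffer_head.h>	/* invalidate_bh_lrus() */
#include <linux/gfp.h>		/* alloc_contig_range(), MIGRATE_CMA */

/* Hypothetical helper; the real hook point would sit in the CMA allocator. */
static int cma_alloc_range(unsigned long start, unsigned long end)
{
	/*
	 * Drop every per-CPU buffer-head LRU reference once, up front,
	 * so pages pinned only by the bh LRUs become migratable, instead
	 * of evicting buffer heads page by page from reclaim.
	 */
	invalidate_bh_lrus();

	return alloc_contig_range(start, end, MIGRATE_CMA);
}

The point, as Andrew notes below, is that one whole-machine LRU shoot-down per CMA
attempt is cheap, whereas sending IPIs to every CPU from try_to_free_buffers() for
each page would not be.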
On 2014-07-02 2:46 PM, Andrew Morton wrote:
> On Mon, 30 Jun 2014 19:02:45 -0700 Laura Abbott <lauraa@...eaurora.org> wrote:
>
>> On 6/30/2014 6:07 PM, Gioh Kim wrote:
>>> Hi, Laura.
>>>
>>> I have a question.
>>>
>>> Doesn't __evict_bh_lru() need bh_lru_lock()?
>>> get_cpu_var() already calls preempt_disable(), which prevents preemption by other threads.
>>> But get_cpu_var() cannot protect against IRQ context, for example a page fault.
>>> I think that if a page fault occurs and a file is read in that context, it can change the per-CPU LRU.
>>>
>>> Is my concern correct?
>>>
>>>
>>
>> __evict_bh_lru() is called via on_each_cpu_cond(), which I believe disables IRQs.
>> I based the code on the existing invalidate_bh_lru(), which does not take bh_lru_lock()
>> either. It's possible I missed something, though.
>
> I fear that running on_each_cpu() within try_to_free_buffers() is going
> to be horridly expensive in some cases.
>
> Maybe CMA can use invalidate_bh_lrus() to shoot down everything before
> trying the allocation attempt. That should increase the success rate
> greatly and doesn't burden page reclaim. The bh LRU isn't terribly
> important from a performance point of view, so emptying it occasionally
> won't hurt.
>
>
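For context on the locking question quoted above, here is a minimal sketch of the
per-CPU buffer-head LRU and the shoot-down callback, paraphrased from fs/buffer.c of
that era (names and BH_LRU_SIZE may differ between kernel versions; this is
illustrative, not the patch under review):

#include <linux/buffer_head.h>
#include <linux/percpu.h>

#define BH_LRU_SIZE	8

struct bh_lru {
	struct buffer_head *bhs[BH_LRU_SIZE];
};

static DEFINE_PER_CPU(struct bh_lru, bh_lrus) = {{ NULL }};

/* Callback run on each CPU via on_each_cpu_cond(). */
static void invalidate_bh_lru(void *arg)
{
	/*
	 * get_cpu_var() only disables preemption.  No extra bh_lru_lock()
	 * is taken here because on_each_cpu_cond() delivers this callback
	 * with interrupts disabled on each CPU, which is the guarantee
	 * Laura refers to above.
	 */
	struct bh_lru *b = &get_cpu_var(bh_lrus);
	int i;

	for (i = 0; i < BH_LRU_SIZE; i++) {
		brelse(b->bhs[i]);	/* drop the LRU's reference, if any */
		b->bhs[i] = NULL;
	}
	put_cpu_var(bh_lrus);
}

invalidate_bh_lrus() then roughly does
on_each_cpu_cond(has_bh_in_lru, invalidate_bh_lru, NULL, 1, GFP_KERNEL), i.e. the
one-shot, whole-machine shoot-down Andrew suggests running before the allocation
attempt rather than from page reclaim.
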
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/