Message-ID: <20161015165521.GB31568@infradead.org>
Date: Sat, 15 Oct 2016 09:55:21 -0700
From: Christoph Hellwig <hch@...radead.org>
To: zhouxianrong@...wei.com
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
akpm@...ux-foundation.org, rientjes@...gle.com, hannes@...xchg.org,
chris@...is-wilson.co.uk, vdavydov.dev@...il.com,
mgorman@...hsingularity.net, joe@...ches.com,
shawn.lin@...k-chips.com, iamjoonsoo.kim@....com,
kuleshovmail@...il.com, zhouxiyu@...wei.com,
zhangshiming5@...wei.com, won.ho.park@...wei.com,
tuxiaobing@...wei.com
Subject: Re: [PATCH vmalloc] reduce purge_lock range and hold time of

On Sat, Oct 15, 2016 at 10:12:48PM +0800, zhouxianrong@...wei.com wrote:
> From: z00281421 <z00281421@...esmail.huawei.com>
>
> I think there is no need to keep the __free_vmap_area() loop inside
> purge_lock: __free_vmap_area() need not be atomic with respect to the
> TLB flush, it only has to run after the flush. Likewise, the whole
> __free_vmap_area() loop could be non-atomic. That would improve
> real-time behavior, because the loop count is sometimes large and the
> loop takes considerable time.
Right, see the previous patch in reply to Joel that drops purge_lock
entirely.
Instead of your open coded batch counter you probably want to add
a cond_resched_lock after the call to __free_vmap_area.
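
As an illustration of that suggestion (a hypothetical sketch, not the
actual patch -- names follow mm/vmalloc.c of that era, and the exact
list/lock details may differ from the tree being patched), the freeing
loop in __purge_vmap_area_lazy() would yield the spinlock between
iterations instead of counting batches by hand:

```c
	/*
	 * Sketch only: free each lazily-purged vmap_area after the TLB
	 * flush, holding vmap_area_lock but letting cond_resched_lock()
	 * drop it and reschedule whenever a higher-priority task is
	 * waiting, then reacquire it.  This bounds lock hold time
	 * without an explicit batch counter.
	 */
	flush_tlb_kernel_range(start, end);

	spin_lock(&vmap_area_lock);
	llist_for_each_entry_safe(va, n_va, valist, purge_list) {
		__free_vmap_area(va);
		cond_resched_lock(&vmap_area_lock);
	}
	spin_unlock(&vmap_area_lock);
```

cond_resched_lock() only releases the lock when a reschedule is
actually pending, so the common case stays a plain loop under the lock.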