Message-ID: <2f11576a0910062048j1967de28ve33a134df6d4ab9c@mail.gmail.com>
Date: Wed, 7 Oct 2009 12:48:31 +0900
From: KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>
To: Ying Han <yinghan@...gle.com>
Cc: LKML <linux-kernel@...r.kernel.org>, linux-mm <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Oleg Nesterov <oleg@...hat.com>,
Christoph Lameter <cl@...ux-foundation.org>
Subject: Re: [PATCH 2/2] mlock use lru_add_drain_all_async()
Hi
> Hello KOSAKI-san,
>
> A few questions on lru_add_drain_all_async(). If I understand
> correctly, the reason we have lru_add_drain_all() in the mlock()
> call is to isolate mlocked pages onto the separate LRU in case they
> are sitting in a pagevec.
>
> I also understand the RT use case you put in the patch description.
> My question is: do we have a race after applying the patch? For
> example, if the RT task has not given up the CPU by the time mlock()
> returns, pages are left in the pagevec and not drained back to the
> LRU list. Is that a problem?
This patch doesn't introduce a new race; the current code already has
the following one (sketched below):
1. call mlock()
2. lru_add_drain_all()
3. another CPU grabs the page into its pagevec
4. actual PG_mlocked processing
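
To see why step 3 defeats step 2, here is roughly what the pagevec
path does; a simplified sketch of the 2.6.31-era lru_cache_add() in
mm/swap.c, not the literal source:

static DEFINE_PER_CPU(struct pagevec, lru_add_pvec_sketch);

void sketch_lru_cache_add(struct page *page)
{
	struct pagevec *pvec = &get_cpu_var(lru_add_pvec_sketch);

	page_cache_get(page);
	/*
	 * The page sits in this CPU's private pagevec, invisible to
	 * LRU isolation, until the pagevec fills up or this CPU runs
	 * lru_add_drain().  A drain finished on CPU A cannot help a
	 * page that CPU B grabs afterwards.
	 */
	if (!pagevec_add(pvec, page))
		__pagevec_lru_add(pvec);	/* flush to the LRU */
	put_cpu_var(lru_add_pvec_sketch);
}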
I'd like to explain why this code works anyway. Linux keeps VM_LOCKED
in the vma and PG_mlocked in the page; if we fail to turn on
PG_mlocked, we can recover it later, at vmscan time, from VM_LOCKED.
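
The recovery path looks roughly like this; a minimal sketch modeled on
the 2.6.31-era VM_LOCKED handling in try_to_unmap_one() (mm/rmap.c),
with locking and details omitted:

static int sketch_try_to_unmap_one(struct page *page,
				   struct vm_area_struct *vma)
{
	if (vma->vm_flags & VM_LOCKED) {
		/*
		 * mlock() raced and missed this page; set the bit now
		 * and refuse to reclaim.  SWAP_MLOCK tells the caller
		 * to cull the page to the unevictable LRU.
		 */
		if (!TestSetPageMlocked(page))
			count_vm_event(UNEVICTABLE_PGMLOCKED);
		return SWAP_MLOCK;
	}
	/* ... normal unmap path ... */
	return SWAP_AGAIN;
}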
So, the effects of this patch are:
- increase the race possibility a bit
- decrease the RT-task problem risk
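
Concretely, the patch boils down to this kind of change in the mlock
path (a sketch only; lru_add_drain_all_async() is added by patch 1/2
of this series, and I'm assuming its signature here):

	/* before: lru_add_drain_all();  -- sleeps until every CPU's
	 * drain work has completed, which a runaway RT task can block
	 * indefinitely */
	lru_add_drain_all_async();	/* schedule the per-cpu drains
					 * and return immediately */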
Thanks.