Message-ID: <48D1851B.70703@goop.org>
Date: Wed, 17 Sep 2008 15:30:51 -0700
From: Jeremy Fitzhardinge <jeremy@...p.org>
To: Avi Kivity <avi@...hat.com>
CC: Nick Piggin <nickpiggin@...oo.com.au>,
Hugh Dickins <hugh@...itas.com>,
Linux Memory Management List <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Avi Kivity <avi@...ranet.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Rik van Riel <riel@...hat.com>
Subject: Re: Populating multiple ptes at fault time
Avi Kivity wrote:
> Jeremy Fitzhardinge wrote:
>> Minor faults are easier; if the page already exists in memory, we should
>> just create mappings to it. If neighbouring pages are also already
>> present, then we can cheaply create mappings for them too.
>>
(Just to clarify an ambiguity here: by "present" I mean "exists in
memory" not "a present pte".)
> One problem is the accessed bit. If it's unset, the shadow code
> cannot make the pte present (since it has to trap in order to set the
> accessed bit); if it's set, we're lying to the vm.
So even if the guest pte were present but non-accessed, the shadow pte
would have to be non-present and you'd end up taking the fault anyway?
Hm, that does undermine the benefits. Does that mean that when the vm
clears the accessed bit, you always have to make the shadow non-present?
I guess so. And similarly for the dirty bit: a clean guest pte means a
read-only shadow.
The counter-argument is that something has already gone wrong if we
start populating ptes that aren't going to be used in the near future -
if they're never used, any effort spent populating them is wasted, so
setting accessed on them from the outset isn't terribly bad.
(I'm not very convinced by that argument either, and it makes the
potential for bad side-effects much worse if the apparent RSS of a
process is multiplied by some factor.)
J