Message-ID: <492302CE.5090702@goop.org>
Date: Tue, 18 Nov 2008 10:00:46 -0800
From: Jeremy Fitzhardinge <jeremy@...p.org>
To: Jan Beulich <jbeulich@...ell.com>
CC: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Zachary Amsden <zach@...are.com>
Subject: Re: arch_flush_lazy_mmu_mode() in arch/x86/mm/highmem_32.c
Jan Beulich wrote:
> If an interrupt (event) comes in, a multicall could of course be 'preempted',
> in order to service the event. But of course that works only if event
> delivery isn't disabled.
>
Well, if we made a local copy of the multicall queue, we could enable
interrupts before doing the multicall.
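
To make that concrete, here is a minimal sketch of the idea (hypothetical
names; the real per-CPU queue lives in arch/x86/xen/multicalls.c and the
details differ):

	/* Sketch only: copy the per-CPU multicall batch into a local
	 * buffer so interrupts can be re-enabled while the hypercall
	 * runs.  'mc_queue', 'mc_count' and MC_BATCH are stand-ins for
	 * the real per-CPU state. */
	static void flush_with_irqs_enabled(void)
	{
		struct multicall_entry local[MC_BATCH];
		unsigned int n;

		local_irq_disable();
		n = mc_count;
		memcpy(local, mc_queue, n * sizeof(local[0]));
		mc_count = 0;
		local_irq_enable();

		/* Issue the batched hypercall with interrupts on; an
		 * interrupt taken here can start a fresh batch safely,
		 * since the pending entries are already in 'local'. */
		if (n && HYPERVISOR_multicall(local, n) != 0)
			BUG();
	}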
>>> There's no reason to do any flush at all if you suppress batching temporarily.
>>> And it only needs (would need) explicit suppressing here because you can't
>>> easily recognize being in the context of a page fault handler from the
>>> batching functions (other than recognizing being in the context of an
>>> interrupt handler, which is what would allow removing the flush calls from
>>> highmem_32.c).
>>>
>> I'm not sure what your concern is here. If batching is currently
>> enabled, then the flush will push out anything pending immediately. If
>> batching is disabled, then the flush will be a noop and return immediately.
>>
>
> Latency, as before. The page fault shouldn't have to take longer than it
> really needs, and the flushing of a pending batch clearly doesn't belong
> in the page fault itself.
>
Yes, I can see that. But in practice the batches are pretty small; the
cap is only 32 entries to start with, and there are very few operations
which can really get close to that. Large mprotects and munmaps of
present pages are the only way to make large batches, and they're
uncommon and expensive operations anyway.
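
As a rough illustration of that queueing behaviour (again hypothetical
names; the real code is in arch/x86/xen/multicalls.c), the batch simply
flushes itself whenever the 32-entry cap is reached, so even a large
munmap/mprotect only ever hands the hypervisor 32 entries at a time:

	#define MC_BATCH 32	/* the cap mentioned above */

	/* Sketch: append one entry to the batch, flushing first if the
	 * batch is already full. */
	static struct multicall_entry *mc_entry_sketch(void)
	{
		if (mc_count == MC_BATCH)
			flush_with_irqs_enabled();
		return &mc_queue[mc_count++];
	}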
J
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/