Date:	Tue, 18 Nov 2008 10:06:41 -0800
From:	Zachary Amsden <zach@...are.com>
To:	Jan Beulich <jbeulich@...ell.com>
Cc:	Jeremy Fitzhardinge <jeremy@...p.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: arch_flush_lazy_mmu_mode() in arch/x86/mm/highmem_32.c

On Tue, 2008-11-18 at 09:28 -0800, Jan Beulich wrote:
> >>> Jeremy Fitzhardinge <jeremy@...p.org> 18.11.08 18:01 >>>

> Latency, as before. The page fault shouldn't have to take longer than it
> really needs to, and the flushing of a pending batch clearly doesn't
> belong in the page fault itself.

Page faults for vmalloc area syncing are extremely rare to begin with,
and only happen on non-PAE kernels (although perhaps also on Xen in PAE
mode, since the PMD isn't fully shared there).  Latency isn't an issue
on that path.
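
For context, the fault-side flush being debated sits in the 32-bit
vmalloc sync path, which just copies the missing pmd entry from the
reference (init_mm) page tables into the current ones.  Roughly this
shape (a sketch trimmed from vmalloc_sync_one() in the era's
arch/x86/mm/fault.c, error checks elided):

        static inline pmd_t *vmalloc_sync_one(pgd_t *pgd, unsigned long address)
        {
                unsigned index = pgd_index(address);
                pgd_t *pgd_k = init_mm.pgd + index;     /* reference tables */
                pmd_t *pmd, *pmd_k;

                pgd += index;
                if (!pgd_present(*pgd_k))
                        return NULL;                    /* genuinely bad access */

                pmd   = pmd_offset(pud_offset(pgd, address), address);
                pmd_k = pmd_offset(pud_offset(pgd_k, address), address);
                if (!pmd_present(*pmd_k))
                        return NULL;

                if (!pmd_present(*pmd)) {
                        set_pmd(pmd, *pmd_k);           /* may be queued lazily... */
                        arch_flush_lazy_mmu_mode();     /* ...so force it out now */
                }
                return pmd_k;
        }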

Latency could be added to interrupts that somehow end up in a
kmap_atomic path, but the uses of that are very restricted; glancing
around, I see ide_io_buffers, aio, USB DMA peeking, bounce buffers,
memory sticks, NTFS, a couple of SCSI drivers...
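
For reference, the call in the subject line sits at the tail of the
kmap_atomic path; roughly (trimmed from the 2.6.28-era
arch/x86/mm/highmem_32.c, debug checks elided):

        void *kmap_atomic_prot(struct page *page, enum km_type type, pgprot_t prot)
        {
                enum fixed_addresses idx;
                unsigned long vaddr;

                pagefault_disable();
                if (!PageHighMem(page))
                        return page_address(page);

                idx = type + KM_TYPE_NR * smp_processor_id();
                vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
                set_pte(kmap_pte - idx, mk_pte(page, prot));
                /*
                 * set_pte() may have been queued by a lazy-mode backend;
                 * flush so the fixmap slot is live before the caller
                 * dereferences vaddr.
                 */
                arch_flush_lazy_mmu_mode();

                return (void *)vaddr;
        }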

Most of these are doing things like PIO or data copies.  I'm sure there
are some hot paths here, such as aio, but do you really see an issue
with potentially having to process 32 queued multicalls?  The latency
can't be that high.  Do you have any statistics that show this latency
to be a problem?
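
To make that cost concrete, here's a toy model of the batching shape
(names invented, not the real Xen API; the actual buffer lives in
arch/x86/xen/multicalls.c): deferred updates pile up in a small fixed
array, and a flush replays whatever is queued in one batched call.

        #include <stdio.h>

        #define MC_BATCH 32                     /* the worst case discussed above */

        struct mc_entry {
                unsigned long op;
                unsigned long args[2];
        };

        static struct mc_entry mc_queue[MC_BATCH];
        static unsigned int mc_count;

        /* Stand-in for the real batched hypercall. */
        static void hypervisor_multicall(struct mc_entry *q, unsigned int n)
        {
                printf("issuing %u queued ops in one hypercall\n", n);
        }

        /* arch_flush_lazy_mmu_mode() boils down to this: one bounded replay. */
        static void mc_flush(void)
        {
                if (mc_count)
                        hypervisor_multicall(mc_queue, mc_count);
                mc_count = 0;
        }

        /* Each deferred PTE update lands here while lazy mode is active. */
        static void mc_queue_op(unsigned long op, unsigned long a0, unsigned long a1)
        {
                if (mc_count == MC_BATCH)       /* buffer full: flush early */
                        mc_flush();
                mc_queue[mc_count].op = op;
                mc_queue[mc_count].args[0] = a0;
                mc_queue[mc_count].args[1] = a1;
                mc_count++;
        }

        int main(void)
        {
                for (int i = 0; i < 5; i++)     /* the common case: a few updates */
                        mc_queue_op(1, i, i);
                mc_flush();                     /* interrupt-time flush: cheap */
                return 0;
        }

An interrupt-time flush is therefore bounded by a single batched call
covering at most 32 entries.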

Our measurements show that the lazy mode batching rarely collects more
than a couple of updates; every once in a while you might get a blob of
32 or so, but in the common case there are only a few.  I can't
realistically imagine a scenario where issuing a typically small flush,
in the already rare case that you happen to take an interrupt inside an
MMU batching region, would measurably affect performance...

This whole thing is already pretty tricky to get right, and one could
even say a bit fragile; it's been a problematic source of bugs in the
past.  I don't see how making it any more complex is going to help
anyone.  If anything, we should be looking to simplify it.

Zach
