Message-ID: <4C4F2BF2.6010903@goop.org>
Date:	Tue, 27 Jul 2010 11:56:50 -0700
From:	Jeremy Fitzhardinge <jeremy@...p.org>
To:	Nick Piggin <npiggin@...e.de>
CC:	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
Subject: Re: [PATCH] vmap: add flag to allow lazy unmap to be disabled at
 runtime

  On 07/27/2010 01:24 AM, Nick Piggin wrote:
> On Mon, Jul 26, 2010 at 01:24:51PM -0700, Jeremy Fitzhardinge wrote:
>>
>> [ Nick, I forget if I sent this to you before.  Could you Ack it if it looks OK? Thanks, J ]
>>
>> Add a flag to force lazy_max_pages() to zero to prevent any outstanding
>> mapped pages.  We'll need this for Xen.
> You have sent this to me before, probably several times, and I always
> forget about it right as you send it again.
>
> It's no problem merging something like this for Xen, although as you
> know I would love to see an approach where Xen would benefit from
> delayed flushing as well :)
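[ For reference, the change under discussion is roughly of this shape.
  This is only an illustrative sketch, not the patch as posted: the
  point is just that a runtime flag forces lazy_max_pages() to zero so
  vunmapped areas are flushed immediately instead of accumulating.

        bool vmap_lazy_unmap __read_mostly = true;

        static unsigned long lazy_max_pages(void)
        {
                unsigned int log;

                /* lazy unmap disabled: keep no unflushed areas around */
                if (!vmap_lazy_unmap)
                        return 0;

                log = fls(num_online_cpus());
                return log * (32UL * 1024 * 1024 / PAGE_SIZE);
        }

  The Xen setup code would then clear vmap_lazy_unmap at boot. ]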

Yes indeed, that would be nice to get.  What it comes down to is that
we need to be able to flush any lazy vunmap aliases from within
interrupt context, but the code really isn't set up to do that, and
the last time I tried to understand it I couldn't see a
straightforward way to make it work.  It would also be nice to have a
way to shoot down the aliases for a specific page, assuming that's
any more efficient than flushing everything.
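[ Concretely, the closest existing tool is the global
  vm_unmap_aliases(), so the Xen side ends up doing something like the
  following before a page changes type under the hypervisor.  Sketch
  only; xen_retype_page() is a made-up name for illustration.

        #include <linux/vmalloc.h>

        static void xen_retype_page(struct page *page)
        {
                /*
                 * Flushes every outstanding lazy vmap alias in the
                 * system, not just the aliases of this one page.
                 */
                vm_unmap_aliases();

                /* ... then issue the hypercall that retypes the page ... */
        }
  ]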

I don't think anything has changed since we last talked about this.

> You will need to disable lazy flushing from the per-cpu allocator
> (vm_map_ram/vm_unmap_ram, which are used by XFS now). That's not
> tied to the lazy_max stuff (which it should be, arguably)
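[ For context, the vm_map_ram()/vm_unmap_ram() usage pattern in
  question looks roughly like this; not XFS's actual code, just the
  general shape of a caller of the per-cpu allocator.

        #include <linux/vmalloc.h>

        static void *map_buffer_pages(struct page **pages, unsigned int n)
        {
                /* small n takes the per-cpu vmap block fast path */
                return vm_map_ram(pages, n, -1, PAGE_KERNEL);
        }

        static void unmap_buffer_pages(void *addr, unsigned int n)
        {
                /* va goes back to the per-cpu block; TLB flush is deferred */
                vm_unmap_ram(addr, n);
        }
  ]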

Ah, OK.  I should really add xfs to our roster of regularly tested 
filesystems, since it seems to play the most games.  Do you know of any 
other filesystems which do that kind of thing?

> That code basically allocates per-cpu chunks of va from the global
> allocator, uses them, then frees them back to the global allocator
> all without doing any TLB flushing.
>
> If you have to do global TLB flushing there, then it's probably not
> much value in per-cpu locking of the address allocator anyway, so
> you could just add a test for vmap_lazy_unmap in these branches:
>
>    if (likely(count <= VMAP_MAX_ALLOC) && !vmap_lazy_unmap)
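[ i.e. the fast-path condition in vm_map_ram() (and the matching one
  in vm_unmap_ram()) would grow the extra test, roughly as below.
  Untested sketch with the branch bodies elided; whether the test
  reads vmap_lazy_unmap or !vmap_lazy_unmap depends on how the flag
  ends up being defined.

        if (likely(count <= VMAP_MAX_ALLOC) && !vmap_lazy_unmap) {
                /* per-cpu vmap block path */
                mem = vb_alloc(size, GFP_KERNEL);
                ...
        } else {
                /* global vmap area allocator */
                ...
        }
  ]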

We don't need to do any tlb flushing in these cases; what we care
about is knowing which ptes a given page is mapped by.  The
hypervisor will do any tlb flushing it requires to maintain its own
invariants (so, for example, we can't use a stale tlb entry to keep
accessing a page we've given back to Xen).

Thanks,
     J
