Message-ID: <49C27E09.5070307@goop.org>
Date:	Thu, 19 Mar 2009 10:16:57 -0700
From:	Jeremy Fitzhardinge <jeremy@...p.org>
To:	Avi Kivity <avi@...hat.com>
CC:	Nick Piggin <nickpiggin@...oo.com.au>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Linux Memory Management List <linux-mm@...ck.org>,
	Xen-devel <xen-devel@...ts.xensource.com>,
	Jan Beulich <jbeulich@...ell.com>, Ingo Molnar <mingo@...e.hu>,
	Keir Fraser <keir.fraser@...citrix.com>
Subject: Re: Question about x86/mm/gup.c's use of disabled interrupts

Avi Kivity wrote:
>> And the hypercall could result in no Xen-level IPIs at all, so it 
>> could be very quick by comparison to an IPI-based Linux 
>> implementation, in which case the flag polling would be particularly 
>> harsh.
>
> Maybe we could bring these optimizations into Linux as well.  The only 
> thing Xen knows that Linux doesn't is if a vcpu is not scheduled; all 
> other information is shared.

I don't think there's a guarantee that just because a vcpu isn't running 
now, it won't need a tlb flush.  If a pcpu runs vcpu 1 -> idle -> 
vcpu 1, then there's no need for it to do a tlb flush, but the hypercall 
can force a flush when it reschedules vcpu 1 (if the tlb hasn't 
already been flushed by some other means).

(I'm not sure to what extent Xen implements this now, but I wouldn't 
want to over-constrain it.)

>> Also, the straightforward implementation of "poll until all target 
>> cpu's flags are clear" may never make progress, so you'd have to 
>> "scan flags, remove busy cpus from set, repeat until all cpus done".
>>
>> All annoying because this race is pretty unlikely, and it seems a 
>> shame to slow down all tlb flushes to deal with it.  Some kind of 
>> global "doing gup_fast" counter would let flush_tlb_others bypass the 
>> check, at the cost of putting a couple of atomic ops around the 
>> outside of gup_fast.
>
> The nice thing about local_irq_disable() is that it scales so well.

Right.  But it effectively puts the burden on the tlb-flusher to check 
the state (implicitly, by trying to send an interrupt).  Putting an 
explicit poll in gets the same effect, but it's pure overhead just to 
deal with the gup race.

I'll put a patch together and see how it looks.

    J
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
