Date:	Sat, 28 Mar 2009 13:27:15 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Jeremy Fitzhardinge <jeremy@...p.org>
Cc:	Avi Kivity <avi@...hat.com>, Nick Piggin <npiggin@...e.de>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Linux Memory Management List <linux-mm@...ck.org>,
	the arch/x86 maintainers <x86@...nel.org>
Subject: Re: [PATCH 1/2] x86/mm: maintain a percpu "in get_user_pages_fast"
 flag

On Fri, 2009-03-27 at 22:01 -0700, Jeremy Fitzhardinge wrote:
> Avi Kivity wrote:
> > Jeremy Fitzhardinge wrote:
> >> get_user_pages_fast() relies on cross-cpu tlb flushes being a barrier
> >> between clearing and setting a pte, and before freeing a pagetable page.
> >> It usually does this by disabling interrupts to hold off IPIs, but
> >> some tlb flush implementations don't use IPIs for tlb flushes, and
> >> must use another mechanism.
> >>
> >> In this change, add in_gup_cpumask, which is a cpumask of cpus currently
> >> performing a get_user_pages_fast traversal of a pagetable.  A cross-cpu
> >> tlb flush function can use this to determine whether it should hold-off
> >> on the flush until the gup_fast has finished.
> >>
> >> @@ -255,6 +260,10 @@ int get_user_pages_fast(unsigned long start, int 
> >> nr_pages, int write,
> >>      * address down to the page and take a ref on it.
> >>      */
> >>     local_irq_disable();
> >> +
> >> +    cpu = smp_processor_id();
> >> +    cpumask_set_cpu(cpu, in_gup_cpumask);
> >> +
> >
> > This will bounce a cacheline, every time.  Please wrap in CONFIG_XEN 
> > and skip at runtime if Xen is not enabled.
> 
> Every time?  Only when running successive gup_fasts on different cpus, 
> and only twice per gup_fast. (What's the typical page count?  I see that 
> kvm and lguest are page-at-a-time users, but presumably direct IO has 
> larger batches.)

The larger the batch, the longer the irq-off latency; I've just proposed
adding a batch mechanism to gup_fast() to limit this.
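The batching idea could look roughly like the sketch below. This is a userspace analogue, not the actual patch: all names (GUP_BATCH, gup_walk_chunk, gup_fast_batched) and the batch size are hypothetical, and the real change would split the pagetable walk inside get_user_pages_fast() itself, re-enabling interrupts between chunks so the irq-off window is bounded regardless of the caller's page count.

```c
#include <assert.h>

/* Hypothetical batch size; a real value would be tuned against the
 * measured irq-off latency per chunk. */
#define GUP_BATCH 32

#define PAGE_SIZE 4096UL

/* Stand-in for the per-chunk pagetable walk that runs with
 * interrupts disabled in the real gup_fast(). */
static int gup_walk_chunk(unsigned long start, int nr, unsigned long *pages)
{
	for (int i = 0; i < nr; i++)
		pages[i] = start + (unsigned long)i * PAGE_SIZE;
	return nr;
}

/* Sketch of a batched gup_fast: split the request into GUP_BATCH-sized
 * chunks, so the (simulated) irq-off window covers at most one chunk
 * rather than the whole request. */
static int gup_fast_batched(unsigned long start, int nr_pages,
			    unsigned long *pages)
{
	int done = 0;

	while (done < nr_pages) {
		int chunk = nr_pages - done;

		if (chunk > GUP_BATCH)
			chunk = GUP_BATCH;
		/* local_irq_disable() would go here */
		done += gup_walk_chunk(start + (unsigned long)done * PAGE_SIZE,
				       chunk, pages + done);
		/* local_irq_enable() would go here */
	}
	return done;
}
```

With this shape, a large direct-IO request no longer dictates the irq-off latency: only GUP_BATCH pages are walked per disabled-interrupts window, at the cost of re-taking the irq-off section once per chunk.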
