Date: Sat, 28 Mar 2009 08:54:28 +0100
From: Eric Dumazet <dada1@...mosbay.com>
To: Jeremy Fitzhardinge <jeremy@...p.org>
CC: Avi Kivity <avi@...hat.com>, Nick Piggin <npiggin@...e.de>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Linux Memory Management List <linux-mm@...ck.org>,
	the arch/x86 maintainers <x86@...nel.org>
Subject: Re: [PATCH 1/2] x86/mm: maintain a percpu "in get_user_pages_fast" flag

Jeremy Fitzhardinge wrote:
> Avi Kivity wrote:
>> Jeremy Fitzhardinge wrote:
>>> get_user_pages_fast() relies on cross-cpu tlb flushes being a barrier
>>> between clearing and setting a pte, and before freeing a pagetable page.
>>> It usually does this by disabling interrupts to hold off IPIs, but
>>> some tlb flush implementations don't use IPIs for tlb flushes, and
>>> must use another mechanism.
>>>
>>> In this change, add in_gup_cpumask, which is a cpumask of cpus currently
>>> performing a get_user_pages_fast traversal of a pagetable. A cross-cpu
>>> tlb flush function can use this to determine whether it should hold
>>> off on the flush until the gup_fast has finished.
>>>
>>> @@ -255,6 +260,10 @@ int get_user_pages_fast(unsigned long start, int nr_pages, int write,
>>>  	 * address down to the page and take a ref on it.
>>>  	 */
>>>  	local_irq_disable();
>>> +
>>> +	cpu = smp_processor_id();
>>> +	cpumask_set_cpu(cpu, in_gup_cpumask);
>>> +
>>
>> This will bounce a cacheline, every time. Please wrap in CONFIG_XEN
>> and skip at runtime if Xen is not enabled.
>
> Every time? Only when running successive gup_fasts on different cpus,
> and only twice per gup_fast. (What's the typical page count? I see that
> kvm and lguest are page-at-a-time users, but presumably direct IO has
> larger batches.)

If I am not mistaken, shared futexes were hitting the mm semaphore hard.
gup_fast was then introduced in kernel/futex.c to remove this contention
point. But that contention point was process specific, not a global
one :)

And now you want to add a global hot spot that would slow down unrelated
processes, only because they use shared futexes thousands of times per
second...

> Alternatively, it could have per-cpu flags and the other side could
> construct the mask (I originally had that, but this was simpler).

Simpler, but it would be a regression for legacy applications still
using shared futexes (because they are statically linked with an old
libc).
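
For illustration, a minimal sketch of the guard Avi asks for: compile the
shared-cpumask bookkeeping in only under CONFIG_XEN, and skip it at runtime
unless a paravirtualized Xen guest (whose TLB flushes are hypercalls rather
than IPIs) is actually running. The helper name gup_flag_needed() is
hypothetical, not from the posted patch; xen_pv_domain() is the real
predicate for detecting a PV guest.

	/* Sketch only: gup_flag_needed() is a hypothetical helper. */
	#ifdef CONFIG_XEN
	#include <xen/xen.h>

	static inline bool gup_flag_needed(void)
	{
		/* Only PV guests flush TLBs without sending IPIs. */
		return xen_pv_domain();
	}
	#else
	static inline bool gup_flag_needed(void)
	{
		return false;	/* IPI-based flushes already synchronize with gup_fast */
	}
	#endif

With such a guard, the hunk above would become
"if (gup_flag_needed()) cpumask_set_cpu(cpu, in_gup_cpumask);", so
bare-metal kernels never write the shared cacheline at all.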
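
And a sketch of the per-cpu-flag alternative Jeremy mentions, written in
current kernel idiom (the 2009 per-cpu API differed slightly); all names
here are illustrative. Each CPU writes only its own flag on the gup_fast
path, so nothing bounces between CPUs, and the flushing side pays instead
by scanning every CPU's flag to construct the hold-off mask:

	#include <linux/percpu.h>
	#include <linux/cpumask.h>

	static DEFINE_PER_CPU(int, in_gup_flag);

	static inline void gup_enter(void)
	{
		/* Called with interrupts already disabled in get_user_pages_fast(). */
		this_cpu_write(in_gup_flag, 1);
		smp_mb();	/* publish the flag before walking page tables */
	}

	static inline void gup_exit(void)
	{
		smp_mb();	/* finish the walk before clearing the flag */
		this_cpu_write(in_gup_flag, 0);
	}

	/*
	 * Flush side: build the set of CPUs to hold off on. O(nr_cpus),
	 * but this path is only taken by non-IPI flush implementations.
	 */
	static void gup_build_holdoff_mask(struct cpumask *mask)
	{
		int cpu;

		cpumask_clear(mask);
		for_each_online_cpu(cpu)
			if (per_cpu(in_gup_flag, cpu))
				cpumask_set_cpu(cpu, mask);
	}

This is the trade-off the thread is weighing: the shared cpumask is cheap
for the rare flusher and costly for the hot gup_fast path, while per-cpu
flags invert that cost, which is why Eric objects to penalizing
futex-heavy workloads globally.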