Message-ID: <20140309170909.GA13335@redhat.com>
Date: Sun, 9 Mar 2014 18:09:09 +0100
From: Oleg Nesterov <oleg@...hat.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Davidlohr Bueso <davidlohr@...com>, Andrew Morton <akpm@...ux-foundation.org>,
	Ingo Molnar <mingo@...nel.org>, Peter Zijlstra <peterz@...radead.org>,
	Michel Lespinasse <walken@...gle.com>, Mel Gorman <mgorman@...e.de>,
	Rik van Riel <riel@...hat.com>, KOSAKI Motohiro <kosaki.motohiro@...il.com>,
	Davidlohr Bueso <davi@...hat.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v4] mm: per-thread vma caching

On 03/09, Linus Torvalds wrote:
>
> On Sun, Mar 9, 2014 at 5:57 AM, Oleg Nesterov <oleg@...hat.com> wrote:
> >
> > No, dup_task_struct() is obviously lockless. And the new child is not yet
> > visible to for_each_process_thread().
>
> Ok, then the simple approach is to just do
>
>	/* Did we miss an invalidate event? */
>	if (mm->seqcount < tsk->seqcount)
>		clear_vma_cache();
>
> after making the new thread visible.
>
> Then the "race" becomes one of "we cannot have 4 billion mmap/munmap
> events in other threads while we're setting up a new thread",

But it's not "while we're setting up a new thread", it is "since
vmacache_valid() was called last time". And the cloning task can just
sleep(A_LOT) and then do CLONE_VM.

Of course, this race is purely theoretical anyway. But imho it makes
sense to fix it anyway, and the natural/trivial approach is just to move
vmacache_flush(tsk) from dup_mm() to copy_mm(), right after the
"if (!oldmm)" check.

Oleg.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/