Date: Tue, 16 Sep 2014 22:51:10 +0200
From: Radim Krčmář <rkrcmar@...hat.com>
To: Paolo Bonzini <pbonzini@...hat.com>
Cc: Andres Lagar-Cavilla <andreslc@...gle.com>, Gleb Natapov <gleb@...hat.com>,
	Rik van Riel <riel@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
	Mel Gorman <mgorman@...e.de>, Andy Lutomirski <luto@...capital.net>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Andrea Arcangeli <aarcange@...hat.com>, Sasha Levin <sasha.levin@...cle.com>,
	Jianyu Zhan <nasa4836@...il.com>, Paul Cassella <cassella@...y.com>,
	Hugh Dickins <hughd@...gle.com>, Peter Feiner <pfeiner@...gle.com>,
	kvm@...r.kernel.org, linux-kernel@...r.kernel.org, linux-mm@...ck.org
Subject: Re: [PATCH] kvm: Faults which trigger IO release the mmap_sem

2014-09-15 13:11-0700, Andres Lagar-Cavilla:
> +int kvm_get_user_page_retry(struct task_struct *tsk, struct mm_struct *mm,

The suffix '_retry' is not best suited for this.
On first reading, I imagined we would be retrying something from before,
possibly calling it in a loop, but we are actually doing the first and
last try in one call.

Hard to find something that conveys our lock-dropping mechanic;
'_polite' is my best candidate at the moment.

> +	int flags = FOLL_TOUCH | FOLL_HWPOISON |

(FOLL_HWPOISON wasn't used before, but it's harmless.)

2014-09-16 15:51+0200, Paolo Bonzini:
> Il 15/09/2014 22:11, Andres Lagar-Cavilla ha scritto:
> > @@ -1177,9 +1210,15 @@ static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
> >  		npages = get_user_page_nowait(current, current->mm,
> >  					      addr, write_fault, page);
> >  		up_read(&current->mm->mmap_sem);
> > -	} else
> > -		npages = get_user_pages_fast(addr, 1, write_fault,
> > -					     page);
> > +	} else {
> > +		/*
> > +		 * By now we have tried gup_fast, and possible async_pf, and we
                                         ^
(If we really tried get_user_pages_fast, we wouldn't be here, so I'd
prepend two underscores here as well.)

> > +		 * are certainly not atomic. Time to retry the gup, allowing
> > +		 * mmap semaphore to be relinquished in the case of IO.
> > +		 */
> > +		npages = kvm_get_user_page_retry(current, current->mm, addr,
> > +						 write_fault, page);
> 
> This is a separate logical change.  Was this:
> 
> 	down_read(&mm->mmap_sem);
> 	npages = get_user_pages(NULL, mm, addr, 1, 1, 0, NULL, NULL);
> 	up_read(&mm->mmap_sem);
> 
> the intention rather than get_user_pages_fast?

I believe so as well.
(Looking at get_user_pages_fast and __get_user_pages_fast made my
abstraction detector very sad.)

> I think a first patch should introduce kvm_get_user_page_retry ("Retry a
> fault after a gup with FOLL_NOWAIT.") and the second would add
> FOLL_TRIED ("This properly relinquishes mmap semaphore if the
> filemap/swap has to wait on page lock (and retries the gup to completion
> after that").

Not sure if that would help to understand the goal ...

> Apart from this, the patch looks good.  The mm/ parts are minimal, so I
> think it's best to merge it through the KVM tree with someone's Acked-by.

I would prefer to have the last hunk in a separate patch, but still,

Acked-by: Radim Krčmář <rkrcmar@...hat.com>
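[Editor's note: a minimal sketch of the lock-dropping gup flow being
discussed above, for readers following along. The idea is to fault once
while letting __get_user_pages() drop the mmap semaphore if it has to wait
on IO, then retake the semaphore and do the one-and-only retry with
FOLL_TRIED so the fault path does not schedule the IO again. This is an
illustration based on the quoted hunks, not the patch under review; the
function name is made up, and the __get_user_pages() signature and the
FOLL_TRIED flag are assumed to be those of the 3.17-era series.]

/*
 * Sketch only, modeled on the discussion above. Assumes the 3.17-era
 * signature:
 *   long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 *                         unsigned long start, unsigned long nr_pages,
 *                         unsigned int gup_flags, struct page **pages,
 *                         struct vm_area_struct **vmas, int *nonblocking);
 */
static int kvm_gup_sketch(struct task_struct *tsk, struct mm_struct *mm,
			  unsigned long addr, bool write_fault,
			  struct page **pagep)
{
	int npages;
	int locked = 1;
	unsigned int flags = FOLL_TOUCH | FOLL_HWPOISON |
			     (pagep ? FOLL_GET : 0) |
			     (write_fault ? FOLL_WRITE : 0);

	down_read(&mm->mmap_sem);
	/*
	 * First try: passing a non-NULL 'locked' pointer allows the fault
	 * path to release mmap_sem while it waits on IO; if that happens,
	 * 'locked' is cleared before returning.
	 */
	npages = __get_user_pages(tsk, mm, addr, 1, flags, pagep, NULL,
				  &locked);
	if (!locked) {
		/*
		 * mmap_sem was dropped and the IO has been waited on.
		 * Retake the semaphore and retry once; FOLL_TRIED tells
		 * filemap/swap not to kick off the async IO again.
		 */
		down_read(&mm->mmap_sem);
		npages = __get_user_pages(tsk, mm, addr, 1,
					  flags | FOLL_TRIED, pagep, NULL,
					  NULL);
	}
	up_read(&mm->mmap_sem);
	return npages;
}

This is why the "first and last try in one call" wording above matters:
after the first attempt has waited for the IO with mmap_sem released, the
single retry is expected to find the page resident and complete.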