Message-ID: <20171006124148.GA16466@flask>
Date:   Fri, 6 Oct 2017 14:41:49 +0200
From:   Radim Krčmář <rkrcmar@...hat.com>
To:     Boqun Feng <boqun.feng@...il.com>
Cc:     Paolo Bonzini <pbonzini@...hat.com>, linux-kernel@...r.kernel.org,
        kvm@...r.kernel.org,
        "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Wanpeng Li <wanpeng.li@...mail.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>,
        "H. Peter Anvin" <hpa@...or.com>, x86@...nel.org
Subject: Re: [PATCH] kvm/x86: Avoid async PF to end RCU read-side critical
 section early in PREEMPT=n kernel

2017-10-06 09:33+0800, Boqun Feng:
> On Tue, Oct 03, 2017 at 02:11:08PM +0000, Paolo Bonzini wrote:
> > I'd prefer a slight change in subject and topic:
> > 
> > ------- 8< --------
> > Subject: [PATCH] kvm/x86: Avoid async PF preempting the kernel incorrectly
> > 
> > Currently, in a PREEMPT_COUNT=n kernel, kvm_async_pf_task_wait() could call
> > schedule() to reschedule in some cases.  This could result in
> > accidentally ending the current RCU read-side critical section early,
> > causing random memory corruption in the guest, or otherwise preempting
> > the currently running task between preempt_disable() and
> > preempt_enable().
> > 
> > Handling this well is difficult because, with PREEMPT_COUNT=n, we don't
> > know whether the async PF was delivered in a preemptible section or in an
> > RCU read-side critical section: preempt_disable()/enable() and
> > rcu_read_lock()/unlock() are both no-ops in that case.
> > 
> > To cure this, we treat any async PF interrupting a kernel context as one
> > that cannot be preempted, preventing kvm_async_pf_task_wait() from choosing
> > the schedule() path in that case.
> > 
> > To do so, a second parameter is introduced for kvm_async_pf_task_wait(),
> > so that we know whether it is called from a context that interrupted the
> > kernel, and the parameter is set accordingly at all call sites.
> > 
> > Cc: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
> > Cc: Peter Zijlstra <peterz@...radead.org>
> > Cc: Wanpeng Li <wanpeng.li@...mail.com>
> > Cc: stable@...r.kernel.org
> > Signed-off-by: Boqun Feng <boqun.feng@...il.com>
> > ------- 8< --------
> > 
> 
> It's more concise and accurate now!
> 
> Learned a lot from how you modified the commit message, thanks!

Applied with the updated commit message, thanks.
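For the archives, the change described in the quoted commit message boils down
to roughly the following sketch; the interrupt_kernel parameter name and the
exact hunks are illustrative here, not quoted from the applied patch:

 /* arch/x86/kernel/kvm.c (sketch) */
-void kvm_async_pf_task_wait(u32 token)
+void kvm_async_pf_task_wait(u32 token, int interrupt_kernel)
 {
 	...
-	n.halted = is_idle_task(current) || preempt_count() > 1 ||
-		   rcu_preempt_depth();
+	/*
+	 * With PREEMPT_COUNT=n, preempt_disable() and rcu_read_lock() leave
+	 * no trace in preempt_count()/rcu_preempt_depth(), so any async PF
+	 * that interrupted the kernel is conservatively treated as
+	 * non-schedulable and halts instead of calling schedule().
+	 */
+	n.halted = is_idle_task(current) ||
+		   (IS_ENABLED(CONFIG_PREEMPT_COUNT)
+		    ? preempt_count() > 1 || rcu_preempt_depth()
+		    : interrupt_kernel);
 	...
 }

 /* caller, in the async page fault handler */
-	kvm_async_pf_task_wait((u32)read_cr2());
+	kvm_async_pf_task_wait((u32)read_cr2(), !user_mode(regs));

The net effect is that with PREEMPT_COUNT=n an async PF taken in kernel mode
always takes the halt path, so kvm_async_pf_task_wait() never schedules out of
an RCU read-side critical section or preempt-disabled region it cannot see.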
