Message-ID: <4CAE00CB.1070400@redhat.com>
Date:	Thu, 07 Oct 2010 19:18:03 +0200
From:	Avi Kivity <avi@...hat.com>
To:	Gleb Natapov <gleb@...hat.com>
CC:	kvm@...r.kernel.org, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org, mingo@...e.hu,
	a.p.zijlstra@...llo.nl, tglx@...utronix.de, hpa@...or.com,
	riel@...hat.com, cl@...ux-foundation.org, mtosatti@...hat.com
Subject: Re: [PATCH v6 08/12] Handle async PF in a guest.

  On 10/07/2010 07:14 PM, Gleb Natapov wrote:
> On Thu, Oct 07, 2010 at 03:10:27PM +0200, Avi Kivity wrote:
> >   On 10/04/2010 05:56 PM, Gleb Natapov wrote:
> >  >When async PF capability is detected hook up special page fault handler
> >  >that will handle async page fault events and bypass other page faults to
> >  >regular page fault handler. Also add async PF handling to nested SVM
> >  >emulation. Async PF always generates exit to L1 where vcpu thread will
> >  >be scheduled out until page is available.
> >  >
> >
> >  Please separate guest and host changes.
> >
> >  >+void kvm_async_pf_task_wait(u32 token)
> >  >+{
> >  >+	u32 key = hash_32(token, KVM_TASK_SLEEP_HASHBITS);
> >  >+	struct kvm_task_sleep_head *b = &async_pf_sleepers[key];
> >  >+	struct kvm_task_sleep_node n, *e;
> >  >+	DEFINE_WAIT(wait);
> >  >+
> >  >+	spin_lock(&b->lock);
> >  >+	e = _find_apf_task(b, token);
> >  >+	if (e) {
> >  >+		/* dummy entry exist -> wake up was delivered ahead of PF */
> >  >+		hlist_del(&e->link);
> >  >+		kfree(e);
> >  >+		spin_unlock(&b->lock);
> >  >+		return;
> >  >+	}
> >  >+
> >  >+	n.token = token;
> >  >+	n.cpu = smp_processor_id();
> >  >+	init_waitqueue_head(&n.wq);
> >  >+	hlist_add_head(&n.link, &b->list);
> >  >+	spin_unlock(&b->lock);
> >  >+
> >  >+	for (;;) {
> >  >+		prepare_to_wait(&n.wq, &wait, TASK_UNINTERRUPTIBLE);
> >  >+		if (hlist_unhashed(&n.link))
> >  >+			break;
> >  >+		local_irq_enable();
> >
> >  Suppose we take another apf here.  And another, and another (for
> >  different pages, while executing schedule()).  What's to prevent
> >  kernel stack overflow?
> >
> The host side keeps track of outstanding apfs and will not send an apf for the
> same phys address twice; it will halt the vcpu instead.
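
To make that concrete, here is a minimal, self-contained sketch (plain
userspace C, hypothetical names, not the actual KVM host code) of the
bookkeeping described above: at most one async PF is ever outstanding per
guest frame, and once the queue is full the vcpu is halted rather than
being sent further faults.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_OUTSTANDING_APF 64

struct vcpu_apf_state {
	uint64_t gfn[MAX_OUTSTANDING_APF];	/* guest frames being paged in */
	int count;
	bool halted;
};

/* Has an async PF already been sent for this guest frame? */
static bool apf_already_queued(struct vcpu_apf_state *s, uint64_t gfn)
{
	for (int i = 0; i < s->count; i++)
		if (s->gfn[i] == gfn)
			return true;
	return false;
}

/*
 * On a host-side fault for a not-present guest frame: send at most one
 * async PF per gfn; once the queue is full, halt the vcpu instead of
 * injecting yet another fault.
 */
static void handle_host_fault(struct vcpu_apf_state *s, uint64_t gfn)
{
	if (apf_already_queued(s, gfn))
		return;				/* same phys address: never sent twice */
	if (s->count == MAX_OUTSTANDING_APF) {
		s->halted = true;		/* halt the vcpu instead */
		return;
	}
	s->gfn[s->count++] = gfn;		/* "inject" the async PF to the guest */
	printf("async PF injected for gfn %#llx\n", (unsigned long long)gfn);
}

int main(void)
{
	struct vcpu_apf_state s = { .count = 0, .halted = false };

	handle_host_fault(&s, 0x1000);	/* first fault on this frame: injected */
	handle_host_fault(&s, 0x1000);	/* repeated fault, same frame: suppressed */
	handle_host_fault(&s, 0x2000);	/* different frame: injected */
	return 0;
}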

What about faults on different pages, taken while running the scheduler code?

Oh, and we'll run the scheduler recursively.
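
To illustrate the concern, here is a tiny self-contained sketch (plain C,
stand-in names, not kernel code) of how each async PF taken while
schedule() is running nests another handler frame on the same stack, so
nothing but the number of distinct faulting pages bounds the depth:

#include <stdio.h>

static int depth;
static int pending_faults = 5;	/* pretend 5 different pages fault in a row */

static void async_page_fault(void);

/* Stand-in for schedule(): while it runs, another async PF arrives. */
static void schedule_like(void)
{
	if (pending_faults-- > 0)
		async_page_fault();	/* nested fault on the same stack */
}

/* Stand-in for the guest handler calling kvm_async_pf_task_wait(). */
static void async_page_fault(void)
{
	depth++;
	printf("fault handler nesting depth: %d\n", depth);
	schedule_like();		/* irqs enabled -> more faults can nest */
	depth--;
}

int main(void)
{
	async_page_fault();		/* first fault; each new page nests deeper */
	return 0;
}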

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
