Date:	Sun, 18 Mar 2012 12:52:37 -0700 (PDT)
From:	Dan Magenheimer <dan.magenheimer@...cle.com>
To:	Akshay Karle <akshay.a.karle@...il.com>
Cc:	Konrad Wilk <konrad.wilk@...cle.com>, linux-kernel@...r.kernel.org,
	kvm@...r.kernel.org, ashu tripathi <er.ashutripathi@...il.com>,
	nishant gulhane <nishant.s.gulhane@...il.com>,
	Shreyas Mahure <shreyas.mahure@...il.com>,
	amarmore2006 <amarmore2006@...il.com>,
	mahesh mohan <mahesh6490@...il.com>
Subject: RE: [RFC 1/2] kvm: host-side changes for tmem on KVM

> From: Akshay Karle [mailto:akshay.a.karle@...il.com]
> Subject: RE: [RFC 1/2] kvm: host-side changes for tmem on KVM
> 
> > > From: Akshay Karle [mailto:akshay.a.karle@...il.com]
> > > Subject: Re: [RFC 1/2] kvm: host-side changes for tmem on KVM
> > >
> > > >> @@ -669,7 +670,6 @@ static struct zv_hdr *zv_create(struct x
> > > >>       int chunks = (alloc_size + (CHUNK_SIZE - 1)) >> CHUNK_SHIFT;
> > > >>       int ret;
> > > >>
> > > >> -     BUG_ON(!irqs_disabled());
> > > >
> > > > Can you explain why?
> > >
> > > Zcache is by default used in a non-virtualized environment for page compression. Whenever
> > > a page is to be evicted from the page cache, spin_lock_irq is held on the page mapping.
> > > The BUG_ON(!irqs_disabled()) was there to assert this. But now the situation is
> > > different: we are using the zcache functions for KVM guests. If a page of the guest is
> > > to be evicted, irqs should be disabled only in that guest, not in the host, so we
> > > removed the BUG_ON(!irqs_disabled()) line.
> >
> > I think irqs may still need to be disabled (in your code by the caller)
> > since the tmem code (in tmem.c) takes spinlocks with this assumption.
> > I'm not sure since I don't know what can occur with scheduling a
> > kvm guest during an interrupt... can a different vcpu of the same guest
> > be scheduled on this same host pcpu?
> 
> The irqs are disabled, but only in the guest kernel, not in the host. We
> tried adding the spin_lock_irq code to the host, but that resulted in a
> host panic because the lock is taken on the entire mapping. If the
> irqs are disabled in the guest, is there a need to disable them in the
> host as well? The mappings may be different in the host and the guest.

The issue is that interrupts MUST be disabled in the code that is
called by zcache_put_page() and by zv_create(), because the called
code (tmem_put and xv_malloc) takes locks.  This may be difficult
to reproduce, but if an interrupt occurs inside such a critical
region, a deadlock is possible.

You don't need to do a spin_lock_irq.  You just need to do a local_irq_save
and restore in zcache_put_page if kvm_tmem_enabled.  Look at zcache_get_page
as an example; the code in zcache_put_page would look something like:

{
	unsigned long flags;

	if (kvm_tmem_enabled)
		local_irq_save(flags);
			:
			:
out:
	if (kvm_tmem_enabled)
		local_irq_restore(flags);
	return ret;
}
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
