Date:	Tue, 4 Oct 2011 18:03:00 -0700 (PDT)
From:	Dan Magenheimer <dan.magenheimer@...cle.com>
To:	Dave Hansen <dave@...ux.vnet.ibm.com>,
	Nitin Gupta <ngupta@...are.org>
Cc:	Seth Jennings <sjenning@...ux.vnet.ibm.com>,
	Greg KH <greg@...ah.com>, gregkh@...e.de,
	devel@...verdev.osuosl.org, cascardo@...oscopio.com,
	linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	brking@...ux.vnet.ibm.com, rcj@...ux.vnet.ibm.com
Subject: RE: [PATCH v2 0/3] staging: zcache: xcfmalloc support

> From: Dave Hansen [mailto:dave@...ux.vnet.ibm.com]
> Sent: Monday, October 03, 2011 12:23 PM
> To: Nitin Gupta
> Cc: Dan Magenheimer; Seth Jennings; Greg KH; gregkh@...e.de; devel@...verdev.osuosl.org;
> cascardo@...oscopio.com; linux-kernel@...r.kernel.org; linux-mm@...ck.org; brking@...ux.vnet.ibm.com;
> rcj@...ux.vnet.ibm.com
> Subject: Re: [PATCH v2 0/3] staging: zcache: xcfmalloc support
> 
> On Mon, 2011-10-03 at 13:54 -0400, Nitin Gupta wrote:
> > I think disabling preemption on the local CPU is the cheapest way we
> > can protect the PCPU buffers.  We may experiment with, say, multiple
> > buffers per CPU, so we end up disabling preemption only in the highly
> > improbable case of getting preempted too many times within the
> > critical section.
> 
> I guess the problem is two-fold: preempt_disable() and
> local_irq_save().
> 
> > static int zcache_put_page(int cli_id, int pool_id, struct tmem_oid *oidp,
> >                                 uint32_t index, struct page *page)
> > {
> >         struct tmem_pool *pool;
> >         int ret = -1;
> >
> >         BUG_ON(!irqs_disabled());
> 
> That tells me "zcache" doesn't work with interrupts on.  It seems like
> awfully high-level code to have interrupts disabled.  The core page
> allocator has some irq-disabling spinlock calls, but that's only really
> because it has to be able to service page allocations from interrupts.
> What's the high-level reason for zcache?
> 
> I'll save the discussion about preempt for when Seth posts his patch.

I completely agree that the irq/softirq/preempt states should be
re-examined and, where possible, improved before zcache moves
out of staging.

Actually, I think cleancache_put is called from a point in the kernel
where irqs are disabled, and isn't it unsafe to call a routine
sometimes with irqs disabled and sometimes with irqs enabled?  I think
some call sites of cleancache_flush may also have irqs disabled.

IIRC, much of the zcache code has preemption disabled because
it is unsafe for a page fault to occur when zcache is running,
since the page fault may cause a (recursive) call into zcache
and possibly recursively take a lock.
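As a sketch of that pattern (hypothetical names, not the actual zcache
buffers), per-CPU scratch memory is typically claimed with
get_cpu_var(), which disables preemption until the matching
put_cpu_var(), so nothing else can run on that CPU and reuse or
recursively clobber the buffer mid-operation:

#include <linux/percpu.h>
#include <linux/mm.h>
#include <linux/string.h>

/* Hypothetical per-CPU scratch buffer, similar in spirit to the
 * per-CPU compression buffers discussed above. */
static DEFINE_PER_CPU(unsigned char [2 * PAGE_SIZE], example_scratch);

static void example_use_scratch(struct page *page)
{
	unsigned char *buf;

	/* get_cpu_var() disables preemption; put_cpu_var() re-enables it. */
	buf = get_cpu_var(example_scratch);
	memcpy(buf, page_address(page), PAGE_SIZE);
	/* ... compress or copy out of buf here ... */
	put_cpu_var(example_scratch);
}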

Anyway, some of the atomicity constraints in the code are
definitely required, but there are very likely some constraints
that are overzealous and can be removed.  For now, I'd rather
have the longer interrupt latency with code that works than
have developers experiment with zcache and see lockups. :-}

Dan
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
