Message-ID: <20111222012312.GP9213@google.com>
Date:	Wed, 21 Dec 2011 17:23:12 -0800
From:	Tejun Heo <tj@...nel.org>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	Linus Torvalds <torvalds@...ux-foundation.org>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH UPDATED 2/2] mempool: fix first round failure behavior

Hello, Andrew.

On Wed, Dec 21, 2011 at 05:09:19PM -0800, Andrew Morton wrote:
> If the pool is empty and the memory allocator is down into its
> emergency reserves then we have:
> 
> Old behaviour: Wait for someone to return an item, then retry
> 
> New behaviour: enable page reclaim in gfp_mask, retry a
>                single time then wait for someone to return an item.
> 
> So what we can expect to see is that in this low-memory situation,
> mempool_alloc() will perform a lot more page reclaim, and more mempool
> items will be let loose into the kernel.
> 
> I'm not sure what the effects of this will be.  I can't immediately
> point at any bad ones.  Probably not much, as the mempool_alloc()
> caller will probably be doing other allocations, using the
> reclaim-permitting gfp_mask.
> 
> But I have painful memories of us (me and Jens, iirc) churning this
> code over and over again until it stopped causing problems.  Some were
> subtle and nasty.  Much dumpster diving into the pre-git changelogs
> should be done before changing it, lest we rediscover long-fixed
> problems :(

I see.  It just seemed like weird behavior, and looking at the commit
log, there was originally code to kick reclaim there, so the sequence
made sense - first try w/o reclaim, look at the mempool, kick reclaim
and retry w/ __GFP_WAIT, and then wait for someone else to free.  That
part was removed by 20a77776c24 "[PATCH] mempool: simplify alloc" back
in '05.  In the process, it also lost the retry w/ reclaim before
waiting for mempool reserves.

I was trying to add a percpu mempool and this bit me, as the percpu
allocator can't do NOIO and the above delayed retry logic ended up
adding a random 5s delay (or waiting until the next free).

> > That said, I still find it a bit unsettling that a GFP_ATOMIC
> > allocation which would otherwise succeed may fail when issued through
> > mempool.
> 
> Spose so.  It would be strange to call mempool_alloc() with GFP_ATOMIC.
> Because "wait for an item to be returned" is the whole point of the
> thing.

Yeah, but the pool can be used from multiple code paths, and I think
it's plausible to use it that way and expect at least the same or
better alloc behavior as not using a mempool.  Eh... this doesn't
really affect correctness, so it's not such a big deal, but it's still
weird.

> > Maybe the RTTD is clearing __GFP_NOMEMALLOC on retry if the
> > gfp requested by the caller is !__GFP_WAIT && !__GFP_NOMEMALLOC?
> 
> What the heck is an RTTD?

Right thing to do?  Hmmm... I thought other people were using it too.
It's quite possible that I just dreamed it up tho.

> > +	/*
> > +	 * We use gfp mask w/o __GFP_WAIT or IO for the first round.  If
> > +	 * alloc failed with that and @pool was empty, retry immediately.
> > +	 */
> > +	if (gfp_temp != gfp_mask) {
> > +		gfp_temp = gfp_mask;
> > +		spin_unlock_irqrestore(&pool->lock, flags);
> > +		goto repeat_alloc;
> > +	}
> > +
> 
> Here, have a faster kernel ;)

;)

Thanks.

-- 
tejun
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
