Message-Id: <1255552911.21134.51.camel@rc-desk>
Date:	Wed, 14 Oct 2009 13:41:51 -0700
From:	reinette chatre <reinette.chatre@...el.com>
To:	Mel Gorman <mel@....ul.ie>
Cc:	Frans Pop <elendil@...net.nl>,
	David Rientjes <rientjes@...gle.com>,
	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	"Rafael J. Wysocki" <rjw@...k.pl>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Kernel Testers List <kernel-testers@...r.kernel.org>,
	Pekka Enberg <penberg@...helsinki.fi>,
	Bartlomiej Zolnierkiewicz <bzolnier@...il.com>,
	Karol Lewandowski <karol.k.lewandowski@...il.com>,
	"Abbas, Mohamed" <mohamed.abbas@...el.com>,
	"John W. Linville" <linville@...driver.com>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: [Bug #14141] order 2 page allocation failures in iwlagn

On Wed, 2009-10-14 at 09:50 -0700, Mel Gorman wrote:

> What is your take on GFP_ATOMIC-direct depleting the pool before the tasklet
> can refill it with GFP_KERNEL?

I am not sure I understand your question. We attempt to reclaim a
received buffer on every receive, and with a queue size of 256 + 64 we
assume the pool is large enough to absorb cases where allocations fail.
So, technically, for us to get into a situation where we start seeing
these allocation failures, GFP_ATOMIC allocations must already have
failed more than 200 times without our noticing, since we only print
those warnings when fewer than 8 free buffers remain. More on this
below ...

>  Should direct allocation be falling back to
> calling with GFP_KERNEL when the pool has been depleted instead of failing?

This is the intention of the current implementation. In the tasklet we
run iwl_rx_replenish_now(), which attempts the GFP_ATOMIC allocations
first by calling iwl_rx_allocate() with the GFP_ATOMIC flag. No
particular action is taken when this fails (apart from the error
message), but if buffers are running low then iwl_rx_queue_restock()
(which is also called from iwl_rx_replenish_now()) queues work that
will do the allocation with GFP_KERNEL.
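
Roughly, the flow looks like this (a simplified sketch of the shape of
the code, not the literal driver source):

void iwl_rx_replenish_now(struct iwl_priv *priv)
{
	/* Tasklet (softirq) context, so we cannot sleep: try to
	 * refill the pool with GFP_ATOMIC allocations. */
	iwl_rx_allocate(priv, GFP_ATOMIC);

	/* Hand free buffers back to the hardware.  If the pool has
	 * dropped below the watermark, this also queues rx_replenish,
	 * which retries the allocation with GFP_KERNEL from process
	 * context. */
	iwl_rx_queue_restock(priv);
}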

We do queue the GFP_KERNEL allocations, but only when there are just a
few buffers remaining in the queue (8 right now) ... maybe we can make
this threshold higher?

I am not sure whether this helps with what you are trying to figure
out, but would it be worth playing with the numbers? That is, in
iwl_rx_queue_restock() we have:

/* Fewer than RX_LOW_WATERMARK (currently 8) free buffers left:
 * defer a refill to process context, where GFP_KERNEL can be used. */
if (rxq->free_count <= RX_LOW_WATERMARK)
	queue_work(priv->workqueue, &priv->rx_replenish);

Would it help to make that value higher? Maybe queue the GFP_KERNEL
allocation when there are, for example, 50 or 100 free buffers
remaining?
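
For example, something like this (untested, and 50 is just a strawman
value):

#define RX_LOW_WATERMARK 50	/* was 8; queue the GFP_KERNEL refill earlier */

The idea being that the process-context refill kicks in while the pool
still has plenty of headroom, instead of waiting until it is nearly
empty.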

Reinette

