Date:	Mon, 21 Jan 2013 13:01:50 -0800
From:	Greg KH <gregkh@...uxfoundation.org>
To:	Arnd Bergmann <arnd@...db.de>
Cc:	Soeren Moch <smoch@....de>, Jason Cooper <jason@...edaemon.net>,
	Thomas Petazzoni <thomas.petazzoni@...e-electrons.com>,
	Andrew Lunn <andrew@...n.ch>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	linux-kernel@...r.kernel.org, Michal Hocko <mhocko@...e.cz>,
	linux-mm@...ck.org, Kyungmin Park <kyungmin.park@...sung.com>,
	Mel Gorman <mgorman@...e.de>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Marek Szyprowski <m.szyprowski@...sung.com>,
	linaro-mm-sig@...ts.linaro.org,
	linux-arm-kernel@...ts.infradead.org,
	Sebastian Hesselbarth <sebastian.hesselbarth@...il.com>
Subject: Re: [PATCH v2] mm: dmapool: use provided gfp flags for all
 dma_alloc_coherent() calls

On Mon, Jan 21, 2013 at 06:55:25PM +0000, Arnd Bergmann wrote:
> On Monday 21 January 2013, Soeren Moch wrote:
> > On 01/19/13 21:05, Arnd Bergmann wrote:
> > > from the distribution of the numbers, it seems that there is exactly 1 MB
> > > of data allocated between bus addresses 0x1f90000 and 0x1f9ffff, allocated
> > > in individual pages. This matches the size of your pool, so it's definitely
> > > something coming from USB, and no single other allocation, but it does not
> > > directly point to a specific line of code.
> > Very interesting, so this is neither a fragmentation problem nor something
> > caused by SATA or Ethernet.
> 
> Right.
> 
> > > One thing I found was that the ARM dma-mapping code seems buggy in the way
> > > that it does a bitwise and between the gfp mask and GFP_ATOMIC, which does
> > > not work because GFP_ATOMIC is defined by the absence of __GFP_WAIT.
> > >
> > > I believe we need the patch below, but it is not clear to me if that issue
> > > is related to your problem or not.
> > Out of curiosity I checked include/linux/gfp.h. GFP_ATOMIC is defined as 
> > __GFP_HIGH (which means 'use emergency pool', and no wait), so this 
> > patch should not make any difference for "normal" (GFP_ATOMIC /
> > GFP_KERNEL) allocations, only for gfp_flags accidentally set to zero. 
> 
> Yes, or one of the rare cases where someone intentionally does something like
> (GFP_ATOMIC & !__GFP_HIGH) or (GFP_KERNEL || __GFP_HIGH), which are both
> wrong.
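
A minimal standalone sketch of that corner case, with the flag values taken
from include/linux/gfp.h of that era (illustrative userspace code, not the
kernel source): the bitwise test gets GFP_ATOMIC and GFP_KERNEL right, but
treats a gfp mask of zero as sleepable, while the !(gfp & __GFP_WAIT) test
from the patch correctly treats it as atomic.

/* Illustrative only: flag values as in include/linux/gfp.h around v3.8. */
#include <stdio.h>

#define __GFP_WAIT	0x10u
#define __GFP_HIGH	0x20u	/* "use emergency pool" */
#define __GFP_IO	0x40u
#define __GFP_FS	0x80u

#define GFP_ATOMIC	(__GFP_HIGH)
#define GFP_KERNEL	(__GFP_WAIT | __GFP_IO | __GFP_FS)

static void classify(const char *name, unsigned int gfp)
{
	int by_bitwise_and = (gfp & GFP_ATOMIC) != 0;	/* test being replaced */
	int by_wait_flag   = !(gfp & __GFP_WAIT);	/* test from the patch */

	printf("%-10s  gfp & GFP_ATOMIC: %-9s  !(gfp & __GFP_WAIT): %s\n",
	       name,
	       by_bitwise_and ? "atomic" : "can sleep",
	       by_wait_flag ? "atomic" : "can sleep");
}

int main(void)
{
	classify("GFP_ATOMIC", GFP_ATOMIC);	/* both tests agree: atomic */
	classify("GFP_KERNEL", GFP_KERNEL);	/* both tests agree: can sleep */
	classify("0", 0);			/* bitwise test wrongly allows sleeping */
	return 0;
}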
> 
> > So, can a new test with this patch help to debug the pool exhaustion?
> 
> Yes, but I would not expect this to change much. It's a bug, but not likely
> the one you are hitting.
> 
> > > So even for a GFP_KERNEL passed into usb_submit_urb, the ehci driver
> > > causes the low-level allocation to be GFP_ATOMIC, because
> > > qh_append_tds() is called under a spinlock. If we have hundreds
> > > of URBs in flight, that will exhaust the pool rather quickly.
> > >
> > Maybe there are hundreds of URBs in flight in my application; I have no
> > idea how to check this.
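
A hypothetical sketch of the pattern Arnd describes (not the actual ehci-hcd
code; my_hcd and my_hcd_alloc_td are made-up names): whatever mem_flags the
submitter passed, an allocation made while holding the host controller's
spinlock has to drop to GFP_ATOMIC, and each one comes out of the same DMA
pool.

/* Hypothetical illustration, not ehci-hcd code. */
#include <linux/dmapool.h>
#include <linux/gfp.h>
#include <linux/spinlock.h>

struct my_hcd {				/* made-up host-controller state */
	spinlock_t	lock;
	struct dma_pool	*td_pool;	/* pool backing the transfer descriptors */
};

static void *my_hcd_alloc_td(struct my_hcd *hcd, gfp_t mem_flags,
			     dma_addr_t *dma)
{
	unsigned long flags;
	void *td;

	spin_lock_irqsave(&hcd->lock, flags);
	/*
	 * The submitter may have passed GFP_KERNEL in mem_flags, but
	 * sleeping is not allowed under the spinlock, so the pool
	 * allocation must use GFP_ATOMIC.  Hundreds of in-flight URBs
	 * mean hundreds of these atomic allocations held at once.
	 */
	td = dma_pool_alloc(hcd->td_pool, GFP_ATOMIC, dma);
	spin_unlock_irqrestore(&hcd->lock, flags);

	return td;
}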
> 
> I don't know a lot about USB, but I always assumed that this was not
> a normal condition and that there are only a couple of URBs per endpoint
> used at a time. Maybe Greg or someone else with a USB background can
> shed some light on this.

There's no restriction on how many URBs a driver can have outstanding at
once, and if you have a system with a lot of USB devices running at the
same time, there could be lots of URBs in flight, depending on the number
of host controllers, devices, and drivers in use.
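
For illustration, a hypothetical streaming-driver pattern that keeps many
URBs outstanding at once (names and sizes are made up; error unwinding is
omitted): the driver submits a whole set up front, and each completion
handler immediately resubmits its URB, so all of them stay in flight for as
long as the stream runs.

/* Hypothetical streaming-driver pattern; names and sizes are made up. */
#include <linux/slab.h>
#include <linux/usb.h>

#define MY_NUM_URBS	32	/* all 32 URBs stay in flight at once */
#define MY_BUF_SIZE	4096

static void my_bulk_complete(struct urb *urb)
{
	/* Consume urb->transfer_buffer, then keep the URB in flight. */
	usb_submit_urb(urb, GFP_ATOMIC);	/* completion runs in atomic context */
}

static int my_start_streaming(struct usb_device *udev, unsigned int ep)
{
	int i;

	for (i = 0; i < MY_NUM_URBS; i++) {
		struct urb *urb = usb_alloc_urb(0, GFP_KERNEL);
		void *buf = kmalloc(MY_BUF_SIZE, GFP_KERNEL);

		if (!urb || !buf)
			return -ENOMEM;	/* real code would unwind what was allocated */

		usb_fill_bulk_urb(urb, udev, usb_rcvbulkpipe(udev, ep),
				  buf, MY_BUF_SIZE, my_bulk_complete, NULL);
		/*
		 * Each submission makes the host controller driver
		 * allocate transfer descriptors, so the whole set keeps
		 * a chunk of the coherent pool in use until the stream
		 * is stopped.
		 */
		usb_submit_urb(urb, GFP_KERNEL);
	}

	return 0;
}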

Sorry,

greg k-h
