Message-ID: <1407919331.5835.8.camel@weser.hi.pengutronix.de>
Date:	Wed, 13 Aug 2014 10:42:11 +0200
From:	Lucas Stach <l.stach@...gutronix.de>
To:	Jerome Glisse <j.glisse@...il.com>
Cc:	Mario Kleiner <mario.kleiner.de@...il.com>,
	Thomas Hellstrom <thellstrom@...are.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
	kamal@...onical.com, LKML <linux-kernel@...r.kernel.org>,
	"dri-devel@...ts.freedesktop.org" <dri-devel@...ts.freedesktop.org>,
	Dave Airlie <airlied@...hat.com>, ben@...adent.org.uk,
	Michel Dänzer <michel@...nzer.net>,
	m.szyprowski@...sung.com
Subject: Re: CONFIG_DMA_CMA causes ttm performance problems/hangs.

On Tuesday, 2014-08-12 at 22:17 -0400, Jerome Glisse wrote:
[...]
> > I haven't tested the patch yet. For the original bug it won't help directly,
> > because the super-slow allocations which cause the desktop stall are
> > tt_cached allocations, so they go through the if (is_cached) code path which
> > isn't improved by Jerome's patch. is_cached always releases memory
> > immediately, so the tt_cached pool just bounces up and down between 4 and 7
> > pages. So this was an independent issue. The slow allocations I noticed were
> > mostly caused by EXA allocating new GEM BOs; I don't know which path is
> > taken by 3D graphics.
> > 
> > However, the fixed ttm path could indirectly solve the DMA_CMA stalls by
> > completely killing CMA for its intended purpose. Typical CMA sizes are
> > probably below 100 MB (the kernel default is 16 MB, the Ubuntu config is
> > 64 MB), while the limit for the page pool seems to be more like 50% of all
> > system RAM. In other words, if the ttm dma pool is allowed to grow that big
> > with recycled pages, it will probably almost completely monopolize the whole
> > CMA memory after a short amount of time. ttm won't suffer stalls if it
> > essentially doesn't interact with CMA anymore after a warmup period, but
> > actual clients which really need CMA (i.e., hardware without scatter-gather
> > DMA etc.) will be starved of what they need, as far as my limited
> > understanding of CMA goes.
> 
> Yes, currently we allow the pool to be way too big; given that the pool was
> probably never really used, we most likely never had much of an issue. So I
> would hold off applying my patch until proper limits are in place. My thinking
> was to go for something like 32/64 MB at most, and less than that if there is
> < 256 MB of total RAM. I also think that we should lower the pool size on the
> first call to shrink and only increase it again after some timeout since the
> last call to shrink, so that when shrink is called we minimize our pool size
> at least for a time. I will put together a couple of patches for doing that.
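
A rough sketch of that throttling policy could look like the code below.
Everything in it is illustrative: the helper names, the 32/64 MB caps and the
30 second holdoff are made up to show the idea, not taken from an actual patch.

/*
 * Illustrative sketch only, not actual ttm code: cap the pool at 64 MB
 * (32 MB on systems with less than 256 MB of RAM) and keep the pool
 * minimized for a while after the shrinker has run.
 */
#include <linux/jiffies.h>
#include <linux/mm.h>

#define POOL_CAP_LARGE		(64 << 20)	/* 64 MB cap */
#define POOL_CAP_SMALL		(32 << 20)	/* 32 MB cap for small systems */
#define POOL_SHRINK_HOLDOFF	(30 * HZ)	/* stay minimal for 30 s */

static unsigned long pool_last_shrink;		/* jiffies of last shrink call */

/* Called from the shrinker callback. */
static void pool_note_shrink(void)
{
	pool_last_shrink = jiffies;
}

/* Maximum number of pages the pool may cache right now. */
static unsigned long pool_max_pages(void)
{
	unsigned long cap = POOL_CAP_LARGE;

	if (totalram_pages < (256 << 20) / PAGE_SIZE)
		cap = POOL_CAP_SMALL;

	/* Right after a shrink, don't cache anything for a while. */
	if (pool_last_shrink &&
	    time_before(jiffies, pool_last_shrink + POOL_SHRINK_HOLDOFF))
		return 0;

	return cap / PAGE_SIZE;
}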
> 
> > 
> > So, FWIW, the fix to ttm will probably increase the urgency for the CMA
> > people to come up with a fix/optimization for the allocator. Unless it
> > doesn't matter, if most desktop systems have CMA disabled by default and
> > ttm is mostly used by desktop graphics drivers (nouveau, radeon, vmwgfx)?
> > I only stumbled over the problem because the Ubuntu 3.16 mainline testing
> > kernels are compiled with CMA on.
> > 
> 
> Enabling CMA on x86 is proof of brain damage. That said, the DMA allocator
> should not use the CMA area for single-page allocations.
> 
Harsh words.

Yes, allocating pages unconditionally from CMA if it is enabled is an
artifact of CMA's ARM heritage. While it seems completely backwards to
allocate single pages from CMA on x86, on ARM the CMA pool is the only
way to get lowmem pages whose caching state you are allowed to change.

So the obvious fix is to avoid CMA for order-0 allocations on x86. I can
cook up a patch for this.
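
Roughly, what I have in mind is something like the sketch below, against the
generic x86 DMA allocation path. Treat it as illustrative only: the function
name is made up, and the exact place and condition for skipping CMA still
need to be worked out.

#include <linux/device.h>
#include <linux/dma-contiguous.h>
#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Illustrative sketch, not the actual patch: for order-0 requests, skip
 * the CMA area entirely and fall back to the normal page allocator, so
 * single-page DMA allocations no longer drain the CMA pool on x86.
 */
static struct page *dma_alloc_pages_skip_cma(struct device *dev,
					     size_t size, gfp_t gfp)
{
	unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
	unsigned int order = get_order(size);
	struct page *page = NULL;

	if (order > 0)
		page = dma_alloc_from_contiguous(dev, count, order);
	if (!page)
		page = alloc_pages_node(dev_to_node(dev), gfp, order);

	return page;
}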

Regards,
Lucas 
-- 
Pengutronix e.K.             | Lucas Stach                 |
Industrial Linux Solutions   | http://www.pengutronix.de/  |

