Message-ID: <1449251120.25029.31.camel@edumazet-glaptop2.roam.corp.google.com>
Date: Fri, 04 Dec 2015 09:45:20 -0800
From: Eric Dumazet <eric.dumazet@...il.com>
To: Phil Sutter <phil@....cc>
Cc: Herbert Xu <herbert@...dor.apana.org.au>, davem@...emloft.net,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
tgraf@...g.ch, fengguang.wu@...el.com, wfg@...ux.intel.com,
lkp@...org
Subject: Re: rhashtable: Use __vmalloc with GFP_ATOMIC for table allocation
On Fri, 2015-12-04 at 18:01 +0100, Phil Sutter wrote:
> On Fri, Dec 04, 2015 at 10:39:56PM +0800, Herbert Xu wrote:
> > On Thu, Dec 03, 2015 at 08:08:39AM -0800, Eric Dumazet wrote:
> > >
> > > Anyway, __vmalloc() can be used with GFP_ATOMIC. Have you tried this?
> >
> > OK I've tried it and I no longer get any ENOMEM errors!
>
> I can't confirm this, sadly. With 50 threads, results seem to be stable
> and good. But by increasing the number of threads I can provoke the
> ENOMEM condition again. See the attached log, which shows a failing test
> run with 100 threads.
>
> I tried to extract logs of a test run with as few failing threads as
> possible, but wasn't successful. It seems like the error amplifies
> itself: while I had stable success with fewer than 70 threads, going
> beyond a margin I could not pin down exactly, many more threads failed
> than expected. For instance, the attached log shows 70 out of 100
> threads failing, while for me every single test with 50 threads was
> successful.
>
> HTH, Phil
But this patch is about GFP_ATOMIC allocations; I doubt your test is
using GFP_ATOMIC.
Threads (process context) should use GFP_KERNEL allocations.
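
As a minimal sketch (not the actual rhashtable code; the helper name and
fallback order are my own invention), a table allocation could pick its
strategy from the caller's GFP flags, including the __vmalloc() variant
discussed in this thread (three-argument signature as in kernels of that
era):

#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

/* Hypothetical helper: allocate a large, zeroed table. */
static void *table_alloc(size_t size, gfp_t gfp)
{
	void *tbl = NULL;

	/* Small or atomic requests: try the page/slab allocator first. */
	if (size <= (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER) ||
	    gfp != GFP_KERNEL)
		tbl = kzalloc(size, gfp | __GFP_NOWARN | __GFP_NORETRY);
	if (tbl)
		return tbl;

	/* Process context can sleep: plain vzalloc() is the sane fallback. */
	if (gfp == GFP_KERNEL)
		return vzalloc(size);

	/* The approach under discussion: vmalloc with atomic GFP flags. */
	return __vmalloc(size, gfp | __GFP_ZERO, PAGE_KERNEL);
}
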
BTW, if 100 threads are simultaneously trying to vmalloc(32 MB), this
might not be very wise :(
Only one should really do this, while others are waiting.
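
Something like the following rough sketch (all names invented for
illustration): one thread takes a mutex and does the vmalloc, the others
just wait on the mutex and reuse the result.

#include <linux/mutex.h>
#include <linux/vmalloc.h>

static DEFINE_MUTEX(table_mutex);	/* hypothetical shared state */
static void *shared_table;

static void *get_shared_table(size_t size)
{
	void *tbl;

	mutex_lock(&table_mutex);
	if (!shared_table)
		shared_table = vzalloc(size);	/* only one thread allocates */
	tbl = shared_table;
	mutex_unlock(&table_mutex);

	return tbl;
}
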
If we really want parallelism (multiple cpus coordinating their effort),
it should be done very differently.