Message-ID: <20150901134310.GB27550@orbit.nwl.cc>
Date: Tue, 1 Sep 2015 15:43:11 +0200
From: Phil Sutter <phil@....cc>
To: Herbert Xu <herbert@...dor.apana.org.au>
Cc: tgraf@...g.ch, davem@...emloft.net, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org, fengguang.wu@...el.com,
wfg@...ux.intel.com, lkp@...org
Subject: Re: [PATCH 2/3] rhashtable-test: retry insert operations in threads
On Tue, Sep 01, 2015 at 09:00:57PM +0800, Herbert Xu wrote:
> On Tue, Sep 01, 2015 at 02:46:48PM +0200, Phil Sutter wrote:
> >
> > This is not an inherent behaviour of the implementation but general
> > agreement. The insertion may fail non-permanently (returning -EBUSY),
> > users are expected to handle this by retrying the operation.
>
> Absolutely not. The only reason for an insertion to fail is if we
> can't allocate enough memory. Unless the user is also looping its
> kmalloc calls it definitely shouldn't be retrying the insert.
rhashtable_insert_fast() returns -EBUSY if the table is full
(rht_grow_above_100() returns true) while an asynchronous rehash
operation is already in progress. AFAICT, that condition is not
necessarily caused by memory pressure.
> If an expansion fails it means either that the system is suffering
> a catastrophic memory shortage, or the user of rhashtable is doing
> something wrong.
Hmm. Since the expansion first attempts its allocation with GFP_ATOMIC
and, upon failure, retries in the background, this seems like a
situation that might happen during normal use. If it already indicates
a severe problem, why retry in the background at all?
Cheers, Phil