Message-ID: <20180622183502.i5dv4d3tbwh5sw6u@linux-r8p5>
Date: Fri, 22 Jun 2018 11:35:02 -0700
From: Davidlohr Bueso <dave@...olabs.net>
To: akpm@...ux-foundation.org, torvalds@...ux-foundation.org
Cc: tgraf@...g.ch, herbert@...dor.apana.org.au,
manfred@...orfullife.com, mhocko@...nel.org,
guillaume.knispel@...ersonicimagine.com, linux-api@...r.kernel.org,
linux-kernel@...r.kernel.org, Davidlohr Bueso <dbueso@...e.de>,
neilb@...e.com
Subject: Re: [PATCH v2 1/4] lib/rhashtable: simplify bucket_table_alloc()
On Fri, 22 Jun 2018, Davidlohr Bueso wrote:
>This slightly changes the gfp flags passed on to nested_table_alloc(), which will
>now also use GFP_ATOMIC | __GFP_NOWARN. However, I consider this a positive
>consequence, as we want nowarn semantics in bucket_table_alloc() for the same reasons.
If this is not acceptable, we can just keep the caller's current semantics - the
atomic flag could also be labeled 'rehash' or something, considering that it comes
only from insert_rehash() when we get -EAGAIN after the first insert attempt:
diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index 9427b5766134..18740b052aec 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -172,17 +172,15 @@ static struct bucket_table *bucket_table_alloc(struct rhashtable *ht,
 {
 	struct bucket_table *tbl = NULL;
 	size_t size, max_locks;
+	bool atomic = (gfp == GFP_ATOMIC);
 	int i;
 
 	size = sizeof(*tbl) + nbuckets * sizeof(tbl->buckets[0]);
-	if (gfp != GFP_KERNEL)
-		tbl = kzalloc(size, gfp | __GFP_NOWARN | __GFP_NORETRY);
-	else
-		tbl = kvzalloc(size, gfp);
+	tbl = kvzalloc(size, atomic ? gfp | __GFP_NOWARN : gfp);
 
 	size = nbuckets;
 
-	if (tbl == NULL && atomic) {
+	if (tbl == NULL && atomic) {
 		tbl = nested_bucket_table_alloc(ht, nbuckets, gfp);
 		nbuckets = 0;
 	}