Message-ID: <1410794580.7106.168.camel@edumazet-glaptop2.roam.corp.google.com>
Date: Mon, 15 Sep 2014 08:23:00 -0700
From: Eric Dumazet <eric.dumazet@...il.com>
To: Thomas Graf <tgraf@...g.ch>
Cc: davem@...emloft.net, paulmck@...ux.vnet.ibm.com,
john.r.fastabend@...el.com, kaber@...sh.net,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 5/5] rhashtable: Per bucket locks & expansion/shrinking in work queue
On Mon, 2014-09-15 at 14:18 +0200, Thomas Graf wrote:
> +static int alloc_bucket_locks(struct rhashtable *ht, struct bucket_table *tbl)
> +{
> + unsigned int i, size;
> +#if defined(CONFIG_PROVE_LOCKING)
> + unsigned int nr_pcpus = 2;
> +#else
> + unsigned int nr_pcpus = num_possible_cpus();
> +#endif
> +
> + nr_pcpus = min_t(unsigned int, nr_pcpus, 32UL);
> + size = nr_pcpus * ht->p.locks_mul;
> +
You need to round up to the next power of two, since "size - 1" is used as the
lock mask below.
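
Something along these lines would do it (a sketch, using the kernel's
roundup_pow_of_two() helper from <linux/log2.h>):

	size = nr_pcpus * ht->p.locks_mul;
	/* "size - 1" is used as a bitmask over the lock array, so the
	 * number of locks must be a power of two.
	 */
	size = roundup_pow_of_two(size);
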
> + if (sizeof(spinlock_t) != 0) {
> +#ifdef CONFIG_NUMA
> + if (size * sizeof(spinlock_t) > PAGE_SIZE)
> + tbl->locks = vmalloc(size * sizeof(spinlock_t));
> + else
> +#endif
> + tbl->locks = kmalloc_array(size, sizeof(spinlock_t),
> + GFP_KERNEL);
> + if (!tbl->locks)
> + return -ENOMEM;
> + for (i = 0; i < size; i++)
> + spin_lock_init(&tbl->locks[i]);
> + }
> + tbl->locks_mask = size - 1;
> +
> + return 0;
> +}