Message-ID: <473C3B00.9090602@trash.net>
Date: Thu, 15 Nov 2007 13:26:40 +0100
From: Patrick McHardy <kaber@...sh.net>
To: Eric Dumazet <dada1@...mosbay.com>
CC: David Miller <davem@...emloft.net>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Netfilter Development Mailinglist
<netfilter-devel@...r.kernel.org>
Subject: Re: [PATCH] netfilter : struct xt_table_info diet
Eric Dumazet wrote:
> On Wed, 14 Nov 2007 18:19:41 +0100
> Patrick McHardy <kaber@...sh.net> wrote:
>
>>>diff --git a/net/ipv4/netfilter/arp_tables.c b/net/ipv4/netfilter/arp_tables.c
>>>index 2909c92..ed3bd0b 100644
>>>--- a/net/ipv4/netfilter/arp_tables.c
>>>+++ b/net/ipv4/netfilter/arp_tables.c
>>>@@ -811,7 +811,7 @@ static int do_replace(void __user *user, unsigned int len)
>>> return -ENOPROTOOPT;
>>>
>>> /* overflow check */
>>>- if (tmp.size >= (INT_MAX - sizeof(struct xt_table_info)) / NR_CPUS -
>>>+ if (tmp.size >= (INT_MAX - XT_TABLE_INFO_SZ) / NR_CPUS -
>>> SMP_CACHE_BYTES)
>>
>>
>>Shouldn't NR_CPUS be replaced by nr_cpu_ids here? I'm wondering
>>why we still include NR_CPUS in the calculation at all, though;
>>unlike in 2.4, we don't allocate one huge area of memory anymore
>>but do one allocation per CPU. IIRC it was even you who changed
>>that.
>
>
> Yes, doing an allocation per possible CPU was better than one giant
> allocation (memory savings, and it is NUMA-aware).
>
> Well, technically speaking you are right; we could also replace these
> divides by NR_CPUS with nr_cpu_ids (or, even better, num_possible_cpus()).
>
> After all, with NR_CPUS=4096 we actually limit tmp.size to about
> 524000 bytes. What a shame! :)
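
Right, and for the archives: the per-CPU scheme boils down to one
NUMA-local allocation per possible CPU instead of a single huge
vmalloc() of size * NR_CPUS as in 2.4, roughly along these lines
(simplified sketch, not the exact xt_alloc_table_info() code; the
cleanup label is made up):

	for_each_possible_cpu(cpu) {
		/* one NUMA-local copy of the ruleset per possible CPU */
		newinfo->entries[cpu] = vmalloc_node(size, cpu_to_node(cpu));
		if (newinfo->entries[cpu] == NULL)
			goto free_allocated;	/* hypothetical cleanup label */
	}
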
We actually had complaints about limits on the number of rules, but
those were more likely caused by vmalloc limits :) But of course we do
need to include the number of CPUs in the check; I misread the code.
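
Just to spell out the arithmetic: with NR_CPUS=4096 the check caps
tmp.size at roughly INT_MAX / 4096, i.e. around 524287 bytes, which
matches your 524000. Keyed to the possible CPU count instead, it would
look something like this (untested sketch, not an actual patch):

	/* overflow check: bound the total of the per-CPU copies by the
	 * number of possible CPUs rather than the compile-time NR_CPUS */
	if (tmp.size >= (INT_MAX - XT_TABLE_INFO_SZ) / num_possible_cpus() -
			SMP_CACHE_BYTES)
		return -ENOMEM;

On a typical 4-way box that raises the limit from roughly 512KB to
roughly 512MB, which should take care of the rule-count complaints as
long as vmalloc can keep up.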