Message-ID: <49169E92.8080802@cosmosbay.com>
Date: Sun, 09 Nov 2008 09:25:54 +0100
From: Eric Dumazet <dada1@...mosbay.com>
To: David Stevens <dlstevens@...ibm.com>
CC: Alexey Dobriyan <adobriyan@...il.com>, alan@...rguk.ukuu.org.uk,
davem@...emloft.net, netdev@...r.kernel.org,
netdev-owner@...r.kernel.org, Eric Sesterhenn <snakebyte@....de>
Subject: Re: [PATCH] net: fix /proc/net/snmp as memory corruptor
David Stevens wrote:
>> If you are not sure what I am talking about, then you should probably
>> not use static variables at all. I found this fix quite obvious...
>
> Actually, I didn't realize "out" was static -- was looking at just
> the patch, and obviously missing your point.
> I don't have a problem with 16 ints on the stack (or shorts, as
> you pointed out) -- I didn't want larger data on the stack, which may
> be 64-bit ints. In your patch, you're collecting all of it on the
> stack (doubling its size).
> If there is no interlocking at a higher layer (and I haven't looked
> at this in a long time...) (i.e., exclusive opens), then I agree, it
> shouldn't be static.
>
> Why not just that? (i.e., add count=0 as Alexey did and remove the
> static qualifier from "out")
Well, this patch also saves 143+64 bytes because of the cleanup.
Before:
# size net/ipv4/proc.o
text data bss dec hex filename
5191 16 64 5271 1497 net/ipv4/proc.o
After:
# size net/ipv4/proc.o
text data bss dec hex filename
5048 16 0 5064 13c8 net/ipv4/proc.o
1) the count=0 bug fix from Alexey
2) fixes the bug of using a static array without exclusion
3) a minor cleanup that saves 143 bytes of text and 64 bytes of data
4) saves thousands of CPU cycles when there are many possible CPUs
5) actually works, even though it uses 128 bytes of stack, in a function
   that only calls seq_printf() (we are not in the network stack)