Message-ID: <43F901BD926A4E43B106BF17856F0755F65FF46E@orsmsx508.amr.corp.intel.com>
Date: Fri, 10 Sep 2010 13:25:11 -0700
From: "Rose, Gregory V" <gregory.v.rose@...el.com>
To: David Miller <davem@...emloft.net>,
"error27@...il.com" <error27@...il.com>
CC: "Kirsher, Jeffrey T" <jeffrey.t.kirsher@...el.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"kernel-janitors@...r.kernel.org" <kernel-janitors@...r.kernel.org>
Subject: RE: [patch] ixgbevf: potential NULL dereference on allocation
failure
>-----Original Message-----
>From: David Miller [mailto:davem@...emloft.net]
>Sent: Friday, September 10, 2010 1:22 PM
>To: error27@...il.com
>Cc: Rose, Gregory V; Kirsher, Jeffrey T; netdev@...r.kernel.org; kernel-
>janitors@...r.kernel.org
>Subject: Re: [patch] ixgbevf: potential NULL dereference on allocation
>failure
>
>It's trying to optimize out the "down/up" of the device, which needs
>to be done if we allocated a new TX ring.
>
>It also adjusts the semantics of the error return, in that if the
>TX ring re-sizing went OK but the RX resizing failed, it returns
>success.
>
>That's kind of crummy semantics; if any part fails, we should unwind
>and return an error. So just do the necessary memory allocations
>first, and don't make any changes unless they all succeed.
>
>This code also seems to be incredibly racy. It allocates the new RING
>structure, and copies the existing entries over. Meanwhile the chip
>is still running and we're potentially processing these same ring
>entries, so by the time we actually assign adapter->{rx,tx}_ring
>pointers the contents could have changed.
>
>Probably the simplest thing to do is to structure this such that the
>chip is quiesced around the entire ring set operation, so something
>like:
[snip]
I'll take this up and see what I can do to fix it and address your concerns. Thanks for the suggestions.
- Greg