Message-Id: <20101112.130824.68146775.davem@davemloft.net>
Date: Fri, 12 Nov 2010 13:08:24 -0800 (PST)
From: David Miller <davem@...emloft.net>
To: john.r.fastabend@...el.com
Cc: netdev@...r.kernel.org, eric.dumazet@...il.com, therbert@...gle.com
Subject: Re: [net-2.6 PATCH] net: zero kobject in rx_queue_release
From: John Fastabend <john.r.fastabend@...el.com>
Date: Thu, 11 Nov 2010 12:13:41 -0800
> netif_set_real_num_rx_queues() can decrement and increment
> the number of rx queues. For example, ixgbe does this as
> features and offloads are toggled. Presumably this could
> also happen across down/up on most devices if the available
> resources changed (e.g. a cpu was offlined).
>
> The kobject needs to be zeroed in this case so that its
> state is not preserved across kobject_put()/kobject_init_and_add().
>
> This resolves the following error report.
...
> Signed-off-by: John Fastabend <john.r.fastabend@...el.com>

I think it's probably better to clear the entire netdev_rx_queue
object rather than just the embedded kobject.  Otherwise we leave
the rps_map, rps_flow_table, etc. pointers dangling.
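
For reference, the object in question looks roughly like this in the
current net-2.6 tree (simplified here; note that the kobject is
embedded between the RPS pointers and the first/count bookkeeping):

--------------------
struct netdev_rx_queue {
        struct rps_map __rcu            *rps_map;
        struct rps_dev_flow_table __rcu *rps_flow_table;
        struct kobject                  kobj;
        struct netdev_rx_queue          *first;
        atomic_t                        count;
} ____cacheline_aligned_in_smp;
--------------------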

In fact, it's trickier than that: notice that your patch will
memset() freed memory in the case where first->count drops to zero
and we execute the kfree().
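
To make that concrete: all of a device's rx queues are carved out of
a single allocation, with first/count refcounting it.  So the ordering
hazard looks like this (a sketch of the failure mode, not your exact
patch):

--------------------
        if (atomic_dec_and_test(&first->count))
                kfree(first);           /* frees the whole queue array... */

        memset(kobj, 0, sizeof(*kobj)); /* ...and kobj lives inside that
                                         * array, so this writes to freed
                                         * memory when count hit zero */
--------------------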
So we'll need something like:
	if (atomic_dec_and_test(&first->count))
		kfree(first);
	else
		/* clear everything except queue->first */
or, alternatively:
--------------------
	map = rcu_dereference_raw(queue->rps_map);
	if (map) {
		call_rcu(&map->rcu, rps_map_release);
		rcu_assign_pointer(queue->rps_map, NULL);
	}

	flow_table = rcu_dereference_raw(queue->rps_flow_table);
	if (flow_table) {
		call_rcu(&flow_table->rcu, rps_dev_flow_table_release);
		rcu_assign_pointer(queue->rps_flow_table, NULL);
	}

	if (atomic_dec_and_test(&first->count))
		kfree(first);
	else
		memset(kobj, 0, sizeof(*kobj));
--------------------
Something like that.
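
Spelled out, the whole release hook might end up looking roughly like
this (just a sketch; to_rx_queue(), rps_map_release() and
rps_dev_flow_table_release() are the helpers already in
net/core/net-sysfs.c):

--------------------
static void rx_queue_release(struct kobject *kobj)
{
	struct netdev_rx_queue *queue = to_rx_queue(kobj);
	struct netdev_rx_queue *first = queue->first;
	struct rps_map *map;
	struct rps_dev_flow_table *flow_table;

	/* Detach the RPS state and free it after a grace period. */
	map = rcu_dereference_raw(queue->rps_map);
	if (map) {
		call_rcu(&map->rcu, rps_map_release);
		rcu_assign_pointer(queue->rps_map, NULL);
	}

	flow_table = rcu_dereference_raw(queue->rps_flow_table);
	if (flow_table) {
		call_rcu(&flow_table->rcu, rps_dev_flow_table_release);
		rcu_assign_pointer(queue->rps_flow_table, NULL);
	}

	/* Drop our reference on the queue array; only scrub the
	 * kobject if the array is staying around for reuse. */
	if (atomic_dec_and_test(&first->count))
		kfree(first);
	else
		memset(kobj, 0, sizeof(*kobj));
}
--------------------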