Message-ID: <20101127093342.3f1d01ce@stein>
Date: Sat, 27 Nov 2010 09:33:42 +0100
From: Stefan Richter <stefanr@...6.in-berlin.de>
To: Maxim Levitsky <maximlevitsky@...il.com>
Cc: "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
linux1394-devel <linux1394-devel@...ts.sourceforge.net>
Subject: Re: [Q] How to invalidate ARP cache for a network device from
within kernel
On Nov 27 Maxim Levitsky wrote:
> Subject: [PATCH 1/3] firewire: net: restart ISO channel on bus resets
>
> ---
> drivers/firewire/net.c | 5 +++++
> 1 files changed, 5 insertions(+), 0 deletions(-)
>
> diff --git a/drivers/firewire/net.c b/drivers/firewire/net.c
> index 1a467a9..007969c 100644
> --- a/drivers/firewire/net.c
> +++ b/drivers/firewire/net.c
> @@ -1593,10 +1593,15 @@ static void fwnet_update(struct fw_unit *unit)
> {
> struct fw_device *device = fw_parent_device(unit);
> struct fwnet_peer *peer = dev_get_drvdata(&unit->device);
> + struct fwnet_device *dev = peer->dev;
> int generation;
>
> generation = device->generation;
>
> + fw_iso_context_stop(dev->broadcast_rcv_context);
> + fw_iso_context_start(dev->broadcast_rcv_context, -1, 0,
> + FW_ISO_CONTEXT_MATCH_ALL_TAGS);
> +
> spin_lock_irq(&peer->dev->lock);
> peer->node_id = device->node_id;
> peer->generation = generation;
Could you add a changelog?
Also, this can be optimized to run only once per bus generation.
E.g. add a generation field for the IR context to fwnet_device, or do
it only if the fw_unit is that of the local node.
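A rough sketch of the once-per-generation variant (untested; the
rcv_context_generation field and the helper name are made up here for
illustration, they are not in the driver today):

	/* assumed new member in struct fwnet_device:
	 *	int rcv_context_generation;
	 */
	static void fwnet_restart_rcv_context(struct fwnet_device *dev,
					      int generation)
	{
		/* serialization against concurrent callers omitted here */
		if (dev->rcv_context_generation == generation)
			return;	/* already restarted in this generation */

		fw_iso_context_stop(dev->broadcast_rcv_context);
		fw_iso_context_start(dev->broadcast_rcv_context, -1, 0,
				     FW_ISO_CONTEXT_MATCH_ALL_TAGS);
		dev->rcv_context_generation = generation;
	}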
OTOH, is this actually necessary on normal bus resets? It should only
be necessary after PM resume, right? If so, perhaps do it only if the
generation increased by more than one.
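If it really only matters after resume, the check could instead look
roughly like this (same assumed field as above; wraparound of the
generation counter ignored for brevity):

	/* restart only if bus resets were missed, e.g. across
	 * suspend/resume; a single reset advances the generation
	 * by just one */
	if (generation > dev->rcv_context_generation + 1) {
		fw_iso_context_stop(dev->broadcast_rcv_context);
		fw_iso_context_start(dev->broadcast_rcv_context, -1, 0,
				     FW_ISO_CONTEXT_MATCH_ALL_TAGS);
	}
	dev->rcv_context_generation = generation;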
Also, wouldn't a fw_iso_context_start alone, without prior stop,
suffice?
--
Stefan Richter
-=====-==-=- =-== ==-==
http://arcgraph.de/sr/