Message-ID: <4589B09E.40503@s5r6.in-berlin.de>
Date: Wed, 20 Dec 2006 22:52:30 +0100
From: Stefan Richter <stefanr@...6.in-berlin.de>
To: Kristian Høgsberg <krh@...hat.com>
CC: linux-kernel@...r.kernel.org, linux1394-devel@...ts.sourceforge.net
Subject: Re: [PATCH 0/4] New firewire stack - updated patches
Kristian Høgsberg wrote:
> And as for gap count optimization, I just added that to my git repo.
What can I say.
...
> and the optimization is definitely noticeable. This is a setup
> where the box and the disk are both connected to a hub so the max hops
> is 2, so we're using gap count 4:
...
> Though I see that Mac OS X uses a more conservative setting for a
> similar topology, so maybe we need to add a bit of "margin" to the
> numbers from the table from 1394.
The table in IEEE 1394-1995 is not entirely safe. Use the one from IEEE
1394a-2000:
  Hops  GC | Hops  GC | Hops  GC | Hops  GC | Hops  GC
     1   5 |    6  16 |   11  29 |   16  43 |   21  57
     2   7 |    7  18 |   12  32 |   17  46 |   22  59
     3   8 |    8  21 |   13  35 |   18  48 |   23  62
     4  10 |    9  24 |   14  37 |   19  51 |
     5  13 |   10  26 |   15  40 |   20  54 |
(It's certainly not necessary to optimize for more than ~8 hops.)
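For what it's worth, the table boils down to a trivial lookup. A minimal
sketch in kernel style (the array and function names here are made up for
illustration, not taken from your patches):

	#include <linux/kernel.h>
	#include <linux/types.h>

	/*
	 * Gap counts per IEEE 1394a-2000, indexed by maximum hop count
	 * (the table quoted above).  Index 0 maps to the reset default.
	 */
	static const u8 gap_count_table[] = {
		63,  5,  7,  8, 10, 13, 16, 18, 21, 24, 26, 29,
		32, 35, 37, 40, 43, 46, 48, 51, 54, 57, 59, 62,
	};

	/* Pick a gap count for the observed maximum hop count. */
	static u8 gap_count_for_hops(int max_hops)
	{
		if (max_hops <= 0 || max_hops >= ARRAY_SIZE(gap_count_table))
			return 63;	/* fall back to the reset default */
		return gap_count_table[max_hops];
	}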
But note that this table no longer works as soon as a 1394b node is in
the mix, at least when one or more 1394b repeater nodes are present; I'm
not sure about 1394b leaf nodes. Because of the potentially much larger
repeater delays of 1394b PHYs, the only reliable way to determine a
working minimal gap count for such setups is to measure round-trip delays
with ping packets. A good compromise would be to run table-based gap
count optimization in pure 1394a environments and no optimization in
1394b or mixed environments, even though the latter would clearly benefit
from it as well.
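The compromise could then be as simple as a guard in front of that
lookup. Again just a sketch, reusing the helper from above; how a 1394b
PHY is detected (the b_phy_present flag here is hypothetical) depends on
the topology code:

	/*
	 * Only use the 1394a table when no 1394b PHY was seen during
	 * self-ID; otherwise leave the gap count at the reset default.
	 */
	static u8 choose_gap_count(int max_hops, bool b_phy_present)
	{
		if (b_phy_present)
			return 63;	/* no optimization on 1394b/mixed buses */
		return gap_count_for_hops(max_hops);
	}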
--
Stefan Richter
-=====-=-==- ==-- =-=--
http://arcgraph.de/sr/