Message-ID: <1508729177.2898.5.camel@klaipeden.com>
Date: Mon, 23 Oct 2017 12:26:17 +0900
From: Koichiro Den <den@...ipeden.com>
To: Eric Dumazet <eric.dumazet@...il.com>
Cc: netdev@...r.kernel.org, davem@...emloft.net, ncardwell@...gle.com
Subject: Re: [net-next] tcp: do tcp_mstamp_refresh before retransmits on TSQ
handler
On Sun, 2017-10-22 at 10:11 -0700, Eric Dumazet wrote:
> On Sun, 2017-10-22 at 09:49 -0700, Eric Dumazet wrote:
>
> > Here is the test I cooked.
> >
> > Note that it actually passes just fine, so I am wondering if your patch
> > is needed after all..
> >
>
> Oh this is because tcp_mstamp_refresh() is called from tcp_write_xmit()
> and tcp_write_xmit() is directly called from tcp_tsq_handler()
Thanks for all your help. I guess we should focus on the retransmit call path
which comes just before that, shouldn't we?
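For reference, the path I mean in tcp_tsq_handler() looks roughly like the
sketch below (abridged from my reading of net-next; the exact code may differ),
with the spot my patch touches marked:

        static void tcp_tsq_handler(struct sock *sk)
        {
                if ((1 << sk->sk_state) &
                    (TCPF_ESTABLISHED | TCPF_FIN_WAIT1 | TCPF_CLOSING |
                     TCPF_CLOSE_WAIT  | TCPF_LAST_ACK)) {
                        struct tcp_sock *tp = tcp_sk(sk);

                        if (tp->lost_out > tp->retrans_out &&
                            tp->snd_cwnd > tcp_packets_in_flight(tp)) {
                                /* my patch adds tcp_mstamp_refresh(tp) here, so the
                                 * retransmits below do not stamp skbs with a
                                 * tp->tcp_mstamp taken at the previous ACK processing */
                                tcp_xmit_retransmit_queue(sk);
                        }

                        /* tcp_write_xmit() refreshes tp->tcp_mstamp itself, but only
                         * after the retransmit pass above has already run */
                        tcp_write_xmit(sk, tcp_current_mss(sk), tp->nonagle,
                                       0, GFP_ATOMIC);
                }
        }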
>
> So it looks your patch has no observable effect.
>
> (Even with usec TS as we use here at Google, it has no effect since TCP
> stack does not consider RTT samples being valid if skb was ever
> retransmitted)
>
>
Is that point about the tcp_rack_advance part specifically, or about RTT sampling in general?
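(For my own understanding: the "in general" part would be the Karn-style check
in tcp_clean_rtx_queue(), roughly as in the abridged sketch below from my
reading of net-next; please correct me if I am looking at the wrong spot.)

        /* net/ipv4/tcp_input.c, tcp_clean_rtx_queue(), abridged sketch */
        if (sacked & TCPCB_RETRANS) {
                if (sacked & TCPCB_SACKED_RETRANS)
                        tp->retrans_out -= acked_pcount;
                flag |= FLAG_RETRANS_DATA_ACKED;
        } else if (!(sacked & TCPCB_SACKED_ACKED)) {
                /* only skbs that were never retransmitted feed first_ackt/
                 * last_ackt, i.e. the RTT estimator, so a stale skb_mstamp
                 * on a retransmitted skb cannot skew srtt */
                last_ackt = skb->skb_mstamp;
                if (!first_ackt)
                        first_ackt = last_ackt;
        }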
Btw, perf probe makes the original point described in my commit message apparent
as follows:
$ perf probe 'T1=tcp_write_xmit:25 tm=+1464(%bx):u64'
$ perf probe 'T2=tcp_xmit_retransmit_queue+0
l_o=+1664(%di):u32 # tp->lost_out
r_o=+1520(%di):u32 # tp->retrans_out
tm=+1464(%di):u64' # skb_mstamp will be set to this.
$ perf trace -aT --no-syscalls --call-graph dwarf \
-e probe:T1/max-stack=4/ \
-e probe:T2/max-stack=1/ |& \
sed 's/..kernel.kallsyms..//;s/probe://' # to make it a bit more readable :p
$ # run the fq-pacing-tsq-rtx-ts.pkt Eric provided:
[before patch applied]
-----8<------8<-----
2560581.604 T2:(ffffffff817da5e0) l_o=25 r_o=0 tm=2560581404)
tcp_xmit_retransmit_queue.part.33
tcp_xmit_recovery.part.53
2560581.614 T1:(ffffffff817d7e74) tm=2560581612) # <--- (1)
tcp_write_xmit
__tcp_push_pending_frames
tcp_rcv_established
tcp_v4_do_rcv
/*
* ^
* | could be unexpectedly long, e.g. due to BLOCK_SOFTIRQ
* v
*/
2560582.215 T2:(ffffffff817da5e0) l_o=25 r_o=5 tm=2560581612)
/* <- same as (1) above */
tcp_xmit_retransmit_queue.part.33
tcp_tasklet_func
----->8------->8-----
[after patch applied]
-----8<------8<-----
97846.150 T2:(ffffffff817da5e0) l_o=25 r_o=0 tm=97846132)
tcp_xmit_retransmit_queue.part.33
tcp_xmit_recovery.part.53
97846.159 T1:(ffffffff817d7e74) tm=97846158) # <--- (1)
tcp_write_xmit
__tcp_push_pending_frames
tcp_rcv_established
tcp_v4_do_rcv
/*
* ^
* | could be unexpectedly long, e.g. due to BLOCK_SOFTIRQ
* v
*/
97847.144 T2:(ffffffff817da5e0) l_o=25 r_o=5 tm=97847134)
/* <- not the same as (1).
* never affected now that we ensure it's refreshed */
tcp_xmit_retransmit_queue.part.33
tcp_tasklet_func
----->8------->8-----
Now I wonder whether this is more of a theoretical improvement rather than a
patch that fixes one specific observable bug.