Date:	Tue, 03 Mar 2015 17:18:19 +0800
From:	Fan Du <fengyuleidian0615@...il.com>
To:	John Heffner <johnwheffner@...il.com>
CC:	Fan Du <fan.du@...el.com>, Eric Dumazet <edumazet@...gle.com>,
	David Miller <davem@...emloft.net>,
	Netdev <netdev@...r.kernel.org>
Subject: Re: [PATCHv3 net-next 3/4] ipv4: shrink current mss for tcp PMTU
 blackhole detection

On 2015/03/03 04:32, John Heffner wrote:
> On Mon, Mar 2, 2015 at 4:29 AM, Fan Du <fengyuleidian0615@...il.com> wrote:
>> Timeout indicates search_high should be set to the new mtu corresponding
>> to current_mss, no matter how we change search_low. So the best shot here
>> IMO would be updating search_high with current_mss, which in turn makes
>> the search window *slide* from right to left, and the probing will
>> converge at a good speed eventually.
>>
>> So my thought is:
>> @@ -113,6 +113,7 @@ static void tcp_mtu_probing(struct inet_connection_sock *icsk, struct sock *sk)
>>                         struct tcp_sock *tp = tcp_sk(sk);
>>                         int mss;
>>
>> +                       icsk->icsk_mtup.search_high = tcp_mss_to_mtu(sk, tcp_current_mss(sk));
>>                         mss = tcp_mtu_to_mss(sk, icsk->icsk_mtup.search_low) >> 1;
>>                         mss = min(net->ipv4.sysctl_tcp_base_mss, mss);
>>                         mss = max(mss, 68 - tp->tcp_header_len);
> Search_high should be adjusted downward only when you're quite certain
> that you've gotten a negative signal.  There are many possible reasons
> for successive timeouts including intermittent disconnection, and they
> should not have a persistent (or very long-term) effect on MTU.  Leave
> search_high where it is until a working MTU can be established, then
> probe upward until probing can give you confidence you've found a new
> ceiling, or gotten back to the old one.
>
> If you think the current approach is broken, it would help to see a
> concrete demonstration of how it's deficient (a real packet trace is
> good!), and how a different approach would work better.

> With the original approach (doubling mss), mss does not fall between search_low and
> search_high; it always equals search_low (minus headers). The potential mss in the
> blackhole case is 256 and 128; after doubling it will eventually become 512 and 256
> no matter how the route changes, even if the mtu is reduced from 1500 to 1100 at an
> intermediate node.

As for the above statement, my test scenario is simple: a vxlan tunnel connects two
docker instances on two hosts. All MTUs default to 1500, and iperf runs between the
docker instances. After the connection is established, I reduce the physical eth0 MTU
on the iperf sender host to 1100; after a couple of seconds the mss is set to 512.
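
For reference, below is a small userspace sketch of that sequence. It is only a toy
model of the shrink path quoted above, not the kernel code: it assumes fixed 40-byte
IPv4+TCP headers and tcp_base_mss = 512, and it ignores TCP options, the vxlan
encapsulation overhead, and the tcp_sync_mss() details such as the half-window bound.

/* blackhole_mss.c - toy model of the mss values seen in the test above. */
#include <stdio.h>

#define HDR_LEN      40          /* 20-byte IPv4 + 20-byte TCP, no options */
#define TCP_BASE_MSS 512         /* default tcp_base_mss assumed here      */

static int mtu_to_mss(int mtu) { return mtu - HDR_LEN; }
static int mss_to_mtu(int mss) { return mss + HDR_LEN; }
static int min_i(int a, int b) { return a < b ? a : b; }
static int max_i(int a, int b) { return a > b ? a : b; }

int main(void)
{
	int pmtu       = 1500;                      /* cached path MTU      */
	int search_low = mss_to_mtu(TCP_BASE_MSS);  /* initial lower bound  */
	int enabled    = 0;
	int mss, timeout;

	/* eth0 drops to 1100, large segments are silently discarded inside
	 * the tunnel, and retransmission timeouts start firing.
	 */
	for (timeout = 1; timeout <= 4; timeout++) {
		if (!enabled) {
			/* First timeout: probing is switched on and mss is
			 * clamped to the mss implied by search_low, i.e. 512,
			 * which is the value observed with iperf.
			 */
			enabled = 1;
			mss = min_i(mtu_to_mss(pmtu), mtu_to_mss(search_low));
		} else {
			/* Later timeouts: the shrink path from the hunk
			 * quoted above - halve, clamp to tcp_base_mss, floor.
			 */
			mss = mtu_to_mss(search_low) >> 1;
			mss = min_i(TCP_BASE_MSS, mss);
			mss = max_i(mss, 68 - 20);
			search_low = mss_to_mtu(mss);
		}
		printf("timeout %d: mss=%d search_low=%d\n",
		       timeout, mss, search_low);
	}
	return 0;
}

Compiled with plain gcc it prints mss = 512, 256, 128, 64: the value always tracks
search_low and never settles anywhere between search_low and the mss implied by the
new 1100-byte path MTU.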

And with the binary search approach, my investigation shows that search_high is in fact
set from probe_size on the path triggered by tcp_fastretrans_alert(), so the search
window does slide to the left; therefore I will drop this patch. Perhaps my test method
is simply not realistic, and I believe what you said:
> There are many possible reasons for successive timeouts, including intermittent
> disconnection, and they should not have a persistent (or very long-term) effect on MTU.

In any case, no matter whether search_high is adjusted here, the reprobe timer will
restore search_high to the maximum allowed, and once again a new mss will become
available.
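
(The idea behind that reset is roughly the sketch below. It is a simplified standalone
illustration, not the code from this series: the probe_timestamp field and the
600-second interval are assumptions borrowed from the reprobe-timer patch, and all the
kernel-specific plumbing is left out.)

#include <stdio.h>

/* Simplified model of the periodic reprobe: once the configured interval
 * has passed since the last probe, search_high is pushed back up to the
 * largest MTU we are willing to try, so a later probe can rediscover a
 * bigger path MTU even after a blackhole collapsed the window.
 */
struct mtu_probe_state {
	int  search_low;        /* smallest MTU known to work           */
	int  search_high;       /* current ceiling for probing          */
	long probe_timestamp;   /* when we last probed (seconds)        */
};

static void mtu_check_reprobe(struct mtu_probe_state *mp, long now,
			      long interval, int max_mtu, int cur_mtu)
{
	if (now - mp->probe_timestamp < interval)
		return;
	mp->search_high = max_mtu;     /* widen the window again        */
	mp->search_low  = cur_mtu;     /* restart from what works now   */
	mp->probe_timestamp = now;
}

int main(void)
{
	/* Window collapsed after a blackhole: 552..552, last probed at t=0. */
	struct mtu_probe_state mp = { 552, 552, 0 };

	mtu_check_reprobe(&mp, 300, 600, 1500, 552);   /* too early: no-op  */
	printf("t=300: low=%d high=%d\n", mp.search_low, mp.search_high);

	mtu_check_reprobe(&mp, 600, 600, 1500, 552);   /* interval expired  */
	printf("t=600: low=%d high=%d\n", mp.search_low, mp.search_high);
	return 0;
}

So even in the worst case the window only stays collapsed until the next reprobe
interval expires.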

I will resend the rest of the patches for review after incorporating your comments.
Thanks for your feedback.

