Date:   Wed, 12 Dec 2018 15:11:56 +0100
From:   Michal Kubecek <mkubecek@...e.cz>
To:     netdev@...r.kernel.org
Cc:     Peter Oskolkov <posk@...gle.com>,
        Gustavo Figueira <gfigueira@...e.com>
Subject: RFC: Dropping duplicate fragments as overlapping?

Hello,

one of our customers started seeing NFS failures after updating to
a kernel with the "FragmentSmack" fixes.

They are using a weird setup abusing the "broadcast" mode of bonding so
that each packet between some hosts is duplicated. Commit 7969e5c40dfd
("ip: discard IPv4 datagrams with overlapping segments.") treats
duplicate IPv4 fragments as "overlapping" and drops the whole reassembly
queue, so fragmented packets are never delivered unless the reassembly
completes before any of the duplicates arrives.
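
For illustration, a standalone toy model of that behaviour is below.
It is not the actual net/ipv4/ip_fragment.c code (the real code keeps
the fragments in an rbtree and is more involved); struct frag,
queue_insert() and the result values are all made up:

/*
 * Toy model of the post-7969e5c40dfd behaviour, NOT the kernel code.
 * An exact duplicate intersects the queued copy byte for byte, so it
 * is indistinguishable from an "overlapping" fragment here and poisons
 * the whole queue.
 */
#include <stdio.h>

struct frag {
	unsigned int start;	/* offset of first payload byte */
	unsigned int len;	/* payload length of this fragment */
};

enum insert_result {
	INSERT_OK,	/* fragment queued for reassembly */
	DROP_QUEUE,	/* overlap detected -> drop the whole queue */
};

static enum insert_result queue_insert(const struct frag *queued, int nr,
				       const struct frag *nf)
{
	for (int i = 0; i < nr; i++) {
		unsigned int q_end = queued[i].start + queued[i].len;
		unsigned int n_end = nf->start + nf->len;

		if (nf->start < q_end && queued[i].start < n_end)
			return DROP_QUEUE;
	}
	return INSERT_OK;
}

int main(void)
{
	struct frag queued[] = { { 0, 1480 }, { 1480, 1480 } };
	struct frag dup = { 1480, 1480 };	/* copy made by the bond */

	if (queue_insert(queued, 2, &dup) == DROP_QUEUE)
		puts("duplicate treated as overlap -> queue dropped");
	return 0;
}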

We have verified that modifying the reassembly code to check whether
the start and length of a newly received fragment match an already
queued one, and to drop only the new fragment in that case, resolves
their issue (a rough sketch of that check follows the two points
below). But I don't find such a solution desirable, for two reasons:

1. The IPv6 reassembly code has always dropped the packet on receiving
a duplicate fragment, and one can interpret RFC 5722 as actually
requiring us to do so. (I'm not completely sure, as RFC 5722 doesn't
seem to define what is meant by "overlapping".) In any case, there seem
to be no complaints about that.

2. The purpose of commit 7969e5c40dfd is to prevent an attacker from
overloading the reassembly code by forcing it to look up large numbers
of random fragments of a packet which is never going to be completed.
With the change indicated above, the attacker could still send a lot of
copies of the same fragment (carefully crafted to maximize the CPU time
spent on the lookup), so the commit would become effectively useless.
(We would no longer need to handle the overlapping fragments themselves,
but it still does not feel like something we would want.)
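
For completeness, this is roughly the shape of the check we tried
(again a simplified sketch, not the actual patch; the toy types mirror
the model above, extended with a DROP_FRAGMENT result). The comment in
the loop is the crux of point 2: even a harmless duplicate still has to
be compared against the queued fragments before it can be discarded.

struct frag {
	unsigned int start;	/* offset of first payload byte */
	unsigned int len;	/* payload length of this fragment */
};

enum insert_result {
	INSERT_OK,	/* fragment queued for reassembly */
	DROP_FRAGMENT,	/* exact duplicate -> drop only the new copy */
	DROP_QUEUE,	/* real overlap -> drop the whole queue */
};

static enum insert_result queue_insert_dedup(const struct frag *queued,
					     int nr, const struct frag *nf)
{
	for (int i = 0; i < nr; i++) {
		/* Even a harmless duplicate walks the queued fragments
		 * before being discarded, so the lookup cost remains. */
		if (queued[i].start == nf->start && queued[i].len == nf->len)
			return DROP_FRAGMENT;

		if (nf->start < queued[i].start + queued[i].len &&
		    queued[i].start < nf->start + nf->len)
			return DROP_QUEUE;
	}
	return INSERT_OK;
}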

On the other hand, the customer is kind of right that their setup,
however insane, used to work and no longer does, which is a regression
from their point of view.

The question I would like to discuss is: is dropping all packets with
duplicate fragments an acceptable loss for the "FragmentSmack"
mitigation? Or the other way around: would the regression (which was
AFAIK only encountered as a result of a misconfiguration) justify
weakening the "FragmentSmack" mitigation?

Michal Kubecek
