Open Source and information security mailing list archives

Date:   Tue, 20 Nov 2018 10:16:38 -0800 (PST)
From:   David Miller <davem@...emloft.net>
To:     edumazet@...gle.com
Cc:     netdev@...r.kernel.org, eric.dumazet@...il.com
Subject: Re: [PATCH net-next] tcp: drop dst in tcp_add_backlog()

From: Eric Dumazet <edumazet@...gle.com>
Date: Tue, 20 Nov 2018 10:08:12 -0800

> On Tue, Nov 20, 2018 at 10:01 AM David Miller <davem@...emloft.net> wrote:
>>
>> From: Eric Dumazet <edumazet@...gle.com>
>> Date: Mon, 19 Nov 2018 17:45:55 -0800
>>
>> > Under stress, softirq rx handler often hits a socket owned by the user,
>> > and has to queue the packet into socket backlog.
>> >
>> > When this happens, skb dst refcount is taken before we escape rcu
>> > protected region. This is done from __sk_add_backlog() calling
>> > skb_dst_force().
>> >
>> > The consumer will then have to perform the opposite, equally costly
>> > operation.
>> >
>> > AFAIK nothing in the TCP stack requests the dst after the skb was
>> > stored in the backlog. If that were the case, we would have seen
>> > failures already, since skb_dst_force() can end up clearing the skb
>> > dst anyway.
>> >
>> > Signed-off-by: Eric Dumazet <edumazet@...gle.com>
>>
>> Hmmm, it seems to be used by the connection completion code to set up
>> the socket's cached rx dst, right?
>>
>> For example tcp_finish_connect() --> icsk->icsk_af_ops->sk_rx_dst_set(sk, skb)
> 
> We already cope with skb->dst being NULL there, I believe.
> 
> For reference look at
> 
> commit 5037e9ef9454917b047f9f3a19b4dd179fbf7cd4 ("net: fix IP early demux races")

Well, I'm sure we "handle" it.  But I was asking more about the
performance tradeoff, which probably favors your change, but I wanted
to be sure.
