Date:	Wed, 29 Apr 2015 15:38:35 -0700
From:	Alexei Starovoitov <ast@...mgrid.com>
To:	Eric Dumazet <eric.dumazet@...il.com>
CC:	"David S. Miller" <davem@...emloft.net>,
	Eric Dumazet <edumazet@...gle.com>,
	Daniel Borkmann <daniel@...earbox.net>,
	Thomas Graf <tgraf@...g.ch>,
	Jamal Hadi Salim <jhs@...atatu.com>,
	John Fastabend <john.r.fastabend@...el.com>,
	netdev@...r.kernel.org
Subject: Re: [PATCH RFC net-next] pktgen: introduce 'rx' mode

On 4/29/15 3:19 PM, Eric Dumazet wrote:
> On Wed, 2015-04-29 at 14:55 -0700, Alexei Starovoitov wrote:
>> On 4/28/15 9:14 PM, Eric Dumazet wrote:
>>> On Tue, 2015-04-28 at 19:11 -0700, Alexei Starovoitov wrote:
>>>
>>>
>>> This looks buggy.
>>>
>>> An skb can be put on a queue, so skb->next and skb->prev cannot be reused,
>>> or queues will be corrupted.
>>
>> I don't see the bug yet.
>> Any layer that wants to do such queueing should do skb_share_check
>> first, just like ip_rcv does. So everything in the IP world should
>> work fine, because it will be operating on a clean cloned skb.
>
> Really this is what _you_ think is needed, so that your patch can fly.
>
> In the current state of the stack, skb_share_check() is done where we
> know that a packet _might_ be delivered to multiple end points
> (deliver_skb() does atomic_inc(&skb->users)).
>
> But RPS/RFS/GRO do not care about your new rule.
>
> Yes, before reaching __netif_receive_skb_core(), packets are supposed to
> belong to the stack. We are supposed to be able to queue them without
> adding a check for skb->users being one or not, and without eventually
> adding an expensive memory allocation/copy.
>
> We are not going to add an extra check just to make pktgen rx fast.
> pktgen will have to comply with existing rules.
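
For context, the deliver_skb() reference bump mentioned above looks
roughly like the following; this is a from-memory sketch of
net/core/dev.c around this kernel version, not a verbatim copy:

static inline int deliver_skb(struct sk_buff *skb,
			      struct packet_type *pt_prev,
			      struct net_device *orig_dev)
{
	if (unlikely(skb_orphan_frags(skb, GFP_ATOMIC)))
		return -ENOMEM;
	/* the tap/endpoint keeps its own reference to the skb */
	atomic_inc(&skb->users);
	return pt_prev->func(skb, skb->dev, pt_prev, orig_dev);
}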

I'm not making or suggesting any new rules.
ip_rcv is doing this skb_share_check() not because of pktgen rx,
but because there can be taps and deliver_skb(), as you said.
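Concretely, the pattern ip_rcv follows is something like this (a sketch
from memory, not a verbatim copy of net/ipv4/ip_input.c):

	skb = skb_share_check(skb, GFP_ATOMIC);
	if (!skb) {
		IP_INC_STATS_BH(dev_net(dev), IPSTATS_MIB_INDISCARDS);
		goto out;
	}

skb_share_check() clones the skb when skb->users != 1 and drops the
caller's reference to the shared one, so everything past that point
operates on a private copy.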
GRO has a different interface and this pktgen mode cannot benchmark it.
RPS/RFS is not benchmarkable by this approach either.
To me this is all fine. I'm not trying to build a universal
benchmarking tool. This one is dumb and simple, and primarily
oriented to benchmarking changes to netif_receive_skb and the ingress
qdisc only. I'm not suggesting it be used everywhere.
I already mentioned this in the cover letter:
"The profile dump looks as expected for RX of UDP packets
without local socket except presence of __skb_clone."
Clearly I'm not suggesting using pktgen rx to optimize the IP stack,
and I'm not suggesting at all that the stack should assume users != 1
when an skb hits netif_receive_skb. Today, at the beginning of
netif_receive_skb, we know that users == 1 without checking.
I'm not changing that assumption.
Just like the pktgen xmit path cheats a little bit while
benchmarking TX, I'm cheating a little bit with users != 1 on RX.
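
To make the cheat concrete, it amounts to something like the sketch
below. This is purely illustrative, not the code from the RFC patch,
and the function name is made up:

static void pktgen_rx_inject_sketch(struct sk_buff *skb, int burst)
{
	/* hold extra references so the same skb can be injected
	 * repeatedly; the stack then sees users != 1, ip_rcv's
	 * skb_share_check() makes a private clone, and that is the
	 * __skb_clone hit in the profile
	 */
	atomic_add(burst, &skb->users);
	while (burst-- > 0)
		netif_receive_skb(skb);	/* each pass consumes one reference */
	/* the reference pktgen started with is still held here */
}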

