Message-ID: <52CD5E37.5070104@aimvalley.nl>
Date: Wed, 08 Jan 2014 15:18:31 +0100
From: Norbert van Bolhuis <nvbolhuis@...valley.nl>
To: Daniel Borkmann <dborkman@...hat.com>
CC: Jesper Dangaard Brouer <brouer@...hat.com>, netdev@...r.kernel.org,
David Miller <davem@...emloft.net>, uaca@...mni.uv.es
Subject: Re: single process receives own frames due to PACKET_MMAP
On 01/07/14 16:57, Daniel Borkmann wrote:
> On 01/07/2014 04:46 PM, Norbert van Bolhuis wrote:
>> On 01/07/14 16:26, Daniel Borkmann wrote:
>>> On 01/07/2014 04:16 PM, Norbert van Bolhuis wrote:
>>>> On 01/07/14 15:09, Jesper Dangaard Brouer wrote:
>>>>> On Tue, 07 Jan 2014 14:16:03 +0100
>>>>> Norbert van Bolhuis<nvbolhuis@...valley.nl> wrote:
>>>>>> On 01/07/14 11:06, Jesper Dangaard Brouer wrote:
>>>>>>> On Tue, 07 Jan 2014 10:32:01 +0100
>>>>>>> Daniel Borkmann<dborkman@...hat.com> wrote:
>>>>>>>
>>>>>>>> On 01/06/2014 11:58 PM, Norbert van Bolhuis wrote:
>>>>>>>>>
>>>>> [...]
>>>>>>>>
>>>>>>>>> I'd say it makes no sense to make the same process receive its
>>>>>>>>> own transmitted frames on that same interface (unless it's lo).
>>>>>>>
>>>>>>> Have you set up:
>>>>>>> ring->s_ll.sll_protocol = 0
>>>>>>>
>>>>>>> This is what I did in trafgen to avoid this problem.
>>>>>>>
>>>>>>> See line 55 in netsniff-ng/ring.c:
>>>>>>> https://github.com/borkmann/netsniff-ng/blob/c3602a995b21e8133c7f4fd1fb1e7e21b6a844f1/ring.c#L55
>>>>>>>
>>>>>>> Commit:
>>>>>>> https://github.com/borkmann/netsniff-ng/commit/c3602a995b21e8133c7f4fd1fb1e7e21b6a844f1
>>>>>>>
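
For reference, a minimal sketch of the setup Jesper describes above (my
naming, untested; netsniff-ng wires this up slightly differently): open the
TX socket with protocol 0 and bind it with sll_protocol = 0, so the kernel
never registers a receive hook for it:

  /* TX-only AF_PACKET socket that registers no RX protocol hook, so it
   * does not show up in /proc/net/ptype and never receives frames
   * (including its own transmitted ones).
   */
  #include <string.h>
  #include <sys/socket.h>
  #include <linux/if_packet.h>

  static int open_tx_only_socket(int ifindex)
  {
          struct sockaddr_ll sll;
          int sock = socket(PF_PACKET, SOCK_RAW, 0); /* not htons(ETH_P_ALL) */

          if (sock < 0)
                  return -1;

          memset(&sll, 0, sizeof(sll));
          sll.sll_family   = AF_PACKET;
          sll.sll_protocol = 0;                      /* no RX protocol hook */
          sll.sll_ifindex  = ifindex;

          if (bind(sock, (struct sockaddr *)&sll, sizeof(sll)) < 0)
                  return -1;

          return sock;
  }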
>>>>>>
>>>>>>
>>>>>> No, I did not do that; I was checking my code against netsniff-ng-0.5.8-rc4.
>>>>>>
>>>>>> But I just tried it. I believe I now do the same as netsniff-ng-0.5.8-rc5,
>>>>>> but it doesn't work for me, maybe because I have an old FC14 system
>>>>>> (kernel 2.6.35.14-106.fc14.x86_64).
>>>>>>
>>>>>> So I tried to see whether netsniff-ng-0.5.8-rc5/trafgen still makes the
>>>>>> kernel call packet_rcv() on my FC14 system. I built and ran it, but I'm not
>>>>>> sure how to (easily) check that.
>>>>>
>>>>> The easiest way is to:
>>>>> cat /proc/net/ptype
>>>>> and check whether someone registered a proto handler/function: packet_rcv (or tpacket_rcv).
>>>>>
>>>>> The more exact method is to run "perf record -a -g", then look at the
>>>>> result with "perf report" for lock contention, "expand" the spin_lock and
>>>>> see whether packet_rcv() is calling this spin lock.
>>>>>
>>>>
>>>>
>>>> I checked the easy way.
>>>> Even on my old FC14 system the "protocol=0 patch" seems to make a difference
>>>> for trafgen.
>>>> Without the patch I see a "packet_rcv" entry in /proc/net/ptype for each
>>>> CPU in use by trafgen.
>>>> With the patch I see no additional "packet_rcv" entry.
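
For reference, those extra entries look roughly like this (illustrative
only; exact formatting differs per kernel version, and eth0 plus the other
protocol lines are just examples):

  Type Device      Function
  ALL  eth0     packet_rcv
  ALL  eth0     packet_rcv
  0800          ip_rcv
  0806          arp_rcv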
>>>
>>> Yes, that is expected behaviour. ;-) See more below.
>>>
>>>> It could be my Appl is wrong or maybe the "protocol=0 patch" does not help.
>>>> I think the latter; after all, my Appl has, unlike trafgen, another RX
>>>> (AF_PACKET) socket.
>>>>
>>>>
>>>>>
>>>>>> Anyway, Wireshark does capture the trafgen-generated
>>>>>> frames; does that say anything?
>>>>>
>>>>> Be careful not to start a wireshark/tcpdump at the same time, as this
>>>>> will slow you down.
>>>>>
>>>>>> In the future, I can at least use PACKET_QDISC_BYPASS as a "workaround".
>>>>>
>>>>> And in the future with PACKET_QDISC_BYPASS, your wireshark will not
>>>>> catch these packets, remember that.
>>>>>
>>>>
>>>>
>>>> Yes, this is why I would love to see the "protocol=0 patch" work for my Appl.
>>>>
>>>> So I will try my Appl with the latest net-next kernel to see if that makes
>>>> it work. Hopefully I can find some time in the coming days; I will keep
>>>> you informed.
>>>
>>> As long as there is at least one PF_PACKET receive socket open and you do
>>> not make use of PACKET_QDISC_BYPASS on your tx socket, then those packets are
>>> looped back through the dev_queue_xmit_nit() path, even if your tx socket
>>> uses protocol=0.
>>>
>>> If you make use of PACKET_QDISC_BYPASS [1] for your particular tx socket, then
>>> packets generated by that socket will not hit the dev_queue_xmit_nit() path
>>> back to other possible rx listeners that are present on your system (w/ the
>>> side-effects for tx as described in [1]).
>>>
>>> [1] Documentation/networking/packet_mmap.txt +960
>>>
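
For reference, a minimal sketch of enabling the bypass on a TX socket
(assumes headers new enough to define PACKET_QDISC_BYPASS, i.e. net-next at
the time of this thread):

  /* Hand packets from this TX socket straight to the driver, skipping
   * the qdisc layer and therefore also the dev_queue_xmit_nit() tap
   * delivery.  Side effect: local listeners (wireshark/tcpdump, other
   * PF_PACKET rx sockets) no longer see these frames.
   */
  #include <sys/socket.h>
  #include <linux/if_packet.h>

  static int enable_qdisc_bypass(int sock)
  {
          int one = 1;

          return setsockopt(sock, SOL_PACKET, PACKET_QDISC_BYPASS,
                            &one, sizeof(one));
  }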
>>
>>
>> Ok, that's clear.
>>
>> But this means my PF_PACKET socket application performs worse because of
>> using PACKET_MMAP. I expected the opposite.
>>
>> After all, my old PF_PACKET socket application (which does not use PACKET_MMAP)
>> uses only one PF_PACKET socket (for TX and RX). Because packets are never sent
>> back to the socket they originated from, my old PF_PACKET socket application
>> performs better.
>>
>> Is there a way to use one PF_PACKET socket for both TX and RX and use PACKET_MMAP?
>
> Yep:
>
> http://thread.gmane.org/gmane.linux.network/269129/focus=269188
>
> Feel free to make a patch and add this to Documentation/networking/packet_mmap.txt;
> I think it could be useful for others as well.
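
For anyone else reading along: if I read that thread correctly, the idea is
to request both PACKET_RX_RING and PACKET_TX_RING on the same socket and map
the two rings with a single mmap() call, RX ring first. A rough, untested
sketch (ring geometry values are arbitrary examples):

  /* One AF_PACKET socket with both an RX and a TX ring.  Both rings are
   * requested via setsockopt() and mapped with one mmap(); the RX ring
   * occupies the first part of the mapping, the TX ring follows it.
   */
  #include <string.h>
  #include <sys/mman.h>
  #include <sys/socket.h>
  #include <linux/if_packet.h>

  struct ring_view {
          void   *rx;        /* first RX frame */
          void   *tx;        /* first TX frame */
          size_t  rx_size;
          size_t  tx_size;
  };

  static int setup_rx_tx_rings(int sock, struct ring_view *v)
  {
          struct tpacket_req req;
          void *map;

          memset(&req, 0, sizeof(req));
          req.tp_block_size = 4096;          /* example geometry */
          req.tp_frame_size = 2048;
          req.tp_block_nr   = 64;
          req.tp_frame_nr   = req.tp_block_nr *
                              (req.tp_block_size / req.tp_frame_size);

          if (setsockopt(sock, SOL_PACKET, PACKET_RX_RING,
                         &req, sizeof(req)) < 0)
                  return -1;
          if (setsockopt(sock, SOL_PACKET, PACKET_TX_RING,
                         &req, sizeof(req)) < 0)
                  return -1;

          v->rx_size = (size_t)req.tp_block_size * req.tp_block_nr;
          v->tx_size = v->rx_size;           /* same geometry used twice */

          map = mmap(NULL, v->rx_size + v->tx_size, PROT_READ | PROT_WRITE,
                     MAP_SHARED, sock, 0);
          if (map == MAP_FAILED)
                  return -1;

          v->rx = map;
          v->tx = (char *)map + v->rx_size;
          return 0;
  }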
Good, it all works fine now, though performance is still not as good as I'd hoped.
I will send a doc patch soon.
Thanks for all the help!
---
Norbert.