Message-ID: <20190901065456.GU2312@nanopsycho>
Date: Sun, 1 Sep 2019 08:54:56 +0200
From: Jiri Pirko <jiri@...nulli.us>
To: Andrew Lunn <andrew@...n.ch>
Cc: Ido Schimmel <idosch@...sch.org>,
David Miller <davem@...emloft.net>,
horatiu.vultur@...rochip.com, alexandre.belloni@...tlin.com,
UNGLinuxDriver@...rochip.com, allan.nielsen@...rochip.com,
ivecera@...hat.com, f.fainelli@...il.com, netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 1/2] net: core: Notify on changes to dev->promiscuity.
Sat, Aug 31, 2019 at 09:35:56PM CEST, andrew@...n.ch wrote:
>> Also, what happens when I'm running these applications without putting
>> the interface in promisc mode? On an offloaded interface I would not be
>> able to even capture packets addressed to my interface's MAC address.
>
>Sorry for rejoining the discussion late. I've been travelling and I'm
>now 3/4 of the way to Lisbon.
>
>That statement I don't get. If the frame has the MAC address of the
>interface, it has to be delivered to the CPU. And so pcap will see it
1) You cannot send all such packets to the slow path (routed packets,
   for example).
2) You might be interested in examining packets with other destination
   MACs too.
>when running on the interface. I can pretty much guarantee every DSA
>driver does that.
>
>But to address the bigger picture. My understanding is that we want to
>model offloading as a mechanism to accelerate what Linux can already
>do. The user should not have to care about these accelerators. The
>interface should work like a normal Linux interface. I can put an IP
>address on it and ping a peer. I can run a dhcp client and get an IP
>address from a dhcp server. I can add the interface to a bridge, and
>packets will get bridged. I as a user should not need to care if this
>is done in software, or accelerated by offloading it. I can add a
>route, and if the accelerator knows about L3, it can accelerate that as
>well. If not, the kernel will route it.
>
>So if I run wireshark on an interface, I expect the interface will be
>put into promisc mode and that I will see all packets ingressing the
>interface.
Again, you are merging two things together:
1) RX filter - this is needed for bridge, OVS, others (tc);
   this is the promisc setting
2) CPU trap - here one may be interested in:
   a) only the packets the ASIC traps to the CPU by default (ARPs, STP,
      BGP, etc.)
   b) all packets ingressing the port (note that those are only the
      ones that passed the RX filter)
Clearly 1) and 2) need separate knobs. In 2), there are valid use cases
for both a) and b). Only the user can tell which one he is interested
in. This can't happen automagically.
Can we just have a knob for 2)?
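To make 1) concrete: that knob already exists as IFF_PROMISC, and any
capable process can flip it through SIOCSIFFLAGS. A minimal sketch
(the port name "swp1" is just an example):

#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct ifreq ifr = {0};
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0) {
		perror("socket");
		return 1;
	}
	/* "swp1" is an example switch port name. */
	strncpy(ifr.ifr_name, "swp1", IFNAMSIZ - 1);

	/* Knob 1): open up the RX filter. This says nothing about
	 * which packets should reach the CPU.
	 */
	if (ioctl(fd, SIOCGIFFLAGS, &ifr) < 0) {
		perror("SIOCGIFFLAGS");
		return 1;
	}
	ifr.ifr_flags |= IFF_PROMISC;
	if (ioctl(fd, SIOCSIFFLAGS, &ifr) < 0) {
		perror("SIOCSIFFLAGS");
		return 1;
	}
	close(fd);
	return 0;
}

There is no equally simple per-port knob for 2) b) today; the closest
is a tc filter with "action trap", which is exactly what is being
debated here.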
>What the accelerator needs to do to achieve this, I as a user don't
>care.
>
>I can follow the argument that I won't necessarily see every
>packet. But that is always true. For many embedded systems, the CPU is
>too slow to receive at line rate, even when we are talking about 1G
>links. Packets do get dropped. And I hope tcpdump users understand
>that.
>
>For me, having tcpdump use tc trap is just wrong. It breaks the model
>that the user should not care about the accelerator. If anything, I
>think the driver needs to translate the cBPF which pcap passes to the
>kernel into whatever internal format the accelerator can process. That
>is just another example of using hardware acceleration.
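For context, what pcap passes down today is a classic BPF program
attached with the SO_ATTACH_FILTER socket option; that program is what
a driver would have to translate. A minimal sketch, with a
hand-assembled filter equivalent to tcpdump's "arp" expression:

#include <arpa/inet.h>
#include <linux/filter.h>
#include <linux/if_ether.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
	/* Load the 16-bit ethertype at offset 12, accept the frame
	 * if it equals ETH_P_ARP, otherwise drop it.
	 */
	struct sock_filter code[] = {
		{ BPF_LD  | BPF_H   | BPF_ABS, 0, 0, 12        },
		{ BPF_JMP | BPF_JEQ | BPF_K,   0, 1, ETH_P_ARP },
		{ BPF_RET | BPF_K,             0, 0, 0xffff    }, /* accept */
		{ BPF_RET | BPF_K,             0, 0, 0         }, /* drop */
	};
	struct sock_fprog prog = {
		.len    = sizeof(code) / sizeof(code[0]),
		.filter = code,
	};
	/* Needs CAP_NET_RAW, like any capture tool. */
	int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

	if (fd < 0) {
		perror("socket");
		return 1;
	}
	if (setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER,
		       &prog, sizeof(prog)) < 0) {
		perror("SO_ATTACH_FILTER");
		return 1;
	}
	/* recv() on fd now only ever sees ARP frames. */
	return 0;
}

How much of that instruction set an ASIC can express is of course
hardware-dependent, so any such translation would still need a
software fallback.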
>
> Andrew