Message-ID: <20190531230008.GA15675@khorivan>
Date: Sat, 1 Jun 2019 02:00:10 +0300
From: Ivan Khoronzhuk <ivan.khoronzhuk@...aro.org>
To: Saeed Mahameed <saeedm@...lanox.com>
Cc: "brouer@...hat.com" <brouer@...hat.com>,
"daniel@...earbox.net" <daniel@...earbox.net>,
"xdp-newbies@...r.kernel.org" <xdp-newbies@...r.kernel.org>,
"davem@...emloft.net" <davem@...emloft.net>,
"john.fastabend@...il.com" <john.fastabend@...il.com>,
"ilias.apalodimas@...aro.org" <ilias.apalodimas@...aro.org>,
"grygorii.strashko@...com" <grygorii.strashko@...com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-omap@...r.kernel.org" <linux-omap@...r.kernel.org>,
"ast@...nel.org" <ast@...nel.org>,
"hawk@...nel.org" <hawk@...nel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"jakub.kicinski@...ronome.com" <jakub.kicinski@...ronome.com>
Subject: Re: [PATCH v2 net-next 7/7] net: ethernet: ti: cpsw: add XDP support
On Fri, May 31, 2019 at 10:08:03PM +0000, Saeed Mahameed wrote:
>On Fri, 2019-05-31 at 20:03 +0300, Ivan Khoronzhuk wrote:
>> On Fri, May 31, 2019 at 06:32:41PM +0200, Jesper Dangaard Brouer
>> wrote:
>> > On Fri, 31 May 2019 19:25:24 +0300 Ivan Khoronzhuk <
>> > ivan.khoronzhuk@...aro.org> wrote:
>> >
>> > > On Fri, May 31, 2019 at 05:46:43PM +0200, Jesper Dangaard Brouer
>> > > wrote:
>> > > > From below code snippets, it looks like you only allocated 1
>> > > > page_pool
>> > > > and sharing it with several RX-queues, as I don't have the full
>> > > > context
>> > > > and don't know this driver, I might be wrong?
>> > > >
>> > > > To be clear, a page_pool object is needed per RX-queue, as it
>> > > > is accessing a small RX page cache (which is protected by
>> > > > NAPI/softirq).
>> > >
>> > > There is one RX interrupt and one RX NAPI for all rx channels.
>> >
>> > So, what are you saying?
>> >
>> > You _are_ sharing the page_pool between several RX-channels, but it
>> > is safe because this hardware only has one RX interrupt + NAPI
>> > instance??
>>
>> I may be missing something, but in the case of cpsw it technically
>> means:
>> 1) RX interrupts are disabled while NAPI is scheduled, not for a
>> particular CPU or channel, but for the whole cpsw module.
>> 2) RX channels are handled one by one, by priority.
>
>Hi Ivan, I have a silly question..
>
>What is the reason behind having multiple RX rings with one CPU/NAPI
>handling all of them? Priority? How do you prioritize?
Several.
One reason, from what I know: the hardware can spread RX channels over
several CPUs/NAPIs, but because of an errata on some SoCs (or all of
them) that was discarded; the idea was that it could. Second, RX uses
the same davinci_cpdma API as the TX channels, which can be rate
limited; that API is used not only by cpsw but also by another driver,
so it can't easily be modified, and there is no reason to. Third, the
hardware can steer filtered traffic to specific RX queues and could
potentially be configured with ethtool ntuple rules or similar, but
that is not implemented....yet.
>
>> 3) After all of them are handled and the budget is not exhausted,
>> interrupts are re-enabled.
>> 4) If a page is returned to the pool from within NAPI, there are no
>> races, as the return is protected by softirq. If it's returned
>> outside of softirq, it's protected by the producer lock of the ring.
>>
>> It's probably not a good example for others of how it should be
>> used; it's not a big problem to move it to separate pools... I don't
>> even remember why I decided to use a shared pool, there were some
>> more reasons... need to search the history.
>>
>> > --
>> > Best regards,
>> > Jesper Dangaard Brouer
>> > MSc.CS, Principal Kernel Engineer at Red Hat
>> > LinkedIn: http://www.linkedin.com/in/brouer
--
Regards,
Ivan Khoronzhuk