Message-ID: <CAF=yD-+wq4U0bwiSgHhfSiWXgE9JOefqp69Nxcwz5JsQcS7_Uw@mail.gmail.com>
Date: Fri, 28 Jun 2019 10:56:02 -0400
From: Willem de Bruijn <willemdebruijn.kernel@...il.com>
To: Benjamin Poirier <bpoirier@...e.com>
Cc: Manish Chopra <manishc@...vell.com>,
GR-Linux-NIC-Dev <GR-Linux-NIC-Dev@...vell.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>
Subject: Re: [EXT] [PATCH net-next 03/16] qlge: Deduplicate lbq_buf_size
On Fri, Jun 28, 2019 at 4:57 AM Benjamin Poirier <bpoirier@...e.com> wrote:
>
> On 2019/06/26 11:42, Willem de Bruijn wrote:
> > On Wed, Jun 26, 2019 at 7:37 AM Benjamin Poirier <bpoirier@...e.com> wrote:
> > >
> > > On 2019/06/26 09:24, Manish Chopra wrote:
> > > > > -----Original Message-----
> > > > > From: Benjamin Poirier <bpoirier@...e.com>
> > > > > Sent: Monday, June 17, 2019 1:19 PM
> > > > > To: Manish Chopra <manishc@...vell.com>; GR-Linux-NIC-Dev <GR-Linux-
> > > > > NIC-Dev@...vell.com>; netdev@...r.kernel.org
> > > > > Subject: [EXT] [PATCH net-next 03/16] qlge: Deduplicate lbq_buf_size
> > > > >
> > > > > External Email
> > > > >
> > > > > ----------------------------------------------------------------------
> > > > > lbq_buf_size is duplicated to every rx_ring structure whereas lbq_buf_order is
> > > > > present once in the ql_adapter structure. All rings use the same buf size; keep
> > > > > only one copy of it. Also factor out the calculation of lbq_buf_size instead of
> > > > > having two copies.
> > > > >
> > > > > Signed-off-by: Benjamin Poirier <bpoirier@...e.com>
> > > > > ---
> > > [...]
> > > >
> > > > Not sure if this change is really required; I think fields relevant to rx_ring should be present in the rx_ring structure.
> > > > There are various other fields like "lbq_len" and "lbq_size" which would be the same for all rx rings but are still kept under the relevant rx_ring structure.
> >
> > The one argument against deduplicating might be if the original fields
> > are in a hot cacheline and the new location adds a cacheline access to
> > a hot path. Not sure if that is relevant here. But maybe something to
> > double check.
> >
>
> Thanks for the hint. I didn't check before because my hunch was that
> this driver is not near that level of optimization, but I checked now and
> got the following results.
Thanks for the data. I didn't mean to ask you to do a lot of extra work.
Sorry if it resulted in that.
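
For reference, the concern I had in mind was only along the lines of the
sketch below: a minimal userspace illustration, with simplified stand-in
struct and field names rather than the actual qlge definitions.

#include <stdio.h>

struct adapter {
	unsigned int lbq_buf_size;	/* single copy after deduplication */
};

struct rx_ring {
	struct adapter *qdev;		/* backpointer to the adapter */
	unsigned int lbq_buf_size;	/* per-ring copy before deduplication */
	/* ... plus whatever else the refill path touches ... */
};

/* Before: the size is read from the ring that is already hot in cache. */
static unsigned int buf_size_before(const struct rx_ring *ring)
{
	return ring->lbq_buf_size;
}

/* After: one extra pointer chase into the adapter struct, which may sit
 * on a cacheline the refill path would not otherwise touch. */
static unsigned int buf_size_after(const struct rx_ring *ring)
{
	return ring->qdev->lbq_buf_size;
}

int main(void)
{
	struct adapter qdev = { .lbq_buf_size = 4096 };
	struct rx_ring ring = { .qdev = &qdev, .lbq_buf_size = 4096 };

	printf("before: %u, after: %u\n",
	       buf_size_before(&ring), buf_size_after(&ring));
	return 0;
}

Whether that extra dereference matters at all depends on whether
lbq_buf_size shared a cacheline with the other fields the refill path
touches, and your measurements answer that.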
Fully agree with your point about the driver not being at that level of
optimization (see also: that 784B struct with holes). I support the patch
and meant to argue against the previous response: this cleanup makes sense
to me; just take a second look at the struct layout. To be crystal clear:
Acked-by: Willem de Bruijn <willemb@...gle.com>
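
P.S. If it helps with that second look at the layout, pahole -C <struct name>
on the built object lists the holes directly. A toy example of the kind of
padding it reports (made up for illustration, not a qlge struct):

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

struct with_hole {
	uint8_t  flag;		/* offset 0, size 1 */
	/* 7-byte hole so that 'addr' can be 8-byte aligned */
	uint64_t addr;		/* offset 8, size 8 */
	uint32_t len;		/* offset 16, size 4 */
	/* 4 bytes of tail padding round the size up to 24 */
};

int main(void)
{
	printf("sizeof=%zu, addr at offset %zu\n",
	       sizeof(struct with_hole), offsetof(struct with_hole, addr));
	return 0;
}

Reordering members by decreasing alignment usually closes most of these
holes.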