Message-ID: <1c757979de156d01dcc3f011af35a4895c7a7bb7.camel@redhat.com>
Date: Tue, 23 Apr 2024 11:54:53 +0200
From: Paolo Abeni <pabeni@...hat.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: Davide Caratti <dcaratti@...hat.com>, Saeed Mahameed
<saeedm@...dia.com>, Tariq Toukan <tariqt@...dia.com>,
netdev@...r.kernel.org, renmingshuai@...wei.com, jiri@...nulli.us,
xiyou.wangcong@...il.com, xmu@...hat.com, Christoph Paasch
<cpaasch@...le.com>, Jamal Hadi Salim <jhs@...atatu.com>, Maxim
Mikityanskiy <maxim@...valent.com>, Victor Nogueira <victor@...atatu.com>
Subject: Re: [PATCH net-next v2] net/sched: fix false lockdep warning on
qdisc root lock
On Tue, 2024-04-23 at 11:40 +0200, Eric Dumazet wrote:
> On Tue, Apr 23, 2024 at 11:21 AM Paolo Abeni <pabeni@...hat.com> wrote:
> >
> > On Thu, 2024-04-18 at 16:01 +0200, Davide Caratti wrote:
> > > hello,
> > >
> > > On Thu, Apr 18, 2024 at 3:50 PM Davide Caratti <dcaratti@...hat.com> wrote:
> > > >
> > >
> > > [...]
> > >
> > > > This happens when TC does a mirred egress redirect from the root qdisc of
> > > > device A to the root qdisc of device B. As long as these two locks aren't
> > > > protecting the same qdisc, they can be acquired in a chain: add a per-qdisc
> > > > lockdep key to silence false warnings.
> > > > This dynamic key should safely replace the static key we have in sch_htb:
> > > > it was added to allow enqueueing to the device "direct qdisc" while still
> > > > holding the qdisc root lock.
> > > >
> > > > v2: don't use static keys anymore in HTB direct qdiscs (thanks Eric Dumazet)
> > >
> > > I didn't have the correct setup to test HTB offload, so any feedback
> > > for the HTB part is appreciated. On a debug kernel the extra time
> > > taken to register / de-register dynamic lockdep keys is very evident
> > > (more so when qdiscs are created: the time needed for "tc qdisc add ..."
> > > becomes an order of magnitude bigger, while the time for "tc qdisc del
> > > ..." doubles).
> >
> > @Eric: why do you think the lockdep slowdown would be critical? We
> > don't expect to see lockdep in production, right?
>
> I think you missed one of my updates, where I said this was absolutely ok.
>
> https://lore.kernel.org/netdev/CANn89iJQZ5R=Cct494W0DbNXR3pxOj54zDY7bgtFFCiiC1abDg@mail.gmail.com/
Indeed I missed that, thanks for pointing out.
> > Enabling lockdep will defeat most/all cacheline optimizations, since it
> > moves around all the fields that follow a lock; performance should be
> > significantly impacted anyway.
> >
> > WDYT?
> >
> > The HTB bits look safe to me, but it would be great if someone @nvidia
> > could actually test it (AFAICS mlx5 is the only user of such
> > annotation).
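For anyone skimming the thread, the per-qdisc dynamic key idea from the
patch description above boils down to something like the sketch below.
This is illustrative only, not the actual diff: the root_lock_key member
and the sketch_* helper names are made up here for the example.

#include <linux/lockdep.h>
#include <net/sch_generic.h>

/* assumed extra member in struct Qdisc, for the sake of the sketch:
 *
 *	struct lock_class_key root_lock_key;
 */

static void sketch_root_lock_key_init(struct Qdisc *sch)
{
	/* register one dynamic lockdep class per qdisc instance,
	 * instead of every root lock sharing the same class
	 */
	lockdep_register_key(&sch->root_lock_key);

	/* attach the per-instance class to this qdisc's root lock, so
	 * taking the root locks of two different devices in a chain
	 * (mirred egress redirect from A to B) is no longer reported
	 * as recursive locking
	 */
	lockdep_set_class(qdisc_lock(sch), &sch->root_lock_key);
}

static void sketch_root_lock_key_destroy(struct Qdisc *sch)
{
	/* dynamic keys must be unregistered before the qdisc is freed */
	lockdep_unregister_key(&sch->root_lock_key);
}

With each root lock in its own class, the static key sch_htb currently
uses for enqueueing to the device "direct qdisc" while holding the root
lock should indeed become redundant, as the patch description says.
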
Let's wait a bit for some feedback here.
Thanks,
Paolo