Message-ID: <20250417104736.pD2sMYXv@linutronix.de>
Date: Thu, 17 Apr 2025 12:47:36 +0200
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: Paolo Abeni <pabeni@...hat.com>
Cc: netdev@...r.kernel.org, linux-rt-devel@...ts.linux.dev,
	"David S. Miller" <davem@...emloft.net>,
	Eric Dumazet <edumazet@...gle.com>,
	Jakub Kicinski <kuba@...nel.org>, Simon Horman <horms@...nel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Jamal Hadi Salim <jhs@...atatu.com>,
	Cong Wang <xiyou.wangcong@...il.com>, Jiri Pirko <jiri@...nulli.us>,
	Ingo Molnar <mingo@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH net-next v2 13/18] net/sched: act_mirred: Move the
 recursion counter to struct netdev_xmit

+ Ingo/ PeterZ for sched, see below.

On 2025-04-17 10:29:05 [+0200], Paolo Abeni wrote:
> 
> How many of such recursion counters do you foresee will be needed?

I audited the static per-CPU variables, and this series covers all of
them. I still need to go through the dynamic per-CPU allocations, but I
don't expect to find any recursion counters there.

> AFAICS this one does not fit the existing hole anymore; the binary
> layout before this series is:
> 
>  struct netdev_xmit {
>          /* typedef u16 -> __u16 */ short unsigned int recursion;     /*  2442     2 */
>          /* typedef u8 -> __u8 */   unsigned char      more;          /*  2444     1 */
>          /* typedef u8 -> __u8 */   unsigned char      skip_txqueue;  /*  2445     1 */
>  } net_xmit;                                                          /*  2442     4 */
> 
>         /* XXX 2 bytes hole, try to pack */
> 
> and this series already added 2 u8 fields. Since all the recursion
> counters could be represented with fewer than 8 bits, perhaps using a
> bitfield here could be worthwhile?

A plain u8 is nice because the CPU can access it in one go. Bitfield
counters such as :4 usually have to be loaded, masked and shifted, so
there is a bit more assembly. We should be able to shrink "recursion"
down to a u8 since XMIT_RECURSION_LIMIT is only 8.
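
To illustrate outside of the kernel (made-up struct and function names,
plain C, not kernel code):

	/* Plain u8 counters: the increment is a single byte-sized
	 * read-modify-write, e.g. "addb $1, (%rdi)" on x86.
	 */
	struct cnt_u8 { unsigned char a, b; };
	void inc_u8(struct cnt_u8 *c) { c->a++; }

	/* 4-bit bitfield counters: the compiler has to load the byte,
	 * add, mask the result to 4 bits and merge it back with the
	 * neighbouring nibble before storing.
	 */
	struct cnt_bf { unsigned char a:4, b:4; };
	void inc_bf(struct cnt_bf *c) { c->a++; }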

The new fields still fit into existing holes according to pahole on my
RT build (the non-RT build shouldn't change):

Before the series:
task_struct:
|         /* XXX 5 bits hole, try to pack */
|         /* Bitfield combined with next fields */
|
|         struct netdev_xmit         net_xmit;             /*  2378     4 */
|
|         /* XXX 2 bytes hole, try to pack */
|
|         long unsigned int          atomic_flags;         /*  2384     8 */

struct netdev_xmit {
|         u16                        recursion;            /*     0     2 */
|         u8                         more;                 /*     2     1 */
|         u8                         skip_txqueue;         /*     3     1 */
|
|         /* size: 4, cachelines: 1, members: 3 */
|         /* last cacheline: 4 bytes */

After the series:
task_struct:
|         unsigned int               in_nf_duplicate:1;    /*  2376:11  4 */
|         /* XXX 4 bits hole, try to pack */
|         /* Bitfield combined with next fields */
| 
|         struct netdev_xmit         net_xmit;             /*  2378     6 */
|         long unsigned int          atomic_flags;         /*  2384     8 */

struct netdev_xmit {
|         u16                        recursion;            /*     0     2 */
|         u8                         more;                 /*     2     1 */
|         u8                         skip_txqueue;         /*     3     1 */
|         u8                         nf_dup_skb_recursion; /*     4     1 */
|         u8                         sched_mirred_nest;    /*     5     1 */
| 
|         /* size: 6, cachelines: 1, members: 5 */
|         /* last cacheline: 6 bytes */

I don't understand why pahole warns about a 2 byte hole in the first
case while the long alignment actually creates a 4 byte gap. After the
series the two new u8 fields fill the former 2 byte hole before
atomic_flags.
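
For reference, the layouts above come from running pahole against the
respective build, along the lines of (the object path depends on your
build):

	pahole -C task_struct vmlinux
	pahole -C netdev_xmit vmlinux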

> In any case I think we need explicit ack from the sched people.

I added PeterZ and Ingo.

> > diff --git a/net/sched/act_mirred.c b/net/sched/act_mirred.c
> > index 5b38143659249..5f01f567c934d 100644
> > --- a/net/sched/act_mirred.c
> > +++ b/net/sched/act_mirred.c
> > @@ -30,7 +30,29 @@ static LIST_HEAD(mirred_list);
> >  static DEFINE_SPINLOCK(mirred_list_lock);
> >  
> >  #define MIRRED_NEST_LIMIT    4
> > -static DEFINE_PER_CPU(unsigned int, mirred_nest_level);
> > +
> > +#ifndef CONFIG_PREEMPT_RT
> > +static u8 tcf_mirred_nest_level_inc_return(void)
> > +{
> > +	return __this_cpu_inc_return(softnet_data.xmit.sched_mirred_nest);
> > +}
> > +
> > +static void tcf_mirred_nest_level_dec(void)
> > +{
> > +	__this_cpu_dec(softnet_data.xmit.sched_mirred_nest);
> > +}
> > +
> > +#else
> > +static u8 tcf_mirred_nest_level_inc_return(void)
> > +{
> > +	return ++current->net_xmit.sched_mirred_nest;
> > +}
> > +
> > +static void tcf_mirred_nest_level_dec(void)
> > +{
> > +	current->net_xmit.sched_mirred_nest--;
> > +}
> > +#endif
> 
> There are already a few of this construct. Perhaps it would be worthy to
> implement a netdev_xmit() helper returning a ptr to the whole struct and
> use it to reduce the number of #ifdef

I did introduce such a helper at the beginning. Jakub asked whether it
would make much difference, and I said it costs at least one opcode on
x86 because it replaces "var++" with "get var, inc var". I didn't hear
back on this, so I assumed "keep it as is".

If you want the helper, just say whether you want it at the beginning of
the series, at the end, or as an independent patch for evaluation
purposes, and I will make it.
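
If it helps the discussion, the helper would presumably look roughly
like this (just a sketch; the name and exact placement are made up):

	/* Sketch only: hide the RT/non-RT split behind one accessor.
	 * Non-RT callers must run with BHs disabled, as they do today.
	 */
	static inline struct netdev_xmit *netdev_xmit(void)
	{
	#ifdef CONFIG_PREEMPT_RT
		return &current->net_xmit;
	#else
		return this_cpu_ptr(&softnet_data.xmit);
	#endif
	}

Callers would then do e.g. netdev_xmit()->sched_mirred_nest++, which is
exactly the "get var, inc var" pattern that costs the extra opcode on
x86 compared to __this_cpu_inc().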

> Thanks,
> 
> Paolo

Sebastian
