Message-ID: <CADvbK_fXGCEwuHX5PCU1-+dTTG4ZMLGLXY8A_AqJpDoR2uV-cA@mail.gmail.com>
Date: Tue, 31 Jan 2023 12:55:10 -0500
From: Xin Long <lucien.xin@...il.com>
To: Paolo Abeni <pabeni@...hat.com>
Cc: network dev <netdev@...r.kernel.org>, davem@...emloft.net,
kuba@...nel.org, Eric Dumazet <edumazet@...gle.com>,
David Ahern <dsahern@...il.com>,
Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
Pravin B Shelar <pshelar@....org>,
Jamal Hadi Salim <jhs@...atatu.com>,
Cong Wang <xiyou.wangcong@...il.com>,
Jiri Pirko <jiri@...nulli.us>,
Pablo Neira Ayuso <pablo@...filter.org>,
Florian Westphal <fw@...len.de>,
Marcelo Ricardo Leitner <marcelo.leitner@...il.com>,
Ilya Maximets <i.maximets@....org>,
Aaron Conole <aconole@...hat.com>,
Roopa Prabhu <roopa@...dia.com>,
Nikolay Aleksandrov <razor@...ckwall.org>,
Mahesh Bandewar <maheshb@...gle.com>,
Paul Moore <paul@...l-moore.com>,
Guillaume Nault <gnault@...hat.com>
Subject: Re: [PATCHv4 net-next 09/10] net: add gso_ipv4_max_size and
gro_ipv4_max_size per device
On Tue, Jan 31, 2023 at 9:59 AM Paolo Abeni <pabeni@...hat.com> wrote:
>
> On Sat, 2023-01-28 at 10:58 -0500, Xin Long wrote:
> > This patch introduces gso_ipv4_max_size and gro_ipv4_max_size
> > per device and adds netlink attributes for them, so that IPV4
> > BIG TCP can be guarded by a separate tunable in the next patch.
> >
> > To not break old applications using "gso/gro_max_size" for
> > IPv4 GSO packets, this patch updates "gso/gro_ipv4_max_size"
> > in netif_set_gso/gro_max_size() if the new size isn't greater
> > than GSO_LEGACY_MAX_SIZE, so that nothing changes even if
> > userspace isn't aware of the new netlink attributes.
>
> Not a big deal, but I think it would be nice to include the pahole info
> showing where the new fields are located and why that are good
> locations.
>
> No need to send a new version just for the above, unless Eric asks
> otherwise ;)
>
The pahole info without and with the patch is shown below:
- Without the Patch:
# pahole --hex -C net_device vmlinux
struct net_device {
...
long unsigned int gro_flush_timeout; /* 0x330 0x8 */
int napi_defer_hard_irqs; /* 0x338 0x4 */
unsigned int gro_max_size; /* 0x33c 0x4 */ <---------
/* --- cacheline 13 boundary (832 bytes) --- */
rx_handler_func_t * rx_handler; /* 0x340 0x8 */
void * rx_handler_data; /* 0x348 0x8 */
struct mini_Qdisc * miniq_ingress; /* 0x350 0x8 */
struct netdev_queue * ingress_queue; /* 0x358 0x8 */
struct nf_hook_entries * nf_hooks_ingress; /* 0x360 0x8 */
unsigned char broadcast[32]; /* 0x368 0x20 */
/* --- cacheline 14 boundary (896 bytes) was 8 bytes ago --- */
struct cpu_rmap * rx_cpu_rmap; /* 0x388 0x8 */
struct hlist_node index_hlist; /* 0x390 0x10 */
/* XXX 32 bytes hole, try to pack */
/* --- cacheline 15 boundary (960 bytes) --- */
struct netdev_queue * _tx __attribute__((__aligned__(64))); /* 0x3c0 0x8 */
...
/* --- cacheline 32 boundary (2048 bytes) was 24 bytes ago --- */
const struct attribute_group * sysfs_groups[4]; /* 0x818 0x20 */
const struct attribute_group * sysfs_rx_queue_group; /* 0x838 0x8 */
/* --- cacheline 33 boundary (2112 bytes) --- */
const struct rtnl_link_ops * rtnl_link_ops; /* 0x840 0x8 */
unsigned int gso_max_size; /* 0x848 0x4 */
unsigned int tso_max_size; /* 0x84c 0x4 */
u16 gso_max_segs; /* 0x850 0x2 */
u16 tso_max_segs; /* 0x852 0x2 */ <---------
/* XXX 4 bytes hole, try to pack */
const struct dcbnl_rtnl_ops * dcbnl_ops; /* 0x858 0x8 */
s16 num_tc; /* 0x860 0x2 */
struct netdev_tc_txq tc_to_txq[16]; /* 0x862 0x40 */
/* --- cacheline 34 boundary (2176 bytes) was 34 bytes ago --- */
u8 prio_tc_map[16]; /* 0x8a2 0x10 */
...
}
- With the Patch:
For "gso_ipv4_max_size", it fills the 4-byte hole as expected.
/* --- cacheline 33 boundary (2112 bytes) --- */
const struct rtnl_link_ops * rtnl_link_ops; /* 0x840 0x8 */
unsigned int gso_max_size; /* 0x848 0x4 */
unsigned int tso_max_size; /* 0x84c 0x4 */
u16 gso_max_segs; /* 0x850 0x2 */
u16 tso_max_segs; /* 0x852 0x2 */
unsigned int gso_ipv4_max_size; /* 0x854 0x4 */ <-------
const struct dcbnl_rtnl_ops * dcbnl_ops; /* 0x858 0x8 */
s16 num_tc; /* 0x860 0x2 */
struct netdev_tc_txq tc_to_txq[16]; /* 0x862 0x40 */
/* --- cacheline 34 boundary (2176 bytes) was 34 bytes ago --- */
u8 prio_tc_map[16]; /* 0x8a2 0x10 */
For "gro_ipv4_max_size", there are no byte holes available, so I just
put it in the "Cache lines mostly used on receive path" area, next to
gro_max_size.
long unsigned int gro_flush_timeout; /* 0x330 0x8 */
int napi_defer_hard_irqs; /* 0x338 0x4 */
unsigned int gro_max_size; /* 0x33c 0x4 */
/* --- cacheline 13 boundary (832 bytes) --- */
unsigned int gro_ipv4_max_size; /* 0x340 0x4 */ <------
/* XXX 4 bytes hole, try to pack */
rx_handler_func_t * rx_handler; /* 0x348 0x8 */
void * rx_handler_data; /* 0x350 0x8 */
struct mini_Qdisc * miniq_ingress; /* 0x358 0x8 */
struct netdev_queue * ingress_queue; /* 0x360 0x8 */
struct nf_hook_entries * nf_hooks_ingress; /* 0x368 0x8 */
unsigned char broadcast[32]; /* 0x370 0x20 */
/* --- cacheline 14 boundary (896 bytes) was 16 bytes ago --- */
struct cpu_rmap * rx_cpu_rmap; /* 0x390 0x8 */
struct hlist_node index_hlist; /* 0x398 0x10 */
/* XXX 24 bytes hole, try to pack */
/* --- cacheline 15 boundary (960 bytes) --- */
struct netdev_queue * _tx __attribute__((__aligned__(64))); /* 0x3c0 0x8 */
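For reference, once the netlink attributes land, the new per-protocol caps could be inspected and driven from userspace roughly like this (a sketch assuming an iproute2 new enough to know the gso/gro_ipv4_max_size keywords):

```shell
# Read the new caps; older kernels/iproute2 simply won't print them.
ip -d link show dev lo 2>/dev/null \
    | grep -o 'g[sr]o_ipv4_max_size [0-9]*' \
    || echo 'gso/gro_ipv4_max_size not reported (older kernel or iproute2)'

# Setting them needs CAP_NET_ADMIN; values above 65536 are what the next
# patch in the series uses to enable IPv4 BIG TCP:
#   ip link set dev eth0 gso_ipv4_max_size 131072 gro_ipv4_max_size 131072
```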
Thanks.