Message-ID: <613fb55f-b754-433a-9f27-7c66391116d9@intel.com>
Date: Thu, 22 Aug 2024 17:13:57 +0200
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: Jakub Kicinski <kuba@...nel.org>
CC: Tony Nguyen <anthony.l.nguyen@...el.com>, <davem@...emloft.net>,
<pabeni@...hat.com>, <edumazet@...gle.com>, <netdev@...r.kernel.org>,
<przemyslaw.kitszel@...el.com>, <joshua.a.hay@...el.com>,
<michal.kubiak@...el.com>, <nex.sw.ncis.osdt.itp.upstreaming@...el.com>
Subject: Re: [PATCH net-next v2 2/9] libeth: add common queue stats
From: Jakub Kicinski <kuba@...nel.org>
Date: Tue, 20 Aug 2024 18:17:57 -0700
> On Mon, 19 Aug 2024 15:34:34 -0700 Tony Nguyen wrote:
>> + * Return: new &net_device on success, %NULL on error.
>> + */
>> +struct net_device *__libeth_netdev_alloc(u32 priv, u32 rqs, u32 sqs,
>> + u32 xdpsqs)
>
> The netdev alloc / free / set num queues can be a separate patch
Ok sure.
>
>> +MODULE_DESCRIPTION("Common Ethernet library");
I just moved this line from libeth/rx.c :D
>
> BTW for Intel? Or you want this to be part of the core?
> I thought Intel, but you should tell us if you have broader plans.
For now it's done as a lib inside the Intel folder, BUT if any other vendor
would like to use it, that would be great, and then we could move it a level
up, or some of these APIs could go into the core.
IOW, it depends on the users.
libie, on the contrary, contains HW-specific code and will always be
Intel-specific.
>
>> + const struct libeth_netdev_priv *priv = netdev_priv(dev); \
>> + \
>> + memset(stats, 0, sizeof(*stats)); \
>> + u64_stats_init(&stats->syncp); \
>> + \
>> + mutex_init(&priv->base_##pfx##s[qid].lock); \
>
> the mutex is only for writes or for reads of base too?
> mutex is a bad idea for rtnl stats
Base stats are written only on ifdown and can be read anytime; the mutex is
used for both.
Hmm, maybe a bad idea. What would be better: a spinlock, or just using
u64_stats_sync there as well?
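To illustrate the last option, a rough sketch of how the base stats could use
u64_stats_sync instead of the mutex (struct and counter names are made up
here; the writer side relies on the ifdown path already being serialized,
since u64_stats_sync doesn't serialize writers):

#include <linux/u64_stats_sync.h>

struct libeth_base_stats {
	struct u64_stats_sync	syncp;	/* u64_stats_init() on alloc */
	u64_stats_t		packets;
	u64_stats_t		bytes;
};

/* writer: called only on ifdown, already serialized externally */
static void libeth_base_stats_add(struct libeth_base_stats *bs,
				  u64 packets, u64 bytes)
{
	u64_stats_update_begin(&bs->syncp);
	u64_stats_add(&bs->packets, packets);
	u64_stats_add(&bs->bytes, bytes);
	u64_stats_update_end(&bs->syncp);
}

/* reader: can run anytime, retries if it races with the writer */
static void libeth_base_stats_read(const struct libeth_base_stats *bs,
				   u64 *packets, u64 *bytes)
{
	u32 start;

	do {
		start = u64_stats_fetch_begin(&bs->syncp);
		*packets = u64_stats_read(&bs->packets);
		*bytes = u64_stats_read(&bs->bytes);
	} while (u64_stats_fetch_retry(&bs->syncp, start));
}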
>
>> +/* Netlink stats. Exported fields have the same names as in the NL structs */
>> +
>> +struct libeth_stats_export {
>> + u16 li;
>> + u16 gi;
>> +};
>> +
>> +#define LIBETH_STATS_EXPORT(lpfx, gpfx, field) { \
>> + .li = (offsetof(struct libeth_##lpfx##_stats, field) - \
>> + offsetof(struct libeth_##lpfx##_stats, raw)) / \
>> + sizeof_field(struct libeth_##lpfx##_stats, field), \
>> + .gi = offsetof(struct netdev_queue_stats_##gpfx, field) / \
>> + sizeof_field(struct netdev_queue_stats_##gpfx, field) \
>> +}
>
> humpf
Compression :D
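Each entry is just a pair of u64 array indices, one into the lib's raw[]
view and one into the generic netdev_queue_stats_* struct, so the export
tables stay tiny. Roughly (table name and field set here are only an
illustration, the real table would list every shared field):

/* { .li = index of the field in libeth_rq_stats::raw[],
 *   .gi = index of the field in netdev_queue_stats_rx seen as u64[] }
 */
static const struct libeth_stats_export libeth_rq_stats_export[] = {
	LIBETH_STATS_EXPORT(rq, rx, packets),
	LIBETH_STATS_EXPORT(rq, rx, bytes),
	LIBETH_STATS_EXPORT(rq, rx, alloc_fail),
};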
>
>> +#define LIBETH_STATS_DEFINE_EXPORT(pfx, gpfx) \
>> +static void \
>> +libeth_get_queue_stats_##gpfx(struct net_device *dev, int idx, \
>> + struct netdev_queue_stats_##gpfx *stats) \
>> +{ \
>> + const struct libeth_netdev_priv *priv = netdev_priv(dev); \
>> + const struct libeth_##pfx##_stats *qs; \
>> + u64 *raw = (u64 *)stats; \
>> + u32 start; \
>> + \
>> + qs = READ_ONCE(priv->live_##pfx##s[idx]); \
>> + if (!qs) \
>> + return; \
>> + \
>> + do { \
>> + start = u64_stats_fetch_begin(&qs->syncp); \
>> + \
>> + libeth_stats_foreach_export(pfx, exp) \
>> + raw[exp->gi] = u64_stats_read(&qs->raw[exp->li]); \
>> + } while (u64_stats_fetch_retry(&qs->syncp, start)); \
>> +} \
>
> ugh. Please no
So you mean to just open-code the reads/writes for each field rather than
compressing them this way? Sure, that would be no problem. The object code
doesn't even change (my first approach was per-field).
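E.g. the open-coded Rx variant would look roughly like this (same structs
and fields as in the patch; the list of reads is illustrative and
incomplete):

static void libeth_get_queue_stats_rx(struct net_device *dev, int idx,
				      struct netdev_queue_stats_rx *stats)
{
	const struct libeth_netdev_priv *priv = netdev_priv(dev);
	const struct libeth_rq_stats *qs;
	u32 start;

	qs = READ_ONCE(priv->live_rqs[idx]);
	if (!qs)
		return;

	do {
		start = u64_stats_fetch_begin(&qs->syncp);

		/* one explicit read per exported field */
		stats->packets = u64_stats_read(&qs->packets);
		stats->bytes = u64_stats_read(&qs->bytes);
		stats->alloc_fail = u64_stats_read(&qs->alloc_fail);
	} while (u64_stats_fetch_retry(&qs->syncp, start));
}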
>
>> + \
>> +static void \
>> +libeth_get_##pfx##_base_stats(const struct net_device *dev, \
>> + struct netdev_queue_stats_##gpfx *stats) \
>> +{ \
>> + const struct libeth_netdev_priv *priv = netdev_priv(dev); \
>> + u64 *raw = (u64 *)stats; \
>> + \
>> + memset(stats, 0, sizeof(*(stats))); \
>
> Have you read the docs for any of the recent stats APIs?
You mean to leave 0xffs for unsupported fields?
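I.e. something like this, with no memset() at all, so the unsupported fields
stay at the 0xffs and get reported as not supported (the base_* counters are
made-up names, just to show the shape of the ->get_base_stats callback):

static void libeth_get_base_stats(struct net_device *dev,
				  struct netdev_queue_stats_rx *rx,
				  struct netdev_queue_stats_tx *tx)
{
	const struct libeth_netdev_priv *priv = netdev_priv(dev);

	/* only assign the counters we actually collect */
	rx->packets = priv->base_rx_packets;
	rx->bytes = priv->base_rx_bytes;

	tx->packets = priv->base_tx_packets;
	tx->bytes = priv->base_tx_bytes;

	/* e.g. rx->alloc_fail left untouched => reported as unsupported */
}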
>
> Nack. Just implement the APIs in the driver, this does not seem like
> a sane starting point _at all_. You're going to waste more time coming
> up with such abstraction than you'd save implementing it for 10 drivers.
I believe this Nack is for the generic Netlink stats part, not the whole
thing, right?
In general, I wasn't sure whether it would be better to leave the Netlink
stats per driver or implement them in libeth, so I wanted to hear others'
opinions. I'm fine with either way.
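For reference, keeping it per driver would mostly mean each driver filling
its own netdev_stat_ops and hooking it up itself, roughly like below
(callback names are placeholders):

#include <net/netdev_queues.h>

static const struct netdev_stat_ops example_stat_ops = {
	.get_queue_stats_rx	= example_get_queue_stats_rx,
	.get_queue_stats_tx	= example_get_queue_stats_tx,
	.get_base_stats		= example_get_base_stats,
};

/* during netdev setup, before register_netdev() */
static void example_set_stat_ops(struct net_device *netdev)
{
	netdev->stat_ops = &example_stat_ops;
}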
Thanks,
Olek