Message-ID: <66ccbb841794c98b91d9e8aba48b90c63caa45e7.camel@nvidia.com>
Date: Fri, 4 Oct 2024 09:32:11 +0000
From: Cosmin Ratiu <cratiu@...dia.com>
To: Tariq Toukan <tariqt@...dia.com>, "horms@...nel.org" <horms@...nel.org>
CC: "davem@...emloft.net" <davem@...emloft.net>, "netdev@...r.kernel.org"
<netdev@...r.kernel.org>, Gal Pressman <gal@...dia.com>, Leon Romanovsky
<leonro@...dia.com>, "kuba@...nel.org" <kuba@...nel.org>,
"edumazet@...gle.com" <edumazet@...gle.com>, Saeed Mahameed
<saeedm@...dia.com>, "pabeni@...hat.com" <pabeni@...hat.com>
Subject: Re: [PATCH net-next V2 3/6] net/mlx5: hw counters: Replace IDR+lists
with xarray
On Fri, 2024-10-04 at 09:58 +0100, Simon Horman wrote:
> On Tue, Oct 01, 2024 at 01:37:06PM +0300, Tariq Toukan wrote:
> > From: Cosmin Ratiu <cratiu@...dia.com>
>
> ...
>
> > +/* Synchronization notes
> > + *
> > + * Access to counter array:
> > + * - create - mlx5_fc_create() (user context)
> > + * - inserts the counter into the xarray.
> > + *
> > + * - destroy - mlx5_fc_destroy() (user context)
> > + * - erases the counter from the xarray and releases it.
> > + *
> > + * - query mlx5_fc_query(), mlx5_fc_query_cached{,_raw}() (user context)
> > + * - user should not access a counter after destroy.
> > + *
> > + * - bulk query (single thread workqueue context)
> > + * - create: query relies on 'lastuse' to avoid updating counters added
> > + * around the same time as the current bulk cmd.
> > + * - destroy: destroyed counters will not be accessed, even if they are
> > + * destroyed during a bulk query command.
> > + */
> > +static void mlx5_fc_stats_query_all_counters(struct mlx5_core_dev *dev)
> > {
> > struct mlx5_fc_stats *fc_stats = dev->priv.fc_stats;
> > - bool query_more_counters = (first->id <= last_id);
> > - int cur_bulk_len = fc_stats->bulk_query_len;
> > + u32 bulk_len = fc_stats->bulk_query_len;
> > + XA_STATE(xas, &fc_stats->counters, 0);
> > u32 *data = fc_stats->bulk_query_out;
> > - struct mlx5_fc *counter = first;
> > + struct mlx5_fc *counter;
> > + u32 last_bulk_id = 0;
> > + u64 bulk_query_time;
> > u32 bulk_base_id;
> > - int bulk_len;
> > int err;
> >
> > - while (query_more_counters) {
> > - /* first id must be aligned to 4 when using bulk query */
> > - bulk_base_id = counter->id & ~0x3;
> > -
> > - /* number of counters to query inc. the last counter */
> > - bulk_len = min_t(int, cur_bulk_len,
> > - ALIGN(last_id - bulk_base_id + 1, 4));
> > -
> > - err = mlx5_cmd_fc_bulk_query(dev, bulk_base_id, bulk_len,
> > - data);
> > - if (err) {
> > - mlx5_core_err(dev, "Error doing bulk query: %d\n", err);
> > - return;
> > - }
> > - query_more_counters = false;
> > -
> > - list_for_each_entry_from(counter, &fc_stats->counters, list) {
> > - int counter_index = counter->id - bulk_base_id;
> > - struct mlx5_fc_cache *cache = &counter->cache;
> > -
> > - if (counter->id >= bulk_base_id + bulk_len) {
> > - query_more_counters = true;
> > - break;
> > + xas_lock(&xas);
> > + xas_for_each(&xas, counter, U32_MAX) {
> > + if (xas_retry(&xas, counter))
> > + continue;
> > + if (unlikely(counter->id >= last_bulk_id)) {
> > + /* Start new bulk query. */
> > + /* First id must be aligned to 4 when using bulk query. */
> > + bulk_base_id = counter->id & ~0x3;
> > + last_bulk_id = bulk_base_id + bulk_len;
> > + /* The lock is released while querying the hw and reacquired after. */
> > + xas_unlock(&xas);
> > + /* The same id needs to be processed again in the next loop iteration. */
> > + xas_reset(&xas);
> > + bulk_query_time = jiffies;
> > + err = mlx5_cmd_fc_bulk_query(dev, bulk_base_id, bulk_len, data);
> > + if (err) {
> > + mlx5_core_err(dev, "Error doing bulk query: %d\n", err);
> > + return;
> > }
> > -
> > - update_counter_cache(counter_index, data, cache);
> > + xas_lock(&xas);
> > + continue;
> > }
> > + /* Do not update counters added after bulk query was started. */
>
> Hi Cosmin and Tariq,
>
> I'm sorry if it is obvious, but I'm wondering if you could explain further
> the relationship between the if block above, where bulk_query_time (and
> bulk_base_id) is initialised and if block below, which is conditional on
> bulk_query_time.
>
> > + if (time_after64(bulk_query_time, counter->cache.lastuse))
> > + update_counter_cache(counter->id - bulk_base_id, data,
> > + &counter->cache);
> > }
> > + xas_unlock(&xas);
> > }
>
> ...
Hi Simon. Of course.
The first if (the one with 'unlikely') is the one that starts a bulk
query. The second if is the one that updates a counter's cached value
with the output from the bulk query. Bulks are usually ~32K counters,
if I remember correctly; in any case, a large number.
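
Schematically, the loop is (a condensed sketch of the hunk quoted
above, with error handling and the xas_retry/xas_reset details
omitted):

	xas_for_each(&xas, counter, U32_MAX) {
		if (unlikely(counter->id >= last_bulk_id)) {
			/* First if: counter->id is beyond the bulk we
			 * have data for, so start a new bulk query over
			 * the 4-aligned id range containing it. The
			 * xarray lock is dropped around the fw cmd.
			 */
			bulk_base_id = counter->id & ~0x3;
			last_bulk_id = bulk_base_id + bulk_len;
			/* ... xas_unlock, query hw into data, xas_lock ... */
			continue;	/* revisit the same id */
		}
		/* Second if: id is inside the queried bulk; update the
		 * counter's cache from data, unless the counter is too
		 * fresh (see the lastuse discussion below).
		 */
		if (time_after64(bulk_query_time, counter->cache.lastuse))
			update_counter_cache(counter->id - bulk_base_id,
					     data, &counter->cache);
	}

E.g. a counter with id 13 starts a bulk at bulk_base_id = 12
(13 & ~0x3), covering ids [12, 12 + bulk_len).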
The first if sets up the bulk query params and executes the query
without the lock held. During that window, counters can be
added/removed, and we don't want to update cached values for counters
added between when the bulk query was issued and when the lock was
reacquired. bulk_query_time, with jiffy granularity, is used for that
purpose: when a counter is added, its 'cache.lastuse' is initialized
to jiffies, so only counters with ids in [bulk_base_id, last_bulk_id)
that were added strictly before the jiffy when bulk_query_time was
taken get updated. For counters added later, the hw might not have set
their values in the bulk result, and those values could be garbage.
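
In code, the two sides of that 'lastuse' handshake are (a sketch; the
create side is paraphrased, the query side is from the patch):

	/* create path: record when the counter was added */
	counter->cache.lastuse = jiffies;

	/* bulk query path: timestamp taken right before the fw cmd */
	bulk_query_time = jiffies;
	err = mlx5_cmd_fc_bulk_query(dev, bulk_base_id, bulk_len, data);

	/* later, under the lock again: only counters that existed
	 * strictly before the query was issued get their cache updated
	 */
	if (time_after64(bulk_query_time, counter->cache.lastuse))
		update_counter_cache(counter->id - bulk_base_id, data,
				     &counter->cache);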
I also have this blurb in the commit description, but it is probably
lost in the wall of text:
"
Counters could be added/deleted while the HW is queried. This is safe,
as the HW API simply returns unknown values for counters not in HW, but
those values won't be accessed. Only counters present in xarray before
bulk query will actually read queried cache values.
"
There's also a comment bit in the "Synchronization notes" section:
 * - bulk query (single thread workqueue context)
 *   - create: query relies on 'lastuse' to avoid updating counters added
 *     around the same time as the current bulk cmd.
Hope this clears things up; let us know if you'd like something
improved.
Cosmin.