Message-ID: <2942098.x5hDSQIzYV@localhost.localdomain>
Date: Wed, 01 Sep 2021 17:39:46 +0200
From: "Fabio M. De Francesco" <fmdefrancesco@...il.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: Johan Hovold <johan@...nel.org>, Alex Elder <elder@...aro.org>,
Alex Elder <elder@...nel.org>, linux-staging@...ts.linux.dev,
linux-kernel@...r.kernel.org, greybus-dev@...ts.linaro.org
Subject: Re: [greybus-dev] [PATCH v4] staging: greybus: Convert uart.c from IDR to XArray
On Wednesday, September 1, 2021 4:29:26 PM CEST Matthew Wilcox wrote:
> On Wed, Sep 01, 2021 at 03:56:20PM +0200, Fabio M. De Francesco wrote:
> > Anyway I tried to reason about it. I perfectly know what is required to
> > protect critical sections of code, but I don't know how drivers work; I
> > mean I don't know whether or not different threads that run concurrently
> > could really interfere in that specific section. This is because I simply
> > reason in terms of general rules of protection of critical section but I
> > really don't know how Greybus works or (more in general) how drivers work.
>
> From a quick look, it is indeed possible to get rid of the mutex.
> If this were a high-performance path which needed to have many threads
> simultaneously looking up the gb_tty, or it were core kernel code, I
> would say that you should use kfree_rcu() to free gb_tty and then
> you could replace the mutex_lock() on lookup with a rcu_read_lock().
>
> Since this is low-performance and driver code, I think you're better off
> simply doing this:
>
> xa_lock(&tty_minors);
> gb_tty = xa_load(&tty_minors, minor);
> ... establish a refcount ...
> xa_unlock(&tty_minors);
>
> EXCEPT ...
>
> establishing a refcount currently involves taking a mutex. So you can't
> do that. First, you have to convert the gb_tty mutex to a spinlock
> (which looks fine to me), and then you can do this. But this is where
> you need to work with the driver authors to make sure it's OK.
Dear Matthew,
I think I understand your suggestion and, as far as my experience with
concurrency in userspace has any relevance here, I often use reference
counting with objects that are shared by multiple threads.
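Just to check my understanding, the lookup would end up looking more or less
like the sketch below. This is only my reading of your suggestion, not tested
code: the kref field and the struct layout are assumptions of mine, and it
presumes the gb_tty mutex has already been converted to a spinlock as you
describe.

#include <linux/kref.h>
#include <linux/xarray.h>

/* Sketch only: field and helper names are assumptions, not the real
 * driver code. gb_tty is assumed to embed a kref so that a reference
 * can be taken under the xa_lock() spinlock instead of the mutex.
 */
struct gb_tty {
	struct kref kref;
	/* ... */
};

static DEFINE_XARRAY(tty_minors);

static struct gb_tty *get_gb_tty(unsigned int minor)
{
	struct gb_tty *gb_tty;

	xa_lock(&tty_minors);
	gb_tty = xa_load(&tty_minors, minor);
	if (gb_tty)
		kref_get(&gb_tty->kref);	/* establish the refcount */
	xa_unlock(&tty_minors);

	return gb_tty;
}

Is that roughly what you meant by "establish a refcount"?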
Unfortunately, this is not the point. The *real* issue is that I am not able
to provide good reasons for doing IDR to XArray conversions in Greybus code.
I tried to provide some sort of motivation by reusing your own words, which
I still remember from a message you sent a couple of months ago. More or
less, you wrote:
"The abstract data type XArray is more memory-efficient, parallelisable, and
cache friendly. It takes advantage of RCU to perform lookups without locking.
Furthermore, IDR is deprecated because XArray has a better (cleaner and more
consistent) API.".
I can reason about the "cleaner and more consistent API", and, from what I
understand of the overall design of the XArray implementation, I may also
attempt to discuss some of the other parts of the statement quoted above.
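For instance, on the allocation side the two APIs compare more or less like
the sketch below. It is a generic example based only on the documented
interfaces, not on the actual uart.c code, and the GB_NUM_MINORS limit is
just a plausible value I picked for illustration.

#include <linux/idr.h>
#include <linux/xarray.h>

#define GB_NUM_MINORS	16	/* example limit for the sketch */

struct gb_tty;

static DEFINE_IDR(minors_idr);
static DEFINE_XARRAY_ALLOC(minors_xa);

/* IDR: returns the new id (or a negative errno) and relies on an
 * external lock around allocations and lookups.
 */
static int old_style_alloc(struct gb_tty *gb_tty)
{
	return idr_alloc(&minors_idr, gb_tty, 0, GB_NUM_MINORS, GFP_KERNEL);
}

/* XArray: stores the new id through a pointer, returns 0 or a negative
 * errno, and the array embeds its own spinlock.
 */
static int new_style_alloc(struct gb_tty *gb_tty, u32 *minor)
{
	return xa_alloc(&minors_xa, minor, gb_tty,
			XA_LIMIT(0, GB_NUM_MINORS - 1), GFP_KERNEL);
}

To my eye, the error handling and the built-in locking are the clearest parts
of the "cleaner and more consistent API" argument.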
Anyway, I must respect Johan H.'s point of view: "And remember that a good
commit message describing the motivation for the change (avoiding
boiler-plate marketing terms) is a good start if you want to get your patches
accepted.". That's absolutely fair and, I say it seriously, I must follow
this rule. Since I'm not able to do that, I understand that I have to give up.
If it were up to me, I'd like to convert as many drivers as possible from IDR
to XArray, but it seems that many maintainers don't care (even if the work
were perfect in every detail since v1). I guess they have their reasons to
think that the trade-off isn't even worth the time to review. I'm not yet sure
that an IDA to XArray conversion is worth as much as an IDR to XArray one, but
for the latter I would bet it is.
Please, nobody should misunderstand me. I know that I'm going well beyond
what my experience permits me to say about this matter. I'm just expressing
my opinion, with no intention of pushing anybody in any direction. Please
forgive me if that is how it may seem to the readers who are following this
thread.
Thanks,
Fabio