Message-ID: <YW8xtA7ZU+80W/N7@shinobu>
Date: Wed, 20 Oct 2021 05:59:32 +0900
From: William Breathitt Gray <vilhelm.gray@...il.com>
To: David Lechner <david@...hnology.com>
Cc: Greg KH <gregkh@...uxfoundation.org>, linux-iio@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] counter: drop chrdev_lock

On Tue, Oct 19, 2021 at 09:44:04AM -0500, David Lechner wrote:
> On 10/19/21 2:18 AM, William Breathitt Gray wrote:
> > On Tue, Oct 19, 2021 at 09:07:48AM +0200, Greg KH wrote:
> >> On Tue, Oct 19, 2021 at 03:53:08PM +0900, William Breathitt Gray wrote:
> >>> On Mon, Oct 18, 2021 at 11:03:49AM -0500, David Lechner wrote:
> >>>> On 10/18/21 4:14 AM, William Breathitt Gray wrote:
> >>>>> On Sun, Oct 17, 2021 at 01:55:21PM -0500, David Lechner wrote:
> >>>>>> diff --git a/drivers/counter/counter-sysfs.c b/drivers/counter/counter-sysfs.c
> >>>>>> index 1ccd771da25f..7bf8882ff54d 100644
> >>>>>> --- a/drivers/counter/counter-sysfs.c
> >>>>>> +++ b/drivers/counter/counter-sysfs.c
> >>>>>> @@ -796,25 +796,18 @@ static int counter_events_queue_size_write(struct counter_device *counter,
> >>>>>> u64 val)
> >>>>>> {
> >>>>>> DECLARE_KFIFO_PTR(events, struct counter_event);
> >>>>>> - int err = 0;
> >>>>>> -
> >>>>>> - /* Ensure chrdev is not opened more than 1 at a time */
> >>>>>> - if (!atomic_add_unless(&counter->chrdev_lock, 1, 1))
> >>>>>> - return -EBUSY;
> >>>>>> + int err;
> >>>>>>
> >>>>>> /* Allocate new events queue */
> >>>>>> err = kfifo_alloc(&events, val, GFP_KERNEL);
> >>>>>> if (err)
> >>>>>> - goto exit_early;
> >>>>>> + return err;
> >>>>>>
> >>>>>> /* Swap in new events queue */
> >>>>>> kfifo_free(&counter->events);
> >>>>>> counter->events.kfifo = events.kfifo;
> >>>>>
> >>>>> Do we need to hold the events_lock mutex here for this swap in case
> >>>>> counter_chrdev_read() is in the middle of reading the kfifo to
> >>>>> userspace, or do the kfifo macros already protect us from a race
> >>>>> condition here?
> >>>>>
> >>>> Another possibility might be to disallow changing the size while
> >>>> events are enabled. Otherwise, we also need to protect against
> >>>> write after free.
> >>>>
> >>>> I considered this:
> >>>>
> >>>> swap(counter->events.kfifo, events.kfifo);
> >>>> kfifo_free(&events);
> >>>>
> >>>> But I'm not sure that would be safe enough.
> >>>
> >>> I think it depends on whether it's safe to call kfifo_free() while other
> >>> kfifo_*() calls are executing. I suspect it is not safe because I don't
> >>> think kfifo_free() waits until all kfifo read/write operations are
> >>> finished before freeing -- but if I'm wrong here please let me know.
> >>>
> >>> Because of that, we will need to hold the counter->events_lock after all so
> >>> that we don't modify the events fifo while a kfifo read/write is going
> >>> on, lest we suffer an address fault. This can happen regardless of
> >>> whether you swap before or after the kfifo_free() because the old fifo
> >>> address could still be in use within those uncompleted kfifo_*() calls
> >>> if they were called before the swap but don't complete before the
> >>> kfifo_free().
> >>>
> >>> So we have a problem now that I think you have already noticed: the
> >>> kfifo_in() call in counter_push_events() also needs protection, but it's
> >>> executing within an interrupt context so we can't try to lock a mutex
> >>> lest we end up sleeping.
> >>>
> >>> One option we have is as you suggested: we disallow changing size while
> >>> events are enabled. However, that will require us to keep track of when
> >>> events are disabled and implement a spinlock to ensure that we don't
> >>> disable events in the middle of a kfifo_in().
> >>>
> >>> Alternatively, we could change events_lock to a spinlock and use it to
> >>> protect all these operations on the counter->events fifo. Would this
> >>> alternative be a better option so that we avoid creating another
> >>> separate lock?
> >>
> >> I would recommend just having a single lock here if at all possible,
> >> until you determine that there is a performance problem that can be
> >> measured that would require it to be split up.
> >>
> >> thanks,
> >>
> >> greg k-h
> >
> > All right, let's go with a single events_lock spinlock then. David, if
> > you make those changes and submit a v2, I'll be okay with this patch and
> > can provide my ack for it.
> >
>
> We can't use a spin lock for everything since there are operations
> that can sleep that need to be in the critical sections. Likewise,
> we can't use a mutex for everything since some critical sections
> are in interrupt handlers. But, I suppose we can try combining
> the existing mutexes. Since the kfifo is accessed from both
> contexts, it seems like it still needs more consideration than
> just a mutex or a spin lock, e.g. if events are enabled, don't
> allow swapping out the kfifo buffer.

I think there is a deadlock case if we combine the ops_exists_lock with
the n_events_list_lock, so this will need further thought. However, at
the very least the swap occurring in counter_events_queue_size_write()
and the kfifo_in() in counter_push_events() require some sort of
locking; it is trivial to cause a page fault with the code in its
current state.

I think this can be fixed if just events_lock is changed to a spinlock
for now, without modifying the other locks. We can try to combine the
remaining locks in a subsequent patch, if they are capable of being
combined.
William Breathitt Gray