Message-ID: <ZUPnlsm91R72MBs7@dev>
Date: Thu, 2 Nov 2023 14:16:54 -0400
From: Jeremy Cline <jeremy@...ine.org>
To: Edward Adam Davis <eadavis@...com>
Cc: habetsm.xilinx@...il.com, davem@...emloft.net,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
reibax@...il.com, richardcochran@...il.com,
syzbot+df3f3ef31f60781fa911@...kaller.appspotmail.com,
syzkaller-bugs@...glegroups.com
Subject: Re: [PATCH net-next V2] ptp: fix corrupted list in ptp_open

Hi Edward,

On Tue, Oct 31, 2023 at 06:25:42PM +0800, Edward Adam Davis wrote:
> There is no lock protection when writing ptp->tsevqs in ptp_open() and
> ptp_release(), which can cause data corruption. Use a mutex lock to avoid
> this issue.
>
> Moreover, ptp_release() should not be used to release the queue in ptp_read();
> that call should be removed as well.
>
> Reported-and-tested-by: syzbot+df3f3ef31f60781fa911@...kaller.appspotmail.com
> Fixes: 8f5de6fb2453 ("ptp: support multiple timestamp event readers")
> Signed-off-by: Edward Adam Davis <eadavis@...com>
> ---
> drivers/ptp/ptp_chardev.c | 11 +++++++++--
> drivers/ptp/ptp_clock.c | 3 +++
> drivers/ptp/ptp_private.h | 1 +
> 3 files changed, 13 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/ptp/ptp_chardev.c b/drivers/ptp/ptp_chardev.c
> index 282cd7d24077..e31551d2697d 100644
> --- a/drivers/ptp/ptp_chardev.c
> +++ b/drivers/ptp/ptp_chardev.c
> @@ -109,6 +109,9 @@ int ptp_open(struct posix_clock_context *pccontext, fmode_t fmode)
> struct timestamp_event_queue *queue;
> char debugfsname[32];
>
> + if (mutex_lock_interruptible(&ptp->tsevq_mux))
> + return -ERESTARTSYS;
> +
> queue = kzalloc(sizeof(*queue), GFP_KERNEL);
> if (!queue)
> return -EINVAL;
> @@ -132,15 +135,20 @@ int ptp_open(struct posix_clock_context *pccontext, fmode_t fmode)
> debugfs_create_u32_array("mask", 0444, queue->debugfs_instance,
> &queue->dfs_bitmap);
>
> + mutex_unlock(&ptp->tsevq_mux);

The lock doesn't need to be held for this long. Doing so actually
introduces a problem: the memory allocation for the queue can fail, which
causes the function to return early without releasing the mutex.

The lock only needs to be held for the list_add_tail() call.
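
Something along these lines is what I had in mind (untested, just a
sketch of the narrower scope; the names follow the existing code, and the
cleanup for the setup elided in the middle is omitted for brevity):

	queue = kzalloc(sizeof(*queue), GFP_KERNEL);
	if (!queue)
		return -EINVAL;

	/* ... set up queue->mask, queue->lock, debugfs entries ... */

	/* Take the mutex only around the list insertion. */
	if (mutex_lock_interruptible(&ptp->tsevq_mux)) {
		/* also undo the setup elided above */
		kfree(queue);
		return -ERESTARTSYS;
	}
	list_add_tail(&queue->qlist, &ptp->tsevqs);
	mutex_unlock(&ptp->tsevq_mux);

	return 0;
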
> return 0;
> }
>
> int ptp_release(struct posix_clock_context *pccontext)
> {
> struct timestamp_event_queue *queue = pccontext->private_clkdata;
> + struct ptp_clock *ptp =
> + container_of(pccontext->clk, struct ptp_clock, clock);
> unsigned long flags;
>
> if (queue) {
> + if (mutex_lock_interruptible(&ptp->tsevq_mux))
> + return -ERESTARTSYS;
> debugfs_remove(queue->debugfs_instance);
> pccontext->private_clkdata = NULL;
> spin_lock_irqsave(&queue->lock, flags);
> @@ -148,6 +156,7 @@ int ptp_release(struct posix_clock_context *pccontext)
> spin_unlock_irqrestore(&queue->lock, flags);
> bitmap_free(queue->mask);
> kfree(queue);
> + mutex_unlock(&ptp->tsevq_mux);

Similar to the above note, you don't want to hold the lock any longer
than you must.

While this patch covers adding and removing items from the list, the
code that iterates over the list isn't protected, which is problematic:
if the list is modified while it is being iterated, the iterating code
could chase an invalid pointer.
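
For example, the delivery path that walks ptp->tsevqs would need
something like the following (purely illustrative; deliver_event() is a
made-up stand-in for the real consumer, and if that consumer runs in
atomic context a spinlock or RCU would be needed instead of the mutex):

	struct timestamp_event_queue *queue;

	mutex_lock(&ptp->tsevq_mux);
	list_for_each_entry(queue, &ptp->tsevqs, qlist)
		deliver_event(queue, event);	/* stand-in for the real work */
	mutex_unlock(&ptp->tsevq_mux);
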
Regards,
Jeremy