Message-Id: <20190626212220.239897-2-evgreen@chromium.org>
Date: Wed, 26 Jun 2019 14:22:19 -0700
From: Evan Green <evgreen@...omium.org>
To: Takashi Iwai <tiwai@...e.com>
Cc: Evan Green <evgreen@...omium.org>,
Jaroslav Kysela <perex@...ex.cz>, alsa-devel@...a-project.org,
Amadeusz Sławiński
<amadeuszx.slawinski@...ux.intel.com>,
linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Subject: [PATCH v2 1/2] ALSA: hda: Fix widget_mutex incomplete protection
The widget_lock mutex was introduced to serialize callers to
hda_widget_sysfs_{re}init(). However, its protection of the sysfs widget
array is incomplete. For example, it is held around the call to
hda_widget_sysfs_reinit(), which actually creates the new array, but it is
no longer held when codec->num_nodes and codec->start_nid are updated. So
the lock ensures that only one thread at a time sets up a new array, but it
does not determine which thread's count ends up in codec->num_nodes. If a
larger num_nodes wins while a smaller array was the last one installed, the
next call to snd_hdac_refresh_widgets() touches freed memory as it iterates
over nodes that are no longer there.
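
Concretely, one losing interleaving looks like this (the node counts and
thread placement are illustrative, not from an observed trace):

   CPU0 (reads 4 sub nodes)            CPU1 (reads 8 sub nodes)
   ------------------------            ------------------------
                                       mutex_lock(&codec->widget_lock);
                                       hda_widget_sysfs_reinit();
                                           /* 8-entry array */
                                       mutex_unlock(&codec->widget_lock);
   mutex_lock(&codec->widget_lock);
   hda_widget_sysfs_reinit();
       /* 4-entry array replaces it */
   mutex_unlock(&codec->widget_lock);
   codec->num_nodes = 4;
                                       codec->num_nodes = 8;
                                           /* stale count wins */

The codec ends up with num_nodes == 8 but only a 4-entry widget array, so
the next refresh walks off the end of it.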
The widget_lock really protects the widget tree as well as
codec->num_nodes, codec->start_nid, and codec->end_nid, so make sure it is
held across that update. It should also be held during
snd_hdac_get_sub_nodes(), so that a stale read from that function cannot
clobber a later update.
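
Locking only the array rebuild would not be enough, because the read itself
can go stale before the lock is taken (again, illustrative counts):

   CPU0                                CPU1
   ----                                ----
   snd_hdac_get_sub_nodes()
       /* sees 4 sub nodes */
                                       snd_hdac_get_sub_nodes()
                                           /* sees 8 sub nodes */
                                       mutex_lock(&codec->widget_lock);
                                       /* rebuild for 8, num_nodes = 8 */
                                       mutex_unlock(&codec->widget_lock);
   mutex_lock(&codec->widget_lock);
   /* rebuild for 4, num_nodes = 4 */
   mutex_unlock(&codec->widget_lock);

CPU0's stale count overwrites CPU1's newer state even though every store
happens under the lock; taking widget_lock before snd_hdac_get_sub_nodes()
makes the read-rebuild-update sequence atomic as a unit.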
While in there, move the mutex acquisition for the exit path inside
hda_widget_sysfs_exit() itself. This moves the mutex closer to the data
structure it protects and removes the requirement that callers acquire the
somewhat internal widget_lock before calling it.
Fixes: ed180abba7f1 ("ALSA: hda: Fix race between creating and refreshing sysfs entries")
Signed-off-by: Evan Green <evgreen@...omium.org>
---
Changes in v2:
- Introduced widget_mutex relocation
 sound/hda/hdac_device.c | 19 +++++++++++++------
 sound/hda/hdac_sysfs.c  |  4 ++--
 2 files changed, 15 insertions(+), 8 deletions(-)
diff --git a/sound/hda/hdac_device.c b/sound/hda/hdac_device.c
index 6907dbefd08c..ff3420c5cdc8 100644
--- a/sound/hda/hdac_device.c
+++ b/sound/hda/hdac_device.c
@@ -162,9 +162,7 @@ EXPORT_SYMBOL_GPL(snd_hdac_device_register);
 void snd_hdac_device_unregister(struct hdac_device *codec)
 {
 	if (device_is_registered(&codec->dev)) {
-		mutex_lock(&codec->widget_lock);
 		hda_widget_sysfs_exit(codec);
-		mutex_unlock(&codec->widget_lock);
 		device_del(&codec->dev);
 		snd_hdac_bus_remove_device(codec->bus, codec);
 	}
@@ -402,25 +400,34 @@ int snd_hdac_refresh_widgets(struct hdac_device *codec, bool sysfs)
 	hda_nid_t start_nid;
 	int nums, err;
 
+	/*
+	 * Serialize against multiple threads trying to update the sysfs
+	 * widgets array.
+	 */
+	mutex_lock(&codec->widget_lock);
 	nums = snd_hdac_get_sub_nodes(codec, codec->afg, &start_nid);
 	if (!start_nid || nums <= 0 || nums >= 0xff) {
 		dev_err(&codec->dev, "cannot read sub nodes for FG 0x%02x\n",
 			codec->afg);
-		return -EINVAL;
+		err = -EINVAL;
+		goto unlock;
 	}
 
 	if (sysfs) {
-		mutex_lock(&codec->widget_lock);
 		err = hda_widget_sysfs_reinit(codec, start_nid, nums);
-		mutex_unlock(&codec->widget_lock);
 		if (err < 0)
-			return err;
+			goto unlock;
 	}
 
 	codec->num_nodes = nums;
 	codec->start_nid = start_nid;
 	codec->end_nid = start_nid + nums;
+	mutex_unlock(&codec->widget_lock);
 	return 0;
+
+unlock:
+	mutex_unlock(&codec->widget_lock);
+	return err;
 }
 EXPORT_SYMBOL_GPL(snd_hdac_refresh_widgets);
 
diff --git a/sound/hda/hdac_sysfs.c b/sound/hda/hdac_sysfs.c
index 909d5ef1179c..400ca262e2f8 100644
--- a/sound/hda/hdac_sysfs.c
+++ b/sound/hda/hdac_sysfs.c
@@ -412,13 +412,13 @@ int hda_widget_sysfs_init(struct hdac_device *codec)
 	return 0;
 }
 
-/* call with codec->widget_lock held */
 void hda_widget_sysfs_exit(struct hdac_device *codec)
 {
+	mutex_lock(&codec->widget_lock);
 	widget_tree_free(codec);
+	mutex_unlock(&codec->widget_lock);
 }
 
-/* call with codec->widget_lock held */
 int hda_widget_sysfs_reinit(struct hdac_device *codec,
 			    hda_nid_t start_nid, int num_nodes)
 {
--
2.20.1