Message-ID: <s5hwnxgjysq.wl-tiwai@suse.de>
Date:   Thu, 17 Dec 2020 15:24:21 +0100
From:   Takashi Iwai <tiwai@...e.de>
To:     Lars-Peter Clausen <lars@...afoo.de>
Cc:     alsa-devel@...a-project.org, gustavoars@...nel.org,
        linux-kernel@...r.kernel.org, shengjiu.wang@....com,
        tiwai@...e.com, pierre-louis.bossart@...ux.intel.com,
        xiang@...nel.org, Robin Gong <yibin.gong@....com>,
        akpm@...ux-foundation.org
Subject: Re: [PATCH v1 ] ALSA: core: memalloc: add page alignment for iram

On Thu, 17 Dec 2020 14:16:48 +0100,
Lars-Peter Clausen wrote:
> 
> On 12/17/20 12:06 PM, Takashi Iwai wrote:
> > On Thu, 17 Dec 2020 11:59:23 +0100,
> > Lars-Peter Clausen wrote:
> >> On 12/17/20 10:55 AM, Takashi Iwai wrote:
> >>> On Thu, 17 Dec 2020 10:43:45 +0100,
> >>> Lars-Peter Clausen wrote:
> >>>> On 12/17/20 5:15 PM, Robin Gong wrote:
> >>>>> Since mmap for userspace is based on page alignment, add page alignment
> >>>>> for the iram allocation from the pool; otherwise other valid data located
> >>>>> in the same page as dmab->area may be touched wrongly by userspace
> >>>>> applications such as pulseaudio.
> >>>>>
> >>>> I wonder, do we also have to align size to be a multiple of PAGE_SIZE
> >>>> to avoid leaking unrelated data?
> >>> Hm, a good question.  Basically the PCM buffer size itself shouldn't
> >>> be influenced by that (i.e. no hw-constraint or such is needed), but
> >>> the padding should be cleared indeed.  I somehow left those to the
> >>> allocator side, but maybe it's safer to clear the whole buffer in
> >>> sound/core/memalloc.c commonly.
> >> What I meant was that most of the APIs that we use to allocate memory
> >> work at PAGE_SIZE granularity. I.e. if you request a buffer whose size
> >> is not a multiple of PAGE_SIZE, they will internally still allocate a
> >> buffer that is a multiple of PAGE_SIZE and mark the unused bytes as
> >> reserved.
> >>
> >> But I believe that is not the case for gen_pool_dma_alloc(). It will
> >> happily allocate those extra bytes to some other allocation request.
> >>
> >> That we need to zero out the reserved bytes even for those other APIs
> >> is a very good additional point!
> >>
> >> I looked at this a few years ago and I'm pretty sure that we cleared
> >> out the allocated area, but I can't find that anymore in the current
> >> code, which is not so great, I guess.
> > IIRC, we used GFP_ZERO in the past for the normal page allocations,
> > but this was dropped as it's no longer supported or so.
> >
> > Also, we clear out the PCM buffer in the hw_params call, but this is
> > for the requested size, not the actual allocated size, hence the
> > padding bytes will remain uncleared.
> Ah! That memset() in hw_params is new.
> >
> > So I believe it's safer to add an extra memset() like my test patch.
> 
> Yea, we definitely want that.
> 
> Do we care about leaking audio samples from a previous
> application? I.e. application 'A' allocates a buffer, plays back some
> data and then closes the device again. Application 'B' then opens the
> same audio device and allocates a slightly smaller buffer, so that it
> still uses the same number of pages. The buffer from the previous
> allocation gets reused, but the remainder of the last page won't get
> cleared in hw_params().

That's true.  On second thought, it might be better to extend that
memset() in hw_params to make sure the whole allocated buffer is
cleared.  We can check runtime->dma_buffer_p->bytes for the actual
size.

Also, in the PCM memory allocator, we make sure that the allocation is
performed with a page-aligned size.
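
As a rough illustration only (not part of the patch; a standalone
userspace sketch that assumes a 4096-byte page and a made-up 6000-byte
request), this is the rounding that PAGE_ALIGN() performs; the padding
between the requested and the page-aligned size is exactly what has to
be cleared before the buffer is mmapped:

/* Standalone sketch, not kernel code: shows how a request that is not
 * a multiple of the page size gets rounded up.  The page size and the
 * request size are example values.
 */
#include <stdio.h>
#include <stddef.h>

#define EXAMPLE_PAGE_SIZE	4096UL
#define EXAMPLE_PAGE_ALIGN(sz) \
	(((sz) + EXAMPLE_PAGE_SIZE - 1) & ~(EXAMPLE_PAGE_SIZE - 1))

int main(void)
{
	size_t requested = 6000;	/* hypothetical PCM buffer size */
	size_t allocated = EXAMPLE_PAGE_ALIGN(requested);

	/* 6000 -> 8192: the 2192 padding bytes at the end of the last
	 * page are visible to userspace via mmap and must be zeroed.
	 */
	printf("requested %zu, page-aligned %zu, padding %zu\n",
	       requested, allocated, allocated - requested);
	return 0;
}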

Below is another untested patch.


thanks,

Takashi

---
--- a/sound/core/pcm_memory.c
+++ b/sound/core/pcm_memory.c
@@ -36,6 +36,7 @@ static int do_alloc_pages(struct snd_card *card, int type, struct device *dev,
 {
 	int err;
 
+	size = PAGE_ALIGN(size);
 	if (max_alloc_per_card &&
 	    card->total_pcm_alloc_bytes + size > max_alloc_per_card)
 		return -ENOMEM;
@@ -187,7 +188,7 @@ static void snd_pcm_lib_preallocate_proc_write(struct snd_info_entry *entry,
 				buffer->error = -ENOMEM;
 				return;
 			}
-			substream->buffer_bytes_max = size;
+			substream->buffer_bytes_max = new_dmab.bytes;
 		} else {
 			substream->buffer_bytes_max = UINT_MAX;
 		}
diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
index 47b155a49226..6aabad070abf 100644
--- a/sound/core/pcm_native.c
+++ b/sound/core/pcm_native.c
@@ -755,8 +755,15 @@ static int snd_pcm_hw_params(struct snd_pcm_substream *substream,
 		runtime->boundary *= 2;
 
 	/* clear the buffer for avoiding possible kernel info leaks */
-	if (runtime->dma_area && !substream->ops->copy_user)
-		memset(runtime->dma_area, 0, runtime->dma_bytes);
+	if (runtime->dma_area && !substream->ops->copy_user) {
+		size_t size;
+
+		if (runtime->dma_buffer_p)
+			size = runtime->dma_buffer_p->bytes;
+		else
+			size = runtime->dma_bytes;
+		memset(runtime->dma_area, 0, size);
+	}
 
 	snd_pcm_timer_resolution_change(substream);
 	snd_pcm_set_state(substream, SNDRV_PCM_STATE_SETUP);
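
For illustration only, here is a standalone userspace model of the
reuse scenario Lars described (the sizes and the fill pattern are made
up): if only the requested size is cleared, the tail of the last page
still holds the previous user's samples, while clearing the full
allocated size (what the hw_params hunk above does via
dma_buffer_p->bytes) removes them:

/* Standalone sketch, not kernel code: "page" stands in for a reused,
 * page-aligned DMA buffer; 3000 bytes is a hypothetical requested size.
 */
#include <stdio.h>
#include <string.h>

#define PAGE_BYTES 4096

static unsigned char page[PAGE_BYTES];

int main(void)
{
	/* Application 'A' fills the whole page with audio samples. */
	memset(page, 0xAB, sizeof(page));

	/* Application 'B' requests only 3000 bytes; clearing just the
	 * requested size leaves A's data in bytes 3000..4095, which B
	 * could read back through mmap.
	 */
	memset(page, 0, 3000);
	printf("byte 3500 after partial clear: 0x%02x\n", page[3500]);

	/* Clearing the full allocated size removes the leftover data. */
	memset(page, 0, sizeof(page));
	printf("byte 3500 after full clear:    0x%02x\n", page[3500]);
	return 0;
}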
