Open Source and information security mailing list archives
Message-ID: <1597224363.32469.12.camel@mtkswgap22>
Date:   Wed, 12 Aug 2020 17:26:03 +0800
From:   Chinwen Chang <chinwen.chang@...iatek.com>
To:     Steven Price <steven.price@....com>
CC:     Matthias Brugger <matthias.bgg@...il.com>,
        Michel Lespinasse <walken@...gle.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        "Vlastimil Babka" <vbabka@...e.cz>,
        Daniel Jordan <daniel.m.jordan@...cle.com>,
        Davidlohr Bueso <dbueso@...e.de>,
        Alexey Dobriyan <adobriyan@...il.com>,
        "Matthew Wilcox (Oracle)" <willy@...radead.org>,
        Jason Gunthorpe <jgg@...pe.ca>,
        Song Liu <songliubraving@...com>,
        Jimmy Assarsson <jimmyassarsson@...il.com>,
        Huang Ying <ying.huang@...el.com>,
        <linux-kernel@...r.kernel.org>,
        <linux-arm-kernel@...ts.infradead.org>,
        <linux-mediatek@...ts.infradead.org>,
        <linux-fsdevel@...r.kernel.org>, <wsd_upstream@...iatek.com>
Subject: Re: [PATCH 2/2] mm: proc: smaps_rollup: do not stall write attempts
 on mmap_lock

On Wed, 2020-08-12 at 09:39 +0100, Steven Price wrote:
> On 11/08/2020 05:42, Chinwen Chang wrote:
> > smaps_rollup will try to grab mmap_lock and go through the whole vma
> > list until it finishes the iterating. When encountering large processes,
> > the mmap_lock will be held for a longer time, which may block other
> > write requests like mmap and munmap from progressing smoothly.
> > 
> > There are upcoming mmap_lock optimizations like range-based locks, but
> > the lock applied to smaps_rollup would be the coarse type, which doesn't
> > avoid the occurrence of unpleasant contention.
> > 
> > To solve aforementioned issue, we add a check which detects whether
> > anyone wants to grab mmap_lock for write attempts.
> > 
> > Signed-off-by: Chinwen Chang <chinwen.chang@...iatek.com>
> > ---
> >   fs/proc/task_mmu.c | 21 +++++++++++++++++++++
> >   1 file changed, 21 insertions(+)
> > 
> > diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> > index dbda449..4b51f25 100644
> > --- a/fs/proc/task_mmu.c
> > +++ b/fs/proc/task_mmu.c
> > @@ -856,6 +856,27 @@ static int show_smaps_rollup(struct seq_file *m, void *v)
> >   	for (vma = priv->mm->mmap; vma; vma = vma->vm_next) {
> >   		smap_gather_stats(vma, &mss);
> >   		last_vma_end = vma->vm_end;
> > +
> > +		/*
> > +		 * Release mmap_lock temporarily if someone wants to
> > +		 * access it for write request.
> > +		 */
> > +		if (mmap_lock_is_contended(mm)) {
> > +			mmap_read_unlock(mm);
> > +			ret = mmap_read_lock_killable(mm);
> > +			if (ret) {
> > +				release_task_mempolicy(priv);
> > +				goto out_put_mm;
> > +			}
> > +
> > +			/* Check whether current vma is available */
> > +			vma = find_vma(mm, last_vma_end - 1);
> > +			if (vma && vma->vm_start < last_vma_end)
> 
> I may be wrong, but this looks like it could return incorrect results. 
> For example if we start reading with the following VMAs:
> 
>   +------+------+-----------+
>   | VMA1 | VMA2 | VMA3      |
>   +------+------+-----------+
>   |      |      |           |
> 4k     8k     16k         400k
> 
> Then after reading VMA2 we drop the lock due to contention. So:
> 
>    last_vma_end = 16k
> 
> Then if VMA2 is freed while the lock is dropped, so we have:
> 
>   +------+      +-----------+
>   | VMA1 |      | VMA3      |
>   +------+      +-----------+
>   |      |      |           |
> 4k     8k     16k         400k
> 
> find_vma(mm, 16k-1) will then return VMA3 and the condition vm_start < 
> last_vma_end will be false.
> 
Hi Steve,

Thank you for reviewing this patch.

You are correct. If contention is detected and the current vma (VMA2 in
your example) is freed while the lock is dropped, the patch will report
an incomplete result.

> > +				continue;
> > +
> > +			/* Current vma is not available, just break */
> > +			break;
> 
> Which means we break out here and report an incomplete output (the 
> numbers will be much smaller than reality).
> 
> Would it be better to have a loop like:
> 
> 	for (vma = priv->mm->mmap; vma;) {
> 		smap_gather_stats(vma, &mss);
> 		last_vma_end = vma->vm_end;
> 
> 		if (contended) {
> 			/* drop/acquire lock */
> 
> 			vma = find_vma(mm, last_vma_end - 1);
> 			if (!vma)
> 				break;
> 			if (vma->vm_start >= last_vma_end)
> 				continue;
> 		}
> 		vma = vma->vm_next;
> 	}
> 
> that way if the VMA is removed while the lock is dropped the loop can 
> just continue from the next VMA.
> 
Thanks a lot for your suggestion.

> Or perhaps I missed something obvious? I haven't actually tested 
> anything above.
> 
> Steve

I will prepare a new patch series for further review.

Thank you.
Chinwen
> 
> > +		}
> >   	}
> >   
> >   	show_vma_header_prefix(m, priv->mm->mmap->vm_start,
> > 
> 
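The drop-and-revalidate pattern discussed above can be sketched in plain
user-space C. Note that `struct vma`, `find_vma()`, `rollup()` and the
contention trigger below are simplified stand-ins for the kernel
structures and mmap_lock APIs, not the real implementations:

```c
/*
 * User-space sketch of the revalidation loop suggested in the thread.
 * struct vma, find_vma() and the "contention" trigger are simplified
 * stand-ins for the kernel structures and mmap_lock APIs.
 */
#include <assert.h>
#include <stddef.h>

struct vma {
	unsigned long vm_start, vm_end;
	struct vma *vm_next;
};

/* First vma with vm_end > addr, like the kernel's find_vma(). */
static struct vma *find_vma(struct vma *head, unsigned long addr)
{
	for (struct vma *v = head; v; v = v->vm_next)
		if (addr < v->vm_end)
			return v;
	return NULL;
}

/*
 * Walk the list as show_smaps_rollup() would.  On reaching contend_at,
 * simulate dropping mmap_lock under contention while 'remove' is
 * munmapped, then revalidate via find_vma().  Returns how many vmas
 * had their stats gathered.
 */
static int rollup(struct vma **head, struct vma *contend_at,
		  struct vma *remove)
{
	int gathered = 0;
	unsigned long last_vma_end;

	for (struct vma *vma = *head; vma;) {
		gathered++;			/* smap_gather_stats(vma, &mss) */
		last_vma_end = vma->vm_end;

		if (vma == contend_at) {
			/* lock dropped: unlink 'remove' to mimic a racing munmap */
			for (struct vma **p = head; *p; p = &(*p)->vm_next) {
				if (*p == remove) {
					*p = remove->vm_next;
					break;
				}
			}

			/* lock re-taken: vma may be stale, so look it up again */
			vma = find_vma(*head, last_vma_end - 1);
			if (!vma)
				break;		/* nothing left to iterate */
			if (vma->vm_start >= last_vma_end)
				continue;	/* old vma gone: resume at successor */
		}
		vma = vma->vm_next;
	}
	return gathered;
}
```

In the three-VMA scenario above (VMA2 gathered, then freed while the
lock is dropped), this loop still reaches VMA3 and counts all three
vmas; with the original patch's `break` on that path, the walk would
stop after two and drop VMA3's 384k range from the rollup entirely.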
