Date:   Wed, 5 Jul 2023 22:06:01 +0300
From:   "Kirill A. Shutemov" <kirill@...temov.name>
To:     "Liam R. Howlett" <Liam.Howlett@...cle.com>,
        Yu Ma <yu.ma@...el.com>, akpm@...ux-foundation.org,
        tim.c.chen@...el.com, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, dave.hansen@...el.com,
        dan.j.williams@...el.com, shakeelb@...gle.com, pan.deng@...el.com,
        tianyou.li@...el.com, lipeng.zhu@...el.com,
        tim.c.chen@...ux.intel.com
Subject: Re: [PATCH] mm/mmap: move vma operations to mm_struct out of the
 critical section of file mapping lock

On Wed, Jul 05, 2023 at 01:33:48PM -0400, Liam R. Howlett wrote:
> * Kirill A. Shutemov <kirill@...temov.name> [230705 12:54]:
> > On Tue, Jun 06, 2023 at 03:20:13PM -0400, Liam R. Howlett wrote:
> > > * Yu Ma <yu.ma@...el.com> [230606 08:23]:
> > > > UnixBench/Execl represents a class of workload where bash scripts are
> > > > spawned frequently to do some short jobs. When running multiple parallel
> > > > tasks, hot osq_lock is observed from do_mmap and exit_mmap. Both of them
> > > > come from load_elf_binary through the call chain
> > > > "execl->do_execveat_common->bprm_execve->load_elf_binary". In do_mmap, it
> > > > calls mmap_region to create a vma node, initialize it, insert it into the
> > > > vma maintenance structure in mm_struct and into the i_mmap tree of the
> > > > mapped file, and then increase map_count to record the number of vma nodes
> > > > in use. The hot osq_lock protects operations on the file's i_mmap tree.
> > > > mm_struct member changes such as vma insertion and the map_count update do
> > > > not touch the i_mmap tree, so move those operations out of the lock's
> > > > critical section to reduce hold time on the lock.
> > > > 
> > > > With this change, on an Intel Sapphire Rapids 112C/224T platform, based on
> > > > v6.0-rc6, the 160-task parallel score improves by 12%. The patch shows no
> > > > obvious performance gain on v6.4-rc4 due to a regression of this benchmark
> > > > introduced by commit f1a7941243c102a44e8847e3b94ff4ff3ec56f25 ("mm: convert
> > > > mm's rss stats into percpu_counter").
> > > 
> > > I didn't think it was safe to insert a VMA into the VMA tree without
> > > holding this write lock?  We now have a window of time where a file
> > > mapping doesn't exist for a vma that's in the tree?  Is this always
> > > safe?  Does the locking order in mm/rmap.c need to change?
> > 
> > We hold mmap lock on write here, right?
> 
> Yes.
> 
> > Who can observe the VMA until the
> > lock is released?
> 
> With CONFIG_PER_VMA_LOCK, a page fault can read the VMA from the tree
> under the RCU read lock.  I am not sure the vma is initialized in a way
> that avoids page fault issues - vma_start_write() should either be
> taken, or the vma initialised as if it had been.

Right, with CONFIG_PER_VMA_LOCK the vma has to be unusable until it is
fully initialized, effectively providing the same guarantees as the mmap
write lock. If that is not the case, it is a CONFIG_PER_VMA_LOCK bug.

> There is also a possibility of a driver mapping a VMA and having entry
> points from other locations.  It isn't accessed through the tree though
> so I don't think this change will introduce new races?

Right.

> > It cannot be retrieved from the VMA tree as it requires at least read mmap
> > lock. And the VMA doesn't exist anywhere else.
> > 
> > I believe the change is safe.
> 
> I guess insert_vm_struct() and vma_link() callers should be checked and
> updated accordingly?

Yep.

-- 
  Kiryl Shutsemau / Kirill A. Shutemov
