Message-ID: <cd91527d-e6e0-4900-a368-dfc9812546da@lucifer.local>
Date:   Mon, 20 Mar 2023 08:35:11 +0000
From:   Lorenzo Stoakes <lstoakes@...il.com>
To:     Uladzislau Rezki <urezki@...il.com>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        linux-fsdevel@...r.kernel.org,
        Andrew Morton <akpm@...ux-foundation.org>,
        Baoquan He <bhe@...hat.com>,
        Matthew Wilcox <willy@...radead.org>,
        David Hildenbrand <david@...hat.com>,
        Liu Shixin <liushixin2@...wei.com>,
        Jiri Olsa <jolsa@...nel.org>
Subject: Re: [PATCH v2 2/4] mm: vmalloc: use rwsem, mutex for vmap_area_lock
 and vmap_block->lock

On Mon, Mar 20, 2023 at 09:32:06AM +0100, Uladzislau Rezki wrote:
> On Mon, Mar 20, 2023 at 08:25:32AM +0000, Lorenzo Stoakes wrote:
> > On Mon, Mar 20, 2023 at 08:54:33AM +0100, Uladzislau Rezki wrote:
> > > > vmalloc() is, by design, not permitted to be used in atomic context and
> > > > already contains components which may sleep, so avoiding spin locks is not
> > > > a problem from the perspective of atomic context.
> > > >
> > > > The global vmap_area_lock is held when the red/black tree rooted in
> > > > vmap_area_root is accessed and thus is rather long-held and under
> > > > potentially high contention. It is likely to be under contention for
> > > > reads rather than writes, so replace it with an rwsem.
> > > >
> > > > Each individual vmap_block->lock is likely to be held for less time but
> > > > under low contention, so a mutex is not an outrageous choice here.
> > > >
> > > > A subset of test_vmalloc.sh performance results:-
> > > >
> > > > fix_size_alloc_test             0.40%
> > > > full_fit_alloc_test             2.08%
> > > > long_busy_list_alloc_test       0.34%
> > > > random_size_alloc_test         -0.25%
> > > > random_size_align_alloc_test    0.06%
> > > > ...
> > > > all tests cycles                0.2%
> > > >
> > > > This represents a tiny reduction in performance that sits barely above
> > > > noise.
> > > >
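
To illustrate the above, the change boils down to roughly the following
(just a sketch, not the actual diff; unrelated fields are elided):

    /* Before: spinning locks, so holders must not sleep. */
    static DEFINE_SPINLOCK(vmap_area_lock);

    struct vmap_block {
            spinlock_t lock;
            /* remaining fields unchanged */
    };

    /* After: sleepable lock types, so holders may fault and reschedule. */
    static DECLARE_RWSEM(vmap_area_lock);

    struct vmap_block {
            struct mutex lock;
            /* remaining fields unchanged */
    };

    /* Lookups walking the tree rooted at vmap_area_root take the rwsem
     * shared, modifications take it exclusive:
     */
    down_read(&vmap_area_lock);
    /* ... rbtree walk ... */
    up_read(&vmap_area_lock);
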
> > > How important is it to have many simultaneous users of vread()? I do not
> > > see a big reason to switch to mutexes, given the performance impact and
> > > that it makes the code less atomic.
> >
> > It's less about simultaneous users of vread() and more about being able to
> > write directly to user memory rather than via a bounce buffer, and about not
> > holding a spinlock over possible page faults.
> >
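
To spell the constraint out (again only a sketch; kbuf, ubuf, addr and count
are placeholder names, and the real vread() also has to cope with unmapped
holes, ignored here): with a spinlock held the data has to be staged through
a kernel bounce buffer, because copy_to_user() may fault and sleep, whereas
with a sleepable lock it can go straight to the user buffer:

    /* Spinlock held: no faults allowed, so bounce through kernel memory. */
    spin_lock(&vmap_area_lock);
    memcpy(kbuf, addr, count);           /* kernel-to-kernel, cannot fault */
    spin_unlock(&vmap_area_lock);
    if (copy_to_user(ubuf, kbuf, count)) /* may fault, lock already dropped */
            return -EFAULT;

    /* rwsem held: copy_to_user() may fault and sleep, which is now fine. */
    down_read(&vmap_area_lock);
    ret = copy_to_user(ubuf, addr, count) ? -EFAULT : 0;
    up_read(&vmap_area_lock);
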
> > The performance impact is barely above noise (I got fairly widely varying
> > results), so I don't think it's really much of a cost at all. I can't imagine
> > there are many users critically dependent on a sub-single digit % reduction in
> > speed in vmalloc() allocation.
> >
> > As I was saying to Willy, the code is already not atomic, or rather it needs
> > rework to become atomic-safe (there is a smattering of might_sleep()s
> > throughout).
> >
> > However, given your objection alongside Willy's, let me examine his
> > suggestion that, instead of doing this, we prefault the user memory in
> > advance of the vread() call.
> >
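
For reference, the prefault approach would look roughly like the below (only
a sketch; ubuf, addr and count are placeholders, and the retry policy is the
open question). fault_in_writeable() and copy_to_user_nofault() are existing
helpers: fault the destination pages in before taking the lock, then copy
with page faults disabled so the spinlock can stay:

    long ret;

    /* Fault in the user buffer up front, with no lock held. */
    if (fault_in_writeable(ubuf, count))
            return -EFAULT;

    spin_lock(&vmap_area_lock);
    /*
     * The pages were just faulted in, so this normally succeeds, but they
     * may have been reclaimed again in the meantime; a real implementation
     * would drop the lock, prefault again and retry on failure.
     */
    ret = copy_to_user_nofault(ubuf, addr, count);
    spin_unlock(&vmap_area_lock);
    if (ret)
            return -EFAULT;
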
> Just a quick perf test shows a regression of around 6%. 10 workers, test_mask is 31:
>
> # default
> [  140.349731] All test took worker0=485061693537 cycles
> [  140.386065] All test took worker1=486504572954 cycles
> [  140.418452] All test took worker2=467204082542 cycles
> [  140.435895] All test took worker3=512591010219 cycles
> [  140.458316] All test took worker4=448583324125 cycles
> [  140.494244] All test took worker5=501018129647 cycles
> [  140.518144] All test took worker6=516224787767 cycles
> [  140.535472] All test took worker7=442025617137 cycles
> [  140.558249] All test took worker8=503337286539 cycles
> [  140.590571] All test took worker9=494369561574 cycles
>
> # patch
> [  144.464916] All test took worker0=530373399067 cycles
> [  144.492904] All test took worker1=522641540924 cycles
> [  144.528999] All test took worker2=529711158267 cycles
> [  144.552963] All test took worker3=527389011775 cycles
> [  144.592951] All test took worker4=529583252449 cycles
> [  144.610286] All test took worker5=523605706016 cycles
> [  144.627690] All test took worker6=531494777011 cycles
> [  144.653046] All test took worker7=527150114726 cycles
> [  144.669818] All test took worker8=526599712235 cycles
> [  144.693428] All test took worker9=526057490851 cycles
>

OK ouch, that's worse than I observed! Let me try this prefault approach and
then we can revert to spinlocks.

> > >
> > > So, how important is it for you to have this change?
> > >
> >
> > Personally, always very important :)
> >
> This is good. Personal opinion always wins :)
>

The heart always wins ;) Well, I think an adaptation here can make everybody's
hearts happy.

> --
> Uladzislau Rezki
