Message-ID: <YfAvkKZlVQYukays@pc638.lan>
Date:   Tue, 25 Jan 2022 18:12:48 +0100
From:   Uladzislau Rezki <urezki@...il.com>
To:     Matthew Wilcox <willy@...radead.org>
Cc:     "Uladzislau Rezki (Sony)" <urezki@...il.com>,
        Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
        LKML <linux-kernel@...r.kernel.org>,
        Christoph Hellwig <hch@...radead.org>,
        Nicholas Piggin <npiggin@...il.com>,
        Oleksiy Avramchenko <oleksiy.avramchenko@...ymobile.com>
Subject: Re: [PATCH v2 1/1] mm/vmalloc: Move draining areas out of caller
 context

On Tue, Jan 25, 2022 at 04:50:14PM +0000, Matthew Wilcox wrote:
> On Tue, Jan 25, 2022 at 05:39:12PM +0100, Uladzislau Rezki (Sony) wrote:
> > @@ -1768,7 +1776,8 @@ static void free_vmap_area_noflush(struct vmap_area *va)
> >  
> >  	/* After this point, we may free va at any time */
> >  	if (unlikely(nr_lazy > lazy_max_pages()))
> > -		try_purge_vmap_area_lazy();
> > +		if (!atomic_xchg(&drain_vmap_work_in_progress, 1))
> > +			schedule_work(&drain_vmap_work);
> >  }
> 
> Is it necessary to have drain_vmap_work_in_progress?  The documentation
> says:
> 
>  * This puts a job in the kernel-global workqueue if it was not already
>  * queued and leaves it in the same position on the kernel-global
>  * workqueue otherwise.
> 
> and the implementation seems to use test_and_set_bit() to ensure this
> is true.
>
It checks only the pending state; if the work is already running (no longer
on the pending list), it can be queued one more time. The motivation for
having the flag is to prevent the drain work from being queued several times
at once, which is what I see in my stress testing:

CPU_1: invokes vfree() -> queues the drain work -> TASK_RUNNING
CPU_2: invokes vfree() -> queues the drain work one more time, since it is already running and no longer marked pending
...
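
Roughly, the guard works like this (just a sketch using the names from the
hunk above; the handler name and body are simplified, not the exact patch):

static void drain_vmap_area_work(struct work_struct *work);

static DECLARE_WORK(drain_vmap_work, drain_vmap_area_work);
static atomic_t drain_vmap_work_in_progress;

static void drain_vmap_area_work(struct work_struct *work)
{
	/* Drain the lazily freed areas. */
	mutex_lock(&vmap_purge_lock);
	__purge_vmap_area_lazy(ULONG_MAX, 0);
	mutex_unlock(&vmap_purge_lock);

	/* Only now allow a new drain request to be queued. */
	atomic_set(&drain_vmap_work_in_progress, 0);
}

and in free_vmap_area_noflush() only the first CPU that crosses the
threshold queues the work, everyone else sees the flag already set:

	if (unlikely(nr_lazy > lazy_max_pages()))
		if (!atomic_xchg(&drain_vmap_work_in_progress, 1))
			schedule_work(&drain_vmap_work);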

Instead of the drain_vmap_work_in_progress hack we could make use of the
work_busy() helper. The main concern with that is the comment around that
function:

/**
 * work_busy - test whether a work is currently pending or running
 * @work: the work to be tested
 *
 * Test whether @work is currently pending or running.  There is no
 * synchronization around this function and the test result is
 * unreliable and only useful as advisory hints or for debugging.
 *
 * Return:
 * OR'd bitmask of WORK_BUSY_* bits.
 */

I am not sure how reliable this would be.
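
Just to illustrate, the work_busy() variant would be nothing more than:

	if (unlikely(nr_lazy > lazy_max_pages()))
		if (!work_busy(&drain_vmap_work))
			schedule_work(&drain_vmap_work);

but given the comment above, the check can race with the work finishing,
so strictly speaking a second queueing could still slip through.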

Thoughts?

--
Vlad Rezki
