Date:   Mon, 14 Oct 2019 18:30:22 +0200
From:   Michal Hocko <mhocko@...nel.org>
To:     Uladzislau Rezki <urezki@...il.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Steven Rostedt <rostedt@...dmis.org>,
        Daniel Wagner <dwagner@...e.de>,
        Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
        Thomas Gleixner <tglx@...utronix.de>, linux-mm@...ck.org,
        LKML <linux-kernel@...r.kernel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Hillf Danton <hdanton@...a.com>,
        Matthew Wilcox <willy@...radead.org>,
        Oleksiy Avramchenko <oleksiy.avramchenko@...ymobile.com>
Subject: Re: [PATCH 1/1] mm/vmalloc: remove preempt_disable/enable when do
 preloading

On Mon 14-10-19 16:27:19, Uladzislau Rezki wrote:
> On Fri, Oct 11, 2019 at 04:55:15PM -0700, Andrew Morton wrote:
> > On Thu, 10 Oct 2019 17:17:49 +0200 Uladzislau Rezki <urezki@...il.com> wrote:
> > 
> > > > > : 	 * The preload is done in non-atomic context, thus it allows us
> > > > > : 	 * to use more permissive allocation masks to be more stable under
> > > > > : 	 * low memory condition and high memory pressure.
> > > > > : 	 *
> > > > > : 	 * Even if it fails we do not really care about that. Just proceed
> > > > > : 	 * as it is. "overflow" path will refill the cache we allocate from.
> > > > > : 	 */
> > > > > : 	if (!this_cpu_read(ne_fit_preload_node)) {
> > > > > 
> > > > > Readability nit: local `pva' should be defined here, rather than having
> > > > > function-wide scope.
> > > > > 
> > > > > : 		pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
> > > > > 
> > > > > Why doesn't this honour gfp_mask?  If it's not a bug, please add a
> > > > > comment explaining this.
> > > > > 
> > > But there is a comment, if I understand you correctly:
> > > 
> > > <snip>
> > > * Even if it fails we do not really care about that. Just proceed
> > > * as it is. "overflow" path will refill the cache we allocate from.
> > > <snip>
> > 
> > My point is that the alloc_vmap_area() caller passed us a gfp_t but
> > this code ignores it, as does adjust_va_to_fit_type().  These *look*
> > like potential bugs.  If not, they should be commented so they don't
> > look like bugs any more ;)
> > 
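For readers following along, here is a condensed sketch of the call site
under discussion, pieced together from the snippets quoted above; bodies
are abbreviated and this is not verbatim kernel source:

<snip>
static struct vmap_area *alloc_vmap_area(unsigned long size,
		unsigned long align, unsigned long vstart,
		unsigned long vend, int node, gfp_t gfp_mask)
{
	struct vmap_area *pva;	/* function-wide scope; see the nit above */
	...
	if (!this_cpu_read(ne_fit_preload_node)) {
		/*
		 * The caller's gfp_mask is dropped here in favour of a
		 * hardcoded GFP_KERNEL -- the point raised above.
		 */
		pva = kmem_cache_alloc_node(vmap_area_cachep,
				GFP_KERNEL, node);
		...
	}

	spin_lock(&vmap_area_lock);
	...
}
<snip>
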
> I got it, there was a misunderstanding on my side :) I agree.
> 
> In the first case I should have used and respected the passed "gfp_mask",
> like below:
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index f48cd0711478..880b6e8cdeae 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -1113,7 +1113,8 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
>                  * Just proceed as it is. If needed "overflow" path
>                  * will refill the cache we allocate from.
>                  */
> -               pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
> +               pva = kmem_cache_alloc_node(vmap_area_cachep,
> +                               gfp_mask & GFP_RECLAIM_MASK, node);
>  
>         spin_lock(&vmap_area_lock);
> 
> It should be sent as a separate patch, I think.
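
For reference, GFP_RECLAIM_MASK (defined in mm/internal.h) keeps only the
reclaim-related modifier bits of a caller's mask, which is why the hunk
above filters gfp_mask through it before handing it to slab. A small
illustration; the flag expansions in the comments reflect my reading of
include/linux/gfp.h and may differ slightly between kernel versions:

<snip>
gfp_t gfp_mask = GFP_NOFS;	/* i.e. __GFP_RECLAIM | __GFP_IO */
gfp_t slab_gfp = gfp_mask & GFP_RECLAIM_MASK;

/*
 * slab_gfp keeps the reclaim modifiers (__GFP_FS stays cleared for a
 * GFP_NOFS caller) while stripping placement/zone bits that this slab
 * allocation should not see.
 */
pva = kmem_cache_alloc_node(vmap_area_cachep, slab_gfp, node);
<snip>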

Yes. I do not think this would make any real difference, because that
battle was lost long ago: vmalloc is simply not gfp-mask friendly. There
are places, like page table allocation, which are hardcoded to GFP_KERNEL,
so a GFP_NOWAIT semantic is not going to work, really. The change above
makes sense from a purely aesthetic POV, though, I would say.
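
To illustrate: on the vmalloc mapping path the page tables are allocated
with their own fixed mask, roughly as below. This is a hedged sketch of
the generic pte allocation, not verbatim kernel code:

<snip>
/*
 * Deep under vmalloc(): the pte page is allocated with a fixed
 * GFP_KERNEL-based mask regardless of what the vmalloc caller asked
 * for, so a GFP_NOWAIT semantic cannot be honoured end to end.
 */
pte = (pte_t *)__get_free_page(GFP_KERNEL | __GFP_ZERO);
<snip>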
-- 
Michal Hocko
SUSE Labs
