Date:   Tue, 24 Apr 2018 20:26:23 +0100
From:   Steven Whitehouse <swhiteho@...hat.com>
To:     Michal Hocko <mhocko@...nel.org>,
        LKML <linux-kernel@...r.kernel.org>
Cc:     Artem Bityutskiy <dedekind1@...il.com>,
        Richard Weinberger <richard@....at>,
        David Woodhouse <dwmw2@...radead.org>,
        Brian Norris <computersforpeace@...il.com>,
        Boris Brezillon <boris.brezillon@...e-electrons.com>,
        Marek Vasut <marek.vasut@...il.com>,
        Cyrille Pitchen <cyrille.pitchen@...ev4u.fr>,
        Theodore Ts'o <tytso@....edu>,
        Andreas Dilger <adilger.kernel@...ger.ca>,
        Bob Peterson <rpeterso@...hat.com>,
        Trond Myklebust <trond.myklebust@...marydata.com>,
        Anna Schumaker <anna.schumaker@...app.com>,
        Adrian Hunter <adrian.hunter@...el.com>,
        Philippe Ombredanne <pombredanne@...b.com>,
        Kate Stewart <kstewart@...uxfoundation.org>,
        Mikulas Patocka <mpatocka@...hat.com>,
        linux-mtd@...ts.infradead.org, linux-ext4@...r.kernel.org,
        cluster-devel@...hat.com, linux-nfs@...r.kernel.org,
        linux-mm@...ck.org
Subject: Re: vmalloc with GFP_NOFS

Hi,


On 24/04/18 17:27, Michal Hocko wrote:
> Hi,
> it seems that we still have a few vmalloc users who perform GFP_NOFS
> allocations:
> drivers/mtd/ubi/io.c
> fs/ext4/xattr.c
> fs/gfs2/dir.c
> fs/gfs2/quota.c
> fs/nfs/blocklayout/extent_tree.c
> fs/ubifs/debug.c
> fs/ubifs/lprops.c
> fs/ubifs/lpt_commit.c
> fs/ubifs/orphan.c
>
> Unfortunately vmalloc doesn't support GFP_NOFS semantics properly,
> because we have hardcoded GFP_KERNEL allocations deep inside the
> vmalloc layers. That means that if GFP_NOFS is really needed to protect
> against recursion into the fs and the resulting deadlocks, then the
> vmalloc call is broken.
>
> What to do about this? Well, there are two things. Firstly, it would be
> really great to double check whether GFP_NOFS is really needed. I
> cannot judge that because I am not familiar with the code, so it would
> be great if the respective maintainers could check (hopefully
> get_maintainer.sh pointed me to all the relevant ones). If there is no
> reclaim recursion issue, then simply use the standard vmalloc (i.e. a
> GFP_KERNEL request).
For GFS2, and I suspect for other fs too, it is really needed. We don't 
want to enter reclaim while holding filesystem locks.
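
For illustration, the pattern at issue looks roughly like this. The
__vmalloc() signature shown is the one from the 4.x kernels under
discussion, and fs_vmalloc_nofs() is a hypothetical wrapper name:

#include <linux/gfp.h>
#include <linux/vmalloc.h>

/*
 * The caller asks for GFP_NOFS, but page-table and other internal
 * allocations deep inside vmalloc are hardcoded GFP_KERNEL, so the
 * flag does not fully prevent reclaim from recursing into the fs.
 */
static void *fs_vmalloc_nofs(unsigned long size)
{
	return __vmalloc(size, GFP_NOFS, PAGE_KERNEL);
}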

> If the use is really valid then we have a way to do the vmalloc
> allocation properly: the memalloc_nofs_{save,restore} scope API. How
> does it work? You simply call memalloc_nofs_save when the reclaim
> recursion critical section starts (e.g. when you take a lock which is
> then used in the reclaim path, say by a shrinker) and memalloc_nofs_restore
> when the critical section ends. _All_ allocations within that scope
> get GFP_NOFS semantics automagically. If you are not sure about the
> scope itself, then the easiest workaround is to wrap just the vmalloc
> itself with the save/restore pair and add a big fat comment that this
> should be revisited.
>
> Does that sound like something that can be done in a reasonable time?
> I have tried to bring this up in the past, but our speed is glacial, and
> there have been attempts at hacks like checking for abusers inside
> vmalloc, which is just too ugly to live.
>
> Please do not hesitate to get back to me if something is not clear.
>
> Thanks!
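
To make the scope API above concrete, a minimal sketch: fs_lock and
fs_read_buffer() are hypothetical stand-ins for a lock that is also
taken on the reclaim path (e.g. by a shrinker), while
memalloc_nofs_save()/memalloc_nofs_restore() from <linux/sched/mm.h>
are the real kernel interface:

#include <linux/mutex.h>
#include <linux/sched/mm.h>
#include <linux/vmalloc.h>

/* Hypothetical lock that a shrinker also takes on the reclaim path. */
static DEFINE_MUTEX(fs_lock);

static void *fs_read_buffer(unsigned long size)
{
	unsigned int nofs_flags;
	void *buf;

	mutex_lock(&fs_lock);

	/*
	 * Reclaim recursion critical section: every allocation in this
	 * scope, including the GFP_KERNEL ones hardcoded deep inside
	 * vmalloc, implicitly behaves as GFP_NOFS.
	 */
	nofs_flags = memalloc_nofs_save();
	buf = vmalloc(size);	/* plain vmalloc, no GFP_NOFS needed */
	memalloc_nofs_restore(nofs_flags);

	mutex_unlock(&fs_lock);
	return buf;
}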

It would be good to fix this; it has been known as an issue for a long
time, and we might well be able to make use of the new API. It might be
as simple as adding the calls when we get and release glocks, but I'd
have to check the code to be sure.
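
Purely as a hypothetical sketch of that idea: gfs2_glock_nq() and
gfs2_glock_dq() are the real GFS2 entry points, but the wrappers below,
the gh_nofs_flags member, and this placement are all assumptions, not
upstream code:

#include <linux/sched/mm.h>
/* plus GFS2's internal incore.h/glock.h for struct gfs2_holder */

/*
 * Assumes an extra "unsigned int gh_nofs_flags" member in struct
 * gfs2_holder. Note that memalloc_nofs_save()/restore() must nest in
 * LIFO order, so non-nested glock release ordering is exactly the
 * kind of detail that would need checking first.
 */
static int fs_glock_nq_nofs(struct gfs2_holder *gh)
{
	int error = gfs2_glock_nq(gh);

	if (!error)
		gh->gh_nofs_flags = memalloc_nofs_save();
	return error;
}

static void fs_glock_dq_nofs(struct gfs2_holder *gh)
{
	memalloc_nofs_restore(gh->gh_nofs_flags);
	gfs2_glock_dq(gh);
}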

Steve.
