Date:   Mon, 27 Feb 2017 09:01:09 -0800
From:   Tahsin Erdogan <tahsin@...gle.com>
To:     Michal Hocko <mhocko@...nel.org>
Cc:     Tejun Heo <tj@...nel.org>, Christoph Lameter <cl@...ux.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Chris Wilson <chris@...is-wilson.co.uk>,
        Andrey Ryabinin <aryabinin@...tuozzo.com>,
        Roman Pen <r.peniaev@...il.com>,
        Joonas Lahtinen <joonas.lahtinen@...ux.intel.com>,
        zijun_hu <zijun_hu@....com>,
        Joonsoo Kim <iamjoonsoo.kim@....com>,
        David Rientjes <rientjes@...gle.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 3/3] percpu: improve allocation success rate for
 non-GFP_KERNEL callers

On Mon, Feb 27, 2017 at 7:25 AM, Michal Hocko <mhocko@...nel.org> wrote:
>         /*
>          * No space left.  Create a new chunk.  We don't want multiple
>          * tasks to create chunks simultaneously.  Serialize and create iff
>          * there's still no empty chunk after grabbing the mutex.
>          */
>         if (is_atomic)
>                 goto fail;
>
> right before pcpu_populate_chunk so is this actually a problem?

Yes, this prevents atomic callers from creating new pcpu chunks and so causes
"atomic" allocations to fail more easily.
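
For readers without mm/percpu.c in front of them, here is a toy, userspace-only
model of the decision quoted above -- it is not kernel code, and
chunk_has_space()/create_chunk() are made-up stand-ins -- just to illustrate why
a caller that may not sleep fails as soon as every existing chunk is full:

/*
 * Toy model of the pcpu_alloc() decision discussed above.  Nothing here
 * is kernel code: chunk_has_space() and create_chunk() are hypothetical
 * stand-ins.  The point is only that an atomic caller cannot take the
 * sleeping path that creates a new chunk.
 */
#include <stdbool.h>
#include <stdio.h>

static bool chunk_has_space(void) { return false; } /* pretend all chunks are full */
static void create_chunk(void)    { /* in the kernel: mutex + page allocations, may sleep */ }

static int pcpu_alloc_model(bool is_atomic)
{
	if (chunk_has_space())
		return 0;		/* found room in an existing chunk */
	if (is_atomic)
		return -1;		/* the "goto fail" quoted above */
	create_chunk();			/* sleeping path, forbidden for atomic callers */
	return 0;			/* a retry would now find space */
}

int main(void)
{
	printf("atomic alloc:   %s\n", pcpu_alloc_model(true)  ? "fails" : "succeeds");
	printf("sleeping alloc: %s\n", pcpu_alloc_model(false) ? "fails" : "succeeds");
	return 0;
}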

>> By the way, I now noticed the might_sleep() in alloc_vmap_area() which makes
>> it unsafe to call vmalloc* in GFP_ATOMIC contexts. It was added recently:
>
> Do we call alloc_vmap_area from true atomic contexts (i.e. from under
> spinlocks etc.)? I thought this was a no-go, and that GFP_NOWAIT resp.
> GFP_ATOMIC were more about an optimistic request resp. access to memory
> reserves rather than true atomicity requirements.

In the call path that I am trying to fix, the caller uses the GFP_NOWAIT mask.
The caller is holding a spinlock (request_queue->queue_lock), so we can't afford
to sleep.
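
To make the constraint concrete, here is a minimal schematic -- not the actual
block-layer path being fixed; example_lock is a placeholder standing in for
request_queue->queue_lock -- of an allocation made while a spinlock is held.
GFP_KERNEL is not an option inside the critical section, so the caller is stuck
with GFP_NOWAIT and whatever failure rate that brings:

/*
 * Schematic only: a placeholder lock, not the real call path.  Once the
 * spinlock is held, sleeping is forbidden, so the percpu allocation has
 * to pass GFP_NOWAIT and must tolerate failure.
 */
#include <linux/spinlock.h>
#include <linux/percpu.h>
#include <linux/gfp.h>

static DEFINE_SPINLOCK(example_lock);

static void __percpu *alloc_under_lock(void)
{
	void __percpu *p;

	spin_lock_irq(&example_lock);
	/* may not sleep here: GFP_KERNEL would be a bug under the lock */
	p = __alloc_percpu_gfp(sizeof(u64), __alignof__(u64), GFP_NOWAIT);
	spin_unlock_irq(&example_lock);

	return p;	/* NULL if no populated chunk had room, per the discussion above */
}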
