Date:   Wed, 12 Aug 2020 14:01:03 +0200
From:   Uladzislau Rezki <>
To:     Thomas Gleixner <>
Cc:     Michal Hocko <>,
        Uladzislau Rezki <>,
        LKML <>, RCU <>, Andrew Morton <>,
        Vlastimil Babka <>,
        "Paul E . McKenney" <>,
        Matthew Wilcox <>,
        "Theodore Y . Ts'o" <>,
        Joel Fernandes <>,
        Sebastian Andrzej Siewior <>,
        Oleksiy Avramchenko <>
Subject: Re: [RFC-PATCH 1/2] mm: Add __GFP_NO_LOCKS flag

On Wed, Aug 12, 2020 at 01:38:35PM +0200, Thomas Gleixner wrote:
> Thomas Gleixner <> writes:
> > Thomas Gleixner <> writes:
> >> Michal Hocko <> writes:
> >>> zone->lock should be held for a very limited amount of time.
> >>
> >> Emphasis on should. free_pcppages_bulk() can hold it for quite some time
> >> when a large amount of pages are purged. We surely would have converted
> >> it to a raw lock long time ago otherwise.
> >>
> >> For regular enterprise stuff a few hundred microseconds might qualify as
> >> a limited amount of time. For advanced RT applications that's way beyond
> >> tolerable.
> >
> > Sebastian just tried with zone lock converted to a raw lock and maximum
> > latencies go up by a factor of 7 when putting a bit of stress on the
> > memory subsystem. Just a regular kernel compile kicks them up by a factor
> > of 5. Way out of tolerance.
> >
> > We'll have a look whether it's solely free_pcppages_bulk() and if so we
> > could get away with dropping the lock in the loop.
> So even on !RT and just doing a kernel compile the time spent in
> free_pcppages_bulk() is up to 270 usec.
I suspect that if you measure the latency of zone->lock and its contention
on any embedded device, i.e. hardware not as powerful as a PC, it could
reach milliseconds. IMHO.

> It's not only the loop which processes a large pile of pages, part of it
> is caused by lock contention on zone->lock. Dropping the lock after
> processing a couple of pages does not make it much better if enough CPUs
> are contending on the lock.
Initially I did not propose converting the lock, because I suspected that
from the RT point of view there could be problems. Also, as I mentioned before,
GFP_ATOMIC is not meaningful anymore; that is a bit outside of what GFP_ATOMIC
stands for. But I see your point about "where is the stop line".

That is why I proposed to bail out as late as possible: mm: Add __GFP_NO_LOCKS flag
On the other hand, we have been discussing other options, like converting the
lock. Just to cover as much as possible :)

Thanks Thomas for valuable comments!

Vlad Rezki
