Message-ID: <CALq1K=KYYXgtK5mRvBO_+Kdxt8nHmq-cquo1Qqj=UdB+TDrueA@mail.gmail.com>
Date:	Thu, 11 Sep 2014 15:50:06 +0300
From:	Leon Romanovsky <leon@...n.nu>
To:	Johannes Weiner <hannes@...xchg.org>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Mel Gorman <mgorman@...e.de>, Vlastimil Babka <vbabka@...e.cz>,
	Linux-MM <linux-mm@...ck.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [patch resend] mm: page_alloc: fix zone allocation fairness on UP

On Thu, Sep 11, 2014 at 3:36 PM, Johannes Weiner <hannes@...xchg.org> wrote:
> On Wed, Sep 10, 2014 at 07:32:20AM +0300, Leon Romanovsky wrote:
>> Hi Johannes,
>>
>>
>> On Tue, Sep 9, 2014 at 4:15 PM, Johannes Weiner <hannes@...xchg.org> wrote:
>>
>> > The zone allocation batches can easily underflow due to higher-order
>> > allocations or spills to remote nodes.  On SMP that's fine, because
>> > underflows are expected from concurrency and dealt with by returning
>> > 0.  But on UP, zone_page_state will just return a wrapped unsigned
>> > long, which will get past the <= 0 check and then consider the zone
>> > eligible until its watermarks are hit.
>> >
>> > 3a025760fc15 ("mm: page_alloc: spill to remote nodes before waking
>> > kswapd") already made the counter-resetting use atomic_long_read() to
>> > accommodate underflows from remote spills, but it didn't go all the way
>> > with it.  Make it clear that these batches are expected to go negative
>> > regardless of concurrency, and use atomic_long_read() everywhere.
>> >
>> > Fixes: 81c0a2bb515f ("mm: page_alloc: fair zone allocator policy")
>> > Reported-by: Vlastimil Babka <vbabka@...e.cz>
>> > Reported-by: Leon Romanovsky <leon@...n.nu>
>> > Signed-off-by: Johannes Weiner <hannes@...xchg.org>
>> > Acked-by: Mel Gorman <mgorman@...e.de>
>> > Cc: "3.12+" <stable@...nel.org>
>> > ---
>> >  mm/page_alloc.c | 7 +++----
>> >  1 file changed, 3 insertions(+), 4 deletions(-)
>> >
>> > Sorry I forgot to CC you, Leon.  Resending with updated tags.
>> >
>> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> > index 18cee0d4c8a2..eee961958021 100644
>> > --- a/mm/page_alloc.c
>> > +++ b/mm/page_alloc.c
>> > @@ -1612,7 +1612,7 @@ again:
>> >         }
>> >
>> >         __mod_zone_page_state(zone, NR_ALLOC_BATCH, -(1 << order));
>> > -       if (zone_page_state(zone, NR_ALLOC_BATCH) == 0 &&
>> > +       if (atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH]) <= 0 &&
>> >             !zone_is_fair_depleted(zone))
>> >                 zone_set_flag(zone, ZONE_FAIR_DEPLETED);
>> >
>> > @@ -5701,9 +5701,8 @@ static void __setup_per_zone_wmarks(void)
>> >                 zone->watermark[WMARK_HIGH] = min_wmark_pages(zone) + (tmp >> 1);
>> >
>> >                 __mod_zone_page_state(zone, NR_ALLOC_BATCH,
>> > -                                     high_wmark_pages(zone) -
>> > -                                     low_wmark_pages(zone) -
>> > -                                     zone_page_state(zone, NR_ALLOC_BATCH));
>> > +                       high_wmark_pages(zone) - low_wmark_pages(zone) -
>> > +                       atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH]));
>> >
>> >                 setup_zone_migrate_reserve(zone);
>> >                 spin_unlock_irqrestore(&zone->lock, flags);
>> >
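For illustration, here is a minimal userspace sketch of the UP failure mode the
changelog describes.  The reader function and the numbers are made up for the
example; it only mimics the clamp-on-SMP / wrap-on-UP behaviour and is not the
kernel code itself:

#include <stdio.h>

/* Stand-in for zone->vm_stat[NR_ALLOC_BATCH]; a plain long for the sketch. */
static long nr_alloc_batch;

/*
 * Simplified reader: on SMP a negative value is clamped to 0 before the
 * unsigned return; on UP that clamp is compiled out, so a negative
 * counter wraps to a huge unsigned value.
 */
static unsigned long read_batch(int smp)
{
	long x = nr_alloc_batch;

	if (smp && x < 0)
		x = 0;
	return x;
}

int main(void)
{
	nr_alloc_batch = 64;		/* batch nearly exhausted */
	nr_alloc_batch -= 1 << 9;	/* one order-9 allocation underflows it */

	unsigned long up = read_batch(0);

	printf("raw = %ld, UP read = %lu\n", nr_alloc_batch, up);
	printf("UP: == 0? %d, <= 0? %d\n", up == 0, up <= 0);
	printf("SMP read = %lu\n", read_batch(1));
	return 0;
}

With the wrapped value neither the old "== 0" test nor a "<= 0" test on the
unsigned result ever fires, so ZONE_FAIR_DEPLETED is never set; reading the
atomic_long_t directly, as the patch does, keeps the sign and makes the
"<= 0" check work on UP as well as SMP.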
>>
>> I think a better way would be to apply Mel's patch
>> https://lkml.org/lkml/2014/9/8/214, which fixes the hidden-cast issue in
>> zone_page_state, and then convert every
>> atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH]) to
>> zone_page_state(zone, NR_ALLOC_BATCH). That would unify access to vm_stat.
>
> It's not that simple.  The counter can go way negative, and we need that
> negative number, not 0, to calculate the reset delta.  As I said in
> response to Mel's patch, we could make the vmstat API signed, but I'm
> not convinced that is reasonable, given that 99% of use cases never
> need negative values.
You are right, I missed that NR_ALLOC_BATCH is used as part of the calculation:
+                       high_wmark_pages(zone) - low_wmark_pages(zone) -
+                       atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH]));
Sorry
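
To put numbers on the reset delta (both values below are made up for the
example):

#include <stdio.h>

int main(void)
{
	long high_minus_low = 1024;	/* hypothetical high - low watermark gap */
	long batch = -448;		/* NR_ALLOC_BATCH after heavy underflow   */

	/* The watermark reset wants to leave the batch at exactly (high - low). */
	long delta_raw     = high_minus_low - batch;	/* uses the raw negative value */
	long delta_clamped = high_minus_low - 0;	/* what a 0-clamped read gives  */

	printf("raw read:     batch becomes %ld\n", batch + delta_raw);	/* 1024 */
	printf("clamped read: batch becomes %ld\n", batch + delta_clamped);	/*  576 */
	return 0;
}

Only the raw, possibly negative reading brings the batch back to the full
high - low budget; a read clamped to 0 would leave it short, which is why the
patch uses atomic_long_read() in __setup_per_zone_wmarks() as well.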


-- 
Leon Romanovsky | Independent Linux Consultant
        www.leon.nu | leon@...n.nu
