Message-ID: <CAGWkznE+NJECwdcyb0d5MOrgjA5Qt-o4-5SmsCstZuYLhunKew@mail.gmail.com>
Date:   Tue, 17 Oct 2023 10:32:59 +0800
From:   Zhaoyang Huang <huangzhaoyang@...il.com>
To:     Andrew Morton <akpm@...ux-foundation.org>
Cc:     "zhaoyang.huang" <zhaoyang.huang@...soc.com>,
        Johannes Weiner <hannes@...xchg.org>,
        Roman Gushchin <roman.gushchin@...ux.dev>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, steve.kang@...soc.com
Subject: Re: [PATCHv6 1/1] mm: optimization on page allocation when CMA enabled

On Tue, Oct 17, 2023 at 6:40 AM Andrew Morton <akpm@...ux-foundation.org> wrote:
>
> On Mon, 16 Oct 2023 15:12:45 +0800 "zhaoyang.huang" <zhaoyang.huang@...soc.com> wrote:
>
> > From: Zhaoyang Huang <zhaoyang.huang@...soc.com>
> >
> > According to the current CMA utilization policy, an alloc_pages(GFP_USER)
> > can 'steal' UNMOVABLE & RECLAIMABLE page blocks with the help of CMA
> > (it passes zone_watermark_ok by counting CMA pages in, but takes U&R
> > blocks in rmqueue), which can make a subsequent alloc_pages(GFP_KERNEL)
> > fail. Solve this by introducing a second watermark check for GFP_MOVABLE,
> > which lets the allocation use CMA when appropriate.
> >
> > -- Free_pages(30MB)
> > |
> > |
> > -- WMARK_LOW(25MB)
> > |
> > -- Free_CMA(12MB)
> > |
> > |
> > --
> >
> > Signed-off-by: Zhaoyang Huang <zhaoyang.huang@...soc.com>
> > ---
> > v6: update comments
>
> The patch itself is identical to the v5 patch.  So either you meant
> "update changelog" above or you sent the wrong diff?
Sorry, it should be "update changelog".
>
> Also, have we resolved any concerns regarding possible performance
> impacts of this change?
I don't think this commit introduces a performance impact, as it only adds
one more path that uses CMA page blocks earlier than before.

 __rmqueue(struct zone *zone, unsigned int order, int migratetype,
                                                unsigned int alloc_flags)
 {
        if (IS_ENABLED(CONFIG_CMA)) {
                if (alloc_flags & ALLOC_CMA &&
-                   zone_page_state(zone, NR_FREE_CMA_PAGES) >
-                   zone_page_state(zone, NR_FREE_PAGES) / 2) {
+                       use_cma_first(zone, order, alloc_flags)) {
                        // the current '1/2' logic is kept, while a path for
                        // using CMA earlier than before is added
                        page = __rmqueue_cma_fallback(zone, order);
                        if (page)
                                return page;
                }
        }
retry:
        // the normal __rmqueue_smallest path is not affected; it can be
        // deemed a fallback path for a __rmqueue_cma_fallback failure
        page = __rmqueue_smallest(zone, order, migratetype);
        if (unlikely(!page)) {
                if (alloc_flags & ALLOC_CMA)
                        page = __rmqueue_cma_fallback(zone, order);

                if (!page && __rmqueue_fallback(zone, order, migratetype,
                                                                alloc_flags))
                        goto retry;
        }
        return page;
}
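
The body of use_cma_first() is not quoted above. Based on the changelog (a
second watermark check that excludes free CMA pages, while keeping the
existing '1/2' heuristic), it presumably looks roughly like the sketch below.
This is only an illustration of the idea, not the actual patch; the helper's
name is taken from the diff, but its body here is an assumption.

/*
 * Illustrative sketch only -- not the actual patch.  A movable allocation
 * should take CMA pageblocks first when the zone would fail its watermark
 * once free CMA pages are excluded, i.e. when the earlier
 * zone_watermark_ok() only passed "with the help of CMA"; otherwise fall
 * back to the existing heuristic of using CMA when it holds more than half
 * of the zone's free pages.
 */
static bool use_cma_first(struct zone *zone, unsigned int order,
                          unsigned int alloc_flags)
{
        unsigned long watermark = wmark_pages(zone,
                                        alloc_flags & ALLOC_WMARK_MASK);

        /* second watermark check, with ALLOC_CMA cleared so that free CMA
         * pages are not counted towards the watermark */
        if (!zone_watermark_ok(zone, order, watermark, 0,
                               alloc_flags & ~ALLOC_CMA))
                return true;

        /* keep the current policy: prefer CMA only when it makes up more
         * than half of the zone's free pages */
        return zone_page_state(zone, NR_FREE_CMA_PAGES) >
               zone_page_state(zone, NR_FREE_PAGES) / 2;
}

With the numbers from the diagram in the changelog (30MB free, 25MB
WMARK_LOW, 12MB of the free pages in CMA), the first check fires:
30MB - 12MB = 18MB of non-CMA free pages is below the 25MB watermark, so
movable allocations are steered into CMA instead of draining the remaining
U&R page blocks.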
