Message-ID: <CAJuCfpGRW3bro7Be6p_MmaqrZgv01GyJfe_T3WfaHj1T0o+3mA@mail.gmail.com>
Date:   Wed, 23 Feb 2022 11:49:37 -0800
From:   Suren Baghdasaryan <surenb@...gle.com>
To:     akpm@...ux-foundation.org
Cc:     hannes@...xchg.org, mhocko@...e.com, pmladek@...e.com,
        peterz@...radead.org, guro@...com, shakeelb@...gle.com,
        minchan@...nel.org, timmurray@...gle.com, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org, kernel-team@...roid.com
Subject: Re: [PATCH v2 1/1] mm: count time in drain_all_pages during direct
 reclaim as memory pressure

On Wed, Feb 23, 2022 at 11:43 AM Suren Baghdasaryan <surenb@...gle.com> wrote:
>
> On Wed, Feb 23, 2022 at 11:40 AM Suren Baghdasaryan <surenb@...gle.com> wrote:
> >
> > When page allocation in the direct reclaim path fails, the system
> > will make one attempt to shrink per-cpu page lists and free pages
> > from high alloc reserves. Draining per-cpu pages into the buddy
> > allocator can be a very slow operation because it is done using
> > workqueues, and the task in direct reclaim waits for all of them to
> > finish before proceeding. Currently this time is not accounted as a
> > psi memory stall.
> >
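
(For context, not part of the patch: the expensive step has the classic
"queue work on every CPU, then wait for all of it" shape, which is why
the caller can stall for so long. A minimal sketch of that pattern in
kernel C; drain_local() here is a hypothetical callback, and the real
code in mm/page_alloc.c queues its own per-cpu work items and flushes
each one rather than using this helper.)

	#include <linux/workqueue.h>

	static void drain_local(struct work_struct *work)
	{
		/* free this CPU's cached pages back to the buddy allocator */
	}

	static void drain_everywhere(void)
	{
		/* queue drain_local() on every online CPU, block until all finish */
		schedule_on_each_cpu(drain_local);
	}
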
> > While testing mobile devices under extreme memory pressure, when
> > allocations were failing during direct reclaim, we noticed that psi
> > events which would be expected in such conditions were not triggered.
> > After profiling these cases it was determined that the reason for
> > missing psi events was that a big chunk of time spent in direct
> > reclaim is not accounted as memory stall, therefore psi would not
> > reach the levels at which an event is generated. Further investigation
> > revealed that the bulk of that unaccounted time was spent inside
> > the drain_all_pages call.
> >
> > A typical captured case when drain_all_pages path gets activated:
> >
> > __alloc_pages_slowpath  took 44,644,613ns
> >     __perform_reclaim   took    751,668ns (1.7%)
> >     drain_all_pages     took 43,887,167ns (98.3%)
> >
> > PSI in this case records the time spent in __perform_reclaim but
> > ignores drain_all_pages; IOW, it misses 98.3% of the time spent in
> > __alloc_pages_slowpath.
> >
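
(For reference: the psi annotation being moved is just an enter/leave
pair bracketing the region whose latency should count as a memory
stall. A minimal usage sketch; slow_path_example() is a made-up
function:)

	#include <linux/psi.h>

	static void slow_path_example(void)
	{
		unsigned long pflags;

		psi_memstall_enter(&pflags);	/* task now counts as stalled on memory */
		/* ... slow work whose duration should show up in psi ... */
		psi_memstall_leave(&pflags);	/* stop counting; pflags preserves any
						 * enclosing memstall so nesting is safe */
	}
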
> > Annotate __alloc_pages_direct_reclaim in its entirety so that delays
> > from handling page allocation failure in the direct reclaim path are
> > accounted as memory stall.
> >
> > Reported-by: Tim Murray <timmurray@...gle.com>
> > Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>
> > Acked-by: Johannes Weiner <hannes@...xchg.org>
> > ---
> > Changes in v2:
> > - Added captured sample case to show the delay numbers, per Michal Hocko
> > - Moved annotation from __perform_reclaim into __alloc_pages_direct_reclaim,
> >   per Minchan Kim
> >
> >  mm/page_alloc.c | 11 ++++++-----
> >  1 file changed, 6 insertions(+), 5 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 3589febc6d31..2e9fbf28938f 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -4595,13 +4595,12 @@ __perform_reclaim(gfp_t gfp_mask, unsigned int order,
> >                                         const struct alloc_context *ac)
> >  {
> >         unsigned int noreclaim_flag;
> > -       unsigned long pflags, progress;
> > +       unsigned long progress;
> >
> >         cond_resched();
> >
> >         /* We now go into synchronous reclaim */
> >         cpuset_memory_pressure_bump();
> > -       psi_memstall_enter(&pflags);
> >         fs_reclaim_acquire(gfp_mask);
> >         noreclaim_flag = memalloc_noreclaim_save();
> >
> > @@ -4610,7 +4609,6 @@ __perform_reclaim(gfp_t gfp_mask, unsigned int order,
> >
> >         memalloc_noreclaim_restore(noreclaim_flag);
> >         fs_reclaim_release(gfp_mask);
> > -       psi_memstall_leave(&pflags);
> >
> >         cond_resched();
> >
> > @@ -4624,11 +4622,13 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
> >                 unsigned long *did_some_progress)
> >  {
> >         struct page *page = NULL;
> > +       unsigned long pflags;
> >         bool drained = false;
> >
> > +       psi_memstall_enter(&pflags);
> >         *did_some_progress = __perform_reclaim(gfp_mask, order, ac);
> >         if (unlikely(!(*did_some_progress)))
> > -               return NULL;
> > +               goto out;
> >
> >  retry:
> >         page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
> > @@ -4644,7 +4644,8 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
> >                 drained = true;
> >                 goto retry;
> >         }
> > -
> > +       psi_memstall_leave(&pflags);
>
> Oh, psi_memstall_leave should have been *after* the "out" label. Will
> fix and repost.

Fixed in v3: https://lore.kernel.org/all/20220223194812.1299646-1-surenb@google.com/
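
For readers following along, the corrected v3 shape presumably puts the
leave at the shared exit label so both return paths are counted; a rough
sketch of the intended control flow (see the linked v3 for the actual
patch, function parameters elided here):

	static inline struct page *
	__alloc_pages_direct_reclaim(/* ... */)
	{
		struct page *page = NULL;
		unsigned long pflags;
		bool drained = false;

		psi_memstall_enter(&pflags);
		*did_some_progress = __perform_reclaim(gfp_mask, order, ac);
		if (unlikely(!(*did_some_progress)))
			goto out;

	retry:
		page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
		if (!page && !drained) {
			unreserve_highatomic_pageblock(ac, false);
			drain_all_pages(NULL);
			drained = true;
			goto retry;
		}
	out:
		psi_memstall_leave(&pflags);	/* after the label: both exits counted */
		return page;
	}
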

>
> > +out:
> >         return page;
> >  }
> >
> > --
> > 2.35.1.473.g83b2b277ed-goog
> >
