Message-ID: <CAHbLzkq20hzLdYM-EMOfWRqPOr+OQF8uq5yWR=Yb6vQY51LKwg@mail.gmail.com>
Date:   Wed, 19 Feb 2020 09:01:23 -0800
From:   Yang Shi <shy828301@...il.com>
To:     David Rientjes <rientjes@...gle.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
        Mike Rapoport <rppt@...ux.ibm.com>,
        Jeremy Cline <jcline@...hat.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Linux MM <linux-mm@...ck.org>
Subject: Re: [patch 1/2] mm, shmem: add thp fault alloc and fallback stats

On Tue, Feb 18, 2020 at 7:44 PM David Rientjes <rientjes@...gle.com> wrote:
>
> On Tue, 18 Feb 2020, Yang Shi wrote:
>
> > > diff --git a/mm/shmem.c b/mm/shmem.c
> > > --- a/mm/shmem.c
> > > +++ b/mm/shmem.c
> > > @@ -1502,9 +1502,8 @@ static struct page *shmem_alloc_page(gfp_t gfp,
> > >         return page;
> > >  }
> > >
> > > -static struct page *shmem_alloc_and_acct_page(gfp_t gfp,
> > > -               struct inode *inode,
> > > -               pgoff_t index, bool huge)
> > > +static struct page *shmem_alloc_and_acct_page(gfp_t gfp, struct inode *inode,
> > > +               pgoff_t index, bool fault, bool huge)
> > >  {
> > >         struct shmem_inode_info *info = SHMEM_I(inode);
> > >         struct page *page;
> > > @@ -1518,9 +1517,11 @@ static struct page *shmem_alloc_and_acct_page(gfp_t gfp,
> > >         if (!shmem_inode_acct_block(inode, nr))
> > >                 goto failed;
> > >
> > > -       if (huge)
> > > +       if (huge) {
> > >                 page = shmem_alloc_hugepage(gfp, info, index);
> > > -       else
> > > +               if (!page && fault)
> > > +                       count_vm_event(THP_FAULT_FALLBACK);
> > > +       } else
> > >                 page = shmem_alloc_page(gfp, info, index);
> > >         if (page) {
> > >                 __SetPageLocked(page);
> > > @@ -1832,11 +1833,10 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
> > >         }
> > >
> > >  alloc_huge:
> > > -       page = shmem_alloc_and_acct_page(gfp, inode, index, true);
> > > +       page = shmem_alloc_and_acct_page(gfp, inode, index, vmf, true);
> > >         if (IS_ERR(page)) {
> > >  alloc_nohuge:
> > > -               page = shmem_alloc_and_acct_page(gfp, inode,
> > > -                                                index, false);
> > > +               page = shmem_alloc_and_acct_page(gfp, inode, index, vmf, false);
> > >         }
> > >         if (IS_ERR(page)) {
> > >                 int retry = 5;
> > > @@ -1871,8 +1871,11 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
> > >
> > >         error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg,
> > >                                             PageTransHuge(page));
> > > -       if (error)
> > > +       if (error) {
> > > +               if (vmf && PageTransHuge(page))
> > > +                       count_vm_event(THP_FAULT_FALLBACK);
> > >                 goto unacct;
> > > +       }
> > >         error = shmem_add_to_page_cache(page, mapping, hindex,
> > >                                         NULL, gfp & GFP_RECLAIM_MASK);
> > >         if (error) {
> > > @@ -1883,6 +1886,8 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
> > >         mem_cgroup_commit_charge(page, memcg, false,
> > >                                  PageTransHuge(page));
> > >         lru_cache_add_anon(page);
> > > +       if (vmf && PageTransHuge(page))
> > > +               count_vm_event(THP_FAULT_ALLOC);
> >
> > I think shmem THP allocation is accounted as THP_FILE_ALLOC, and that
> > accounting is already done in shmem_add_to_page_cache(), so this looks
> > like a double count.
> >
>
> I think we can either include file allocations in both thp_fault_alloc and
> thp_fault_fallback or exclude them from both.  I don't think we can
> account for only one of them.

How about a third option: adding THP_FILE_FALLBACK?

According to past discussion with Hugh and Kirill, shmem/file THP is
treated separately from anonymous THP, and they have separate enabling
knobs (/sys/kernel/mm/transparent_hugepage/enabled only enables
anonymous THP). Since we already have THP_FILE_ALLOC for shmem THP
allocations, IMHO it makes more sense to add a dedicated FALLBACK
counter for them. And this wouldn't change the existing behavior either.
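
As a rough sketch only (not an actual patch; exact placement, context
and ifdef guards are approximate), the new counter would sit next to the
existing THP_FILE_ALLOC entry and get bumped where the huge allocation
fails, so no extra "fault" parameter would be needed:

    /* include/linux/vm_event_item.h: next to THP_FILE_ALLOC, under the
     * same ifdef that already guards the file THP counters */
    THP_FILE_FALLBACK,

    /* mm/shmem.c: shmem_alloc_and_acct_page() */
    if (huge) {
            page = shmem_alloc_hugepage(gfp, info, index);
            if (!page)
                    count_vm_event(THP_FILE_FALLBACK);
    } else {
            page = shmem_alloc_page(gfp, info, index);
    }

Plus the matching "thp_file_fallback" string in vmstat_text so it shows
up in /proc/vmstat. That would keep THP_FAULT_ALLOC/THP_FAULT_FALLBACK
purely for anonymous THP and mirror the existing THP_FILE_ALLOC
accounting.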

>
> > >
> > >         spin_lock_irq(&info->lock);
> > >         info->alloced += compound_nr(page);
> > >
> >
