Date:   Sat, 13 Mar 2021 00:46:11 +0800
From:   Muchun Song <songmuchun@...edance.com>
To:     Johannes Weiner <hannes@...xchg.org>
Cc:     Roman Gushchin <guro@...com>, Michal Hocko <mhocko@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Shakeel Butt <shakeelb@...gle.com>,
        Vladimir Davydov <vdavydov.dev@...il.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Linux Memory Management List <linux-mm@...ck.org>,
        Xiongchun duan <duanxiongchun@...edance.com>
Subject: Re: [External] Re: [PATCH v3 3/4] mm: memcontrol: use obj_cgroup APIs
 to charge kmem pages

On Fri, Mar 12, 2021 at 11:59 PM Johannes Weiner <hannes@...xchg.org> wrote:
>
> On Fri, Mar 12, 2021 at 05:22:55PM +0800, Muchun Song wrote:
> > On Thu, Mar 11, 2021 at 6:05 AM Johannes Weiner <hannes@...xchg.org> wrote:
> > > > @@ -6828,7 +6857,7 @@ static void uncharge_batch(const struct uncharge_gather *ug)
> > > >
> > > >  static void uncharge_page(struct page *page, struct uncharge_gather *ug)
> > > >  {
> > > > -     unsigned long nr_pages;
> > > > +     unsigned long nr_pages, nr_kmem;
> > > >       struct mem_cgroup *memcg;
> > > >
> > > >       VM_BUG_ON_PAGE(PageLRU(page), page);
> > > > @@ -6836,34 +6865,44 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
> > > >       if (!page_memcg_charged(page))
> > > >               return;
> > > >
> > > > +     nr_pages = compound_nr(page);
> > > >       /*
> > > >        * Nobody should be changing or seriously looking at
> > > > -      * page memcg at this point, we have fully exclusive
> > > > -      * access to the page.
> > > > +      * page memcg or objcg at this point, we have fully
> > > > +      * exclusive access to the page.
> > > >        */
> > > > -     memcg = page_memcg_check(page);
> > > > +     if (PageMemcgKmem(page)) {
> > > > +             struct obj_cgroup *objcg;
> > > > +
> > > > +             objcg = page_objcg(page);
> > > > +             memcg = obj_cgroup_memcg_get(objcg);
> > > > +
> > > > +             page->memcg_data = 0;
> > > > +             obj_cgroup_put(objcg);
> > > > +             nr_kmem = nr_pages;
> > > > +     } else {
> > > > +             memcg = page_memcg(page);
> > > > +             page->memcg_data = 0;
> > > > +             nr_kmem = 0;
> > > > +     }
> > >
> > > Why is all this moved above the uncharge_batch() call?
> >
> > Before calling obj_cgroup_put(), we need to set page->memcg_data
> > to zero. So I moved "page->memcg_data = 0" up here.
>
> Yeah, it makes sense to keep those together, but we can move them both
> down to after the uncharge, right?

Right. I will do that.

>
> > > It separates the pointer manipulations from the refcounting, which
> > > makes the code very difficult to follow.
> > >
> > > > +
> > > >       if (ug->memcg != memcg) {
> > > >               if (ug->memcg) {
> > > >                       uncharge_batch(ug);
> > > >                       uncharge_gather_clear(ug);
> > > >               }
> > > >               ug->memcg = memcg;
> > > > +             ug->dummy_page = page;
> > >
> > > Why this change?
> >
> > Just like ug->memcg, ug->dummy_page does not need to be
> > set on every loop iteration.
>
> Ah, okay. That's a reasonable change, it's just confusing because I
> thought this was a requirement for the new code to work. But I didn't
> see how it relied on that, and it made me think I'm not understanding
> your code ;) It's better to split that into a separate patch.

Sorry for the confusion. I will split that into a separate patch.
Thanks.

>
> > I will rework the code in the next version.
>
> Thanks!
