Message-ID: <20180222163015.GQ30681@dhcp22.suse.cz>
Date:   Thu, 22 Feb 2018 17:30:15 +0100
From:   Michal Hocko <mhocko@...nel.org>
To:     Andrey Ryabinin <aryabinin@...tuozzo.com>
Cc:     Andrew Morton <akpm@...ux-foundation.org>,
        Shakeel Butt <shakeelb@...gle.com>,
        Cgroups <cgroups@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Linux MM <linux-mm@...ck.org>,
        Johannes Weiner <hannes@...xchg.org>,
        Vladimir Davydov <vdavydov.dev@...il.com>
Subject: Re: [PATCH v5 2/2] mm/memcontrol.c: Reduce reclaim retries in
 mem_cgroup_resize_limit()

On Thu 22-02-18 19:01:58, Andrey Ryabinin wrote:
> 
> 
> On 02/22/2018 06:44 PM, Michal Hocko wrote:
> > On Thu 22-02-18 18:38:11, Andrey Ryabinin wrote:
> 
> >>>>
> >>>> with the patch:
> >>>> best: 1.04  secs, 9.7G reclaimed
> >>>> worst: 2.2 secs, 16G reclaimed.
> >>>>
> >>>> without:
> >>>> best: 5.4 sec, 35G reclaimed
> >>>> worst: 22.2 sec, 136G reclaimed
> >>>
> >>> Could you also compare how much memory do we reclaim with/without the
> >>> patch?
> >>>
> >>
> >> I did and I wrote the results. Please look again.
> > 
> > I must have forgotten. Care to point me to the message-id?
> 
> The results are quoted right above, literally above. Raise your eyes
> up. message-id 0927bcab-7e2c-c6f9-d16a-315ac436ba98@...tuozzo.com

OK, I see. We were talking about 2 different things I guess.

> I write it here again:
> 
> with the patch:
>  best: 9.7G reclaimed
>  worst: 16G reclaimed
> 
> without:
>  best: 35G reclaimed
>  worst: 136G reclaimed
> 
> Or you asking about something else? If so, I don't understand what you
> want.

Well, those numbers do not tell us much, right? You have 4 concurrent
readers, each reading its own 1G file in a loop. The longer you keep that
running, the more pages you reclaim, of course. But you are not comparing
the same amount of work.

My main concern about the patch is that it might over-reclaim a lot if
we have a workload which also frees memory rather than constantly adding
more easily reclaimable page cache. I realize such a test is not easy
to construct.

I have already said that I will not block the patch but it should be at
least explained why a larger batch makes a difference.
-- 
Michal Hocko
SUSE Labs
