Message-ID: <55C40C08.8010706@jp.fujitsu.com>
Date:	Fri, 7 Aug 2015 10:38:16 +0900
From:	Kamezawa Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
To:	Vladimir Davydov <vdavydov@...allels.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	Johannes Weiner <hannes@...xchg.org>,
	Michal Hocko <mhocko@...nel.org>,
	Minchan Kim <minchan@...nel.org>,
	Rik van Riel <riel@...hat.com>, Mel Gorman <mgorman@...e.de>,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/3] Make workingset detection logic memcg aware

On 2015/08/06 17:59, Vladimir Davydov wrote:
> On Wed, Aug 05, 2015 at 10:34:58AM +0900, Kamezawa Hiroyuki wrote:
>
>> Reading the discussion, I feel that storing more data is difficult, too.
>
> Yep, even with the current 16-bit memcg id. Things would get even worse
> if we wanted to extend it one day (will we?)
>
>>
>> I wonder whether, rather than collecting more data, a rough calculation could help the situation.
>> for example,
>>
>>     (refault_distance calculated in the zone) * memcg_reclaim_ratio < size of the memcg's active list
>>
>> If either the per-zone or the per-memcg calculation returns true, the refault is treated as true.
>>
>> memcg_reclaim_ratio is the percentage of pages scanned in the memcg relative to the zone.
>
> This particular formula wouldn't work, I'm afraid. If there are two
> isolated cgroups issuing local reclaim on the same zone, the refault
> distance needed for activation would be reduced by half for no apparent
> reason.

Hmm, you mean that activation in a memcg implies activation in the global LRU, and that this is
not a valid reason. The current implementation has the same issue, right?

I.e., when a container has been hitting its limit for a while and then a file cache page is
pushed out but comes back soon, it can be activated too easily.

I'd like to confirm what you want to do.

  1) Avoid activating a file cache page when it was evicted because of the memcg's local limit.
  2) Maintain the active/inactive ratio in a memcg properly, as the global LRU does.
  3) Reclaim shadow entries at the proper time.

All of them? Hmm. It seems that mixing records of global memory pressure with records of local
memory pressure is just wrong.

Now, the record is
    
    eviction | node | zone | 2bit.

How about changing this to:

         0 | eviction | node | zone | 2bit
         1 | eviction | memcgid     | 2bit

Assume each memcg has an eviction counter which ignores node/zone, i.e. memcg-local reclaim
happens against the memcg, not against a zone.
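
To make the bit layout concrete, here is a rough sketch of the packing in C. The field
widths, the position of the flag bit, and the helper names (pack_zone_shadow() and
friends) are all made up for illustration; they are not taken from mm/workingset.c:

    /*
     * Hypothetical shadow entry layout (widths chosen arbitrarily):
     *
     *   bit 63       : 0 = zone record, 1 = memcg record
     *   zone record  : | 0 | eviction (55) | node (4) | zone (2) | 2 low bits |
     *   memcg record : | 1 | eviction (45) | memcgid (16)        | 2 low bits |
     *
     * The 2 low bits stay free, as in the current scheme (presumably
     * for the radix-tree exceptional-entry tagging).
     */
    #define SHADOW_MEMCG_FLAG       (1UL << 63)

    /* node must fit in 4 bits, zone in 2; eviction is stored mod 2^55 */
    static unsigned long pack_zone_shadow(unsigned long eviction,
                                          unsigned int node, unsigned int zone)
    {
            return ((eviction << 8) & ~SHADOW_MEMCG_FLAG) |
                   ((unsigned long)node << 4) | ((unsigned long)zone << 2);
    }

    /* eviction is stored mod 2^45 */
    static unsigned long pack_memcg_shadow(unsigned long eviction,
                                           unsigned short memcgid)
    {
            return SHADOW_MEMCG_FLAG |
                   ((eviction << 18) & ~SHADOW_MEMCG_FLAG) |
                   ((unsigned long)memcgid << 2);
    }

    static int shadow_is_memcg(unsigned long shadow)
    {
            return !!(shadow & SHADOW_MEMCG_FLAG);
    }

Unpacking is the mirror image, e.g. (shadow >> 2) & 0xffff recovers the memcg id
from a memcg record.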

At page-in:
         if (the 1st bit is 0)
                 compare the eviction counter with the zone's counter and activate the page if needed.
         else if (the 1st bit is 1)
                 compare the eviction counter with the memcg's counter (if the memcg still exists):
                 if (current memcg == recorded memcg && the eviction distance is okay)
                         activate the page.
                 else
                         keep the page inactive.
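
In C, the page-in check could look roughly like the following, building on the packing
sketch above. The function name and signature are hypothetical (the real kernel entry
point, workingset_refault(), looks different); masking the subtraction makes the
distance wrap correctly with the truncated counters:

    #define ZONE_EVICTION_MASK      ((1UL << 55) - 1)
    #define MEMCG_EVICTION_MASK     ((1UL << 45) - 1)

    /* Hypothetical check: should a refaulted page be activated? */
    static int refault_should_activate(unsigned long shadow,
                                       unsigned short cur_memcgid,
                                       unsigned long zone_evictions,
                                       unsigned long memcg_evictions,
                                       unsigned long active_list_size)
    {
            unsigned long eviction, distance;

            if (!shadow_is_memcg(shadow)) {
                    /* global record: measure the distance on the zone's counter */
                    eviction = (shadow >> 8) & ZONE_EVICTION_MASK;
                    distance = (zone_evictions - eviction) & ZONE_EVICTION_MASK;
            } else {
                    /* memcg record: only meaningful inside the same memcg */
                    if (((shadow >> 2) & 0xffff) != cur_memcgid)
                            return 0;           /* keep the page inactive */
                    eviction = (shadow >> 18) & MEMCG_EVICTION_MASK;
                    distance = (memcg_evictions - eviction) & MEMCG_EVICTION_MASK;
            }
            /* "eviction distance is okay": the page would fit in the active list */
            return distance < active_list_size;
    }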
       
At page-out:
         if (global memory pressure)
                 record the eviction id using the zone's counter.
         else if (memcg-local memory pressure)
                 record the eviction id using the memcg's counter.
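
And the page-out side, again only as a sketch on top of the packing helpers above.
Which counter to use would come from the reclaim context (global vs. memcg-limit
reclaim); here that is reduced to a plain flag:

    /* Hypothetical: build the shadow entry to store at eviction time. */
    static unsigned long record_eviction(int global_pressure,
                                         unsigned int node, unsigned int zone,
                                         unsigned short memcgid,
                                         unsigned long *zone_counter,
                                         unsigned long *memcg_counter)
    {
            if (global_pressure)
                    return pack_zone_shadow((*zone_counter)++, node, zone);
            return pack_memcg_shadow((*memcg_counter)++, memcgid);
    }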

With this:
    1) Locally reclaimed pages cannot be activated unless they are refaulted in the same memcg.
       In that case, activating them within the memcg has some meaning.

    2) Under global memory pressure, the distance is properly calculated from global system state;
       global reclaim can ignore memcg behavior.

As for the shadow entries themselves, kmemcg should take care of them...


Thanks,
-Kame




