Date:	Thu, 9 Jul 2009 20:07:53 +0900
From:	Minchan Kim <minchan.kim@...il.com>
To:	Wu Fengguang <fengguang.wu@...el.com>
Cc:	KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
	Rik van Riel <riel@...hat.com>,
	LKML <linux-kernel@...r.kernel.org>,
	linux-mm <linux-mm@...ck.org>,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [RFC PATCH 1/2] vmscan don't isolate too many pages in a zone

Hi, Wu.

On Thu, Jul 9, 2009 at 5:42 PM, Wu Fengguang <fengguang.wu@...el.com> wrote:
> On Thu, Jul 09, 2009 at 03:01:26PM +0800, KOSAKI Motohiro wrote:
>> Hi
>>
>> > I tried the semaphore-based concurrent direct reclaim throttling, and
>> > got these numbers. The run time is normally 30s, but can sometimes go up
>> > many fold. It seems that there are more hidden problems..
>>
>> Hmm....
>> I think you and I have different priority lists. May I explain why Rik
>> and I decided to use half of the LRU pages?
>>
>> The system has 4GB (=1M pages) of memory. My patch allows 1M/2/32=16384
>> threads. I agree this is very large and inefficient. However, IOW,
>> this is very conservative.
>> I believe it doesn't create a too-strong restriction problem.
>
> Sorry if I caused confusion. I agree on the NR_ISOLATED based throttling.
> It risks much less than limiting the concurrency of direct reclaim.
> Isolating half of the LRU pages normally costs nothing.
>
>> On the other hand, your patch's concurrency restriction is a small
>> constant value (=32).
>> It can be more efficient, but it can also cause regressions. IOW it is a
>> more aggressive approach.
>>
>> e.g.
>> if the system has >100 CPUs, my patch can get enough reclaimers, but
>> your patch leaves tons of idle CPUs.
>
> That's a quick (and clueless) hack to check whether the (very unstable)
> reclaim behavior can be improved by limiting the concurrency. I didn't
> mean to push it any further :)
>
>> And, to recall, the original issue teaches us that this is a rare and
>> somewhat insane workload.
>> So I prioritize:
>>
>> 1. prevent unnecessary OOM
>> 2. no regression to typical workload
>> 3. msgctl11 performance
>
> I totally agree on the above priorities.
>
>>
>> IOW, I don't think msgctl11 performance is so important.
>> May I ask why you think msgctl11 performance is so important?
>
> Now that we have addressed (1)/(2) with your patch, naturally the
> msgctl11 performance problem catches my eye. Strictly speaking,
> I'm not particularly interested in the performance itself, but in
> the obviously high _fluctuations_ in performance. Something bad

Me, too. I have also looked into this problem.
But unfortunately, I can't devote my attention to it until
this weekend.
If you find the cause, please let me know :)

-- 
Kind regards,
Minchan Kim
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
