Date:	Tue, 23 Sep 2014 14:28:49 +0000
From:	朱辉 <zhuhui@...omi.com>
To:	Weijie Yang <weijie.yang.kh@...il.com>
CC:	Greg KH <gregkh@...uxfoundation.org>,
	"rientjes@...gle.com" <rientjes@...gle.com>,
	"vinayakm.list@...il.com" <vinayakm.list@...il.com>,
	"weijie.yang@...sung.com" <weijie.yang@...sung.com>,
	"devel@...verdev.osuosl.org" <devel@...verdev.osuosl.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"teawater@...il.com" <teawater@...il.com>
Subject: Re: [PATCH] Fix the issue that lowmemorykiller falls into a cycle
 trying to kill a task

On 09/23/14 16:00, Weijie Yang wrote:
> On Tue, Sep 23, 2014 at 12:48 PM, 朱辉 <zhuhui@...omi.com> wrote:
>>
>>
>> On 09/23/14 12:18, Greg KH wrote:
>>> On Tue, Sep 23, 2014 at 10:57:09AM +0800, Hui Zhu wrote:
>>>> The cause of this issue is that when free memory is low and a lot of tasks
>>>> are trying to shrink memory, the task that is killed by lowmemorykiller
>>>> cannot get CPU time to exit.
>>>>
>>>> Fix this issue by changing the scheduling policy to SCHED_FIFO in
>>>> lowmemorykiller if a task's TIF_MEMDIE flag is set.
>>>>
>>>> Signed-off-by: Hui Zhu <zhuhui@...omi.com>
>>>> ---
>>>>    drivers/staging/android/lowmemorykiller.c | 4 ++++
>>>>    1 file changed, 4 insertions(+)
>>>>
>>>> diff --git a/drivers/staging/android/lowmemorykiller.c b/drivers/staging/android/lowmemorykiller.c
>>>> index b545d3d..ca1ffac 100644
>>>> --- a/drivers/staging/android/lowmemorykiller.c
>>>> +++ b/drivers/staging/android/lowmemorykiller.c
>>>> @@ -129,6 +129,10 @@ static unsigned long lowmem_scan(struct shrinker *s, struct shrink_control *sc)
>>>>
>>>>               if (test_tsk_thread_flag(p, TIF_MEMDIE) &&
>>>>                   time_before_eq(jiffies, lowmem_deathpending_timeout)) {
>>>> +                    struct sched_param param = { .sched_priority = 1 };
>>>> +
>>>> +                    if (p->policy == SCHED_NORMAL)
>>>> +                            sched_setscheduler(p, SCHED_FIFO, &param);
>>>
>>> This seems really specific to a specific scheduler pattern now.  Isn't
>>> there some other way to resolve this?
>
> hui, how about modifying lowmem_deathpending_timeout instead, so we don't
> touch the scheduler at all?

I tried changing the line "lowmem_deathpending_timeout = jiffies + HZ" to 
"lowmem_deathpending_timeout = jiffies + HZ * 10",
but the issue could still be reproduced sometimes.
Could you give me some comments on this part?

>
>> I tried letting the task that calls lowmemorykiller sleep for some time
>> when it tries to kill the same task, but that didn't work.
>> I think the issue is that free memory is so low that more and more
>> tasks end up calling lowmemorykiller.
>
> I am not opposed to the idea that the task selected to be killed
> should exit ASAP.
>
> I want to make it clear what the problem with the existing code is and
> what effect we get by applying this patch.
> 1. Is the LMK kill count increased, and can it be reduced by this patch?
I think free memory will grow back faster with this patch than without it.
> 2. Do apps become more sluggish?
I didn't observe that in my tests.

>
> By the way, do we also need to modify out_of_memory(), which likewise
> tries to kill tasks?

I am not sure, because LMK handles memory pressure earlier than OOM.
But I don't think this issue affects OOM, because OOM serializes through oom_zonelist_trylock() and oom_zonelist_unlock().

Thanks,
Hui

>
>> Thanks,
>> Hui
>>
>>>
>>> thanks,
>>>
>>> greg k-h
>>>
>
