Message-ID: <dfbf05d1-daff-e855-f4fd-e802614b79c4@bytedance.com>
Date:   Fri, 4 Aug 2023 21:15:57 +0800
From:   Chuyi Zhou <zhouchuyi@...edance.com>
To:     Michal Hocko <mhocko@...e.com>
Cc:     hannes@...xchg.org, roman.gushchin@...ux.dev, ast@...nel.org,
        daniel@...earbox.net, andrii@...nel.org, muchun.song@...ux.dev,
        bpf@...r.kernel.org, linux-kernel@...r.kernel.org,
        wuyun.abel@...edance.com, robin.lu@...edance.com
Subject: Re: [RFC PATCH 1/2] mm, oom: Introduce bpf_select_task

Hello,

On 2023/8/4 19:29, Michal Hocko wrote:
> On Fri 04-08-23 17:38:03, Chuyi Zhou wrote:
>> This patch adds a new hook, bpf_select_task, in oom_evaluate_task. It
>> takes oc and the currently iterated task as parameters and returns a
>> result indicating which one is selected by the bpf program.
>>
>> Although bpf_select_task is used to bypass the default method, there
>> are some existing rules that should be obeyed. Specifically, we skip
>> "unkillable" tasks (e.g., kthreads, MMF_OOM_SKIP, in_vfork()). So we
>> do not consider tasks with the lowest score returned by oom_badness,
>> unless that score was caused by OOM_SCORE_ADJ_MIN.
> 
> Is this really necessary? I do get why we need to preserve the
> OOM_SCORE_ADJ_* semantics for the in-kernel oom selection logic, but
> why should an arbitrary oom policy care? Look at it from the point of
> view of an arbitrary user-space-based policy: it just picks a task or
> memcg and kills tasks by sending the SIGKILL (or maybe SIGTERM first)
> signal. oom_score constraints will not prevent anybody from doing that.

Sorry, some of my wording may have misled you.

I do agree the bpf interface should bypass the current OOM_SCORE_ADJ_*
logic. What I meant to say is that bpf can select a task even if it was
set to OOM_SCORE_ADJ_MIN.
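
For illustration, a rough sketch of such a policy on the bpf side,
assuming a CO-RE build against vmlinux.h. The attach point
("oom_policy/select_task"), the hook prototype, and the return
convention are placeholders of mine, not something this RFC has
settled:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include <bpf/bpf_core_read.h>

/* Hypothetical program for the proposed bpf_select_task hook.
 * Returning nonzero selects @task over the previously chosen task;
 * zero keeps the old choice.
 */
SEC("oom_policy/select_task")	/* placeholder section name */
int BPF_PROG(select_task, struct oom_control *oc, struct task_struct *task)
{
	struct task_struct *chosen = BPF_CORE_READ(oc, chosen);
	unsigned long new_vm, old_vm;

	if (!chosen)
		return 1;	/* first candidate: take it */

	/* Pick whichever task maps more virtual memory. oom_score_adj
	 * is never consulted, so a task with OOM_SCORE_ADJ_MIN can
	 * still be selected.
	 */
	new_vm = BPF_CORE_READ(task, mm, total_vm);
	old_vm = BPF_CORE_READ(chosen, mm, total_vm);
	return new_vm > old_vm;
}

char LICENSE[] SEC("license") = "GPL";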

> 
> tsk_is_oom_victim (and MMF_OOM_SKIP) is a slightly different case, but
> not by much. The primary motivation is to prevent selecting new oom
> victims while there is one already being killed. This is a reasonable
> heuristic, especially with the async oom reclaim (oom_reaper). It also
> reduces the use of the oom emergency memory reserves to some degree,
> but since those are not absolute this is no longer the primary
> motivation. _But_ I can imagine that some policies might be much more
> aggressive and allow selecting new victims if preexisting ones are not
> being killed in time.
> 
> oom_unkillable_task is a general sanity check, so it should remain in
> place.
> 
> I am not really sure about oom_task_origin. That is just a very weird
> case and I guess it wouldn't hurt to keep it in the generic path.
> 
> All that being said, I think we want something like the following (very
> much pseudo-code). I have no idea what the proper way to define BPF
> hooks is, though, so help from the BPF maintainers would be more than
> handy.
> ---
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 612b5597d3af..c9e04be52700 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -317,6 +317,22 @@ static int oom_evaluate_task(struct task_struct *task, void *arg)
>   	if (!is_memcg_oom(oc) && !oom_cpuset_eligible(task, oc))
>   		goto next;
>   
> +	/*
> +	 * If task is allocating a lot of memory and has been marked to be
> +	 * killed first if it triggers an oom, then select it.
> +	 */
> +	if (oom_task_origin(task)) {
> +		points = LONG_MAX;
> +		goto select;
> +	}
> +
> +	switch (bpf_oom_evaluate_task(task, oc, &points)) {
> +		case -EOPNOTSUPP: break; /* No BPF policy */
> +		case -EBUSY: goto abort; /* abort search process */
> +		case 0: goto next; /* ignore process */
> +		default: goto select; /* note the task */
> +	}

Why do we need to change the *points* value if we do not care about
oom_badness? Is it used to record some state? If so, we could record
that through a bpf map.
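
For example (a sketch only; the map layout and helper below are made up
for illustration, and it assumes the same headers as the sketch above),
the policy could keep its own bookkeeping in a bpf map instead of going
through *points*:

/* Illustrative per-policy scratch state kept in a BPF array map
 * rather than reported back through *points*.
 */
struct policy_state {
	__u32 best_pid;
	__u64 best_vm;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, __u32);
	__type(value, struct policy_state);
} oom_state SEC(".maps");

/* Remember @pid as the best candidate so far if it maps more memory
 * than the previously recorded one. Called from the policy program.
 */
static __always_inline bool remember_if_better(__u32 pid, __u64 vm)
{
	__u32 key = 0;
	struct policy_state *st;

	st = bpf_map_lookup_elem(&oom_state, &key);
	if (!st || vm <= st->best_vm)
		return false;

	st->best_pid = pid;
	st->best_vm = vm;
	return true;
}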

> +
>   	/*
>   	 * This task already has access to memory reserves and is being killed.
>   	 * Don't allow any other task to have access to the reserves unless
> @@ -329,15 +345,6 @@ static int oom_evaluate_task(struct task_struct *task, void *arg)
>   		goto abort;
>   	}
>   
> -	/*
> -	 * If task is allocating a lot of memory and has been marked to be
> -	 * killed first if it triggers an oom, then select it.
> -	 */
> -	if (oom_task_origin(task)) {
> -		points = LONG_MAX;
> -		goto select;
> -	}
> -
>   	points = oom_badness(task, oc->totalpages);
>   	if (points == LONG_MIN || points < oc->chosen_points)
>   		goto next;

Thanks for your advice. I'm glad to follow your suggestions in the next
version.
