Message-ID: <5c71ec04-9148-0587-c495-11dbd8f77d67@linux.intel.com>
Date:   Fri, 1 Oct 2021 11:32:16 +0100
From:   Tvrtko Ursulin <tvrtko.ursulin@...ux.intel.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     Intel-gfx@...ts.freedesktop.org, dri-devel@...ts.freedesktop.org,
        linux-kernel@...r.kernel.org,
        Tvrtko Ursulin <tvrtko.ursulin@...el.com>,
        Ingo Molnar <mingo@...hat.com>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>
Subject: Re: [RFC 1/6] sched: Add nice value change notifier


On 01/10/2021 10:04, Tvrtko Ursulin wrote:
> 
> Hi Peter,
> 
> On 30/09/2021 19:33, Peter Zijlstra wrote:
>> On Thu, Sep 30, 2021 at 06:15:47PM +0100, Tvrtko Ursulin wrote:
>>>   void set_user_nice(struct task_struct *p, long nice)
>>>   {
>>>       bool queued, running;
>>> -    int old_prio;
>>> +    int old_prio, ret;
>>>       struct rq_flags rf;
>>>       struct rq *rq;
>>> @@ -6913,6 +6945,9 @@ void set_user_nice(struct task_struct *p, long nice)
>>>        */
>>>       p->sched_class->prio_changed(rq, p, old_prio);
>>> +    ret = atomic_notifier_call_chain(&user_nice_notifier_list, nice, p);
>>> +    WARN_ON_ONCE(ret != NOTIFY_DONE);
>>> +
>>> +    WARN_ON_ONCE(ret != NOTIFY_DONE);
>>> +
>>>   out_unlock:
>>>       task_rq_unlock(rq, p, &rf);
>>>   }
>>
>> No, we're not going to call out to exported, and potentially unbounded,
>> functions under scheduler locks.
> 
> Agreed, that's another good point about why it is even hairier, as I 
> alluded to in general terms in the cover letter.
> 
> Do you have any immediate thoughts on possible alternatives?
> 
> For instance, what if I did a queue_work from set_user_nice and then 
> ran the notifier chain asynchronously from a worker? I haven't yet 
> looked at what repercussions that would have in terms of having to 
> cancel pending workers when tasks exit. I can try to prototype that 
> and see how it would look.
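
Something along these lines is what I had in mind (completely untested
sketch; struct nice_change_work and nice_change_worker are made-up
names, the notifier list is the one from this patch):

  struct nice_change_work {
          struct work_struct work;
          struct task_struct *p; /* holds a reference */
          long nice;
  };

  static void nice_change_worker(struct work_struct *work)
  {
          struct nice_change_work *ncw =
                  container_of(work, struct nice_change_work, work);

          /* Runs in process context, no scheduler locks held. */
          atomic_notifier_call_chain(&user_nice_notifier_list,
                                     ncw->nice, ncw->p);

          put_task_struct(ncw->p);
          kfree(ncw);
  }

And then in set_user_nice, instead of calling the chain under the rq
lock:

          struct nice_change_work *ncw = kmalloc(sizeof(*ncw), GFP_ATOMIC);

          if (ncw) {
                  INIT_WORK(&ncw->work, nice_change_worker);
                  ncw->p = get_task_struct(p);
                  ncw->nice = nice;
                  queue_work(system_wq, &ncw->work);
          }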

Hm, or I could simply move the notifier chain call to after 
task_rq_unlock? That would still leave it running under the tasklist 
lock, so probably still quite bad.
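
I.e., roughly (untested; the early-exit goto path would then need care
so we don't notify when nothing actually changed):

  out_unlock:
          task_rq_unlock(rq, p, &rf);

          ret = atomic_notifier_call_chain(&user_nice_notifier_list, nice, p);
          WARN_ON_ONCE(ret != NOTIFY_DONE);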

Or another option: I stash the tasks aside on a private list (adding a 
new list_head to task_struct), with an elevated task reference count, 
and run the notifier chain outside any locked sections, at the end of 
the setpriority syscall. This way only the syscall caller pays the cost 
of any misbehaving notifiers in the chain. A further improvement could 
be per-task notifiers, but that would grow task_struct even more.
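
Shape-wise something like this, perhaps (very rough and untested; the
nice_work member and notify_nice_changes are invented names):

  /* Assumes a new struct list_head nice_work member in task_struct. */
  static void notify_nice_changes(struct list_head *tasks)
  {
          struct task_struct *p, *next;

          list_for_each_entry_safe(p, next, tasks, nice_work) {
                  list_del_init(&p->nice_work);
                  atomic_notifier_call_chain(&user_nice_notifier_list,
                                             task_nice(p), p);
                  put_task_struct(p); /* ref taken when task was stashed */
          }
  }

  SYSCALL_DEFINE3(setpriority, int, which, int, who, int, niceval)
  {
          LIST_HEAD(niced_tasks);
          ...
          /*
           * set_user_nice() would stash each affected task on niced_tasks
           * (with an elevated reference count) while the locks are held,
           * and the chain then runs here, outside any locked section:
           */
          notify_nice_changes(&niced_tasks);
          ...
  }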

Regards,

Tvrtko

> There is of course the example of ioprio, which handles runtime 
> adjustments via a dedicated system call. But I don't currently feel 
> that a third such syscall would be a good solution. At least I don't 
> see a case for being able to decouple the priority of CPU and GPU 
> computations.
> 
> Have I opened a large can of worms? :)
> 
> Regards,
> 
> Tvrtko
