Message-ID: <a04733af-5ff1-a1b4-527d-68b28a037231@collabora.com>
Date:   Tue, 12 Apr 2022 21:20:29 +0300
From:   Dmitry Osipenko <dmitry.osipenko@...labora.com>
To:     Andrey Grodzovsky <andrey.grodzovsky@....com>,
        David Airlie <airlied@...ux.ie>,
        Daniel Vetter <daniel@...ll.ch>,
        Tomeu Vizoso <tomeu.vizoso@...labora.com>,
        Steven Price <steven.price@....com>,
        Rob Herring <robh@...nel.org>,
        Alyssa Rosenzweig <alyssa.rosenzweig@...labora.com>,
        Rob Clark <robdclark@...il.com>
Cc:     dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
        Dmitry Osipenko <digetx@...il.com>
Subject: Re: [PATCH v1] drm/scheduler: Don't kill jobs in interrupt context

On 4/12/22 19:51, Andrey Grodzovsky wrote:
> 
> On 2022-04-11 18:15, Dmitry Osipenko wrote:
>> Interrupt context can't sleep. Drivers like Panfrost and MSM take a
>> mutex when a job is released, and thus that code can sleep. This
>> results in "BUG: scheduling while atomic" if the locks are contended
>> while the job is freed. There is no good reason for releasing the
>> scheduler's jobs in IRQ context, hence use the normal context to fix
>> the trouble.
> 
> 
> I am not sure it is the best idea to leave the job's SW fence
> signalling to be executed in the system_wq context, which is prone to
> delays from executing various work items from around the system. It
> seems better to me to leave the fence signalling within the IRQ
> context and offload only the job freeing, or maybe handle the
> rescheduling to thread context within the drivers' implementation of
> the .free_job callback. I'm not really sure which is better.

We're talking here about killing jobs when the driver destroys a
context, which doesn't feel like it needs to be a fast path. I could
move the signalling into drm_sched_entity_kill_jobs_cb() and use an
unbound wq, but do we really need this for a slow path?
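
A rough sketch of what I mean (illustrative only; it assumes a
work_struct is added to struct drm_sched_job, and the actual patch may
end up looking different):

	/* Process context: taking mutexes in ->free_job() is fine
	 * here. Assumes a "struct work_struct work" member was added
	 * to struct drm_sched_job. */
	static void drm_sched_entity_kill_jobs_work(struct work_struct *wrk)
	{
		struct drm_sched_job *job =
			container_of(wrk, struct drm_sched_job, work);

		drm_sched_fence_finished(job->s_fence);
		job->sched->ops->free_job(job);
	}

	/* Fence callback, potentially IRQ context: only queue the
	 * work, don't signal or free anything here. */
	static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f,
						  struct dma_fence_cb *cb)
	{
		struct drm_sched_job *job =
			container_of(cb, struct drm_sched_job, finish_cb);

		INIT_WORK(&job->work, drm_sched_entity_kill_jobs_work);
		queue_work(system_unbound_wq, &job->work);
	}

That would keep both the signalling and the freeing out of the IRQ
path, which is what I meant by moving it into
drm_sched_entity_kill_jobs_cb() with an unbound wq.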
