Message-ID: <e6aea551ec14bcece31c3cbb861afee361547f84.camel@redhat.com>
Date: Thu, 23 Jan 2025 08:34:30 +0100
From: Philipp Stanner <pstanner@...hat.com>
To: Matthew Brost <matthew.brost@...el.com>, Boris Brezillon
<boris.brezillon@...labora.com>
Cc: Tvrtko Ursulin <tursulin@...ulin.net>, Philipp Stanner
<phasta@...nel.org>, Alex Deucher <alexander.deucher@....com>, Christian
König <christian.koenig@....com>, Xinhui Pan
<Xinhui.Pan@....com>, David Airlie <airlied@...il.com>, Simona Vetter
<simona@...ll.ch>, Lucas Stach <l.stach@...gutronix.de>, Russell King
<linux+etnaviv@...linux.org.uk>, Christian Gmeiner
<christian.gmeiner@...il.com>, Frank Binns <frank.binns@...tec.com>, Matt
Coster <matt.coster@...tec.com>, Maarten Lankhorst
<maarten.lankhorst@...ux.intel.com>, Maxime Ripard <mripard@...nel.org>,
Thomas Zimmermann <tzimmermann@...e.de>, Qiang Yu <yuq825@...il.com>, Rob
Clark <robdclark@...il.com>, Sean Paul <sean@...rly.run>, Konrad Dybcio
<konradybcio@...nel.org>, Abhinav Kumar <quic_abhinavk@...cinc.com>,
Dmitry Baryshkov <dmitry.baryshkov@...aro.org>, Marijn Suijten
<marijn.suijten@...ainline.org>, Karol Herbst <kherbst@...hat.com>, Lyude
Paul <lyude@...hat.com>, Danilo Krummrich <dakr@...nel.org>, Rob Herring
<robh@...nel.org>, Steven Price <steven.price@....com>, Liviu Dudau
<liviu.dudau@....com>, Luben Tuikov <ltuikov89@...il.com>, Melissa Wen
<mwen@...lia.com>, Maíra Canal <mcanal@...lia.com>,
Lucas De Marchi <lucas.demarchi@...el.com>, Thomas
Hellström <thomas.hellstrom@...ux.intel.com>, Rodrigo
Vivi <rodrigo.vivi@...el.com>, Sunil Khatri <sunil.khatri@....com>, Lijo
Lazar <lijo.lazar@....com>, Mario Limonciello <mario.limonciello@....com>,
Ma Jun <Jun.Ma2@....com>, Yunxiang Li <Yunxiang.Li@....com>,
amd-gfx@...ts.freedesktop.org, dri-devel@...ts.freedesktop.org,
linux-kernel@...r.kernel.org, etnaviv@...ts.freedesktop.org,
lima@...ts.freedesktop.org, linux-arm-msm@...r.kernel.org,
freedreno@...ts.freedesktop.org, nouveau@...ts.freedesktop.org,
intel-xe@...ts.freedesktop.org
Subject: Re: [PATCH] drm/sched: Use struct for drm_sched_init() params
On Wed, 2025-01-22 at 20:37 -0800, Matthew Brost wrote:
> On Wed, Jan 22, 2025 at 06:04:58PM +0100, Boris Brezillon wrote:
> > On Wed, 22 Jan 2025 16:14:59 +0000
> > Tvrtko Ursulin <tursulin@...ulin.net> wrote:
> >
> > > On 22/01/2025 15:51, Boris Brezillon wrote:
> > > > On Wed, 22 Jan 2025 15:08:20 +0100
> > > > Philipp Stanner <phasta@...nel.org> wrote:
> > > >
> > > > > --- a/drivers/gpu/drm/panthor/panthor_sched.c
> > > > > +++ b/drivers/gpu/drm/panthor/panthor_sched.c
> > > > > @@ -3272,6 +3272,7 @@ group_create_queue(struct panthor_group *group,
> > > > >  					    const struct drm_panthor_queue_create *args)
> > > > >  {
> > > > >  	struct drm_gpu_scheduler *drm_sched;
> > > > > +	struct drm_sched_init_params sched_params;
> > > >
> > > > nit: Could we use a struct initializer instead of a
> > > > memset(0)+field-assignment?
> > > >
> > > > struct drm_sched_init_params sched_params = {
> >
> > Actually, you can even make it const if it's not modified after the
> > declaration.
> >
> > > > 	.ops = &panthor_queue_sched_ops,
> > > > 	.submit_wq = group->ptdev->scheduler->wq,
> > > > 	.num_rqs = 1,
> > > > 	.credit_limit = args->ringbuf_size / sizeof(u64),
> > > > 	.hang_limit = 0,
> > > > 	.timeout = msecs_to_jiffies(JOB_TIMEOUT_MS),
> > > > 	.timeout_wq = group->ptdev->reset.wq,
> > > > 	.name = "panthor-queue",
> > > > 	.dev = group->ptdev->base.dev,
> > > > };
> > >
>
> +2
Yup, getting rid of the memset(), as Danilo also suggested, is surely a
good idea.
I personally don't like mixing declaration and initialization where it
can be avoided (readability), but being able to make it const is
probably a good argument for doing so.
P.
>
> Matt
>
> > > +1 on this as a general approach for the whole series. And I'd drop
> > > the explicit zeros and NULLs. Memsets could then go too.
> > >
> > > Regards,
> > >
> > > Tvrtko
> > >
> > > >
> > > > The same comment applies to the panfrost changes, BTW.
> > > >
> > > > > struct panthor_queue *queue;
> > > > > int ret;
> > > > >
> > > > > @@ -3289,6 +3290,8 @@ group_create_queue(struct panthor_group *group,
> > > > >  	if (!queue)
> > > > >  		return ERR_PTR(-ENOMEM);
> > > > >  
> > > > > +	memset(&sched_params, 0, sizeof(struct drm_sched_init_params));
> > > > > +
> > > > > queue->fence_ctx.id = dma_fence_context_alloc(1);
> > > > > spin_lock_init(&queue->fence_ctx.lock);
> > > > > INIT_LIST_HEAD(&queue->fence_ctx.in_flight_jobs);
> > > > > @@ -3341,17 +3344,23 @@ group_create_queue(struct panthor_group *group,
> > > > >  	if (ret)
> > > > >  		goto err_free_queue;
> > > > >  
> > > > > +	sched_params.ops = &panthor_queue_sched_ops;
> > > > > +	sched_params.submit_wq = group->ptdev->scheduler->wq;
> > > > > +	sched_params.num_rqs = 1;
> > > > >  	/*
> > > > > -	 * Credit limit argument tells us the total number of instructions
> > > > > +	 * The credit limit argument tells us the total number of instructions
> > > > >  	 * across all CS slots in the ringbuffer, with some jobs requiring
> > > > >  	 * twice as many as others, depending on their profiling status.
> > > > >  	 */
> > > > > -	ret = drm_sched_init(&queue->scheduler, &panthor_queue_sched_ops,
> > > > > -			     group->ptdev->scheduler->wq, 1,
> > > > > -			     args->ringbuf_size / sizeof(u64),
> > > > > -			     0, msecs_to_jiffies(JOB_TIMEOUT_MS),
> > > > > -			     group->ptdev->reset.wq,
> > > > > -			     NULL, "panthor-queue", group->ptdev->base.dev);
> > > > > +	sched_params.credit_limit = args->ringbuf_size / sizeof(u64);
> > > > > +	sched_params.hang_limit = 0;
> > > > > +	sched_params.timeout = msecs_to_jiffies(JOB_TIMEOUT_MS);
> > > > > +	sched_params.timeout_wq = group->ptdev->reset.wq;
> > > > > +	sched_params.score = NULL;
> > > > > +	sched_params.name = "panthor-queue";
> > > > > +	sched_params.dev = group->ptdev->base.dev;
> > > > > +
> > > > > +	ret = drm_sched_init(&queue->scheduler, &sched_params);
> > > > >  	if (ret)
> > > > >  		goto err_free_queue;
> >