Message-ID: <817d44ea-0f1e-b13a-86d4-3da6a47752bd@amd.com>
Date:   Tue, 12 Jan 2021 12:01:35 +0100
From:   Christian König <christian.koenig@....com>
To:     Jeremy Cline <jcline@...hat.com>
Cc:     Harry Wentland <harry.wentland@....com>,
        Leo Li <sunpeng.li@....com>,
        Alex Deucher <alexander.deucher@....com>,
        David Airlie <airlied@...ux.ie>,
        Daniel Vetter <daniel@...ll.ch>, amd-gfx@...ts.freedesktop.org,
        dri-devel@...ts.freedesktop.org, linux-kernel@...r.kernel.org,
        Timothy Pearson <tpearson@...torengineering.com>
Subject: Re: [PATCH] amdgpu: Avoid sleeping during FPU critical sections

On 11.01.21 16:39, Jeremy Cline wrote:
> Hi,
>
> On Mon, Jan 11, 2021 at 09:53:56AM +0100, Christian König wrote:
>> On 08.01.21 22:58, Jeremy Cline wrote:
>>> dcn20_resource_construct() includes a number of kzalloc(GFP_KERNEL)
>>> calls which can sleep, but kernel_fpu_begin() disables preemption and
>>> sleeping in this context is invalid.
>>>
>>> The only places the FPU appears to be required is in the
>>> init_soc_bounding_box() function and when calculating the
>>> {min,max}_fill_clk_mhz. Narrow the scope to just these two parts to
>>> avoid sleeping while using the FPU.
>>>
>>> Fixes: 7a8a3430be15 ("amdgpu: Wrap FPU dependent functions in dc20")
>>> Cc: Timothy Pearson <tpearson@...torengineering.com>
>>> Signed-off-by: Jeremy Cline <jcline@...hat.com>
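
[As a minimal sketch of the invalid pattern the commit message describes; all
names below are hypothetical and not taken from dcn20_resource.c, and on x86
kernel_fpu_begin()/kernel_fpu_end() come from <asm/fpu/api.h>:]

    #include <linux/errno.h>
    #include <linux/slab.h>
    #include <asm/fpu/api.h>        /* kernel_fpu_begin()/kernel_fpu_end() on x86 */

    static int example_init(void)
    {
            void *buf;

            kernel_fpu_begin();                     /* preemption is disabled from here on */
            buf = kzalloc(64, GFP_KERNEL);          /* may sleep -> "sleeping while atomic" */
            if (!buf) {
                    kernel_fpu_end();
                    return -ENOMEM;
            }
            /* ... FPU-dependent calculations using buf ... */
            kernel_fpu_end();
            kfree(buf);
            return 0;
    }
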
>> Good catch, but I would rather replace the kzalloc(GFP_KERNEL) with a
>> kzalloc(GFP_ATOMIC) for now.
>>
>> We have tons of problems with these DC_FP_START()/DC_FP_END() annotations and
>> are even in the process of moving them out of the file because the compilers
>> tend to clobber FP registers even outside of the annotated ranges on some
>> architectures.
>>
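
[For reference, a sketch of the alternative suggested above: keep the wide
DC_FP_START()/DC_FP_END() scope and make the allocations inside it atomic.
The names are again hypothetical, and it assumes the DC headers that define
DC_FP_START()/DC_FP_END() are already included, as they are in dcn20_resource.c:]

    #include <linux/slab.h>

    static bool example_construct(void)
    {
            void *buf;

            DC_FP_START();                          /* non-preemptible FPU section */
            buf = kzalloc(64, GFP_ATOMIC);          /* never sleeps, but can fail under memory pressure */
            if (!buf) {
                    DC_FP_END();
                    return false;
            }
            /* ... FPU-dependent setup using buf ... */
            DC_FP_END();

            kfree(buf);
            return true;
    }
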
> Thanks for the review. Is it acceptable to move the DC_FP_END()
> annotation up to the last usage? Keeping it where it is is probably
> doable, but it covers things like calls to resource_construct(), which
> makes use of struct resource_create_funcs. I'm guessing only a subset
> of the implementations are called via this function, but having an
> interface which sometimes can't sleep doesn't sound appealing.
>
> Happy to do it, but before I go down that road I just wanted to make
> sure that's what you had in mind.

I can't fully judge that either. Harry and the rest of our DC team need 
to decide that.

Thanks,
Christian.

>
> Thanks,
> Jeremy
>
>>> ---
>>>    drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c | 8 ++++----
>>>    1 file changed, 4 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
>>> index e04ecf0fc0db..a4fa5bf016c1 100644
>>> --- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
>>> +++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
>>> @@ -3622,6 +3622,7 @@ static bool init_soc_bounding_box(struct dc *dc,
>>>    	if (bb && ASICREV_IS_NAVI12_P(dc->ctx->asic_id.hw_internal_rev)) {
>>>    		int i;
>>> +		DC_FP_START();
>>>    		dcn2_0_nv12_soc.sr_exit_time_us =
>>>    				fixed16_to_double_to_cpu(bb->sr_exit_time_us);
>>>    		dcn2_0_nv12_soc.sr_enter_plus_exit_time_us =
>>> @@ -3721,6 +3722,7 @@ static bool init_soc_bounding_box(struct dc *dc,
>>>    			dcn2_0_nv12_soc.clock_limits[i].dram_speed_mts =
>>>    					fixed16_to_double_to_cpu(bb->clock_limits[i].dram_speed_mts);
>>>    		}
>>> +		DC_FP_END();
>>>    	}
>>>    	if (pool->base.pp_smu) {
>>> @@ -3777,8 +3779,6 @@ static bool dcn20_resource_construct(
>>>    	enum dml_project dml_project_version =
>>>    			get_dml_project_version(ctx->asic_id.hw_internal_rev);
>>> -	DC_FP_START();
>>> -
>>>    	ctx->dc_bios->regs = &bios_regs;
>>>    	pool->base.funcs = &dcn20_res_pool_funcs;
>>> @@ -3959,8 +3959,10 @@ static bool dcn20_resource_construct(
>>>    				ranges.reader_wm_sets[i].wm_inst = i;
>>>    				ranges.reader_wm_sets[i].min_drain_clk_mhz = PP_SMU_WM_SET_RANGE_CLK_UNCONSTRAINED_MIN;
>>>    				ranges.reader_wm_sets[i].max_drain_clk_mhz = PP_SMU_WM_SET_RANGE_CLK_UNCONSTRAINED_MAX;
>>> +				DC_FP_START();
>>>    				ranges.reader_wm_sets[i].min_fill_clk_mhz = (i > 0) ? (loaded_bb->clock_limits[i - 1].dram_speed_mts / 16) + 1 : 0;
>>>    				ranges.reader_wm_sets[i].max_fill_clk_mhz = loaded_bb->clock_limits[i].dram_speed_mts / 16;
>>> +				DC_FP_END();
>>>    				ranges.num_reader_wm_sets = i + 1;
>>>    			}
>>> @@ -4125,12 +4127,10 @@ static bool dcn20_resource_construct(
>>>    		pool->base.oem_device = NULL;
>>>    	}
>>> -	DC_FP_END();
>>>    	return true;
>>>    create_fail:
>>> -	DC_FP_END();
>>>    	dcn20_resource_destruct(pool);
>>>    	return false;
