Message-ID: <D89D60253BB73A4E8C62F9FD18A939CA01031826@storexdag02.amd.com>
Date: Fri, 11 Jul 2014 17:07:50 +0000
From: "Bridgman, John" <John.Bridgman@....com>
To: Alex Deucher <alexdeucher@...il.com>,
"Koenig, Christian" <Christian.Koenig@....com>
CC: Oded Gabbay <oded.gabbay@...il.com>,
"Lewycky, Andrew" <Andrew.Lewycky@....com>,
LKML <linux-kernel@...r.kernel.org>,
"Maling list - DRI developers" <dri-devel@...ts.freedesktop.org>,
"Deucher, Alexander" <Alexander.Deucher@....com>
Subject: RE: [PATCH 02/83] drm/radeon: reduce number of free VMIDs and pipes
in KV
>-----Original Message-----
>From: dri-devel [mailto:dri-devel-bounces@...ts.freedesktop.org] On Behalf
>Of Alex Deucher
>Sent: Friday, July 11, 2014 12:23 PM
>To: Koenig, Christian
>Cc: Oded Gabbay; Lewycky, Andrew; LKML; Maling list - DRI developers;
>Deucher, Alexander
>Subject: Re: [PATCH 02/83] drm/radeon: reduce number of free VMIDs and
>pipes in KV
>
>On Fri, Jul 11, 2014 at 12:18 PM, Christian König <christian.koenig@....com>
>wrote:
>> On 11.07.2014 18:05, Jerome Glisse wrote:
>>
>>> On Fri, Jul 11, 2014 at 12:50:02AM +0300, Oded Gabbay wrote:
>>>>
>>>> To support HSA on KV, we need to limit the number of vmids and pipes
>>>> that are available for radeon's use with KV.
>>>>
>>>> This patch reserves VMIDs 8-15 for KFD (so radeon can only use VMIDs
>>>> 0-7) and also makes radeon think that KV has only a single MEC with
>>>> a single pipe in it.
>>>>
>>>> Signed-off-by: Oded Gabbay <oded.gabbay@....com>
>>>
>>> Reviewed-by: Jérôme Glisse <jglisse@...hat.com>
>>
>>
>> At least for the VMIDs, on-demand allocation should be trivial to
>> implement, so I would rather prefer that instead of a fixed assignment.
>
>IIRC, the way the CP HW scheduler works, you have to give it a range of VMIDs
>and it assigns them dynamically as queues are mapped, so effectively they
>are potentially in use once the CP scheduler is set up.
>
>Alex
Right. The SET_RESOURCES packet (kfd_pm4_headers.h, added in patch 49) allocates a range of HW queues, VMIDs and GDS to the HW scheduler. The scheduler then uses the allocated VMIDs to support a potentially larger number of user processes by dynamically mapping PASIDs to VMIDs and memory queue descriptors (MQDs) to HW queues.
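To make that hand-off a little more concrete, here is a minimal, self-contained C sketch. It is purely illustrative: the struct and field names are hypothetical and are not the actual SET_RESOURCES definitions from kfd_pm4_headers.h. It only shows the idea of describing the VMID range (8-15) and the HW queues given to the CP HW scheduler as masks, with radeon keeping VMIDs 0-7 and one pipe's worth of queues.

/*
 * Hypothetical sketch only -- names and layout are illustrative, not the
 * real kfd_pm4_headers.h packet.  The point is that the scheduler is handed
 * masks describing which VMIDs and HW queues it owns, and it multiplexes
 * PASIDs and MQDs onto those resources at runtime.
 */
#include <stdint.h>
#include <stdio.h>

struct fake_set_resources {
	uint32_t vmid_mask;     /* bit i set => VMID i owned by the scheduler */
	uint32_t queue_mask_lo; /* HW queues 0-31 handed to the scheduler     */
	uint32_t queue_mask_hi; /* HW queues 32-63 handed to the scheduler    */
	uint32_t gds_size;      /* GDS carve-out in bytes (illustrative)      */
};

int main(void)
{
	struct fake_set_resources pkt = {
		/* VMIDs 8-15 for KFD; radeon keeps 0-7 (VMID 0 = graphics) */
		.vmid_mask     = 0xff00,
		/* everything except the 8 queues of the pipe radeon keeps */
		.queue_mask_lo = 0xffffff00,
		.queue_mask_hi = 0xffffffff,
		.gds_size      = 0,
	};

	printf("scheduler owns VMID mask 0x%04x and %d HW queues\n",
	       (unsigned int)pkt.vmid_mask,
	       __builtin_popcount(pkt.queue_mask_lo) +
	       __builtin_popcount(pkt.queue_mask_hi));
	return 0;
}

Building that with gcc and running it just prints the masks; the real packet carries more state, but the scheduler-owns-whatever-the-masks-describe model is the same.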
BTW, Oded, I think we have some duplicated defines at the end of kfd_pm4_headers.h; if they are really duplicates, it would be great to remove those before the pull request.
Thanks,
JB
>
>
>>
>> Christian.
>>
>>
>>>
>>>> ---
>>>> drivers/gpu/drm/radeon/cik.c | 48 ++++++++++++++++++++++----------------------
>>>> 1 file changed, 24 insertions(+), 24 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c
>>>> index 4bfc2c0..e0c8052 100644
>>>> --- a/drivers/gpu/drm/radeon/cik.c
>>>> +++ b/drivers/gpu/drm/radeon/cik.c
>>>> @@ -4662,12 +4662,11 @@ static int cik_mec_init(struct radeon_device *rdev)
>>>> /*
>>>> * KV: 2 MEC, 4 Pipes/MEC, 8 Queues/Pipe - 64 Queues total
>>>> * CI/KB: 1 MEC, 4 Pipes/MEC, 8 Queues/Pipe - 32 Queues total
>>>> + * Nonetheless, we assign only 1 pipe because all other pipes will
>>>> + * be handled by KFD
>>>> */
>>>> - if (rdev->family == CHIP_KAVERI)
>>>> - rdev->mec.num_mec = 2;
>>>> - else
>>>> - rdev->mec.num_mec = 1;
>>>> - rdev->mec.num_pipe = 4;
>>>> + rdev->mec.num_mec = 1;
>>>> + rdev->mec.num_pipe = 1;
>>>> rdev->mec.num_queue = rdev->mec.num_mec * rdev->mec.num_pipe * 8;
>>>> if (rdev->mec.hpd_eop_obj == NULL) {
>>>> @@ -4809,28 +4808,24 @@ static int cik_cp_compute_resume(struct radeon_device *rdev)
>>>> /* init the pipes */
>>>> mutex_lock(&rdev->srbm_mutex);
>>>> - for (i = 0; i < (rdev->mec.num_pipe * rdev->mec.num_mec); i++) {
>>>> - int me = (i < 4) ? 1 : 2;
>>>> - int pipe = (i < 4) ? i : (i - 4);
>>>> - eop_gpu_addr = rdev->mec.hpd_eop_gpu_addr + (i * MEC_HPD_SIZE * 2);
>>>> + eop_gpu_addr = rdev->mec.hpd_eop_gpu_addr;
>>>> - cik_srbm_select(rdev, me, pipe, 0, 0);
>>>> + cik_srbm_select(rdev, 0, 0, 0, 0);
>>>> - /* write the EOP addr */
>>>> - WREG32(CP_HPD_EOP_BASE_ADDR, eop_gpu_addr >> 8);
>>>> - WREG32(CP_HPD_EOP_BASE_ADDR_HI, upper_32_bits(eop_gpu_addr) >> 8);
>>>> + /* write the EOP addr */
>>>> + WREG32(CP_HPD_EOP_BASE_ADDR, eop_gpu_addr >> 8);
>>>> + WREG32(CP_HPD_EOP_BASE_ADDR_HI, upper_32_bits(eop_gpu_addr) >> 8);
>>>> - /* set the VMID assigned */
>>>> - WREG32(CP_HPD_EOP_VMID, 0);
>>>> + /* set the VMID assigned */
>>>> + WREG32(CP_HPD_EOP_VMID, 0);
>>>> +
>>>> + /* set the EOP size, register value is 2^(EOP_SIZE+1) dwords */
>>>> + tmp = RREG32(CP_HPD_EOP_CONTROL);
>>>> + tmp &= ~EOP_SIZE_MASK;
>>>> + tmp |= order_base_2(MEC_HPD_SIZE / 8);
>>>> + WREG32(CP_HPD_EOP_CONTROL, tmp);
>>>> - /* set the EOP size, register value is 2^(EOP_SIZE+1) dwords */
>>>> - tmp = RREG32(CP_HPD_EOP_CONTROL);
>>>> - tmp &= ~EOP_SIZE_MASK;
>>>> - tmp |= order_base_2(MEC_HPD_SIZE / 8);
>>>> - WREG32(CP_HPD_EOP_CONTROL, tmp);
>>>> - }
>>>> - cik_srbm_select(rdev, 0, 0, 0, 0);
>>>> mutex_unlock(&rdev->srbm_mutex);
>>>> /* init the queues. Just two for now. */
>>>> @@ -5876,8 +5871,13 @@ int cik_ib_parse(struct radeon_device *rdev, struct radeon_ib *ib)
>>>> */
>>>> int cik_vm_init(struct radeon_device *rdev)
>>>> {
>>>> - /* number of VMs */
>>>> - rdev->vm_manager.nvm = 16;
>>>> + /*
>>>> + * number of VMs
>>>> + * VMID 0 is reserved for Graphics
>>>> + * radeon compute will use VMIDs 1-7
>>>> + * KFD will use VMIDs 8-15
>>>> + */
>>>> + rdev->vm_manager.nvm = 8;
>>>> /* base offset of vram pages */
>>>> if (rdev->flags & RADEON_IS_IGP) {
>>>> u64 tmp = RREG32(MC_VM_FB_OFFSET);
>>>> --
>>>> 1.9.1
>>>>
>>