Message-ID: <508025FD.80703@intel.com>
Date: Thu, 18 Oct 2012 08:53:33 -0700
From: Alexander Duyck <alexander.h.duyck@...el.com>
To: Konrad Rzeszutek Wilk <konrad@...nel.org>
CC: Hillf Danton <dhillf@...il.com>, konrad.wilk@...cle.com,
tglx@...utronix.de, mingo@...hat.com, hpa@...or.com,
rob@...dley.net, akpm@...ux-foundation.org, joerg.roedel@....com,
bhelgaas@...gle.com, shuahkhan@...il.com,
fujita.tomonori@....ntt.co.jp, linux-kernel@...r.kernel.org,
x86@...nel.org
Subject: Re: [PATCH v2 1/7] swiotlb: Make io_tlb_end a physical address instead
of a virtual one
On 10/18/2012 05:41 AM, Konrad Rzeszutek Wilk wrote:
> On Mon, Oct 15, 2012 at 08:43:28AM -0700, Alexander Duyck wrote:
>> On 10/13/2012 05:52 AM, Hillf Danton wrote:
>>> Hi Alexander,
>>>
>>> On Fri, Oct 12, 2012 at 4:34 AM, Alexander Duyck
>>> <alexander.h.duyck@...el.com> wrote:
>>>> This change replaces all references to the virtual address for io_tlb_end
>>>> with references to the physical address io_tlb_end. The main advantage of
>>>> replacing the virtual address with a physical address is that we can avoid
>>>> having to do multiple translations from the virtual address to the physical
>>>> one needed for testing an existing DMA address.
>>>>
>>>> Signed-off-by: Alexander Duyck <alexander.h.duyck@...el.com>
>>>> ---
>>>>
>>>> lib/swiotlb.c | 24 +++++++++++++-----------
>>>> 1 files changed, 13 insertions(+), 11 deletions(-)
>>>>
>>>> diff --git a/lib/swiotlb.c b/lib/swiotlb.c
>>>> index f114bf6..19aac9f 100644
>>>> --- a/lib/swiotlb.c
>>>> +++ b/lib/swiotlb.c
>>>> @@ -57,7 +57,8 @@ int swiotlb_force;
>>>>  * swiotlb_tbl_sync_single_*, to see if the memory was in fact allocated by this
>>>>  * API.
>>>>  */
>>>> -static char *io_tlb_start, *io_tlb_end;
>>>> +static char *io_tlb_start;
>>>> +phys_addr_t io_tlb_end;
>>> If we add io_tlb_start_phy and io_tlb_end_phy, could we get the same
>>> results with fewer hunks?
>>>
>>> Hillf
>> What do you mean by fewer hunks? Are you referring to the memory space?
> As in fewer patch hunks.
>> If so, then the patches I am submitting do not impact how much space is
>> used for the bounce buffer. The only real result of these patches is
>> that the code path is significantly shortened, since we don't have to
>> perform any virtual-to-physical translations in the hot path.
> No. He means that you can keep io_tlb_end as it is. Just
> do the computation of the physical address of the end in the init path.
> Then you don't need to do the translation in the 'is-this-swiotlb-buffer'
> check and can just do a simple:
> if (dma_addr >= io_tlb_start && dma_addr <= io_tlb_end)
>
That is how the code ends up. The v2 and v3 versions of these patches
leave the end value there. As this patch says, I am just changing the
end to be physical instead of virtual. I reviewed the code and realized
that I wasn't saving anything by removing it, since the overall code was
larger as a result, so I just converted it to a physical address. There
are no users of io_tlb_end that access it as a virtual value, so all I
did was change it to a physical one and drop the virt_to_phys calls that
were made on it. If I am not mistaken, by the second patch the
is_swiotlb_buffer call is literally what you have described above.
Here is the snippet from the 2nd patch:
 static int is_swiotlb_buffer(phys_addr_t paddr)
 {
-	return paddr >= virt_to_phys(io_tlb_start) && paddr < io_tlb_end;
+	return paddr >= io_tlb_start && paddr < io_tlb_end;
 }
As far as the number of patches goes, I decided to do this incrementally
instead of trying to do it all at once. That way it is clearer to the
reviewer what I am doing in each step, and it can be more easily
bisected in case I messed up somewhere. If you want fewer patches I can
do that, but I don't see the point in combining them since they all
result in the same total change anyway.
Thanks,
Alex