Message-ID: <YrxdCHRTRS62pAON@infradead.org>
Date: Wed, 29 Jun 2022 07:09:12 -0700
From: Christoph Hellwig <hch@...radead.org>
To: Tianyu Lan <ltykernel@...il.com>
Cc: corbet@....net, rafael@...nel.org, len.brown@...el.com,
pavel@....cz, tglx@...utronix.de, mingo@...hat.com, bp@...en8.de,
dave.hansen@...ux.intel.com, x86@...nel.org, hpa@...or.com,
hch@...radead.org, m.szyprowski@...sung.com, robin.murphy@....com,
paulmck@...nel.org, akpm@...ux-foundation.org,
keescook@...omium.org, songmuchun@...edance.com,
rdunlap@...radead.org, damien.lemoal@...nsource.wdc.com,
michael.h.kelley@...rosoft.com, kys@...rosoft.com,
Tianyu Lan <Tianyu.Lan@...rosoft.com>,
iommu@...ts.linux-foundation.org, linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org,
vkuznets@...hat.com, wei.liu@...nel.org, parri.andrea@...il.com,
thomas.lendacky@....com, linux-hyperv@...r.kernel.org,
kirill.shutemov@...el.com, andi.kleen@...el.com,
Andi Kleen <ak@...ux.intel.com>
Subject: Re: [PATCH 1/2] swiotlb: Split up single swiotlb lock
On Mon, Jun 27, 2022 at 11:31:49AM -0400, Tianyu Lan wrote:
> +/**
> + * struct io_tlb_area - IO TLB memory area descriptor
> + *
> + * This is a single area with a single lock.
> + *
> + * @used: The number of used IO TLB blocks.
> + * @index: The slot index to start searching in this area for the next round.
> + * @lock: The lock to protect the above data structures in the map and
> + * unmap calls.
> + */
> +struct io_tlb_area {
> + unsigned long used;
> + unsigned int index;
> + spinlock_t lock;
> +};
As already mentioned last time, please move this into swiotlb.c;
swiotlb.h only uses a pointer to this structure.
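For illustration, a minimal sketch of what that looks like (assuming the
patch adds an "areas" pointer to struct io_tlb_mem; everything else in the
header stays as it is):

	/* include/linux/swiotlb.h: a forward declaration is enough */
	struct io_tlb_area;

	/* kernel/dma/swiotlb.c: the full definition stays private */
	struct io_tlb_area {
		unsigned long used;
		unsigned int index;
		spinlock_t lock;
	};

struct io_tlb_mem then only carries a "struct io_tlb_area *areas;" member,
so nothing outside swiotlb.c needs to know the layout.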
> static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
> - unsigned long nslabs, unsigned int flags, bool late_alloc)
> + unsigned long nslabs, unsigned int flags,
> + bool late_alloc, unsigned int nareas)
Nit: the two-tab indentation for prototype continuations is a lot easier
to maintain, so please don't gratuitously switch away from it.
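I.e. keep it along these lines (parameter list taken from the patch):

	static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
			unsigned long nslabs, unsigned int flags, bool late_alloc,
			unsigned int nareas)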
> + alloc_size - (offset + ((i - slot_index) << IO_TLB_SHIFT));
Overly long line here.
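E.g. wrap it at the assignment, something like (a sketch, assuming this is
the slots[i].alloc_size setup in the allocation loop):

	mem->slots[i].alloc_size = alloc_size - (offset +
			((i - slot_index) << IO_TLB_SHIFT));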