Message-ID: <65d32f61-7d23-b7b7-cfb3-8ec4653a0e82@amd.com>
Date:	Tue, 16 Aug 2016 09:43:00 +0700
From:	Suravee Suthikulpanit <Suravee.Suthikulpanit@....com>
To:	Joerg Roedel <joro@...tes.org>
CC:	<rkrcmar@...hat.com>, <pbonzini@...hat.com>,
	<alex.williamson@...hat.com>, <kvm@...r.kernel.org>,
	<linux-kernel@...r.kernel.org>, <sherry.hurwitz@....com>
Subject: Re: [PART2 PATCH v5 06/12] iommu/amd: Adding GALOG interrupt handler

Hi Joerg,

On 8/9/16 21:43, Joerg Roedel wrote:
> On Mon, Jul 25, 2016 at 04:32:05AM -0500, Suthikulpanit, Suravee wrote:
>> From: Suravee Suthikulpanit <Suravee.Suthikulpanit@....com>
>>
>> This patch adds the AMD IOMMU guest virtual APIC log (GALOG) handler.
>> When IOMMU hardware receives an interrupt targeting a blocking vcpu,
>> it creates an entry in the GALOG, and generates an interrupt to notify
>> the AMD IOMMU driver.
>>
>> At this point, the driver processes the log entry and notifies the SVM
>> driver via the registered iommu_ga_log_notifier function.
>>
>> Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@....com>
>> ---
>>  drivers/iommu/amd_iommu.c | 77 +++++++++++++++++++++++++++++++++++++++++++++--
>>  include/linux/amd-iommu.h | 20 ++++++++++--
>>  2 files changed, 91 insertions(+), 6 deletions(-)
>>
>> diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
>> index abfb2b7..861d723 100644
>> --- a/drivers/iommu/amd_iommu.c
>> +++ b/drivers/iommu/amd_iommu.c
>> @@ -741,14 +741,78 @@ static void iommu_poll_ppr_log(struct amd_iommu *iommu)
>>  	}
>>  }
>>
>> +#ifdef CONFIG_IRQ_REMAP
>> +static int (*iommu_ga_log_notifier)(u32);
>> +
>> +int amd_iommu_register_ga_log_notifier(int (*notifier)(u32))
>> +{
>> +	iommu_ga_log_notifier = notifier;
>> +
>> +	return 0;
>> +}
>> +EXPORT_SYMBOL(amd_iommu_register_ga_log_notifier);
>> +
>> +static void iommu_poll_ga_log(struct amd_iommu *iommu)
>> +{
>> +	u32 head, tail, cnt = 0;
>> +
>> +	if (iommu->ga_log == NULL)
>> +		return;
>> +
>> +	head = readl(iommu->mmio_base + MMIO_GA_HEAD_OFFSET);
>> +	tail = readl(iommu->mmio_base + MMIO_GA_TAIL_OFFSET);
>> +
>> +	while (head != tail) {
>> +		volatile u64 *raw;
>> +		u64 log_entry;
>> +
>> +		raw = (u64 *)(iommu->ga_log + head);
>> +		cnt++;
>> +
>> +		/* Avoid memcpy function-call overhead */
>> +		log_entry = *raw;
>> +
>> +		/* Update head pointer of hardware ring-buffer */
>> +		head = (head + GA_ENTRY_SIZE) % GA_LOG_SIZE;
>> +		writel(head, iommu->mmio_base + MMIO_GA_HEAD_OFFSET);
>> +
>> +		/* Handle GA entry */
>> +		switch (GA_REQ_TYPE(log_entry)) {
>> +		case GA_GUEST_NR:
>> +			if (!iommu_ga_log_notifier)
>> +				break;
>> +
>> +			pr_debug("AMD-Vi: %s: devid=%#x, ga_tag=%#x\n",
>> +				 __func__, GA_DEVID(log_entry),
>> +				 GA_TAG(log_entry));
>> +
>> +			if (iommu_ga_log_notifier(GA_TAG(log_entry)) != 0)
>> +				pr_err("AMD-Vi: GA log notifier failed.\n");
>> +			break;
>> +		default:
>> +			break;
>> +		}
>> +
>> +		/* Refresh ring-buffer information */
>> +		head = readl(iommu->mmio_base + MMIO_GA_HEAD_OFFSET);
>> +		tail = readl(iommu->mmio_base + MMIO_GA_TAIL_OFFSET);
>
> Couldn't that cause an endless loop in case of an interrupt storm from a
> device? I think it is better to just read head and tail once before the
> loop and update head after we get out of the loop. Any new entries
> could be handled by the next iommu interrupt. This avoids any
> soft-lockups that might happen when the loop runs for too long.
>

Sure, I will read head and tail once before the loop and drop the
re-read at the bottom of the loop, along the lines of the sketch below.
Also, we might need to start handling GALogOverflow at some point, but
let's put that in a separate patch series. What do you think?
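
Something like this (untested sketch on top of this patch; the entry
handling itself is unchanged):

static void iommu_poll_ga_log(struct amd_iommu *iommu)
{
	u32 head, tail, cnt = 0;

	if (iommu->ga_log == NULL)
		return;

	/* Read head and tail once, before entering the loop */
	head = readl(iommu->mmio_base + MMIO_GA_HEAD_OFFSET);
	tail = readl(iommu->mmio_base + MMIO_GA_TAIL_OFFSET);

	while (head != tail) {
		volatile u64 *raw;
		u64 log_entry;

		raw = (u64 *)(iommu->ga_log + head);
		cnt++;

		/* Avoid memcpy function-call overhead */
		log_entry = *raw;

		/* Update head pointer of hardware ring-buffer */
		head = (head + GA_ENTRY_SIZE) % GA_LOG_SIZE;
		writel(head, iommu->mmio_base + MMIO_GA_HEAD_OFFSET);

		/* Handle GA entry */
		switch (GA_REQ_TYPE(log_entry)) {
		case GA_GUEST_NR:
			if (!iommu_ga_log_notifier)
				break;

			pr_debug("AMD-Vi: %s: devid=%#x, ga_tag=%#x\n",
				 __func__, GA_DEVID(log_entry),
				 GA_TAG(log_entry));

			if (iommu_ga_log_notifier(GA_TAG(log_entry)) != 0)
				pr_err("AMD-Vi: GA log notifier failed.\n");
			break;
		default:
			break;
		}

		/*
		 * No re-read of head/tail here; entries logged while we
		 * are in this handler are picked up by the next IOMMU
		 * interrupt.
		 */
	}
}

Since tail is no longer refreshed, each invocation handles at most
GA_LOG_SIZE / GA_ENTRY_SIZE entries, so an interrupt storm can no
longer keep us in the loop indefinitely.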

S
