Message-ID: <9c0a9b97-fcc6-9f69-5a90-1f94cae3c899@linux.ibm.com>
Date: Wed, 15 Jul 2020 09:38:49 +0530
From: Ravi Bangoria <ravi.bangoria@...ux.ibm.com>
To: Jordan Niethe <jniethe5@...il.com>
Cc: Michael Ellerman <mpe@...erman.id.au>, mikey@...ling.org,
apopple@...ux.ibm.com, Paul Mackerras <paulus@...ba.org>,
Nicholas Piggin <npiggin@...il.com>,
Christophe Leroy <christophe.leroy@....fr>,
naveen.n.rao@...ux.vnet.ibm.com, peterz@...radead.org,
jolsa@...nel.org, oleg@...hat.com, fweisbec@...il.com,
mingo@...nel.org, pedromfc@...ibm.com, miltonm@...ibm.com,
linuxppc-dev@...ts.ozlabs.org, linux-kernel@...r.kernel.org,
Ravi Bangoria <ravi.bangoria@...ux.ibm.com>
Subject: Re: [PATCH v3 2/9] powerpc/watchpoint: Fix DAWR exception constraint
Hi Jordan,
>> @@ -536,7 +538,12 @@ static bool check_dawrx_constraints(struct pt_regs *regs, int type,
>> if (OP_IS_LOAD(type) && !(info->type & HW_BRK_TYPE_READ))
>> return false;
>>
>> - if (OP_IS_STORE(type) && !(info->type & HW_BRK_TYPE_WRITE))
>> + /*
>> + * The Cache Management instructions other than dcbz never
>> + * cause a match. i.e. if type is CACHEOP, the instruction
>> + * is dcbz, and dcbz is treated as Store.
>> + */
>> + if ((OP_IS_STORE(type) || type == CACHEOP) && !(info->type & HW_BRK_TYPE_WRITE))
>> return false;
> This change seems separate from this commit?
I also thought about it but was not sure. See below ...
>>
>> if (is_kernel_addr(regs->nip) && !(info->type & HW_BRK_TYPE_KERNEL))
>> @@ -553,7 +560,8 @@ static bool check_dawrx_constraints(struct pt_regs *regs, int type,
>> * including extraneous exception. Otherwise return false.
>> */
>> static bool check_constraints(struct pt_regs *regs, struct ppc_inst instr,
>> - int type, int size, struct arch_hw_breakpoint *info)
>> + unsigned long ea, int type, int size,
>> + struct arch_hw_breakpoint *info)
>> {
>> bool in_user_range = dar_in_user_range(regs->dar, info);
>> bool dawrx_constraints;
>> @@ -569,11 +577,10 @@ static bool check_constraints(struct pt_regs *regs, struct ppc_inst instr,
>> }
>>
>> if (unlikely(ppc_inst_equal(instr, ppc_inst(0)))) {
>> - if (in_user_range)
>> - return true;
>> -
>> - if (dar_in_hw_range(regs->dar, info)) {
>> - info->type |= HW_BRK_TYPE_EXTRANEOUS_IRQ;
>> + if (cpu_has_feature(CPU_FTR_ARCH_31)) {
>> + if (dar_in_hw_range(regs->dar, info))
>> + return true;
>> + } else {
>> return true;
> I think this would be clearer as:
> if (cpu_has_feature(CPU_FTR_ARCH_31) &&
> !(dar_in_hw_range(regs->dar, info)))
> return false;
> else
> return true;
ok
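i.e. the whole branch would then read something like this (just your
suggestion dropped into the existing block, keeping the same helpers):

	if (unlikely(ppc_inst_equal(instr, ppc_inst(0)))) {
		if (cpu_has_feature(CPU_FTR_ARCH_31) &&
		    !dar_in_hw_range(regs->dar, info))
			return false;
		else
			return true;
	}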
>
>> }
>> return false;
>> @@ -581,10 +588,20 @@ static bool check_constraints(struct pt_regs *regs, struct ppc_inst instr,
>>
>> dawrx_constraints = check_dawrx_constraints(regs, type, info);
>>
>> - if (dar_user_range_overlaps(regs->dar, size, info))
>> + if (type == UNKNOWN) {
>> + if (cpu_has_feature(CPU_FTR_ARCH_31)) {
>> + if (dar_in_hw_range(regs->dar, info))
>> + return dawrx_constraints;
>> + } else {
>> + return dawrx_constraints;
>> + }
>> + return false;
>> + }
> Similar thing here, it could be:
> if ((cpu_has_feature(CPU_FTR_ARCH_31)) &&
> !(dar_in_hw_range(regs->dar, info)))
> return false;
> else
> return dawrx_constraints;
ok
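Same here, so roughly:

	if (type == UNKNOWN) {
		if (cpu_has_feature(CPU_FTR_ARCH_31) &&
		    !dar_in_hw_range(regs->dar, info))
			return false;
		else
			return dawrx_constraints;
	}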
>> +
>> + if (ea_user_range_overlaps(ea, size, info))
>> return dawrx_constraints;
>>
>> - if (dar_hw_range_overlaps(regs->dar, size, info)) {
>> + if (ea_hw_range_overlaps(ea, size, info)) {
>> if (dawrx_constraints) {
>> info->type |= HW_BRK_TYPE_EXTRANEOUS_IRQ;
>> return true;
>> @@ -593,8 +610,17 @@ static bool check_constraints(struct pt_regs *regs, struct ppc_inst instr,
>> return false;
>> }
>>
>> +static int cache_op_size(void)
>> +{
>> +#ifdef __powerpc64__
>> + return ppc64_caches.l1d.block_size;
>> +#else
>> + return L1_CACHE_BYTES;
>> +#endif
>> +}
>> +
>> static void get_instr_detail(struct pt_regs *regs, struct ppc_inst *instr,
>> - int *type, int *size, bool *larx_stcx)
>> + int *type, int *size, unsigned long *ea)
>> {
>> struct instruction_op op;
>>
>> @@ -602,16 +628,23 @@ static void get_instr_detail(struct pt_regs *regs, struct ppc_inst *instr,
>> return;
>>
>> analyse_instr(&op, regs, *instr);
>> -
>> - /*
>> - * Set size = 8 if analyse_instr() fails. If it's a userspace
>> - * watchpoint(valid or extraneous), we can notify user about it.
>> - * If it's a kernel watchpoint, instruction emulation will fail
>> - * in stepping_handler() and watchpoint will be disabled.
>> - */
>> *type = GETTYPE(op.type);
>> - *size = !(*type == UNKNOWN) ? GETSIZE(op.type) : 8;
>> - *larx_stcx = (*type == LARX || *type == STCX);
>> + *ea = op.ea;
>> +#ifdef __powerpc64__
>> + if (!(regs->msr & MSR_64BIT))
>> + *ea &= 0xffffffffUL;
>> +#endif
>> +
>> + *size = GETSIZE(op.type);
>> + if (*type == CACHEOP) {
>> + *size = cache_op_size();
>> + *ea &= ~(*size - 1);
>> + }
> Again related to CACHEOP, should these changes be mentioned in the
> commit message?
For CACHEOP, the ea returned by analyse_instr() needs to be aligned down to
the cache block size manually. Also, for CACHEOP, the size returned by
analyse_instr() is 0, so the size also needs to be calculated manually. This
was missed in 27985b2a640e. So it's related to the other changes in the patch
but also needs special treatment. I'll see if I can split it out.
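For example (assuming a 128-byte L1 block), a dcbz with ea 0x1234 gets
size = 128, and ea &= ~(size - 1) rounds the ea down to 0x1200, so the whole
cache block is compared against the watchpoint range.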
Thanks for the review,
Ravi