Date: Sat, 30 Jun 2018 22:03:03 -0700
From: Reinette Chatre <reinette.chatre@...el.com>
To: tglx@...utronix.de, fenghua.yu@...el.com, tony.luck@...el.com,
	vikas.shivappa@...ux.intel.com
Cc: gavin.hindman@...el.com, jithu.joseph@...el.com, dave.hansen@...el.com,
	mingo@...hat.com, hpa@...or.com, x86@...nel.org,
	linux-kernel@...r.kernel.org, Reinette Chatre <reinette.chatre@...el.com>
Subject: [PATCH 2/2] x86/intel_rdt: Fix cleanup of plr structure on error

When a resource group enters pseudo-locksetup mode a pseudo_lock_region
is associated with it. When the user writes to the resource group's
schemata file the CBM of the requested pseudo-locked region is entered
into the pseudo_lock_region struct. If any part of pseudo-lock region
creation fails the resource group will remain in pseudo-locksetup mode
with the pseudo_lock_region associated with it.

In case of failure during pseudo-lock region creation care needs to be
taken to ensure that the pseudo_lock_region struct associated with the
resource group is cleared of any pseudo-locking data - especially the
CBM. This is because the existence of a pseudo_lock_region struct with
a CBM is significant in other areas of the code, for example, the
display of bit_usage and the initialization of a new resource group.

Fix the error paths of pseudo-lock region creation to ensure that the
pseudo_lock_region struct is cleared at each error exit.

Signed-off-by: Reinette Chatre <reinette.chatre@...el.com>
---
 arch/x86/kernel/cpu/intel_rdt_pseudo_lock.c | 22 +++++++++++++++++-----
 1 file changed, 17 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kernel/cpu/intel_rdt_pseudo_lock.c b/arch/x86/kernel/cpu/intel_rdt_pseudo_lock.c
index 1860ec10302d..8fd79c281ee6 100644
--- a/arch/x86/kernel/cpu/intel_rdt_pseudo_lock.c
+++ b/arch/x86/kernel/cpu/intel_rdt_pseudo_lock.c
@@ -290,6 +290,7 @@ static void pseudo_lock_region_clear(struct pseudo_lock_region *plr)
 static int pseudo_lock_region_init(struct pseudo_lock_region *plr)
 {
 	struct cpu_cacheinfo *ci;
+	int ret;
 	int i;
 
 	/* Pick the first cpu we find that is associated with the cache. */
@@ -298,7 +299,8 @@ static int pseudo_lock_region_init(struct pseudo_lock_region *plr)
 	if (!cpu_online(plr->cpu)) {
 		rdt_last_cmd_printf("cpu %u associated with cache not online\n",
 				    plr->cpu);
-		return -ENODEV;
+		ret = -ENODEV;
+		goto out_region;
 	}
 
 	ci = get_cpu_cacheinfo(plr->cpu);
@@ -312,8 +314,11 @@ static int pseudo_lock_region_init(struct pseudo_lock_region *plr)
 		}
 	}
 
+	ret = -1;
 	rdt_last_cmd_puts("unable to determine cache line size\n");
-	return -1;
+out_region:
+	pseudo_lock_region_clear(plr);
+	return ret;
 }
 
 /**
@@ -365,16 +370,23 @@ static int pseudo_lock_region_alloc(struct pseudo_lock_region *plr)
 	 */
 	if (plr->size > KMALLOC_MAX_SIZE) {
 		rdt_last_cmd_puts("requested region exceeds maximum size\n");
-		return -E2BIG;
+		ret = -E2BIG;
+		goto out_region;
 	}
 
 	plr->kmem = kzalloc(plr->size, GFP_KERNEL);
 	if (!plr->kmem) {
 		rdt_last_cmd_puts("unable to allocate memory\n");
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto out_region;
 	}
 
-	return 0;
+	ret = 0;
+	goto out;
+out_region:
+	pseudo_lock_region_clear(plr);
+out:
+	return ret;
 }
-- 
2.17.0
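
For readers unfamiliar with the centralized-exit idiom the patch adopts,
here is a minimal userspace sketch of the same pattern, not the kernel
code itself; the names (demo_region, demo_region_alloc, demo_region_clear,
the size limit) are invented for illustration only. Every failure path
jumps to a single label that wipes the partially initialized state, so
no error exit can leave stale data behind - the same guarantee the patch
provides by calling pseudo_lock_region_clear() at each error exit.

	/* Standalone sketch of goto-based error cleanup; builds with any C compiler. */
	#include <errno.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	struct demo_region {
		size_t size;
		void *mem;
	};

	static void demo_region_clear(struct demo_region *r)
	{
		free(r->mem);
		memset(r, 0, sizeof(*r));	/* wipe size, pointer, everything */
	}

	static int demo_region_alloc(struct demo_region *r, size_t size)
	{
		int ret;

		r->size = size;			/* state set before we know we can succeed */

		if (size == 0 || size > (1 << 20)) {
			ret = -E2BIG;
			goto out_region;	/* must not return with r->size still set */
		}

		r->mem = calloc(1, size);
		if (!r->mem) {
			ret = -ENOMEM;
			goto out_region;
		}

		return 0;			/* success: keep the initialized state */

	out_region:
		demo_region_clear(r);		/* single cleanup point for all error paths */
		return ret;
	}

	int main(void)
	{
		struct demo_region r = { 0 };

		if (demo_region_alloc(&r, 1 << 21))	/* deliberately too large */
			printf("alloc failed, size after cleanup: %zu\n", r.size);

		return 0;
	}

The benefit over returning directly at each failure (as the old code did)
is that any future error path added to the function automatically gets the
cleanup, instead of each author having to remember to clear the struct.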