From: Shrikanth Hegde <sshegde@linux.vnet.ibm.com>
To: Nicholas Piggin <npiggin@gmail.com>, linuxppc-dev@lists.ozlabs.org
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>,
	"Nysal Jan K . A" <nysal@linux.ibm.com>
Subject: Re: [PATCH 0/6] powerpc/qspinlock: Fix yield latency bug and other
Date: Wed, 18 Oct 2023 13:11:42 +0530	[thread overview]
Message-ID: <f57e733e-fa18-484d-aca8-e67436b44ddc@linux.vnet.ibm.com>
In-Reply-To: <20231016124305.139923-1-npiggin@gmail.com>



On 10/16/23 6:12 PM, Nicholas Piggin wrote:
> This fixes a long-standing latency bug in the powerpc qspinlock
> implementation that quite a few people have reported and helped
> out with debugging.
> 
> The first patch is a minimal fix that avoids the problem. The
> other patches are streamlining and improvements after the fix.
> 

Hi Nick, thanks for the fix. This issue has been showing up in various
scenarios whenever there is vCPU contention.

Tested this on a Power10 Shared Processor LPAR (SPLPAR) running on
PowerVM. The system has two SPLPARs: LPAR1 runs the various scenarios
below, while LPAR2 runs constant stress-ng threads consuming 100% of
its CPUs. LPAR1 is 96VP/64EC and LPAR2 is 32VP/32EC (VP = virtual
processors, EC = entitled capacity).

lscpu of LPAR1:
Architecture:            ppc64le
  Byte Order:            Little Endian
CPU(s):                  768
  On-line CPU(s) list:   0-767
Model name:              POWER10 (architected), altivec supported
  Model:                 2.0 (pvr 0080 0200)
  Thread(s) per core:    8

Scenarios tried on LPAR1:
1. Run ppc64_cpu --smt=1 and ppc64_cpu --smt=8 to switch between SMT=1
   and SMT=8.
2. Create a cgroup, assign it a 5% CPU quota, and run as many stress-ng
   threads as there are CPUs within that cgroup (see the sketch after
   this list).
3. Run a suite of microbenchmarks such as unixbench, schbench, hackbench,
   and stress-ng with perf enabled.
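
For reference, here is a minimal sketch of the setup for scenarios 1
and 2, assuming cgroup v2 is mounted at /sys/fs/cgroup with the cpu
controller enabled; the cgroup name "qsl-test" and the exact stress-ng
invocation are illustrative, not the literal commands from my runs:

  # Scenario 1: toggle the SMT mode back and forth
  ppc64_cpu --smt=1
  ppc64_cpu --smt=8

  # Scenario 2: cgroup limited to a 5% CPU quota, one stress-ng
  # worker per online CPU. cpu.max takes "<quota> <period>" in
  # microseconds, so 5000/100000 = 5%.
  mkdir /sys/fs/cgroup/qsl-test
  echo "5000 100000" > /sys/fs/cgroup/qsl-test/cpu.max
  # Move this shell into the cgroup, then spawn the workers.
  echo $$ > /sys/fs/cgroup/qsl-test/cgroup.procs
  stress-ng --cpu "$(nproc)" --timeout 600s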

The baseline was tip/master at commit 84ab57184ff4 ("Merge branch into
tip/master: 'x86/tdx'").

Hard lockups were SEEN in each of the above scenarios with the
baseline. With this patch series applied, hard lockups were NOT SEEN
in any of the above scenarios.

So,
Tested-by: Shrikanth Hegde <sshegde@linux.vnet.ibm.com>

> Thanks,
> Nick
> 
> Nicholas Piggin (6):
>   powerpc/qspinlock: Fix stale propagated yield_cpu
>   powerpc/qspinlock: stop queued waiters trying to set lock sleepy
>   powerpc/qspinlock: propagate owner preemptedness rather than CPU
>     number
>   powerpc/qspinlock: don't propagate the not-sleepy state
>   powerpc/qspinlock: Propagate sleepy if previous waiter is preempted
>   powerpc/qspinlock: Rename yield_propagate_owner tunable
> 
>  arch/powerpc/lib/qspinlock.c | 119 +++++++++++++++--------------------
>  1 file changed, 52 insertions(+), 67 deletions(-)
> 

