=====================
CFS Bandwidth Control
=====================

.. note::
   This document only discusses CPU bandwidth control for SCHED_NORMAL.
   The SCHED_RT case is covered in Documentation/scheduler/sched-rt-group.rst.

CFS bandwidth control is a CONFIG_FAIR_GROUP_SCHED extension which allows the
specification of the maximum CPU bandwidth available to a group or hierarchy.

The bandwidth allowed for a group is specified using a quota and period. Within
each given "period" (microseconds), a task group is allocated up to "quota"
microseconds of CPU time. That quota is assigned to per-cpu run queues in
slices as threads in the cgroup become runnable. Once all quota has been
assigned, any additional requests for quota will result in those threads being
throttled. Throttled threads will not be able to run again until the next
period, when the quota is replenished.

A group's unassigned quota is globally tracked, being refreshed back to
cfs_quota units at each period boundary. As threads consume this bandwidth it
is transferred to cpu-local "silos" on a demand basis. The amount transferred
within each of these updates is tunable and described as the "slice".

Burst feature
-------------
This feature borrows time now against our future underrun, at the cost of
increased interference against the other system users. All nicely bounded.

Traditional (UP-EDF) bandwidth control is something like:

  (U = \Sum u_i) <= 1

This guarantees both that every deadline is met and that the system is
stable. After all, if U were > 1, then for every second of walltime,
we'd have to run more than a second of program time, and would obviously miss
our deadline; but the next deadline would be further out still, so there is
never time to catch up: unbounded failure.

The burst feature observes that a workload doesn't always execute the full
quota; this enables one to describe u_i as a statistical distribution.

For example, have u_i = {x,e}_i, where x is the p(95) and x+e the p(100)
(the traditional WCET). This effectively allows u to be smaller,
increasing the efficiency (we can pack more tasks in the system), but at
the cost of missing deadlines when all the odds line up. However, it
does maintain stability, since every overrun must be paired with an
underrun as long as our x is above the average.

That is, suppose we have 2 tasks, both of which specify a p(95) value; then we
have a p(95)*p(95) = 90.25% chance both tasks are within their quota and
everything is good. At the same time we have a p(5)*p(5) = 0.25% chance
both tasks will exceed their quota at the same time (guaranteed deadline
failure). Somewhere in between there is a threshold where one task exceeds its
quota and the other doesn't underrun enough to compensate; this depends on the
specific CDFs.

At the same time, we can say that the worst case deadline miss will be
\Sum e_i; that is, there is a bounded tardiness (under the assumption
that x+e is indeed WCET).

The interference when using burst is assessed in terms of the probability of
missing a deadline and the average WCET. Test results showed that when there
are many cgroups or the CPU is under-utilized, the interference is limited.
More details are shown in:
https://lore.kernel.org/lkml/5371BD36-55AE-4F71-B9D7-B86DC32E3D2B@linux.alibaba.com/
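To make the {x,e} idea concrete, here is a minimal sketch in terms of the
cgroupfs knobs described under Management below; the period and the
p(95)/p(100) figures are illustrative assumptions, not measurements::

    # echo 100000 > cpu.cfs_period_us  /* period = 100ms (assumed)    */
    # echo 20000 > cpu.cfs_quota_us    /* x = p(95) usage = 20ms      */
    # echo 5000 > cpu.cfs_burst_us     /* e = p(100) - p(95) = 5ms    */

Sized this way, the group is provisioned for its p(95) usage, and accumulated
slack from earlier underruns covers the p(100) tail, consistent with the
bounded tardiness argument above.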
Management
----------
Quota, period and burst are managed within the cpu subsystem via cgroupfs.

.. note::
   The cgroupfs files described in this section are only applicable
   to cgroup v1. For cgroup v2, see
   :ref:`Documentation/admin-guide/cgroup-v2.rst <cgroup-v2-cpu>`.

- cpu.cfs_quota_us: run-time replenished within a period (in microseconds)
- cpu.cfs_period_us: the length of a period (in microseconds)
- cpu.stat: exports throttling statistics [explained further below]
- cpu.cfs_burst_us: the maximum accumulated run-time (in microseconds)

The default values are::

    cpu.cfs_period_us=100ms
    cpu.cfs_quota_us=-1
    cpu.cfs_burst_us=0

A value of -1 for cpu.cfs_quota_us indicates that the group does not have any
bandwidth restriction in place; such a group is described as an unconstrained
bandwidth group. This represents the traditional work-conserving behavior for
CFS.

Writing any (valid) positive value(s) no smaller than cpu.cfs_burst_us will
enact the specified bandwidth limit. The minimum value allowed for the quota
or period is 1ms. There is also an upper bound on the period length of 1s.
Additional restrictions exist when bandwidth limits are used in a hierarchical
fashion; these are explained in more detail below.

Writing any negative value to cpu.cfs_quota_us will remove the bandwidth limit
and return the group to an unconstrained state once more.

A value of 0 for cpu.cfs_burst_us indicates that the group can not accumulate
any unused bandwidth. This leaves the traditional bandwidth control behavior
for CFS unchanged. Writing any (valid) positive value(s) no larger than
cpu.cfs_quota_us into cpu.cfs_burst_us will enact the cap on unused bandwidth
accumulation.

Any updates to a group's bandwidth specification will result in it becoming
unthrottled if it is in a constrained state.

System wide settings
--------------------
For efficiency run-time is transferred between the global pool and CPU local
"silos" in a batch fashion. This greatly reduces global accounting pressure
on large systems. The amount transferred each time such an update is required
is described as the "slice".

This is tunable via procfs::

    /proc/sys/kernel/sched_cfs_bandwidth_slice_us (default=5ms)

Larger slice values will reduce transfer overheads, while smaller values allow
for more fine-grained consumption.

Statistics
----------
A group's bandwidth statistics are exported via 5 fields in cpu.stat.

cpu.stat:

- nr_periods: Number of enforcement intervals that have elapsed.
- nr_throttled: Number of times the group has been throttled/limited.
- throttled_time: The total time duration (in nanoseconds) for which entities
  of the group have been throttled.
- nr_bursts: Number of periods in which a burst occurred.
- burst_time: Cumulative wall-time (in nanoseconds) that any CPUs have used
  above quota in respective periods.

This interface is read-only.

Hierarchical considerations
---------------------------
The interface enforces that an individual entity's bandwidth is always
attainable, that is: max(c_i) <= C. However, over-subscription in the
aggregate case is explicitly allowed to enable work-conserving semantics
within a hierarchy:

  e.g. \Sum (c_i) may exceed C

[ Where C is the parent's bandwidth, and c_i its children ]

There are two ways in which a group may become throttled:

    a. it fully consumes its own quota within a period
    b. a parent's quota is fully consumed within its period

In case b) above, even though the child may have runtime remaining it will not
be allowed to run until the parent's runtime is refreshed. An over-subscribed
configuration is sketched below.
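As a minimal sketch of such an over-subscribed hierarchy, assuming a cgroup v1
cpu controller mounted at /sys/fs/cgroup/cpu and hypothetical group names (all
values illustrative)::

    # cd /sys/fs/cgroup/cpu
    # mkdir -p parent/child1 parent/child2
    # echo 100000 > parent/cpu.cfs_quota_us         /* C = 100ms/100ms  */
    # echo 100000 > parent/child1/cpu.cfs_quota_us  /* c_1 = 100ms <= C */
    # echo 100000 > parent/child2/cpu.cfs_quota_us  /* c_2 = 100ms <= C */

With the default 100ms period everywhere, each child individually satisfies
max(c_i) <= C, yet \Sum c_i = 200ms exceeds C = 100ms: whichever child runs
first may consume the parent's entire quota, throttling both for the remainder
of the period (case b above).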
CFS Bandwidth Quota Caveats
---------------------------
Once a slice is assigned to a cpu it does not expire. However all but 1ms of
the slice may be returned to the global pool if all threads on that cpu become
unrunnable. This is configured at compile time by the min_cfs_rq_runtime
variable. This is a performance tweak that helps prevent added contention on
the global lock.

The fact that cpu-local slices do not expire results in some interesting corner
cases that should be understood.

For cgroup cpu-constrained applications that are cpu limited this is a
relatively moot point because they will naturally consume the entirety of their
quota as well as the entirety of each cpu-local slice in each period. As a
result it is expected that nr_periods will roughly equal nr_throttled, and that
cpuacct.usage will increase roughly by cfs_quota_us in each period.

For highly-threaded, non-cpu bound applications this non-expiration nuance
allows applications to briefly burst past their quota limits by the amount of
unused slice on each cpu that the task group is running on (typically at most
1ms per cpu or as defined by min_cfs_rq_runtime). This slight burst only
applies if quota had been assigned to a cpu and then not fully used or returned
in previous periods. This burst amount will not be transferred between cores.
As a result, this mechanism still strictly limits the task group to quota
average usage, albeit over a longer time window than a single period. This
also limits the burst ability to no more than 1ms per cpu. This provides a
better, more predictable user experience for highly threaded applications with
small quota limits on high core count machines. It also eliminates the
propensity to throttle these applications while they are simultaneously using
less than quota amounts of cpu. Another way to say this is that by allowing the
unused portion of a slice to remain valid across periods we have decreased the
possibility of wastefully expiring quota on cpu-local silos that don't need a
full slice's amount of cpu time.

The interaction between cpu-bound and non-cpu-bound, interactive applications
should also be considered, especially when single core usage hits 100%. If you
gave each of these applications half of a cpu-core and they both got scheduled
on the same CPU it is theoretically possible that the non-cpu bound application
will use up to 1ms additional quota in some periods, thereby preventing the
cpu-bound application from fully using its quota by that same amount. In these
instances it will be up to the CFS algorithm (see sched-design-CFS.rst) to
decide which application is chosen to run, as they will both be runnable and
have remaining quota. This runtime discrepancy will be made up in the following
periods when the interactive application idles.
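The nr_periods/nr_throttled expectation above can be checked by reading
cpu.stat. A sketch for a cpu-bound group that hits its limit in nearly every
period (the values shown are illustrative, not real output)::

    # cat cpu.stat
    nr_periods 602
    nr_throttled 600
    throttled_time 18231229037
    nr_bursts 0
    burst_time 0

Here nr_throttled closely tracks nr_periods, as expected for a genuinely
quota-bound workload.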
Examples
--------
1. Limit a group to 1 CPU worth of runtime::

    If period is 250ms and quota is also 250ms, the group will get
    1 CPU worth of runtime every 250ms.

    # echo 250000 > cpu.cfs_quota_us /* quota = 250ms */
    # echo 250000 > cpu.cfs_period_us /* period = 250ms */

2. Limit a group to 2 CPUs worth of runtime on a multi-CPU machine.

   With 500ms period and 1000ms quota, the group can get 2 CPUs worth of
   runtime every 500ms::

    # echo 1000000 > cpu.cfs_quota_us /* quota = 1000ms */
    # echo 500000 > cpu.cfs_period_us /* period = 500ms */

   The larger period here allows for increased burst capacity.

3. Limit a group to 20% of 1 CPU.

   With 50ms period, 10ms quota will be equivalent to 20% of 1 CPU::

    # echo 10000 > cpu.cfs_quota_us /* quota = 10ms */
    # echo 50000 > cpu.cfs_period_us /* period = 50ms */

   By using a small period here we are ensuring a consistent latency
   response at the expense of burst capacity.

4. Limit a group to 40% of 1 CPU, and allow it to accumulate up to an
   additional 20% of 1 CPU of unused bandwidth from earlier periods.

   With 50ms period, 20ms quota will be equivalent to 40% of 1 CPU.
   And 10ms burst will be equivalent to 20% of 1 CPU::

    # echo 20000 > cpu.cfs_quota_us /* quota = 20ms */
    # echo 50000 > cpu.cfs_period_us /* period = 50ms */
    # echo 10000 > cpu.cfs_burst_us /* burst = 10ms */

   A larger burst setting (no larger than quota) allows greater burst capacity.
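On cgroup v2 the same limits are expressed through cpu.max (quota and period
on a single line) and cpu.max.burst; a sketch of example 4 above, assuming the
group's directory is the current one (see
Documentation/admin-guide/cgroup-v2.rst for the authoritative interface
description)::

    # echo "20000 50000" > cpu.max  /* quota = 20ms, period = 50ms */
    # echo 10000 > cpu.max.burst    /* burst = 10ms */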