Today we introduce a plugin: throttle-concurrent-builds-plugin
https://github.com/jenkinsci/throttle-concurrent-builds-plugin
This plugin allows for throttling the number of concurrent builds of a project running per node or globally.

"Throttle" means to restrict flow, i.e. to limit how many builds of a job may run at the same time. Below we walk through how this plugin behaves in free style projects, matrix projects, and so on, and how those cases differ.

Using throttle-concurrent-builds-plugin

The throttle-concurrent-builds-plugin limits the concurrency of a job: it can cap the total number of concurrent builds of a job, and it can cap the number of concurrent builds of that job on each node.

Configure Maximum Total Concurrent Builds as 0 (unlimited).

Configure Maximum Concurrent Builds Per Node as 1.

There is a job named test_freestyle, configured to allow concurrent builds.
There are 4 nodes: master with 2 free executors, and node0 ~ node2 with 2 free executors each.

Trigger test_freestyle several times. You will see that test_freestyle runs on all 4 nodes above, but only one build per node: of the 2 free executors on each node, only 1 gets occupied.
Without the throttle-concurrent-builds-plugin configuration, enough triggers would occupy both free executors on every node.



So far everything works perfectly. But as soon as the job is a pipeline job, this configuration has no effect.


Let's analyze why by looking at the plugin's code.
The ThrottleQueueTaskDispatcher class extends QueueTaskDispatcher and provides the methods
public CauseOfBlockage canTake(Node node, Task task)
public CauseOfBlockage canTake(Node node, BuildableItem item)
These return null when the build may execute on the node, and a non-null value when it may not.
Another plugin, slave-prerequisites-plugin, also implements QueueTaskDispatcher: it runs a user-defined
script (bat/shell); if the script succeeds the build proceeds, otherwise the build keeps waiting.
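The null/non-null contract of canTake can be sketched as a simplified decision function (my own model, not the plugin's actual code; 0 is treated as "unlimited", matching the plugin's convention):

```java
public class DispatchDecision {
    /**
     * Simplified model of the canTake() contract:
     * returns null when the build may run on the node, otherwise
     * a human-readable cause of blockage.
     */
    static String canTake(int runningOnNode, int maxPerNode) {
        if (maxPerNode > 0 && runningOnNode >= maxPerNode) {
            return "Already running " + runningOnNode
                    + " build(s) on node (max " + maxPerNode + ")";
        }
        return null; // null means: go ahead and build here
    }

    public static void main(String[] args) {
        System.out.println(canTake(0, 1)); // null -> allowed
        System.out.println(canTake(1, 1)); // blocked with a reason
        System.out.println(canTake(5, 0)); // 0 means unlimited -> null
    }
}
```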


Let's trace the execution flow by adding some log output to ThrottleQueueTaskDispatcher, starting inside private CauseOfBlockage canTakeImpl(Node node, Task task).
Here we only look at the getThrottleOption().equals("project") case, i.e. the job is configured with "Throttle this project alone" (the other option is "Throttle this project as part of one or more categories").
信息: node: hudson.slaves.DumbSlave[node1], task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node0], task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node2], task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node2], task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 0
信息: node: hudson.model.Hudson@6b7450b, task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node1], task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node0], task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 0
信息: node: hudson.model.Hudson@6b7450b, task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 0
The output above appears: 8 lines in total. In 2 of them the node variable is hudson.model.Hudson@6b7450b, i.e. the master node.
The other 6 are hudson.slaves.DumbSlave instances: node0, node1 and node2. The line counts match the number of free executors on each node.
max = 1 reflects the plugin's Maximum Concurrent Builds Per Node setting of 1.
run = 0 means no build of this job is running on that node.
This is the first trigger, so all nodes are idle.

Second trigger
信息: node: hudson.slaves.DumbSlave[node0], task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node2], task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node2], task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 0
信息: node: hudson.model.Hudson@6b7450b, task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node1], task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 1
信息: node: hudson.slaves.DumbSlave[node0], task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 0
信息: node: hudson.model.Hudson@6b7450b, task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 0
On the second trigger, node1 already has a build running. The output correspondingly has one line fewer: only 7.


Third trigger
信息: node: hudson.slaves.DumbSlave[node0], task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node2], task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node2], task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node0], task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node1], task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 1
信息: node: hudson.model.Hudson@6b7450b, task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 1
On the third trigger, 2 nodes (node1 and, this time, master) already have builds running. Again one line fewer: only 6.



Fourth trigger
信息: node: hudson.slaves.DumbSlave[node2], task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node2], task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node1], task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 1
信息: node: hudson.slaves.DumbSlave[node0], task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 1
信息: node: hudson.model.Hudson@6b7450b, task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 1
On the fourth trigger, 3 nodes (node1, node0 and master) already have builds running. Again one line fewer: only 5.

Fifth trigger
信息: node: hudson.slaves.DumbSlave[node2], task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 1
信息: node: hudson.slaves.DumbSlave[node1], task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 1
信息: node: hudson.slaves.DumbSlave[node0], task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 1
信息: node: hudson.model.Hudson@6b7450b, task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 1
On the fifth trigger, all 4 nodes (node2, node1, node0, master) already have builds running. Again one line fewer: only 4.

The fifth trigger keeps producing just these 4 lines until a build on some node finishes. In other words, the fifth build waits in the queue because its execution condition cannot yet be satisfied.


Current queue and per-node executor state: each node runs exactly one build, and the 5th build is waiting.
Build Queue (1)
test_freestyle cancel this build

Build Executor Status
master
1 test_freestyle #24 terminate this build
2 Idle

node0
1 Idle
2 test_freestyle #25 terminate this build

node1
1 Idle
2 test_freestyle #23 terminate this build

node2
1 test_freestyle #26 terminate this build
2 Idle


Order in which the builds landed on nodes: node1 -> master -> node0 -> node2


When the build on node1 finishes, the output looks like this:
信息: node: hudson.slaves.DumbSlave[node2], task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 1
信息: node: hudson.slaves.DumbSlave[node1], task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node1], task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node0], task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 1
信息: node: hudson.model.Hudson@6b7450b, task: hudson.model.FreeStyleProject@53d9da80[test_freestyle], max: 1, run: 1
node1 now has 2 free executors, so the condition is satisfied and the queued build runs on node1.

For a pipeline job, the dispatcher also prints 8 lines, matching the total number of free executors. But a pipeline job never reaches the tjp.getThrottleOption().equals("project") check,
because for a pipeline job, tjp is simply null.
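Why tjp ends up null can be illustrated with a stripped-down model (all types here are stand-ins I made up, not Jenkins classes): the throttle property lives on the job, and the lookup only succeeds when the queue task is the job itself. A pipeline's PlaceholderTask is a wrapper around a run, not a job, so the lookup falls through to null.

```java
public class PropertyLookup {
    interface Task {}

    /** Stand-in for a job carrying a throttle configuration (e.g. a FreeStyleProject). */
    static class ThrottledJob implements Task {
        final String throttleOption = "project";
    }

    /** Stand-in for ExecutorStepExecution.PlaceholderTask: wraps a run, is not a job. */
    static class PlaceholderTask implements Task {
        final String runId;
        PlaceholderTask(String runId) { this.runId = runId; }
    }

    /** Mirrors the idea of the property lookup: only a real job has the property. */
    static String getThrottleOption(Task task) {
        if (task instanceof ThrottledJob) {
            return ((ThrottledJob) task).throttleOption;
        }
        return null; // a pipeline PlaceholderTask falls through here -> tjp == null
    }

    public static void main(String[] args) {
        System.out.println(getThrottleOption(new ThrottledJob()));                      // project
        System.out.println(getThrottleOption(new PlaceholderTask("test_pipeline#51"))); // null
    }
}
```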

node = {DumbSlave@21387} "hudson.slaves.DumbSlave[node0]"
task = {ExecutorStepExecution$PlaceholderTask@24825} "ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#51,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/51:test_pipeline #51],cookie=null,auth=null}"


node = {Hudson@20742}
task = {ExecutorStepExecution$PlaceholderTask@24825} "ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#51,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/51:test_pipeline #51],cookie=null,auth=null}"


node = {DumbSlave@21750} "hudson.slaves.DumbSlave[node1]"
task = {ExecutorStepExecution$PlaceholderTask@24825} "ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#51,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/51:test_pipeline #51],cookie=null,auth=null}"



node = {DumbSlave@21209} "hudson.slaves.DumbSlave[node2]"
task = {ExecutorStepExecution$PlaceholderTask@24825} "ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#51,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/51:test_pipeline #51],cookie=null,auth=null}"


node = {Hudson@20742}
task = {ExecutorStepExecution$PlaceholderTask@24825} "ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#51,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/51:test_pipeline #51],cookie=null,auth=null}"


node = {DumbSlave@21750} "hudson.slaves.DumbSlave[node1]"
task = {ExecutorStepExecution$PlaceholderTask@24825} "ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#51,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/51:test_pipeline #51],cookie=null,auth=null}"

node = {DumbSlave@21209} "hudson.slaves.DumbSlave[node2]"
task = {ExecutorStepExecution$PlaceholderTask@24825} "ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#51,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/51:test_pipeline #51],cookie=null,auth=null}"

node = {DumbSlave@21387} "hudson.slaves.DumbSlave[node0]"
task = {ExecutorStepExecution$PlaceholderTask@24825} "ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#51,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/51:test_pipeline #51],cookie=null,auth=null}"

I added some checks for whether the job is a pipeline job and special-cased it; the change doesn't feel particularly clean.
Below is the output after this modification, which temporarily supports pipeline jobs; this is the output with the 4 nodes fully occupied and build #96 started.
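The actual patch is not shown here. One hypothetical approach to special-casing pipeline jobs is to recover the owning job's name from the PlaceholderTask's runId string (e.g. "test_pipeline#98") and then look up that job's throttle settings. The helper below only illustrates that idea; it is not the plugin's code.

```java
public class PlaceholderResolver {
    /**
     * Hypothetical helper: recover the job name from a PlaceholderTask runId
     * such as "test_pipeline#98", so the job's throttle settings can be looked up.
     */
    static String jobNameFromRunId(String runId) {
        int hash = runId.lastIndexOf('#');
        return hash < 0 ? runId : runId.substring(0, hash);
    }

    public static void main(String[] args) {
        System.out.println(jobNameFromRunId("test_pipeline#98")); // test_pipeline
        System.out.println(jobNameFromRunId("folder/job#5"));     // folder/job
    }
}
```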


First run
信息: node: hudson.model.Hudson@1658a444, task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#98,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/98:test_pipeline #98],cookie=null,auth=null}, max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node0], task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#98,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/98:test_pipeline #98],cookie=null,auth=null}, max: 1, run: 0
信息: node: hudson.model.Hudson@1658a444, task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#98,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/98:test_pipeline #98],cookie=null,auth=null}, max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node1], task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#98,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/98:test_pipeline #98],cookie=null,auth=null}, max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node2], task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#98,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/98:test_pipeline #98],cookie=null,auth=null}, max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node0], task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#98,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/98:test_pipeline #98],cookie=null,auth=null}, max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node2], task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#98,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/98:test_pipeline #98],cookie=null,auth=null}, max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node1], task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#98,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/98:test_pipeline #98],cookie=null,auth=null}, max: 1, run: 0
Before this run all 8 executors are idle (run is 0 everywhere).
After it, node1 has one build running.


Second run
信息: node: hudson.model.Hudson@1658a444, task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#99,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/99:test_pipeline #99],cookie=null,auth=null}, max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node0], task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#99,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/99:test_pipeline #99],cookie=null,auth=null}, max: 1, run: 0
信息: node: hudson.model.Hudson@1658a444, task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#99,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/99:test_pipeline #99],cookie=null,auth=null}, max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node2], task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#99,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/99:test_pipeline #99],cookie=null,auth=null}, max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node0], task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#99,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/99:test_pipeline #99],cookie=null,auth=null}, max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node2], task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#99,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/99:test_pipeline #99],cookie=null,auth=null}, max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node1], task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#99,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/99:test_pipeline #99],cookie=null,auth=null}, max: 1, run: 1
Before this run node1 has one build running.
After it, node2 also has one.


Third run
信息: node: hudson.model.Hudson@1658a444, task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#100,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/100:test_pipeline #100],cookie=null,auth=null}, max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node0], task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#100,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/100:test_pipeline #100],cookie=null,auth=null}, max: 1, run: 0
信息: node: hudson.model.Hudson@1658a444, task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#100,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/100:test_pipeline #100],cookie=null,auth=null}, max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node0], task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#100,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/100:test_pipeline #100],cookie=null,auth=null}, max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node2], task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#100,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/100:test_pipeline #100],cookie=null,auth=null}, max: 1, run: 1
信息: node: hudson.slaves.DumbSlave[node1], task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#100,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/100:test_pipeline #100],cookie=null,auth=null}, max: 1, run: 1
Before this run node1 and node2 each have a build running.
After it, node0 also has one.


Fourth run
信息: node: hudson.model.Hudson@1658a444, task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#101,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/101:test_pipeline #101],cookie=null,auth=null}, max: 1, run: 0
信息: node: hudson.model.Hudson@1658a444, task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#101,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/101:test_pipeline #101],cookie=null,auth=null}, max: 1, run: 0
信息: node: hudson.slaves.DumbSlave[node0], task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#101,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/101:test_pipeline #101],cookie=null,auth=null}, max: 1, run: 1
信息: node: hudson.slaves.DumbSlave[node2], task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#101,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/101:test_pipeline #101],cookie=null,auth=null}, max: 1, run: 1
信息: node: hudson.slaves.DumbSlave[node1], task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#101,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/101:test_pipeline #101],cookie=null,auth=null}, max: 1, run: 1
Before this run node0, node1 and node2 each have a build running.
After it, master also has one.


Fifth run
信息: node: hudson.model.Hudson@1658a444, task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#102,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/102:test_pipeline #102],cookie=null,auth=null}, max: 1, run: 1
信息: node: hudson.slaves.DumbSlave[node0], task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#102,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/102:test_pipeline #102],cookie=null,auth=null}, max: 1, run: 1
信息: node: hudson.slaves.DumbSlave[node2], task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#102,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/102:test_pipeline #102],cookie=null,auth=null}, max: 1, run: 1
信息: node: hudson.slaves.DumbSlave[node1], task: ExecutorStepExecution.PlaceholderTask{runId=test_pipeline#102,label=null,context=CpsStepContext[3:node]:Owner[test_pipeline/102:test_pipeline #102],cookie=null,auth=null}, max: 1, run: 1
The fifth build queues up and blocks, because the configured limit is one build per node and all 4 nodes are already running one each.





Usage in free style projects

To run builds of the project concurrently, the job's concurrent-builds option must be checked.

1. Limiting the number of concurrent builds
Here we configure the throttle-concurrent-builds-plugin on a free style project named test_free.
We set Maximum Total Concurrent Builds to 2, meaning the total concurrency cannot exceed 2.
We set Maximum Concurrent Builds Per Node to 1, meaning the concurrency per node cannot exceed 1.
A value of 0 means no limit.
Running it: we started 3 builds in a row. Build #3 ran on the master node and build #4 on the s1 node; at that point test_free had reached its total concurrency of 2, so build #5, started next, had to wait. You can also see that each node runs only one build: even though a node still has other free executors, a second build will not start there.
Some common configurations:
Most commonly, the total limit is 0 and the per-node limit is 1, 2, or some other value.
Another common setup limits the total to some value, e.g. 10, and the per-node count to e.g. 1. Limiting the per-node count spreads builds out, so essentially every node gets to run some.
A third usage limits the total, e.g. to 10, and leaves the per-node count unlimited. Without a per-node limit all 10 builds may end up on a single node (provided it has 10 free executors), so builds can concentrate on one node.

2. Limiting concurrency for identical parameter values
The "Prevent multiple jobs with identical parameters from running concurrently" option restricts builds that share the same values for the selected parameters.
For example, select the parameter named param2.
Start build #10 with param1 = a, param2 = b, then start build #11 with param1 = aa, param2 = b.
Build #11 waits, because its param2 value matches that of the still-running build #10.
The parameter restriction applies on top of the concurrency limits from usage 1; both take effect at the same time.
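The identical-parameters rule can be sketched like this (a simplified model; the function and variable names are mine): a queued build must wait when some running build has the same value for every guarded parameter.

```java
import java.util.List;
import java.util.Map;
import java.util.Objects;

public class ParamThrottle {
    /**
     * Returns true when the queued build must wait because some running build
     * has the same value for every guarded parameter.
     */
    static boolean mustWait(Map<String, String> queued,
                            List<Map<String, String>> running,
                            List<String> guardedParams) {
        for (Map<String, String> r : running) {
            boolean allEqual = true;
            for (String p : guardedParams) {
                if (!Objects.equals(queued.get(p), r.get(p))) {
                    allEqual = false;
                    break;
                }
            }
            if (allEqual) return true; // identical guarded parameters -> wait
        }
        return false;
    }

    public static void main(String[] args) {
        Map<String, String> b10 = Map.of("param1", "a", "param2", "b");  // running build #10
        Map<String, String> b11 = Map.of("param1", "aa", "param2", "b"); // queued build #11
        // param2 matches a running build, so #11 waits
        System.out.println(mustWait(b11, List.of(b10), List.of("param2"))); // true
        // guarding param1 instead: values differ, so it may run
        System.out.println(mustWait(b11, List.of(b10), List.of("param1"))); // false
    }
}
```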

Usage in matrix (multi-configuration) projects

Below is a simple multi-configuration project with one axis named x, which can take the two values 1 and 2.
The axis is configured under "Configuration Matrix".

Usage 1.
As before, we limit the total concurrency to 2 and the per-node concurrency to 1.

We triggered several builds in a row, #47 ~ #54, and found that the limits had no effect.
That is because, for a multi-configuration project to be throttled, Throttle Matrix master builds and/or Throttle Matrix configuration builds must be checked.
The default is Throttle Matrix master builds = true, Throttle Matrix configuration builds = false.
With 2 flags there are 4 combinations in total, though not all see much use. The common ones are:
the default, Throttle Matrix master builds = true, Throttle Matrix configuration builds = false;
Throttle Matrix master builds = true, Throttle Matrix configuration builds = true;
and Throttle Matrix master builds = false, Throttle Matrix configuration builds = true.

With both flags false there is no throttling at all, which is what usage 1 above showed.

Usage 2.
Default: Throttle Matrix master builds = true, Throttle Matrix configuration builds = false.

We started 4 builds in a row, #1 ~ #4.

Builds #1 and #2 both ran on the master node. The total concurrency then reached 2, and further builds did not start. Note that in this case the per-node limit of 1 had no effect.

Usage 3.
Throttle Matrix master builds = true, Throttle Matrix configuration builds = true.

We started 3 builds: #3, #4, #5. Build #5 waited, because the total concurrency had exceeded 2.
Looking at where the child builds ran: #3's children test_matrx>>2 and test_matrx>>1 both ran on the master node,
while #4's child test_matrx>>2 ran on node s2 and its test_matrx>>1 on s1. In other words each node can host only one kind of child build: x=1 and x=2 cannot run on the same node at the same time. That is the per-node limit of 1 at work.

Now let's raise the total concurrency to 4. What happens?
After saving, build #5, which had been waiting, starts executing.
#5's placement is also notable: its child test_matrx>>2 ran on s1 and its test_matrx>>1 on s2. Again, each node hosts only one kind of child build at a time, thanks to the per-node limit of 1.
We then started several more builds: #6, #7, #8, #9.

#6, #7 and #8 all started, and #9's parent build started, but #9's 2 child builds waited: every node already had an x=1 or x=2 child running, so there was no suitable node left for them.
The wait reason is visible on #9's 2 child builds.
Starting one more build, #10: it also ends up waiting, this time because the total concurrency has exceeded 4.
The wait reason is visible on #10.

At this point, if there were a node s3, #9's 2 child builds would run on it. We added a node s3, initially offline, and the wait reason gained an extra clause about the offline node.
Bringing s3 online confirmed the guess: the waiting children ran there.

Usage 4.
Throttle Matrix master builds = false, Throttle Matrix configuration builds = true.
This time we started 3 builds. #8 and #9 executed, and #10's parent build executed,
but its 2 child builds queued, because the total concurrency exceeded 2.
With Throttle Matrix master builds unchecked, compared with usage 3 above, you can start many
test_matrix builds and every parent build starts immediately, unthrottled; but the child builds
are still throttled, and therefore so is the overall build.
You can also see that the child builds spread out: #8's test_matrix >> 1 and #9's test_matrix >> 1 never run on the same node.

Summary:
Throttle Matrix master builds works together with Maximum Total Concurrent Builds,
Throttle Matrix configuration builds works together with Maximum Concurrent Builds Per Node.
Maximum Concurrent Builds Per Node generally spreads builds more evenly across nodes.
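The summary above boils down to two independent checks, sketched below (my simplified model; 0 means "no limit", as in the plugin's configuration): a build may start on a node only if both the global count is under Maximum Total Concurrent Builds and the node's count is under Maximum Concurrent Builds Per Node.

```java
public class CombinedThrottle {
    /** 0 means "no limit", matching the plugin's configuration convention. */
    static boolean underLimit(int running, int max) {
        return max == 0 || running < max;
    }

    /** A build may start on a node only if both limits are satisfied. */
    static boolean mayStart(int totalRunning, int maxTotal, int runningOnNode, int maxPerNode) {
        return underLimit(totalRunning, maxTotal) && underLimit(runningOnNode, maxPerNode);
    }

    public static void main(String[] args) {
        // maxTotal=2, maxPerNode=1: a third build is blocked globally
        System.out.println(mayStart(2, 2, 0, 1)); // false
        // node already busy: blocked per-node even though the total allows it
        System.out.println(mayStart(1, 2, 1, 1)); // false
        // maxTotal=0 (unlimited), node free: allowed
        System.out.println(mayStart(7, 0, 0, 1)); // true
    }
}
```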

Source code analysis

test_matrix:
Setting a breakpoint in the canRun() method of ThrottleQueueTaskDispatcher.java shows that the parent build triggers exactly one call, while each child build triggers two. Why? The answer lies in the maintain() method of Queue.java. The parent build is special: it has no buildables and occupies no executor while running. A child build's first call happens at the same place as the parent's; its second call happens while iterating the buildables in for (BuildableItem p : new ArrayList<>(buildables)).

1.
public CauseOfBlockage canRun(Queue.Item item) hudson.matrix.MatrixProject@6f806be4[test_matrix] (the parent build)
canRun:90, ThrottleQueueTaskDispatcher
getCauseOfBlockageForItem:1210, Queue
maintain:1572, Queue

2.
public CauseOfBlockage canRun(Queue.Item item) hudson.matrix.MatrixConfiguration@4e2f16c4[test_matrix/x=1] (one of the child builds, the x=1 case)
canRun:90, ThrottleQueueTaskDispatcher
getCauseOfBlockageForItem:1210, Queue
maintain:1572, Queue
3.
public CauseOfBlockage canRun(Queue.Item item) hudson.matrix.MatrixConfiguration@4e2f16c4[test_matrix/x=1] (one of the child builds, the x=1 case)
canRun:90, ThrottleQueueTaskDispatcher
getCauseOfBlockageForItem:1210, Queue
maintain:1608, Queue for (BuildableItem p : new ArrayList<>(buildables))
4.
public CauseOfBlockage canRun(Queue.Item item) hudson.matrix.MatrixConfiguration@6e1a5a11[test_matrix/x=2] (one of the child builds, the x=2 case)
canRun:90, ThrottleQueueTaskDispatcher
getCauseOfBlockageForItem:1210, Queue
maintain:1572, Queue
5.
public CauseOfBlockage canRun(Queue.Item item) hudson.matrix.MatrixConfiguration@6e1a5a11[test_matrix/x=2] (one of the child builds, the x=2 case)
canRun:90, ThrottleQueueTaskDispatcher
getCauseOfBlockageForItem:1210, Queue
maintain:1608, Queue for (BuildableItem p : new ArrayList<>(buildables))

1.
canRun(Queue.Item item) hudson.model.Queue$WaitingItem:hudson.matrix.MatrixProject@4056b103[test_matrix]:46
canRun(task, tjp, pipelineCategories)
canRunImpl(task, tjp, pipelineCategories)
if (!shouldBeThrottled(task, tjp) && pipelineCategories.isEmpty()) {
return null; -->
}


2.
canRun(Queue.Item item) hudson.model.Queue$WaitingItem:hudson.matrix.MatrixConfiguration@48e998ae[test_matrix/x=2]:47
canRun(item.task, tjp, pipelineCategories)
canRunImpl(task, tjp, pipelineCategories);
buildsOfProjectOnAllNodes(task); task=hudson.matrix.MatrixConfiguration@48e998ae[test_matrix/x=2]
buildsOfProjectOnAllNodesImpl(task);
buildsOfProjectOnNode(jenkins, task);
buildsOfProjectOnNodeImpl(node, task); node=Hudson task=hudson.matrix.MatrixConfiguration@48e998ae[test_matrix/x=2]
buildsOnExecutor(task, e)
3.
canRun(Queue.Item item) hudson.model.Queue$WaitingItem:hudson.matrix.MatrixConfiguration@671937f7[test_matrix/x=1]:48
canRun(item.task, tjp, pipelineCategories);
canRunImpl(task, tjp, pipelineCategories)
buildsOfProjectOnAllNodes(task); task=hudson.matrix.MatrixConfiguration@671937f7[test_matrix/x=1]
buildsOfProjectOnAllNodesImpl(task);
buildsOfProjectOnNode(jenkins, task);
buildsOfProjectOnNodeImpl(node, task); node=Hudson task=hudson.matrix.MatrixConfiguration@671937f7[test_matrix/x=1]
buildsOnExecutor(task, e)


4. same as 2
canRun(Queue.Item item) hudson.model.Queue$BuildableItem:hudson.matrix.MatrixConfiguration@48e998ae[test_matrix/x=2]:47
canRun(item.task, tjp, pipelineCategories)
canRunImpl(task, tjp, pipelineCategories);
buildsOfProjectOnAllNodes(task); task=hudson.matrix.MatrixConfiguration@48e998ae[test_matrix/x=2]
buildsOfProjectOnAllNodesImpl(task);
buildsOfProjectOnNode(jenkins, task);
buildsOfProjectOnNodeImpl(node, task); node=Hudson task=hudson.matrix.MatrixConfiguration@48e998ae[test_matrix/x=2]
buildsOnExecutor(task, e)
5. same as 3
canRun(Queue.Item item) hudson.model.Queue$BuildableItem:hudson.matrix.MatrixConfiguration@671937f7[test_matrix/x=1]:48
canRun(item.task, tjp, pipelineCategories);
canRunImpl(task, tjp, pipelineCategories)
buildsOfProjectOnAllNodes(task); task=hudson.matrix.MatrixConfiguration@671937f7[test_matrix/x=1]
buildsOfProjectOnAllNodesImpl(task);
buildsOfProjectOnNode(jenkins, task);
buildsOfProjectOnNodeImpl(node, task); node=Hudson task=hudson.matrix.MatrixConfiguration@671937f7[test_matrix/x=1]
buildsOnExecutor(task, e)








































Run analysis of the free style project test_free:
canRun()
canRunImpl(task, tjp, pipelineCategories)
buildsOfProjectOnAllNodes(task);


canTake()
canTakeImpl(node, task)
canRunImpl(task, tjp, pipelineCategories);
buildsOfProjectOnAllNodes(task);
buildsOfProjectOnNode(node, task);




private int buildsOnExecutor(Task task, Executor exec): hudson.model.FreeStyleProject@78bc75ef[test_free], exec: Thread[Executor #1 for master,5,main], currentExecutable: null
private int buildsOnExecutor(Task task, Executor exec): hudson.model.FreeStyleProject@78bc75ef[test_free], exec: Thread[Executor #2 for master,5,main], currentExecutable: null
private int buildsOnExecutor(Task task, Executor exec): hudson.model.FreeStyleProject@78bc75ef[test_free], exec: Thread[Executor #0 for master,5,main], currentExecutable: null

private int buildsOnExecutor(Task task, Executor exec): hudson.model.FreeStyleProject@78bc75ef[test_free], exec: Thread[Executor #1 for master,5,main], currentExecutable: null
private int buildsOnExecutor(Task task, Executor exec): hudson.model.FreeStyleProject@78bc75ef[test_free], exec: Thread[Executor #2 for master,5,main], currentExecutable: null
private int buildsOnExecutor(Task task, Executor exec): hudson.model.FreeStyleProject@78bc75ef[test_free], exec: Thread[Executor #0 for master,5,main], currentExecutable: null

public CauseOfBlockage canTake(Node node, Task task) hudson.model.FreeStyleProject@78bc75ef[test_free]
private int buildsOnExecutor(Task task, Executor exec): hudson.model.FreeStyleProject@78bc75ef[test_free], exec: Thread[Executor #1 for master,5,main], currentExecutable: null
private int buildsOnExecutor(Task task, Executor exec): hudson.model.FreeStyleProject@78bc75ef[test_free], exec: Thread[Executor #2 for master,5,main], currentExecutable: null
private int buildsOnExecutor(Task task, Executor exec): hudson.model.FreeStyleProject@78bc75ef[test_free], exec: Thread[Executor #0 for master,5,main], currentExecutable: null
private int buildsOnExecutor(Task task, Executor exec): hudson.model.FreeStyleProject@78bc75ef[test_free], exec: Thread[Executor #1 for master,5,main], currentExecutable: null
private int buildsOnExecutor(Task task, Executor exec): hudson.model.FreeStyleProject@78bc75ef[test_free], exec: Thread[Executor #2 for master,5,main], currentExecutable: null
private int buildsOnExecutor(Task task, Executor exec): hudson.model.FreeStyleProject@78bc75ef[test_free], exec: Thread[Executor #0 for master,5,main], currentExecutable: null

public CauseOfBlockage canTake(Node node, Task task) hudson.model.FreeStyleProject@78bc75ef[test_free]
private int buildsOnExecutor(Task task, Executor exec): hudson.model.FreeStyleProject@78bc75ef[test_free], exec: Thread[Executor #1 for master,5,main], currentExecutable: null
private int buildsOnExecutor(Task task, Executor exec): hudson.model.FreeStyleProject@78bc75ef[test_free], exec: Thread[Executor #2 for master,5,main], currentExecutable: null
private int buildsOnExecutor(Task task, Executor exec): hudson.model.FreeStyleProject@78bc75ef[test_free], exec: Thread[Executor #0 for master,5,main], currentExecutable: null
private int buildsOnExecutor(Task task, Executor exec): hudson.model.FreeStyleProject@78bc75ef[test_free], exec: Thread[Executor #1 for master,5,main], currentExecutable: null
private int buildsOnExecutor(Task task, Executor exec): hudson.model.FreeStyleProject@78bc75ef[test_free], exec: Thread[Executor #2 for master,5,main], currentExecutable: null
private int buildsOnExecutor(Task task, Executor exec): hudson.model.FreeStyleProject@78bc75ef[test_free], exec: Thread[Executor #0 for master,5,main], currentExecutable: null

public CauseOfBlockage canTake(Node node, Task task) hudson.model.FreeStyleProject@78bc75ef[test_free]
private int buildsOnExecutor(Task task, Executor exec): hudson.model.FreeStyleProject@78bc75ef[test_free], exec: Thread[Executor #1 for master,5,main], currentExecutable: null
private int buildsOnExecutor(Task task, Executor exec): hudson.model.FreeStyleProject@78bc75ef[test_free], exec: Thread[Executor #2 for master,5,main], currentExecutable: null
private int buildsOnExecutor(Task task, Executor exec): hudson.model.FreeStyleProject@78bc75ef[test_free], exec: Thread[Executor #0 for master,5,main], currentExecutable: null
private int buildsOnExecutor(Task task, Executor exec): hudson.model.FreeStyleProject@78bc75ef[test_free], exec: Thread[Executor #1 for master,5,main], currentExecutable: null
private int buildsOnExecutor(Task task, Executor exec): hudson.model.FreeStyleProject@78bc75ef[test_free], exec: Thread[Executor #2 for master,5,main], currentExecutable: null
private int buildsOnExecutor(Task task, Executor exec): hudson.model.FreeStyleProject@78bc75ef[test_free], exec: Thread[Executor #0 for master,5,main], currentExecutable: null
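The counting that these buildsOnExecutor log lines show can be sketched as follows (a simplified model, not the plugin's code): walk a node's executors and count how many are currently busy with a build of the given job. An idle executor reports currentExecutable: null, as in the logs above.

```java
import java.util.Arrays;
import java.util.List;

public class RunCounter {
    /** An executor slot: currentTask is null when the executor is idle. */
    static class Executor {
        final String currentTask;
        Executor(String currentTask) { this.currentTask = currentTask; }
    }

    /**
     * Mirrors the idea behind buildsOnExecutor(): count how many of a node's
     * executors are currently busy with a build of the given job.
     */
    static int buildsOfProjectOnNode(List<Executor> executors, String job) {
        int run = 0;
        for (Executor e : executors) {
            if (job.equals(e.currentTask)) run++;
        }
        return run;
    }

    public static void main(String[] args) {
        // three executors on master, all idle (currentExecutable: null in the logs)
        List<Executor> idle = Arrays.asList(
                new Executor(null), new Executor(null), new Executor(null));
        System.out.println(buildsOfProjectOnNode(idle, "test_free")); // 0

        // one executor now runs test_free
        List<Executor> busy = Arrays.asList(new Executor("test_free"), new Executor(null));
        System.out.println(buildsOfProjectOnNode(busy, "test_free")); // 1
    }
}
```

The resulting count is the `run` value that canTakeImpl compares against `max`.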




Run analysis of the multi-configuration project
canRun()
canRunImpl(task, tjp, pipelineCategories)
buildsOfProjectOnAllNodes(task);


canTake()
canTakeImpl(node, task)
canRunImpl(task, tjp, pipelineCategories);
buildsOfProjectOnAllNodes(task);
buildsOfProjectOnNode(node, task);

public CauseOfBlockage canTake(Node node, Task task) hudson.matrix.MatrixConfiguration@48e998ae[test_matrix/x=2]
private int buildsOnExecutor(Task task, Executor exec): hudson.matrix.MatrixConfiguration@48e998ae[test_matrix/x=2], exec: Thread[Executor #-1 for master : executing test_matrix #30,5,main], currentExecutable: test_matrix #30
private int buildsOnExecutor(Task task, Executor exec): hudson.matrix.MatrixConfiguration@48e998ae[test_matrix/x=2], exec: Thread[Executor #2 for master,5,main], currentExecutable: null
private int buildsOnExecutor(Task task, Executor exec): hudson.matrix.MatrixConfiguration@48e998ae[test_matrix/x=2], exec: Thread[Executor #0 for master,5,main], currentExecutable: null
private int buildsOnExecutor(Task task, Executor exec): hudson.matrix.MatrixConfiguration@48e998ae[test_matrix/x=2], exec: Thread[Executor #1 for master,5,main], currentExecutable: null

(the same four buildsOnExecutor lines are logged again for every subsequent canTake(node, task) call on test_matrix/x=2)

public CauseOfBlockage canTake(Node node, Task task) hudson.matrix.MatrixConfiguration@671937f7[test_matrix/x=1]
private int buildsOnExecutor(Task task, Executor exec): hudson.matrix.MatrixConfiguration@671937f7[test_matrix/x=1], exec: Thread[Executor #-1 for master : executing test_matrix #30,5,main], currentExecutable: test_matrix #30
private int buildsOnExecutor(Task task, Executor exec): hudson.matrix.MatrixConfiguration@671937f7[test_matrix/x=1], exec: Thread[Executor #2 for master,5,main], currentExecutable: null
private int buildsOnExecutor(Task task, Executor exec): hudson.matrix.MatrixConfiguration@671937f7[test_matrix/x=1], exec: Thread[Executor #0 for master,5,main], currentExecutable: null
private int buildsOnExecutor(Task task, Executor exec): hudson.matrix.MatrixConfiguration@671937f7[test_matrix/x=1], exec: Thread[Executor #1 for master : executing test_matrix/x=2 #30,5,main], currentExecutable: test_matrix/x=2 #30

(again, the same four buildsOnExecutor lines repeat for every canTake call on test_matrix/x=1)

By this point Executor #1 is busy running test_matrix/x=2 #30, while the matrix parent build test_matrix #30 still occupies the flyweight Executor #-1. Only an executable whose parent matches the queried task is counted, so the parent build never counts toward a configuration's limit.
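The distinction the logs trace — the parent build on the flyweight Executor #-1 is a different Task than the configuration test_matrix/x=2, so it contributes nothing to the configuration's count — can be modeled in the same hypothetical style. Class and method names below are illustrative stand-ins, not the plugin's real API; buildsOnAllNodes plays the role of buildsOfProjectOnAllNodes by summing the per-executor counts across every node.

```java
import java.util.List;
import java.util.Map;

public class MatrixCountSketch {
    // Illustrative stand-ins for Jenkins' model types.
    record Task(String fullName) {}
    record Executable(Task parent) {}
    record Executor(Executable current) {} // current == null when idle

    static int buildsOnExecutor(Task task, Executor exec) {
        return exec.current() != null && exec.current().parent().equals(task) ? 1 : 0;
    }

    // Role of buildsOfProjectOnAllNodes(): sum per-executor counts over all nodes.
    static int buildsOnAllNodes(Task task, Map<String, List<Executor>> nodes) {
        return nodes.values().stream()
                .flatMap(List::stream)
                .mapToInt(e -> buildsOnExecutor(task, e))
                .sum();
    }

    public static void main(String[] args) {
        Task parent = new Task("test_matrix");       // matrix parent build
        Task config = new Task("test_matrix/x=2");   // one configuration
        Map<String, List<Executor>> nodes = Map.of(
                "master", List.of(
                        new Executor(new Executable(parent)), // flyweight Executor #-1
                        new Executor(new Executable(config)), // Executor #1, runs x=2 #30
                        new Executor(null),                   // idle
                        new Executor(null)));                 // idle
        // The parent build is a different Task, so only one build of the
        // configuration is counted across all nodes.
        System.out.println(buildsOnAllNodes(config, nodes));
    }
}
```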