afcfea15a7 (Con Kolivas, 13 years ago) Simplify all those total_secs usages by initialising it to 1 second.
5fadfdb219 (Con Kolivas, 13 years ago) Overlap queued decrementing with staged incrementing.
9f1d9ce3b7 (Con Kolivas, 13 years ago) Artificially set the pool lagging flag on pool switch in failover only mode as well.
a6b97327e1 (Con Kolivas, 13 years ago) Artificially set the pool lagging flag on work restart to avoid messages about slow pools after every longpoll.
44e81218fd (Con Kolivas, 13 years ago) Factor in opt_queue value into enough work queued or staged.
611f1cec7c (Con Kolivas, 13 years ago) Roll work whenever we can on getwork.
fd0be1bb51 (Con Kolivas, 13 years ago) Queue requests for getwork regardless and test whether we should send for a getwork from the getwork thread itself.
7d77c01619 (Con Kolivas, 13 years ago) Get rid of age_work().
d1508bd40e (Con Kolivas, 13 years ago) Merge pull request #296 from kanoi/api
95dff7363e (Kano, 13 years ago) API allow display/change failover-only setting
8e20456bc0 (Con Kolivas, 13 years ago) Check we are not lagging as well as there is enough work in getwork.
00691ababf (Con Kolivas, 13 years ago) Merge pull request #292 from kanoi/main
d66742a8c1 (Con Kolivas, 13 years ago) Minimise locking and unlocking when getting counts by reusing shared mutex lock functions.
c91a95459b (Con Kolivas, 13 years ago) Avoid getting more work if by the time the getwork thread is spawned we find ourselves with enough work.
f27bcb8ee5 (Con Kolivas, 13 years ago) Going back to e68ecf5eb275e1cc2dc22c7db35b0bd8d9c799de
c892ded6e0 (Con Kolivas, 13 years ago) Make sure there are true pending staged work items as well in failover only mode.
61003df49f (Con Kolivas, 13 years ago) In failover-only mode we need to queue enough work for the local pool and ignore the total queued count.
8aa61f6626 (Con Kolivas, 13 years ago) Make sure we have work from the current pool somewhere in the queue in case the queue is full of requests from a pool that has just died.
c0aaf56a8d (Con Kolivas, 13 years ago) Since all the counts use the same mutex, grab it only once.
4f9394be81 (Con Kolivas, 13 years ago) When popping work, grab cloned work first if possible since original work can be reused to make further clones.
8085ae6854 (Con Kolivas, 13 years ago) Further simplify the queue request mechanism.
f83863a996 (Con Kolivas, 13 years ago) Keep total queued count as a fake pending staged count to account for the period a queue is in flight before it is staged.
e47dc87355 (Con Kolivas, 13 years ago) Clone work at the time of requesting it if an existing work item can be rolled.
e68ecf5eb2 (Con Kolivas, 13 years ago) Queue one request for each staged request removed, keeping the staged request count optimal at all times.
52e5524d7f (Kano, 13 years ago) Escape " and \ when writing json config file
3dd1658e1f (ckolivas, 13 years ago) We may as well leave one curl still available per pool instead of reaping the last one.
c7bcad653b (ckolivas, 13 years ago) Need to recheck the pool->curls count on regaining the pool lock after the pthread conditional wait returns.
ad8c4b7755 (ckolivas, 13 years ago) Revert "Only add to the pool curlring and increment the counter under mutex lock."
145f04ccc7 (ckolivas, 13 years ago) Display reaped debug message outside mutex lock to avoid recursive locking.
8897e06575 (ckolivas, 13 years ago) Only add to the pool curlring and increment the counter under mutex lock.