0ac350547e | Kano | 13 years ago | --default-config - allow command line to define the default configuration file for loading and saving
cef9731fbc | Kano | 13 years ago | CURL support for individual proxy per pool and all proxy types
307d8da034 | Kano | 13 years ago | HW: error counter auto for all devices - ztex code not fixed
fd2034ce77 | Kano | 13 years ago | Merge branch 'main'
4023872b76 | Kano | 13 years ago | count device diff1 shares
568b0fed89 | Kano | 13 years ago | API allow full debug settings control
57c3b12f64 | Con Kolivas | 13 years ago | Sort the blocks database in reverse order, allowing us to remove the first block without iterating over them. Output the block number to debug.
f97bf2e2ac | Con Kolivas | 13 years ago | Keep the local block number in the blocks structs stored and sort them by number to guarantee we delete the oldest when ageing the block struct entries.
b768758818 | Con Kolivas | 13 years ago | Test for lagging once more in queue_request to enable work to leak to backup pools.
579c1299c6 | Con Kolivas | 13 years ago | There is no need to try to switch pools in select_pool since the current pool is actually not affected by the choice of pool to get work from.
4a210d4eff | Con Kolivas | 13 years ago | Only clear the pool lagging flag if we're staging work faster than we're using it.
d1683f75c9 | Con Kolivas | 13 years ago | needed flag is currently always false in queue_request. Remove it for now.
1b7db5bc9c | Con Kolivas | 13 years ago | thr is always NULL going into queue_request now.
0e0093e602 | Con Kolivas | 13 years ago | Select pool regardless of whether we're lagging or not, and don't queue another request in switch pool to avoid infinite recursion.
7992e5f3c8 | Con Kolivas | 13 years ago | Carry the needed bool over the work command queue.
37fa7d36d4 | Con Kolivas | 13 years ago | Move the decision to queue further work upstream before threads are spawned based on fine grained per-pool stats and increment the queued count immediately.
618b3e8b11 | Con Kolivas | 13 years ago | Track queued and staged per pool once again for future use.
4ca288e820 | Con Kolivas | 13 years ago | Limit queued_getworks to double the expected queued maximum rather than factoring in number of pools.
ad90269508 | Con Kolivas | 13 years ago | Minimise the number of getwork threads we generate.
0feb679b67 | Con Kolivas | 13 years ago | Only keep the last 6 blocks in the uthash database to keep memory usage constant. Storing more is unhelpful anyway.
b74b54d95b | Con Kolivas | 13 years ago | Check we haven't staged work while waiting for a curl entry before proceeding.
61df3013a8 | Con Kolivas | 13 years ago | Ignore the submit_fail flag when deciding whether to recruit more curls or not since we have upper bounds on how many curls can be recruited, this test is redundant and can lead to problems.
edd9b81622 | ckolivas | 13 years ago | Do not add time to dynamic opencl calculations over a getwork.
9de3a264fc | Con Kolivas | 13 years ago | Increase max curls to number of mining threads + queue * 2, accounting for up and downstream comms.
3ab5dba67e | Con Kolivas | 13 years ago | Queue enough requests to get started.
3ebe8e8c77 | Con Kolivas | 13 years ago | Revert "Scale maximum number of curls up according to work submission rate."
3ceb57b8f6 | Con Kolivas | 13 years ago | There is no point trying to clone_work in get_work() any more since we clone on every get_work_thread where possible.
787e40a7cc | Con Kolivas | 13 years ago | There is no point subtracting 1 from maxq in get_work_thread.
1dff48e759 | Con Kolivas | 13 years ago | Scale maximum number of curls up according to work submission rate.
56be75228e | Con Kolivas | 13 years ago | Roll back to 45f0ac7b482abe9d9d7c4644c286df6e70924145