cgminer commit log — all commits authored by Con Kolivas, 14 years ago.

7d34a6b6e3  Statify tv_sort.
f0cc293239  Convert the opt queue into a minimum number of work items to keep queued, instead of an extra number, to decrease the risk of idle devices without increasing the risk of higher rejects.
3aee066b88  Add options to explicitly enable CPU mining or disable GPU mining.
ee5b476402  Don't show the value of intensity since it's dynamic by default.
efa1731822  cgminer no longer supports a default url, user and pass, so remove them.
fc46d57d62  Return -1 if no input is detected from the menu to prevent it being interpreted as a 0.
8d39311613  Reinstate a minimum of 1 extra item in the queue to make it extremely unlikely to ever have 0 staged work items and any idle time.
8a7b9acd00  Switching between redrawing windows does not fix the crash with old libncurses, so redraw both windows, but only when the window size hasn't changed.
eb0fa6e5df  Copy the cgminer path, not cat it.
57c93d7e20  Prevent a segfault on exit when accessory threads don't exist.
e81a362b5f  Bump the threshold for lag up to maximum queued but no staged work.
5b48881175  Only consider a pool lagging if more than one item is queued.
3d5f555407  Allow a custom kernel path to be entered on the command line.
7dc3db2340  Implement the SSE2 32-bit assembly algorithm as well.
a4ec961ecc  We can queue all the necessary work without hitting frequent stales now, with the time and string stale protection active all the time.
81aedc972a  Add a message about needing at least one server.
f2f0ba8024  Revert "Revert "Since we roll work all the time now, we end up staging a lot of work without queueing, so don't queue if we've already got staged work.""
cea1cf6cc0  Revert "Since we roll work all the time now, we end up staging a lot of work without queueing, so don't queue if we've already got staged work."
5a2cf5a6b1  Get start times just before mining begins so the average doesn't rise very slowly.
b643b56a95  Allow LP to reset the block detect and block detect LP flags to know which really came first.
73c98e1e79  Check if there is more than one work item queued before complaining about a slow pool.
dbf0a1366d  Use the new hashes directly for counts instead of the fragile counters currently in use.
0899ee86ae  Only consider a pool slow to respond if we can't even roll work.
6197ff2009  Remove silly debugging output.
93f4163aca  Create a hash list of all the blocks created and search it to detect when a new block has definitely appeared, using that information to detect stale work and discard it.
b81077f36a  Since we roll work all the time now, we end up staging a lot of work without queueing, so don't queue if we've already got staged work.
bf3033e0f1  Make restarting of GPUs optional for systems that hang on any attempt to restart them.
666fcc3f55  Move staged threads to hashes so we can sort them by time.
d9accc4846  Put a lower limit on the nonce increment in CPU mining.
f6591379fb  Minimise how much more work can be given to CPU mining threads each interval.