Move FTC to factor four diff swing over 7 days
-
[quote name=“erk” post=“27765” timestamp=“1378845560”]
Gamecoin (GME) just implemented a 10% difficulty change every 12 blocks; they also have 2.5 min blocks. It seems to be working OK, feels a little slow to respond when a pool hops, but it gets there. They also had issues previously with a 51% attack and being locked at high diff. One thing they did was patch the client so no older versions were valid on the chain after the update.
[/quote]Thanks for that, erk. I have taken a look at GameCoin and will keep an eye on them. I do intend to change the protocol version as well.
-
[quote]EDIT: I think I have it, a diff change every 126 blocks (max 9%) but sampling from 504 blocks ago?[/quote]
I like the idea of protecting against short-term manipulation, but this also often has a repeater effect of past changes, since you resample them 4 times. The problem is that it doesn’t converge. Example: you want 4, but you get 5 whenever you are under 3.7 and 3 whenever you are above 4.3, so you get 4-3-5-3, then 5-4-3-5-3-5-4-3… and repeat. Yes, the current single sampling doesn’t converge either, so we get 3-5-3-5-3-5-3 :(
Resampling 4 times is not too bad, and with some variation it nearly converges, better than the current sampling; with more resampling the repeater can become a repeater-amplifier. A 12-block retarget over a 504-block resample makes past events too important, so it reacts to older trends, not the current one. Example: say we get 10-x with a 9% change, so we should stabilise at 5, but with a previous history like 7-8-8-8-8-8-8-8 the next values would be 7-6-5-4-3-2-1-2. So it goes way too low, and then swings way too high, all from past history.
An exponential moving average can help but needs to be analysed a bit more. Gamecoin is not on CoinChoose or CoinWarz at the moment, so their problem seems very different. 10% is way too much for a small interval (it is more than 600% over 504 blocks when compounded) and can be abused with a time warp, since we can’t completely eliminate it. The allowed time difference should be less than the difficulty adjustment when added to or subtracted from the sampling period. The current 2 h over 504 blocks is ~10%; 30 min over 126 would also be 10%, so borderline abusable. 30 min over 12 blocks is 100%, so someone can make 5-6 difficulty changes one way and one the other way to reset the time, then repeat, to abuse the diff. Zetacoin gets this kind of stuff. Longer sampling helps solve this.
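To make the non-convergence concrete, here is a toy model of the single-sampling oscillation (purely illustrative; the thresholds are made up): hoppers pile in below one difficulty and leave above another, so each retarget measures a hash rate that has already moved on.
[code]
#include <cstdio>

int main()
{
    double diff = 3.0; // start below the fair difficulty of 4
    for (int i = 0; i < 8; i++) {
        double implied;                      // diff implied by measured block times
        if (diff < 3.7)      implied = 5.0;  // cheap: hoppers flood in, blocks too fast
        else if (diff > 4.3) implied = 3.0;  // expensive: hoppers leave, blocks too slow
        else                 implied = 4.0;  // stable hash rate
        diff = implied;                      // single-sample retarget jumps straight there
        printf("retarget %d: diff %.1f\n", i, diff);
    }
    return 0;
}
[/code]
This prints the 5-3-5-3 cycle; resampling the same window several times just replays those swings later, which is the repeater effect.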
-
[quote name=“ghostlander” post=“27755” timestamp=“1378839706”]
I see you’ve implemented 9% over 126 blocks without a large averaging window of 504 blocks or something like that.[code]- static const int64 nTargetTimespan = (7 * 24 * 60 * 60) / 8; // Feathercoin: 7/8 days
+ static const int64 nTargetTimespan = (7 * 24 * 60 * 60) / 32; // Feathercoin: 7/32 days[/code]
In fact, this is the recent Phenixcoin patch with no block target/reward changes. It doesn’t work well for them, and I see no reason why it would work well for us. We have discussed so much and finished exactly where we started.
See below what PXC has got with such a patch. We shall have nothing better, unfortunately.
[/quote]I agree with this assessment. A large averaging window is critical for smoothing out temporary hash rate spikes. If you change your retarget time to every 126 blocks with a 9% cap, but make your sample window 500+ blocks, you’ll get a much slower adjustment rate that won’t spike when hash rates do. We call this a “lagging indicator” in trading terminology.
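For illustration, a minimal sketch of such a lagging retarget, in the style of the 0.6.x codebase (not the actual FTC patch; the function name and the measured nActualTimespan parameter are assumptions):
[code]
static const int64 nTargetSpacing    = 150;                    // 2.5 minute blocks
static const int64 nRetargetTimespan = 126 * nTargetSpacing;   // adjust every 126 blocks

// nActualTimespan: measured time of the last 504 blocks (504 * 150s at target)
unsigned int CalculateNextTarget(unsigned int nBits, int64 nActualTimespan)
{
    // scale the 504-block sample down to the 126-block retarget horizon
    int64 nAveraged = nActualTimespan / 4;

    // clamp to roughly +/-9% per retarget
    if (nAveraged < nRetargetTimespan * 100 / 109)
        nAveraged = nRetargetTimespan * 100 / 109;
    if (nAveraged > nRetargetTimespan * 109 / 100)
        nAveraged = nRetargetTimespan * 109 / 100;

    CBigNum bnNew;                 // bignum.h
    bnNew.SetCompact(nBits);
    bnNew *= nAveraged;            // slow blocks -> easier target
    bnNew /= nRetargetTimespan;
    return bnNew.GetCompact();
}
[/code]
Because three quarters of the sample predates any sudden hash rate change, the measured timespan, and therefore the adjustment, lags behind it.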
-
For the time warp, the first part is to reduce the 2 h window. This is checked in CheckBlock(), where the height is not available, so changing it during the transition can be dangerous. So I made it a check only for when a block is set in the chain: an orphan can be timestamped in the future, but it cannot enter the chain if it is more than 30 min ahead. This is also where the past-time validation is done. The second change enforces the maximum time in the past from the last block at 2 × 30 min, since a block at the maximum future time would otherwise not prevent a node with the maximum time adjustment from entering a block into the chain. The last one is to change the maximum time adjustment to 13 minutes (just a bit less than half of the 30 minutes) to remove splits of the network caused by nodes using the time adjustment with + and - on each side.
In main.cpp at line 1819, add:
[code]
// Limit blocks in the future accepted into the chain to a window of 30 min
if (GetBlockTime() > GetAdjustedTime() + 30 * 60)
    return error("CheckBlock() : block timestamp too far in the future");

// Check the timestamp against the previous block: it should not be more
// than 2 times the window in the past
if (GetBlockTime() < pindexPrev->GetBlockTime() - 2 * (30 * 60))
    return error("CheckBlock() : block timestamp too far in the past");
[/code]
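For the last change, a minimal sketch of the idea, assuming the 0.6.x AddTimeData() logic in util.cpp (the name nMaxTimeAdjustment is illustrative):
[code]
// Cap the peer-derived clock offset at 13 minutes instead of 70, so two
// nodes adjusted +13 and -13 min still agree within the 30 min block window
static const int64 nMaxTimeAdjustment = 13 * 60; // illustrative name

// inside AddTimeData(), where the median peer offset is applied:
if (abs64(nMedian) < nMaxTimeAdjustment)
    nTimeOffset = nMedian;
else
    nTimeOffset = 0; // refuse large adjustments and trust the local clock
[/code]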
-
[quote name=“Kevlar” post=“27781” timestamp=“1378852749”]
[quote author=ghostlander link=topic=3447.msg27755#msg27755 date=1378839706]
I see you’ve implemented 9% over 126 blocks without a large averaging window of 504 blocks or something like that.[code]- static const int64 nTargetTimespan = (7 * 24 * 60 * 60) / 8; // Feathercoin: 7/8 days
+ static const int64 nTargetTimespan = (7 * 24 * 60 * 60) / 32; // Feathercoin: 7/32 days[/code]
In fact, this is the recent Phenixcoin patch with no block target/reward changes. It doesn’t work well for them, and I see no reason why it would work well for us. We have discussed so much and finished exactly where we started.
See below what PXC has got with such a patch. We shall have nothing better, unfortunately.
[/quote]I agree with this assessment. A large averaging window is critical for smoothing out temporary hash rate spikes. If you change your retarget time to every 126 blocks with a 9% cap, but make your sample window 500+ blocks, you’ll get a much slower adjustment rate that won’t spike when hash rates do. We call this a “lagging indicator” in trading terminology.
[/quote]Will a larger average not nullify the effect of the faster readjustments, though?
Keeping the numbers simple…
We start at block 1 and maintain a steady 5 GH/s rate to block 500. Then for whatever reason the price drops or the diff increases. Or another coin gets really profitable… Either way, we lose half the hashing power. At the next retarget we can’t drop 9%/126 because we are still stuck due to the average of the 504 blocks.
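Roughly, with illustrative numbers (a back-of-the-envelope sketch of the scenario, not from any simulation):
[code]
#include <cstdio>

int main()
{
    const double fast = 2.5, slow = 5.0; // minutes per block
    // 504-block window: first 378 blocks mined fast, last 126 mined slow
    double avg = (378 * fast + 126 * slow) / 504.0;
    printf("measured average: %.3f min (target 2.5)\n", avg);  // 3.125
    printf("implied diff drop: %.1f%%, capped at 9%%\n",
           (1.0 - fast / avg) * 100.0);                        // 20.0
    return 0;
}
[/code]
So the first retarget wants a ~20% drop but is capped at 9%, and it takes several retargets to catch up.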
Or do I need more coffee?? :-[
-
Nut, you are correct, the difficulty will be less responsive to sudden changes in hash power. A smaller window carries dangers of abuse, and this proposal tries to reduce them; for example, the effect of a two hour time warp on a 126-block sample. A 504-block sample would help with this.
Let’s look at the time warp.
The attackers can swing us two hours into the future on a 21 hour block time difficulty window (504 blocks).
With groll’s proposal we will have half an hour in a 5.25 hour window (126 blocks).
2/21 = 9.5% time added from attackers on 504 blocks
0.5/5.25 = 9.5% time added from attackers on 126 blocks
If we do not limit the extremes of block times, then a 2 hour swing on a 5.25 hour window is very wild.
2/5.25 = 38% time added from attackers on 126 blocks
So if we then sample from 504 blocks and limit the time to half an hour, we would end up with:
0.5/21 = 2.38% time added from attackers on 504 blocks every 126 blocks
This greatly limits the effect of a time warp, to the point where it is ineffective.
As Sunny King is the chap who spotted and reported the flaw in TRC’s design, I am asking him to review the proposed changes to see if he can find anything wrong.
This thread is a great help and I feel that we are making good progress. Keep it up please :)
-
[quote name=“erk” post=“27805” timestamp=“1378894013”]
Why don’t we just remove the code that works out the time by consensus and use NTP servers? Toughen up the rules for timestamps to be valid. This whole time warp attack thing should never have been possible; it’s a design flaw in the Bitcoin code.
[/quote]This is a very interesting point that you bring up, and it is part of a larger debate over centralisation. Bitcoin tried to invent decentralised time. We can see that there are dangers in this and have to suffer the consequences. I believe that Bitcoin may not have the proper balance of trade-offs between centralisation and decentralisation, largely due to concerns about resilience against hostile entities. Bitcoin was designed to survive if the lead developer disappeared :)
Moving to NTP servers is outside the scope of the upcoming patch. A change of that magnitude needs a patch all of its own. If you want to discuss this further please do so in another thread to keep this one focused.
[url=http://forum.feathercoin.com/index.php?topic=3077.0]Further NTP discussion for those interested. [/url]
-
[url=https://github.com/ghostlander/FeatherCoin/commit/ef61714942aeabc1a954a77089c6371585c8faf2]Update for 0.6.4.4 beta 1[/url]
Runs on the livenet as usual until block #87948, then switches to 9% over 126 blocks with 504-block averaging. I shall import the time warp related changes tomorrow.
-
With Nutnut’s premise of blocks 0-500 at 2.5 minutes, then losing half the hash rate so the block time becomes 5 minutes: I made an Excel sheet that shows, for each 126-block range, the time between blocks and, on the second line, the difficulty. It starts at a difficulty of 1 and then uses a variable hash rate from a simple formula, 5*diff^3, so the hash rate grows faster than the percentage change in difficulty. This makes 5 minutes the maximum block time and should target a difficulty of 0.7937 for the 2.5 minutes.
The second scenario uses 400 blocks at 2.5 minutes and 104 at 5 minutes (putting the first 256 at 2.5 and the last 256 at 5 minutes gives a very similar result).
The third one uses mostly no change in hash rate and shows that 504 with damping, and 126 with 504 averaging, are not very good: the first is too slow, the second echoes past events too much and so converges slowly.
https://docs.google.com/file/d/0B5YFJvIJozEwYmM3eGtRemdkb3M/edit?usp=sharing
For the first two 126-block ranges after the first 504, I marked as good anything that reacts at or near the maximum possible speed, so it should be fast enough for rapid change. In the rest, between 2.4 and 2.6 is good (green), between 2 and 3 neutral, and below 2 or above 3 bad (red).
So the code I posted, a 126-block retarget over a 504-block weighted average with a damping factor of 0.25, seems to be doing pretty fine. It is second to the plain 126 with 0.25 damping, but that one would be less resilient to a time attack. Pure 126 and pure 504 just diverge to the min-max of their range; the 9% cap just makes the range smaller, as seen with PXC. A simple longer average also adds variation from the past that is way greater than the 9%, so it just oscillates as well.
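A minimal sketch of the damping step (not my exact spreadsheet formula; the weighted timespan is assumed to be computed elsewhere):
[code]
static const int64 nRetargetTimespan = 126 * 150; // 126 blocks * 2.5 min

// nWeightedTimespan: the four 126-block ranges of the last 504 blocks,
// weighted and scaled to a 126-block horizon
unsigned int DampedRetarget(unsigned int nBits, int64 nWeightedTimespan)
{
    // apply only a quarter of the deviation from target per retarget
    int64 nDamped = nRetargetTimespan
                  + (nWeightedTimespan - nRetargetTimespan) / 4; // 0.25 damping

    CBigNum bnNew;
    bnNew.SetCompact(nBits);
    bnNew *= nDamped;              // longer than target -> lower difficulty
    bnNew /= nRetargetTimespan;
    return bnNew.GetCompact();
}
[/code]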
Feel free to test other scenarios and add other changes.
-
Spotted a mistake in the 504 formula, so here is a new link, as I can’t edit the previous one:
https://docs.google.com/file/d/0B5YFJvIJozEwdi1KY3dMeEpTNmM/edit?usp=sharing
-
Ghostlander, thanks for doing that work, it looks excellent.
What is concerning are groll’s simulations.
The 504 we have now looks very poor.
126 blocks over a 504 average is poor: the block time ends up fluctuating wildly and does not look good.
Change at 126 blocks without the history oscillates at the extremes but is much more palatable.
Change at 126 blocks with 0.25 damping looks the best of the bunch. About the time warp: bringing the allowed block time difference down to 30 mins leaves this with the same level of vulnerability we have now.
The damage from the last time warp attack was pools not being able to mint blocks. According to Coinotron this was a bug in the stratum implementation. It may be possible to move to 126 blocks with a 30 min block time difference.
-
My previous patch needs more work. Although it looks good, there is some unexpected behaviour, and I need to check the syntax more thoroughly. Anyway, here is another small patch.
[code=“main.cpp”]
    if (vtx[0].GetValueOut() > GetBlockValue(pindex->nHeight, nFees))
        return false;

+   // No more blocks with bogus reward accepted
+   if ((pindex->nHeight > 87948) &&
+       (vtx[0].GetValueOut() != GetBlockValue(pindex->nHeight, nFees)))
+       return false;
[/code]
It doesn’t allow the block reward to be set lower than expected, and it doesn’t break the block chain download like my previous patch on this matter did. We have to accept those weird blocks already in the block chain, but new ones will be rejected after the upcoming hard fork.
-
[quote]The damage from the last time warp attack was pools not being able to mint block. According to Coinotron this was a bug in the stratum implementation. It may be possible to move to 126 blocks with 30 min block time difference.[/quote]
As far as I understand it, there were two problems. The first was the future time limit: 2 h in FTC but 16.7 min in stratum (7200 vs 1000 seconds), so blocks were rejected by stratum yet accepted into the chain. The second was that the median past time, with the 2 h window and 11 blocks, had been pushed so far into the future that no block other than the attacker’s, stamped 2 h ahead, could enter the chain. And it was easy to maintain: 6 blocks per 2 h kept the median in the future. To achieve that, you need 6 of the last 11 generated blocks to carry future timestamps, so near 51% is required to start (45% and some time, say over 40 blocks and luck, should be able to get it), plus more than 6 blocks in 30 minutes under the new rules. After it starts, keeping it going requires 6 blocks per 30 minutes (it was 6 per 2 h). The number of blocks depends on the difficulty and the hash rate of the attacker; at difficulty 200 it would need >3 GH/s, I think.
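For reference, this is essentially the 0.6-era CBlockIndex median logic being exploited: the median is taken over the last 11 blocks, and a new block must be later than it, so whoever stamps 6 of those 11 controls the chain’s clock.
[code]
static const int nMedianTimeSpan = 11;

int64 GetMedianTimePast() const
{
    int64 pmedian[nMedianTimeSpan];
    int64* pbegin = &pmedian[nMedianTimeSpan];
    int64* pend = &pmedian[nMedianTimeSpan];

    // walk back up to 11 blocks, filling the array from the end
    const CBlockIndex* pindex = this;
    for (int i = 0; i < nMedianTimeSpan && pindex; i++, pindex = pindex->pprev)
        *(--pbegin) = pindex->GetBlockTime();

    std::sort(pbegin, pend);
    return pbegin[(pend - pbegin) / 2];
}
[/code]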
[quote]126 blocks over 504 average is poor[/quote]
I have tried weighting the four 126-block ranges; possibly individual weights over 504 blocks or some other formula could be better, but linear for sure seems to repeat the past too much. Damping seems required to stabilise it (note: 0.25 seems OK; going lower would make it less responsive, as we see in the simulation for 504 with 0.25 damping, which slows it too much when the hash rate doesn’t change).
-
Ahhh - nothing like the smell of progress! :D
Good work guys.
-
As a mostly non-contributing member of the community (I can’t code for toffee), I’d just like to say that this method of open discussion and working together builds a lot of confidence.
It’s good to see the input different members are having, even if I don’t actually understand much of it…
-
[quote name=“spynappels” post=“27949” timestamp=“1378996115”]
As a mostly non-contributing member of the community (I can’t code for toffee), I’d just like to say that this method of open discussion and working together builds a lot of confidence. It’s good to see the input different members are having, even if I don’t actually understand much of it…
[/quote]I agree.
-
I think it’d be cool to have a changelog somewhere on the main site with credits to contributing developers, a simplified explanation, and a link to the discussion thread. You know, show the process. Do it all open like Obama promised but never really did.
Maybe update the readme on GitHub as well?
-
[quote name=“mnstrcck” post=“27990” timestamp=“1379009171”]
I think it’d be cool to have a changelog somewhere on the main site with credits to contributing developers, a simplified explanation, and a link to the discussion thread. You know, show the process. Do it all open like Obama promised but never really did. Maybe update the readme on GitHub as well?
[/quote]That was quite a good suggestion!
Will talk with the rest of the team about this. Thanks :D
-
[quote name=“Nutnut” post=“27796” timestamp=“1378880747”]
We start at block 1 and maintain a steady 5 GH/s rate to block 500. Then for whatever reason the price drops or the diff increases. Or another coin gets really profitable… Either way, we lose half the hashing power. At the next retarget we can’t drop 9%/126 because we are still stuck due to the average of the 504 blocks.
[/quote]First of all, even if the FTC price falls or some other coin becomes much more profitable due to its price or difficulty, we are not going to lose half of our hash power immediately. Coin hopping pools will switch within 10 minutes, but most miners switch manually, and that takes time; loyal miners will stay anyway. So it may take several hours to lose half of the hash power, and we shall be close to the next retarget by that moment. On the other hand, your scenario may be inverted if FTC becomes very profitable for some reason; then we can expect the network to double in hash power within a few hours. Either way is tolerable, and the network will stabilise itself after a few retargets, i.e. within a single day. In fact, such extreme situations happen maybe once or twice a month, and they may disappear altogether as we grow stronger every month.
So, 504-block averaging carries a trail of history which plays either way. Any issues get resolved reasonably fast. Very good protection against time warp attacks. Good rewards for loyal miners.
If people are so concerned about the scenario above, a hybrid solution may be chosen: calculate past averages using both long and short averaging windows, i.e. 504 and 126 blocks, and give them equal weight in the final result. Recent history is amplified this way, but not overamplified. It can be implemented relatively easily compared to an EMA (exponential moving average).
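A minimal sketch of that hybrid (the helper GetActualTimespan() is hypothetical, standing in for whatever sums the last N block spacings):
[code]
// equal-weight blend of a long (504) and short (126) window, both scaled
// to the same 126-block horizon before the usual clamp and retarget
int64 HybridTimespan(const CBlockIndex* pindexLast)
{
    int64 nLong  = GetActualTimespan(pindexLast, 504); // hypothetical helper
    int64 nShort = GetActualTimespan(pindexLast, 126);

    return (nLong / 4 + nShort) / 2; // equal weights, common horizon
}
[/code]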
-
[quote name=“mnstrcck” post=“27990” timestamp=“1379009171”]
I think it’d be cool to have a changelog somewhere on the main site with credits to contributing developers, a simplified explanation, and a link to the discussion thread. You know, show the process. Do it all open like Obama promised but never really did. Maybe update the readme on GitHub as well?
[/quote]I second that. If you fork the project and make the change, you can issue a pull request, which can then contain the discussion; when it’s ready to be merged, press the button and merge it into the mainline code base.