Move FTC to factor four diff swing over 7 days
-
[quote name=“Kevlar” post=“27781” timestamp=“1378852749”]
[quote author=ghostlander link=topic=3447.msg27755#msg27755 date=1378839706]
I see you’ve implemented 9% over 126 blocks without a large averaging window of 504 blocks or something like that.
[code]- static const int64 nTargetTimespan = (7 * 24 * 60 * 60) / 8; // Feathercoin: 7/8 days
+ static const int64 nTargetTimespan = (7 * 24 * 60 * 60) / 32; // Feathercoin: 7/32 days[/code]
In fact, this is the recent Phenixcoin patch with no block target/reward change. It doesn’t work well for them, and I see no reason why it’s going to work well for us. We have discussed so much and finished exactly where we started.
See below what PXC has got with such a patch. We shall do no better, unfortunately.
[/quote]I agree with this assessment. A large averaging window is critical for smoothing out temporary hash rate spikes. If you change your retarget time to every 126 blocks with a 9% cap but make your sample window 500+ blocks, you’ll get a much slower adjustment rate that won’t spike when hash rates do. We call this a “lagging indicator” in trading terminology.
[/quote]Will a larger average not nullify the effect of the faster readjusts though?
Keeping the numbers simple…
We start at block 1 and maintain a steady 5 GH/s rate to block 500. Then for whatever reason the price drops or the diff increases. Or another coin gets really profitable… Either way, we lose half the hashing power. At the next retarget we can’t drop 9%/126 because we are still stuck due to the average of the 504 blocks.
Or do I need more coffee?? :-[
-
Nut, you are correct, the difficulty will be less responsive to sudden changes in hash power. There are dangers of abuse in a smaller window, and this proposal tries to reduce them, for example the effect of a two hour time warp on a 126 block sample. A 504 block sample would help with this.
Let’s look at the time warp.
The attackers can swing us two hours into the future on a 21 hour block time difficulty window (504 blocks).
With groll’s proposal we will have half an hour in a 5.25 hour window (126 blocks).
2/21 = 9.5% time added from attackers on 504 blocks
0.5/5.25 = 9.5% time added from attackers on 126 blocks
If we do not limit the extremes of block times then a 2 hour swing on a 5.25 hour window is very wild.
2/5.25 = 38% time added from attackers on 126 blocks
So if we then sample from 504 blocks and limit the time to half an hour we would end up with:
0.5/21 = 2.38% time added from attackers on 504 blocks every 126 blocks
This greatly limits the effect of a time warp, to the point where it is ineffective.
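For illustration, here is a minimal sketch of the arithmetic above; the function and names are mine, not from the code base:
[code]
#include <cstdio>

// Worst-case fraction of fake time an attacker can inject: the future
// timestamp limit divided by the time span of the averaging window.
double AttackerTimeShare(double dFutureLimitHours, double dWindowHours)
{
    return dFutureLimitHours / dWindowHours;
}

int main()
{
    printf("2h over 21h (504 blocks):     %.1f%%\n", 100 * AttackerTimeShare(2.0, 21.0));  // 9.5%
    printf("0.5h over 5.25h (126 blocks): %.1f%%\n", 100 * AttackerTimeShare(0.5, 5.25));  // 9.5%
    printf("2h over 5.25h (126 blocks):   %.1f%%\n", 100 * AttackerTimeShare(2.0, 5.25));  // 38.1%
    printf("0.5h over 21h (504 blocks):   %.1f%%\n", 100 * AttackerTimeShare(0.5, 21.0));  // 2.4%
    return 0;
}
[/code]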
As Sunny King is the chap who spotted and reported the flaw in TRC’s design, I am asking him to review the proposed changes to see if he can find anything wrong.
This thread is a great help and I feel that we are making good progress. Keep it up please :)
-
[quote name=“erk” post=“27805” timestamp=“1378894013”]
Why don’t we just remove the code that works out the time by consensus and use NTP servers? Toughen up the rules for time stamps to be valid. This whole time warp attack thing should have never been possible, it’s a design flaw in the bitcoin code.
[/quote]This is a very interesting point that you bring up and is part of a larger debate over centralisation. Bitcoin tried to invent decentralised time. We can see that there are dangers in this and have to suffer the consequences. I believe that Bitcoin may not have the proper balance of trade-offs between centralisation and decentralisation largely due to concerns of resilience against hostile entities. Bitcoin was designed to survive if the lead developer disappeared :)
Moving to NTP servers is outside the scope of the upcoming patch. A change of that magnitude needs a patch all of its own. If you want to discuss this further please do so in another thread to keep this one focused.
[url=http://forum.feathercoin.com/index.php?topic=3077.0]Further NTP discussion for those interested. [/url]
-
[url=https://github.com/ghostlander/FeatherCoin/commit/ef61714942aeabc1a954a77089c6371585c8faf2]Update for 0.6.4.4 beta 1[/url]
Runs on the livenet as usual until block #87948, then switches to 9% over 126 blocks with 504 block averaging. I shall import the time warp related changes tomorrow.
-
With the Nutnut premise of blocks 0-500 at 2.5 minutes and then losing half the hash rate so the block time becomes 5 minutes: I made an Excel sheet that shows, for each 126 block range, the time between blocks on the first line and the diff on the second. It starts with a diff of 1 and then uses a variable hash rate from the simple formula 5*diff^3, so the hash rate grows faster than the diff changes in %. This makes 5 minutes the maximum block time and should target a diff of 0.7937 for the 2.5 minutes.
The second scenario uses 400 blocks at 2.5 minutes and 104 at 5 minutes (putting 256 at 2.5 and the last 256 at 5 minutes gives a very similar result).
The third one uses mostly no change in hash rate and shows that 504 with damping and 126 with 504 are not very good: the first one is too slow, the second one echoes the past too much and so converges slowly.
https://docs.google.com/file/d/0B5YFJvIJozEwYmM3eGtRemdkb3M/edit?usp=sharing
So for the first 2 126 block ranges after the first 504, I mark as good the ones that react at the maximum possible speed or near it, as that should be fast enough for fast changes. For the rest, between 2.4 and 2.6 is good (green), between 2 and 3 is neutral, and below 2 or above 3 is bad (red).
So the code I posted, with a 126 retarget over a 504 block weighted average and a damping factor of .25, seems to be doing pretty fine. It’s second to the plain 126 with damping of .25, but that one would be less resilient to a time attack. Pure 126 and 504 just diverge to the min-max of their range; the 9% cap just makes it smaller, as seen with PXC. A simple longer average also adds variation from the past that is way greater than the 9%, so it just oscillates as well.
Feel free to test other scenarios and add other changes.
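For those who prefer code to the spreadsheet, here is a minimal sketch of the damped rule, assuming damping means applying only a fraction of the raw correction (the names are illustrative, not from the code base):
[code]
// Every 126 blocks, compare the average spacing over the last 504 blocks
// against the 2.5 minute target and apply only a fraction (the damping
// factor) of the full correction.
static const double dTargetSpacing = 2.5; // minutes per block
static const double dDamping = 0.25;      // keep 25% of the raw correction

double DampedRetarget(double dOldDiff, double dAvgSpacing)
{
    double dRawRatio = dTargetSpacing / dAvgSpacing;          // >1 if blocks came too fast
    double dDampedRatio = 1.0 + dDamping * (dRawRatio - 1.0); // damped correction
    return dOldDiff * dDampedRatio;
}
[/code]
So a burst that would call for a 40% diff increase only moves it 10%, which is why the oscillation dies out in the simulation.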
-
Spotted a mistake in the 504 formula, so here is a new link as I can’t edit the previous one.
https://docs.google.com/file/d/0B5YFJvIJozEwdi1KY3dMeEpTNmM/edit?usp=sharing
-
Ghostlander, thanks for doing that work, it looks excellent.
What is concerning are groll’s simulations.
The 504 we have now looks very poor.
126 blocks over 504 average is poor. The time ends up fluctuating wildly and does not look good.
Change at 126 blocks without the history oscillates on the extremes but is much more palatable.
Change at 126 blocks with .25 damping looks the best of the bunch. On the time warp, bringing the allowed block time difference down to 30 mins gives this the same vulnerability we have now.
The damage from the last time warp attack was pools not being able to mint blocks. According to Coinotron this was a bug in the stratum implementation. It may be possible to move to 126 blocks with a 30 min block time difference.
-
My previous patch needs more work. Although it looks good, there is some unexpected behaviour. I need to check the syntax more thoroughly. Anyway, here is another small patch.
[code=“main.cpp”]
     if (vtx[0].GetValueOut() > GetBlockValue(pindex->nHeight, nFees))
         return false;

+    // No more blocks with bogus reward accepted
+    if ((pindex->nHeight > 87948) &&
+        (vtx[0].GetValueOut() != GetBlockValue(pindex->nHeight, nFees)))
+        return false;
[/code]
Doesn’t allow the block reward to be set lower than expected. Doesn’t break block chain download like my previous patch on this matter. We have to accept those weird blocks already in the block chain, but new ones will be rejected after the upcoming hard fork.
-
[quote]The damage from the last time warp attack was pools not being able to mint blocks. According to Coinotron this was a bug in the stratum implementation. It may be possible to move to 126 blocks with a 30 min block time difference.[/quote]
As far as I understand it, there were 2 problems. One was the future time limit, which was 2 h in FTC but 16.7 min in stratum (7200 vs 1000 seconds), so blocks were rejected by stratum but still entered the chain. The second is that the median past time, computed with the 2 h limit over 11 blocks, had been pushed so far into the future that no block other than the attacker’s 2 h ahead ones was able to enter the chain, and it was easy to keep it in the future with 6 blocks every 2 h. To achieve that last point you need to make 6 of the last 11 generated blocks with timestamps in the future, so near 51% is required to start (45% and some time, like over 40 blocks and luck, should be able to get it), plus more than 6 blocks in 30 minutes. After it starts, keeping it up requires 6 blocks every 30 minutes (it was 6 in 2 h). The number of blocks depends on the diff and the hash rate of the attacker. At diff 200 it would need >3 GH/s, I think.
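To make the two rules concrete, here is a rough sketch of the checks as they exist in Bitcoin-derived code; the constants and names are simplified for illustration:
[code]
#include <algorithm>
#include <vector>
#include <cstdint>

static const int64_t nMaxFutureDrift = 2 * 60 * 60; // FTC: 7200 s (stratum used 1000 s)
static const size_t nMedianTimeSpan = 11;           // median over the last 11 blocks

// Median of the last 11 block timestamps; a new block must be later than this.
int64_t GetMedianTimePast(std::vector<int64_t> vTimes)
{
    if (vTimes.size() > nMedianTimeSpan)
        vTimes.erase(vTimes.begin(), vTimes.end() - nMedianTimeSpan);
    std::sort(vTimes.begin(), vTimes.end());
    return vTimes[vTimes.size() / 2];
}

bool IsTimestampAcceptable(int64_t nBlockTime, int64_t nAdjustedTime,
                           const std::vector<int64_t>& vLastTimes)
{
    if (nBlockTime > nAdjustedTime + nMaxFutureDrift)
        return false; // rule 1: too far in the future
    if (nBlockTime <= GetMedianTimePast(vLastTimes))
        return false; // rule 2: not later than the median of the last 11
    return true;
}
[/code]
Once 6 of the last 11 timestamps sit ~2 h ahead, the median itself is in the future and rule 2 starts rejecting honest blocks.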
[quote]126 blocks over 504 average is poor[/quote]
I have tried weighting the 4 126 block ranges; possibly individual 504 weights or another formula could be better, but linear for sure seems to repeat the past too much. Damping seems required to stabilise (note: .25 seems OK; going too low would make it less responsive, as we see in the simulation for 504 with .25, where it slows things too much if the hash rate doesn’t change).
-
Ahhh - nothing like the smell of progress! :D
Good work guys.
-
As a mostly non-contributing member of the community (can’t code for toffee) I’d just like to say that this method of open discussion and working together builds a lot of confidence.
It’s good to see the input different members are having, even if I don’t actually understand much of it…
-
[quote name=“spynappels” post=“27949” timestamp=“1378996115”]
As a mostly non-contributing member of the community (can’t code for toffee) I’d just like to say that this method of open discussion and working together builds a lot of confidence. It’s good to see the input different members are having, even if I don’t actually understand much of it…
[/quote]I agree.
-
I think it’d be cool to have a changelog somewhere on the main site with credits to contributing developers, a simplified explanation, and a link to the discussion thread. You know, show the process. Do it all open like Obama promised but never really did.
Maybe update the readme on GitHub as well?
-
[quote name=“mnstrcck” post=“27990” timestamp=“1379009171”]
I think it’d be cool to have a changelog somewhere on the main site with credits to contributing developers, a simplified explanation, and a link to the discussion thread. You know, show the process. Do it all open like Obama promised but never really did. Maybe update the readme on GitHub as well?
[/quote]That was quite a good suggestion!
Will talk with the rest of the team about this. Thanks :D
-
[quote name=“Nutnut” post=“27796” timestamp=“1378880747”]
We start at block 1 and maintain a steady 5 GH/s rate to block 500. Then for whatever reason the price drops or the diff increases. Or another coin gets really profitable… Either way, we lose half the hashing power. At the next retarget we can’t drop 9%/126 because we are still stuck due to the average of the 504 blocks.
[/quote]First of all, even if the FTC price falls or some other coin becomes much more profitable due to its price or difficulty, we are not going to lose half of our hash power immediately. Coin hopping pools will switch within 10 minutes, but most miners switch manually and that takes time, and loyal miners will stay anyway. So it may take several hours to lose half of the hash power, and we shall be close to the next retarget by that moment. On the other hand, your scenario may be inverted if FTC becomes very profitable for some reason; then we can expect the network to double in hash power within a few hours. Either way is tolerable, and the network will stabilise itself after a few retargets, i.e. within a single day. In fact, such extreme situations happen maybe once or twice a month, and they may disappear altogether as we become stronger every month.
So, 504 block averaging carries a trail of history which plays either way. Any issues get resolved reasonably fast. Very good protection against time warp attacks. Good rewards for loyal miners.
If people are so concerned about the scenario above, a hybrid solution may be chosen to calculate past averages using both long and short averaging windows, i.e. 504 and 126 blocks, and give them equal weights in the final result. Recent history is amplified this way, but not overamplified. It can be implemented relatively easily compared to an EMA (exponential moving average).
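Something like this, roughly; a sketch assuming equal weights means a plain mean of the two window averages, with illustrative helper names:
[code]
#include <vector>
#include <cstdint>

// Average block spacing in seconds over the last nWindow blocks;
// assumes vTimes holds at least nWindow + 1 timestamps.
double AverageSpacing(const std::vector<int64_t>& vTimes, size_t nWindow)
{
    return double(vTimes.back() - vTimes[vTimes.size() - nWindow - 1]) / nWindow;
}

// Hybrid: long window for stability, short window for recency, equal weights.
double HybridAverageSpacing(const std::vector<int64_t>& vTimes)
{
    return (AverageSpacing(vTimes, 504) + AverageSpacing(vTimes, 126)) / 2.0;
}
[/code]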
-
[quote name=“mnstrcck” post=“27990” timestamp=“1379009171”]
I think it’d be cool to have a changelog somewhere on the main site with credits to contributing developers, a simplified explanation, and a link to the discussion thread. You know, show the process. Do it all open like Obama promised but never really did. Maybe update the readme on GitHub as well?
[/quote]I second that. If you fork the project and make the change, you can issue a pull request which can then contain the discussion, and when it’s ready to be merged, press the button and merge it into the mainline code base.
-
What concerns me is that the hash rate changes more than the diff in %, so we diverge to the max change.
I have redone the third table, putting values closer to the real values we have. I added 3 averaging methods: the squared weighting with damping, where the last 126 has weight 16, the second last 126 has weight 9, the third 4 and the last 1 (4^2, 3^2, 2^2, 1^2); the Ghostlander (126/504 + 126)/2; and the same one with damping.
[url=https://docs.google.com/file/d/0B5YFJvIJozEwaG10Zk5wN0tDbFE/edit?usp=sharing]https://docs.google.com/file/d/0B5YFJvIJozEwaG10Zk5wN0tDbFE/edit?usp=sharing[/url]
The squared weighting and the (126/504 + 126)/2 give similar results, and both need damping to converge. The Excel values calculate 4 full retargets of the actual 504 plus 1 more 126 for the next one, so in theory 3.7 days (17*5.25 h), and many methods are still diverging.
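Roughly, the squared weighting works like this; a sketch where dAvgSpacing[0] is the most recent 126 block range:
[code]
// Weight the four 126 block ranges of the 504 block window by
// 4^2, 3^2, 2^2, 1^2 (16, 9, 4, 1), most recent first.
double SquaredWeightedSpacing(const double dAvgSpacing[4])
{
    static const double dWeights[4] = {16.0, 9.0, 4.0, 1.0};
    double dSum = 0.0, dTotalWeight = 0.0;
    for (int i = 0; i < 4; i++) {
        dSum += dWeights[i] * dAvgSpacing[i];
        dTotalWeight += dWeights[i]; // totals 30
    }
    return dSum / dTotalWeight;
}
[/code]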
For those that don’t know it, I have not invented the damping; it’s a well known method used in control systems (motors, heaters, etc…) where you apply damping on the control loop (also called the feedback loop) to stabilise the output at the requested value without overshooting and to smooth the change (you don’t want a fan to go 0%-100% when you request 50% speed :)). The optimal way to calculate it often involves a Laplace transform; usually feedback change / needed change is a good starting point. In our case my observation was that we got ~4 times more change in hash rate than we expected to get by changing the diff.
Edit: I just looked at the damping for 126/504, and since each block passes through 4 times, the feedback effect is not just 4 times as I put in the file but way greater (as it feeds back over the previous feedback effect). I don’t remember the formula for that exactly (sorry, I was doing control 20 years ago, but not anymore); it’s a differential equation whose result is something like f(x) = x^n + … . So the .25 (1/4) should probably be more like 1/16. Not sure I want to go back to those maths.
For the Nutnut scenario: it happens mostly every time LTC retargets :( but it goes either way, depending on the LTC change.
-
[quote name=“groll” post=“28001” timestamp=“1379015883”]
What concerns me is that the hash rate changes more than the diff in %, so we diverge to the max change.
[/quote]I’m afraid we have to live with it. It is possible in theory to make difficulty follow the network hash rate by doing near real time retargets with a very small averaging window and no difficulty limiter, or a very relaxed one. However, it is not feasible for security reasons. We have profitability on one side and security on the other: the more we gain in one, the more we lose in the other.
9% over 126 blocks with 504 block averaging and a future limit of 30 minutes brings us a significant security improvement, to the level where time warp attacks become nearly inefficient. I think it is an important achievement. 126 block averaging brings no improvement.
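For clarity, the limiter clamps each retarget step; a minimal sketch, assuming a symmetric 9% cap on the adjustment ratio:
[code]
// Clamp the per-retarget adjustment ratio to +/- 9%.
double CapAdjustment(double dRawRatio)
{
    const double dMaxSwing = 1.09; // 9% per 126 block retarget
    if (dRawRatio > dMaxSwing)
        return dMaxSwing;
    if (dRawRatio < 1.0 / dMaxSwing)
        return 1.0 / dMaxSwing;
    return dRawRatio;
}
[/code]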
[quote author=groll link=topic=3447.msg28001#msg28001 date=1379015883]
For the Nutnut scenario: it happens mostly every time LTC retargets :( but it goes either way, depending on the LTC change.
[/quote]We attract too many coin hoppers currently. More frequent retargets will help us get rid of some. As we grow and our base of loyal miners also grows, we become less affected by the LTC dynamics.
-
I put here the chart of each algo for those that haven’t looked at or don’t understand the spreadsheet. It shows the actual 504 diverging a lot, and 126/504 diverging almost as much but with peaks and in-between points.
126 also diverges, just over a smaller range, as does (126/504 + 126)/2.
126, (126/504 + 126)/2 and 126/504 with squared weighting, all with a damping factor of .25, converge mostly nicely. I tried many formulas and all 3 seem to do a pretty good job. 126 with .25 damping is the best, but it is more vulnerable to time manipulation. The 2 others are OK.
The source is in this Google Docs version of the Excel file, on sheet 2.
[url=https://docs.google.com/spreadsheet/ccc?key=0ApYFJvIJozEwdDBkZjZibm5LQ0JXYnJ0RmdseGtGUVE&usp=sharing]https://docs.google.com/spreadsheet/ccc?key=0ApYFJvIJozEwdDBkZjZibm5LQ0JXYnJ0RmdseGtGUVE&usp=sharing[/url]
Edit: I added another one that starts nearly stable and shows that without damping the diff naturally diverges when the hash rate changes faster than the difficulty.
[attachment deleted by admin]
-
Groll, try to set up a 51% attack on your models. No time warps, a simple one for a day: 3 GH/s before the attack (columns B to E, or keep current), 12 GH/s during the attack (columns F to I), 3 GH/s after the attack (column J). We get a difficulty trap with 126 block averaging, whether damped or not. No trap with 126/504 or (126/504 + 126)/2, non-damped or damped.
Actually, (126/504 + 126)/2 works well with no damping (+/- 27% block target variance). 0.25 damping makes it even better.