Move FTC to factor four diff swing over 7 days
-
Using Nutnut's premise of blocks 0-500 at 2.5 minutes and then losing half the hash rate, so the block time becomes 5 minutes: I made an Excel sheet that shows, for each 126-block range, the time between blocks on the first line and the difficulty on the second. It starts with a difficulty of 1 and then uses a variable hash rate given by the simple formula 5*diff^3, so the hash rate grows faster than the difficulty changes in percentage terms. This makes 5 minutes the maximum block time, and the target for 2.5 minutes works out to a difficulty of 0.7937.
The second scenario uses 400 blocks at 2.5 minutes and 104 at 5 minutes (putting 256 at 2.5 and the last 256 at 5 minutes gives a very similar result).
The third one uses mostly no change in hash rate and shows that 504 with damping, and 126 over a 504 average, are not very good: the first is too slow, and the second echoes the past too much, so it converges slowly.
https://docs.google.com/file/d/0B5YFJvIJozEwYmM3eGtRemdkb3M/edit?usp=sharing
For the first two 126-block ranges after the first 504, I mark as good whichever reacts at or near the maximum possible speed, since that should be fast enough for rapid changes. For the rest, between 2.4 and 2.6 is good (green), between 2 and 3 is neutral, and below 2 or above 3 is bad (red).
So the code I posted, with a 126-block retarget over a 504-block weighted average and a damping factor of 0.25, seems to be doing pretty well. It is second to the plain 126 with 0.25 damping, but that one would be less resilient to time attacks. Pure 126 and pure 504 just diverge to the min-max of their range; the 9% limiter only makes the oscillation smaller, as seen with PXC. A simple longer average also adds variations from the past that are much greater than the 9%, so it oscillates as well.
Feel free to test other scenarios and add other changes.
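For anyone who wants to poke at this outside Excel, here is a minimal C++ sketch of the same kind of simulation. It assumes block time is proportional to difficulty divided by hash rate, uses the 5*diff^3 response from the first scenario, and applies a damped 126-block retarget; the names and calibration are mine, so the spreadsheet's exact numbers will differ. With an instantaneous response like this, even the damped rule drifts away from the fixed point, which is exactly the divergence effect under discussion.
[code]
#include <cstdio>

// Toy model: block time ~ diff / hashrate, calibrated so that diff 1 at
// 5 GH/s gives the 2.5 minute target. Hash response from the first
// scenario: hash = 5 * diff^3, i.e. hash rate moves faster than diff in %.
static const double TARGET  = 2.5;   // minutes
static const double DAMPING = 0.25;  // fraction of the computed change applied

double HashRate(double diff) { return 5.0 * diff * diff * diff; }

int main() {
    double diff = 0.9;  // start slightly off the equilibrium at 1.0
    for (int retarget = 0; retarget < 20; retarget++) {
        double actual = TARGET * (diff / 1.0) / (HashRate(diff) / 5.0);
        printf("retarget %2d: diff %.4f  block time %.2f min\n",
               retarget, diff, actual);
        // Damped retarget: apply a quarter of the raw correction.
        diff += DAMPING * (diff * TARGET / actual - diff);
    }
    return 0;
}
[/code]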
-
Spotted a mistake in the 504 formula, so here is a new link, as I can't edit the previous one:
https://docs.google.com/file/d/0B5YFJvIJozEwdi1KY3dMeEpTNmM/edit?usp=sharing
-
Ghostlander, thanks for doing that work, it looks excellent.
What is concerning is groll’s simulations.
The 504 we have now looks very poor.
126 blocks over 504 average is poor. The time ends up fluctuating wildly and does not look good.
Change at 126 blocks without the history oscillates on the extremes but is much more palatable.
Change at 126 blocks with 0.25 damping looks the best of the bunch. About the time warp: bringing the future time limit down to 30 minutes makes this the same vulnerability we have now.
The damage from the last time warp attack was pools not being able to mint blocks; according to Coinotron, this was a bug in the stratum implementation. It may be possible to move to 126 blocks with a 30-minute future time limit.
-
My previous patch needs more work. Although it looks good, there is some unexpected behaviour; I need to check the syntax more thoroughly. Anyway, here is another small patch.
[code=“main.cpp”]
     if (vtx[0].GetValueOut() > GetBlockValue(pindex->nHeight, nFees))
         return false;
+    // No more blocks with bogus reward accepted
+    if ((pindex->nHeight > 87948) &&
+        (vtx[0].GetValueOut() != GetBlockValue(pindex->nHeight, nFees)))
+        return false;
[/code]
This doesn't allow setting the block reward lower than expected, and it doesn't break block chain download like my previous patch on this matter did. We have to accept the weird blocks already in the block chain, but new ones will be rejected after the upcoming hard fork.
-
[quote]The damage from the last time warp attack was pools not being able to mint blocks; according to Coinotron, this was a bug in the stratum implementation. It may be possible to move to 126 blocks with a 30-minute future time limit.[/quote]
As far as I understand it, there were two problems. One was the future time limit, which is 2 hours in FTC but 16.7 minutes in stratum (7200 vs 1000 seconds), so blocks were rejected by stratum but still entered the chain. The second is that the median past time, taken over the last 11 blocks with that 2-hour allowance, had been pushed so far into the future that no block other than the attacker's, timestamped 2 hours ahead, could enter the chain. It was also easy to maintain: keeping the chain in the future only takes 6 blocks every 2 hours. To get there, you need 6 of the last 11 generated blocks to carry future timestamps, so close to 51% of the hash rate is required to start (45% and some time, say over 40 blocks and some luck, should be able to get it). With a 30-minute limit, starting would take more than 6 blocks in 30 minutes, and keeping it going would require 6 blocks every 30 minutes (versus 6 every 2 hours now). The number of blocks depends on the difficulty and the attacker's hash rate; at difficulty 200 it would need >3 GH/s, I think.
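For reference, the two rules involved look roughly like this in a Bitcoin-derived client. This is a simplified sketch against the usual main.cpp types, not the actual FTC code (CheckBlockTime and the constant names are mine), but the 11-block median and the future drift limit are the real mechanisms:
[code]
#include <algorithm>

static const int   MEDIAN_TIME_SPAN = 11;           // blocks in the median window
static const int64 MAX_FUTURE_DRIFT = 2 * 60 * 60;  // 7200 s now; 1800 s proposed

// Median of the last 11 block times. A new block must be later than this,
// so if 6 of the last 11 timestamps are pushed into the future, the whole
// chain is held in the future -- the attack described above.
int64 GetMedianTimePast(const CBlockIndex* pindex) {
    int64 times[MEDIAN_TIME_SPAN];
    int n = 0;
    for (; pindex && n < MEDIAN_TIME_SPAN; pindex = pindex->pprev)
        times[n++] = pindex->GetBlockTime();
    std::sort(times, times + n);
    return times[n / 2];
}

bool CheckBlockTime(const CBlock& block, const CBlockIndex* pindexPrev) {
    if (block.GetBlockTime() <= GetMedianTimePast(pindexPrev))
        return false;   // too far in the past
    if (block.GetBlockTime() > GetAdjustedTime() + MAX_FUTURE_DRIFT)
        return false;   // too far in the future (stratum rejected at ~1000 s)
    return true;
}
[/code]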
[quote]126 blocks over 504 average is poor[/quote]
I have tried weighting the four 126-block ranges; possibly individual weights over the 504, or some other formula, would be better, but linear weighting certainly seems to repeat the past too much. Damping seems required to stabilise it (note that 0.25 seems OK; going too low would make it less responsive, as we see in the simulation for 504 with 0.25, where it slows things down too much if the hash rate doesn't change).
-
Ahhh - nothing like the smell of progress! :D
Good work guys.
-
As a mostly non-contributing member of the community (can't code for toffee), I'd just like to say that this method of open discussion and working together builds a lot of confidence.
It’s good to see the input different members are having, even if I don’t actually understand much of it…
-
[quote name=“spynappels” post=“27949” timestamp=“1378996115”]
As a mostly non-contributing member of the community (can't code for toffee), I'd just like to say that this method of open discussion and working together builds a lot of confidence. It's good to see the input different members are having, even if I don't actually understand much of it…
[/quote]I agree.
-
I think it’d be cool to have a changelog somewhere on the main site with credits to contributing developers, a simplified explanation, and a link to the discussion thread. You know, show the process. Do it all open like Obama promised but never really did.
Maybe update the readme on GitHub as well?
-
[quote name=“mnstrcck” post=“27990” timestamp=“1379009171”]
I think it'd be cool to have a changelog somewhere on the main site with credits to contributing developers, a simplified explanation, and a link to the discussion thread. You know, show the process. Do it all open like Obama promised but never really did. Maybe update the readme on GitHub as well?
[/quote]That was quite a good suggestion!
Will talk with the rest of the team about this. Thanks :D
-
[quote name=“Nutnut” post=“27796” timestamp=“1378880747”]
We start at block 1 and maintain a steady 5 GH/s rate to block 500. Then for whatever reason the price drops or the diff increases, or another coin gets really profitable… Either way, we lose half the hashing power. At the next retarget we can't drop 9%/126 because we are still stuck due to the average of the 504 blocks.
[/quote]First of all, even if the FTC price falls or some other coin becomes much more profitable due to its price or difficulty, we are not going to lose half of our hash power immediately. Coin hopping pools will switch within 10 minutes, but most miners switch manually and that takes time, and loyal miners will stay anyway. So it may take several hours to lose half of the hash power, and we shall be close to the next retarget by that moment. On the other hand, your scenario may be inverted if FTC becomes very profitable for some reason; then we can expect the network to double in hash power within a few hours. Either way is tolerable, and the network will stabilise itself after a few retargets, i.e. within a single day. In fact, such extreme situations happen maybe once or twice a month, and they may disappear altogether as we grow stronger every month.
So, 504 blocks averaging carries a trail of history which plays either way. Any issues get resolved reasonably fast. Very good protection against time warp attacks. Good rewards for loyal miners.
If people are so concerned about the scenario above, a hybrid solution may be chosen: calculate past averages using both long and short averaging windows, i.e. 504 and 126 blocks, and give them equal weights in the final result. Recent history is amplified this way, but not overamplified. It can be implemented relatively easily compared to an EMA (exponential moving average).
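A sketch of what that hybrid could look like against the usual CBlockIndex chain; the names are illustrative, not a patch. The short window is scaled up to the long window's span before mixing, so both carry equal weight:
[code]
static const int nShortInterval = 126;
static const int nLongInterval  = 504;

// Equal-weight mix of a 126-block and a 504-block actual timespan.
// The short timespan is multiplied by 4 so both cover the same nominal span.
int64 GetHybridActualTimespan(const CBlockIndex* pindexLast) {
    const CBlockIndex* pShort = pindexLast;
    const CBlockIndex* pLong  = pindexLast;
    for (int i = 0; i < nShortInterval && pShort->pprev; i++)
        pShort = pShort->pprev;
    for (int i = 0; i < nLongInterval && pLong->pprev; i++)
        pLong = pLong->pprev;
    int64 nShort = (pindexLast->GetBlockTime() - pShort->GetBlockTime()) * 4;
    int64 nLong  =  pindexLast->GetBlockTime() - pLong->GetBlockTime();
    return (nShort + nLong) / 2;   // recent history amplified, not dominant
}
[/code]
The result would then stand in for the actual timespan in the retarget calculation, with any damping applied on top.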
-
[quote name=“mnstrcck” post=“27990” timestamp=“1379009171”]
I think it'd be cool to have a changelog somewhere on the main site with credits to contributing developers, a simplified explanation, and a link to the discussion thread. You know, show the process. Do it all open like Obama promised but never really did. Maybe update the readme on GitHub as well?
[/quote]I second that. If you fork the project and make the change, you can issue a pull request which can then contain the discussion, and when it’s ready to be merged, press the button and merge it into the mainline code base.
-
What concerns me is that the hash rate changes more than the difficulty does in percentage terms, so we diverge to the maximum change.
I redid the third table, putting in values closer to the real ones we have. I added three averaging methods: the squared weighting with damping, where the most recent 126 blocks get weight 16, the second most recent 9, the third 4 and the last 1 (4^2, 3^2, 2^2, 1^2); Ghostlander's (126/504 + 126)/2; and the same one with damping.
[url=https://docs.google.com/file/d/0B5YFJvIJozEwaG10Zk5wN0tDbFE/edit?usp=sharing]https://docs.google.com/file/d/0B5YFJvIJozEwaG10Zk5wN0tDbFE/edit?usp=sharing[/url]
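To spell out the squared weighting (a sketch; the spreadsheet remains the reference): the 504 blocks are split into four 126-block ranges, and the average block time of each range is weighted 16:9:4:1 from newest to oldest.
[code]
// Squared weighting over four 126-block ranges, newest first.
double SquaredWeightedTime(const double rangeAvgTime[4]) {
    static const double w[4] = { 16.0, 9.0, 4.0, 1.0 };   // 4^2, 3^2, 2^2, 1^2
    double num = 0.0, den = 0.0;
    for (int i = 0; i < 4; i++) {
        num += w[i] * rangeAvgTime[i];
        den += w[i];
    }
    return num / den;   // weighted mean block time over the 504 blocks
}
[/code]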
The squared weighting and (126/504 + 126)/2 give similar results, and both need damping to converge. The Excel calculates four full retargets of the current 504, plus one more 126 for the next, so in theory 3.7 days (17 × 5.25 h), and many methods are still diverging.
For those who don't know it: I haven't invented damping. It's a well-known method used in control systems (motors, heaters, etc.), where you apply damping to the control loop (also called the feedback loop) to stabilise the output at the requested value without overshooting and to smooth the change (you don't want a fan to swing 0%-100% when you request 50% speed :)). The optimal factor is often calculated with a Laplace function; usually feedback change / needed change is a good starting point. In our case, my observation was that we get ~4 times more change in hash rate than we expect to get by changing the difficulty.
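In code, the idea is just to apply a fraction of the indicated correction at each step (a generic sketch, not FTC code):
[code]
// Generic damped control step: move only a fraction d of the way from the
// current value to the computed one. d = 1 is the raw correction;
// d = 0.25 applies a quarter of it per step.
double DampedStep(double current, double computed, double d) {
    return current + d * (computed - current);
}

// Applied to difficulty, where the raw retarget is diff * target / actual:
double DampedRetarget(double diff, double target, double actual, double d) {
    return DampedStep(diff, diff * target / actual, d);
}
[/code]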
Edit: I just looked at the damping for 126/504, and since each block passes through 4 times, the feedback effect is not just 4x as I put in the file but much greater (it feeds back over the previous feedback effect). I don't remember the exact formula (sorry, I did control theory 20 years ago, but not anymore); it's a differential equation whose result is something like f(x) = x^n + …. So the 0.25 (1/4) should probably be more like 1/16. Not sure I want to go back to that math.
As for the Nutnut scenario, it happens almost every time LTC retargets :( but it can go either way, depending on the LTC change.
-
[quote name=“groll” post=“28001” timestamp=“1379015883”]
What concerns me is that the hash rate changes more than the difficulty does in percentage terms, so we diverge to the maximum change.
[/quote]I'm afraid we have to live with it. In theory, it is possible to make the difficulty follow the network hash rate by doing near real-time retargets with a very small averaging window and no difficulty limiter, or a very relaxed one. However, it is not feasible for security reasons. We have profitability on one side and security on the other; the more we gain in one, the more we lose in the other.
9% over 126 blocks, with 504-block averaging and a future limit of 30 minutes, brings us a significant security improvement, to the level where time warp attacks become nearly inefficient. I think it is an important achievement. 126-block averaging brings no improvement.
[quote author=groll link=topic=3447.msg28001#msg28001 date=1379015883]
As for the Nutnut scenario, it happens almost every time LTC retargets :( but it can go either way, depending on the LTC change.
[/quote]We attract too many coin hoppers currently; more frequent retargets will help us get rid of some. As we grow and our base of loyal miners grows with us, we become less affected by the LTC dynamics.
-
I put here the charts of each algorithm for those who haven't looked at or don't understand the spreadsheet. They show the current 504 diverging a lot, and 126/504 diverging almost as much, but as peaks with in-between points.
126 also diverges, just over a smaller range, as does (126/504 + 126)/2.
126, (126/504 + 126)/2 and 126/504 with squared weighting, all with a damping factor of 0.25, converge mostly nicely. I tried many formulas, and all three seem to do a pretty good job. 126 with 0.25 damping is the best, but it is more vulnerable to time manipulation; the other two are OK.
The source is in this Google Docs version of the Excel file, on sheet 2:
[url=https://docs.google.com/spreadsheet/ccc?key=0ApYFJvIJozEwdDBkZjZibm5LQ0JXYnJ0RmdseGtGUVE&usp=sharing]https://docs.google.com/spreadsheet/ccc?key=0ApYFJvIJozEwdDBkZjZibm5LQ0JXYnJ0RmdseGtGUVE&usp=sharing[/url]
Edit: I added another scenario that starts nearly at the stable point and shows that without damping the difficulty naturally diverges when the hash rate changes faster than the difficulty.
[attachment deleted by admin]
-
Groll, try to set up a 51% attack on your models. No time warps, a simple one for a day: 3 GH/s before the attack (columns B to E, or keep current), 12 GH/s during the attack (columns F to I), 3 GH/s after the attack (column J). We get a difficulty trap with 126-block averaging, whether damped or not. No trap with 126/504 or (126/504 + 126)/2, non-damped or damped.
Actually, (126/504 + 126)/2 works well with no damping (+/- 27% block target variance); 0.25 damping makes it even better.
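That scenario is easy to script with the same toy model as earlier in the thread (made-up calibration, not the spreadsheet itself): feed the hash-rate profile through the rule under test and watch whether the difficulty stays trapped high after the attacker leaves.
[code]
#include <cstdio>

// Hash rate per 126-block window, GH/s: 3 before the attack (columns B-E),
// 12 during the attack (F-I), 3 after (J onwards).
static const double kProfile[] = { 3, 3, 3, 3, 12, 12, 12, 12, 3, 3, 3, 3 };

int main() {
    const double target = 2.5;   // minutes; calibrated so diff 1 fits 3 GH/s
    double diff = 1.0;
    for (unsigned i = 0; i < sizeof(kProfile) / sizeof(kProfile[0]); i++) {
        double actual = target * diff / (kProfile[i] / 3.0);
        printf("window %2u: diff %.3f  block time %.2f min\n", i, diff, actual);
        // Rule under test: plain 126-block retarget with 0.25 damping.
        // Swap in a 504 or hybrid average here to compare trap behaviour.
        diff += 0.25 * (diff * target / actual - diff);
    }
    return 0;
}
[/code]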
-
Taking a look at your scenario, I can see that sampling over 504 blocks with adjustments every 126 blocks stops our difficulty getting too high if we come under a hash rate attack. I am fairly sure we have seen such attacks before, used to get us into a state ready for further attacks. The attackers would need to spend more time getting our difficulty up.
We need to find the right balance between protection and responsiveness.
Personally I like the idea of the average between a 504 and a 126 block sample.
Considering all the data we now have available can you please make a recommendation Ghostlander?
-
[quote name=“Bushstar” post=“28039” timestamp=“1379073383”]
We need to find the right balance between protection and responsiveness. Personally I like the idea of the average between a 504 and a 126 block sample.
Considering all the data we now have available can you please make a recommendation Ghostlander?
[/quote]Yes, average between 504 and 126 block windows with 0.25 damping seems a very good trade-off and gets my vote.
-
The timing of the attack is important: the 504 was already coming down through columns B-C-D-E, so the next retarget was in range. Putting difficulty 200 and 6 GH/s makes a very different graph (I don't put 3 GH/s but 1 GH/s in column J and anywhere we fall below 1 GH/s, as at such a high difficulty we would likely keep mostly hardcore miners). What is interesting is that none is worse than the current 504, but 126, 126/504 and (126/504 + 126)/2 all reach roughly the same high difficulty and come back. I don't have time to simulate other scenarios; feel free to try.
But I agree with Ghostlander: the 126-504 sampling mix (what I identify as (126/504 + 126)/2) with 0.25 damping seems to be the best trade-off.
Making the damping stronger also seems to help, as changes (including warps) have less impact, making it more resilient to attack; see the 1/16 at the bottom of the sheet (not in the graphs). The drawback is the 144% range: the adjustment can be too slow (put 1000 as the hash rate to get a real time warp for this one).
Note: time warp can't be solved completely, as the block timestamp comes from the client, so we need to live with it. NTP doesn't solve it either, as clients can set the timestamp to whatever they want. To change that, the structure of the chain and the protocol would need to be reinvented so that the block time is attested by a source external to the miner.
[attachment deleted by admin]
-
Well, this looks to work out well then. As I said, I like the 126 and 504 average, and we need that 0.25 damping, so this is something we agree upon.
Let’s get this implemented and running on a testnet. I am going to review the code with Ghostlander.