Move FTC to factor four diff swing over 7 days
-
[quote name=“mnstrcck” post=“27990” timestamp=“1379009171”]
I think it’d be cool to have a changelog somewhere on the main site with credits to contributing developers, a simplified explanation, and a link to the discussion thread. You know, show the process. Do it all open like Obama promised but never really did. Maybe update the readme on GitHub as well?
[/quote]That was quite a good suggestion!
Will talk with the rest of the team about this. Thanks :D
-
[quote name=“Nutnut” post=“27796” timestamp=“1378880747”]
We start at block 1 and maintain a steady 5 GH/s rate to block 500. Then for whatever reason the price drops or the diff increases. Or another coin gets really profitable… Either way, we lose half the hashing power. At the next retarget we can’t drop 9%/126 because we are still stuck due to the average of the 504 blocks.
[/quote]First of all, even if the FTC price falls or some other coin becomes much more profitable due to its price or difficulty, we are not going to lose half of our hash power immediately. Coin hopping pools will switch within 10 minutes, but most miners switch manually, and that takes time. Loyal miners will stay anyway. So it may take several hours to lose half of our hash power, and we shall be close to the next retarget by that moment. On the other hand, your scenario may be inverted if FTC becomes very profitable for some reason; then we can expect the network to double in hash power within a few hours. Either way is tolerable, and the network will stabilise itself after a few retargets, i.e. within a single day. In fact, such extreme situations happen maybe once or twice a month, and they should disappear altogether as we become stronger every month.
So, 504-block averaging carries a trail of history which plays either way. Any issues get resolved reasonably fast. It gives very good protection against time warp attacks and good rewards for loyal miners.
If people are so concerned about the scenario above, a hybrid solution may be chosen: calculate past averages using both long and short averaging windows, i.e. 504 and 126 blocks, and give them equal weights in the final result. Recent history is amplified this way, but not overamplified. It can be implemented relatively easily compared to an EMA (exponential moving average).
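For illustration, a rough sketch of how that hybrid could be computed (the names, types and minute-based example here are mine, not actual client code):
[code]#include <cstdint>
#include <cstdio>

// Hypothetical sketch of the hybrid averaging idea: scale the actual
// timespan measured over 504 blocks down to a 126-block equivalent,
// then average it with the timespan of the last 126 blocks, giving
// both windows equal weight.
static const int64_t nWindowLong  = 504;   // long averaging window, blocks
static const int64_t nWindowShort = 126;   // short averaging window, blocks

int64_t HybridTimespan(int64_t nTimespanLong, int64_t nTimespanShort) {
    // Bring the 504-block timespan to the same 126-block scale.
    int64_t nLongScaled = nTimespanLong * nWindowShort / nWindowLong;
    // Equal weights: recent history amplified, but not overamplified.
    return (nLongScaled + nTimespanShort) / 2;
}

int main() {
    // Example in minutes: 504 blocks took 1100 min (1260 expected),
    // the last 126 blocks took 250 min (315 expected).
    printf("%lld\n", (long long)HybridTimespan(1100, 250));  // prints 262
    return 0;
}[/code]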
-
[quote name=“mnstrcck” post=“27990” timestamp=“1379009171”]
I think it’d be cool to have a changelog somewhere on the main site with credits to contributing developers, a simplified explanation, and a link to the discussion thread. You know, show the process. Do it all open like Obama promised but never really did. Maybe update the readme on GitHub as well?
[/quote]I second that. If you fork the project and make the change, you can issue a pull request which can then contain the discussion, and when it’s ready to be merged, press the button and merge it into the mainline code base.
-
What concerns me is that the hash rate changes more than the difficulty in percentage terms, so we diverge towards the maximum change.
I have redone the third table, putting in values closer to the real values we have. I added three averaging methods: squared weighting with damping, where the last 126 blocks have weight 16, the second last 126 weight 9, the third weight 4 and the last weight 1 (4^2, 3^2, 2^2, 1^2); the Ghostlander (126/504 + 126)/2; and the same one with damping.
[url=https://docs.google.com/file/d/0B5YFJvIJozEwaG10Zk5wN0tDbFE/edit?usp=sharing]https://docs.google.com/file/d/0B5YFJvIJozEwaG10Zk5wN0tDbFE/edit?usp=sharing[/url]
The squared weighting and the (126/504 + 126)/2 give similar results, and both need damping to converge. The Excel values calculate four full retargets of the current 504, plus one 126 for the next one; so in theory 3.7 days (17 * 5.25 h), and many methods are still diverging.
For those who don’t know it: I did not invent damping, it is a well known method used in control systems (motors, heaters, etc.), where you apply damping to the control loop (also called the feedback loop) to stabilise the output at the requested value without overshooting and to smooth the change (you don’t want a fan to swing 0%-100% when you request 50% speed :)). The optimal way to calculate it is often a Laplace function; usually feedback change / needed change is a good starting point. In our case, my observation was that we got about 4 times more change in hash rate than we expected to get by changing the diff.
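As a toy illustration (made-up numbers, nothing from the client code): applying only a quarter of the measured correction each step settles on the requested value without overshooting.
[code]#include <cstdio>

// Toy damping demo: instead of applying the full measured correction
// at once, apply only a quarter of it each step. The output converges
// to the requested value smoothly, with no overshoot.
int main() {
    const double target  = 50.0;   // requested value, e.g. 50% fan speed
    const double damping = 0.25;   // fraction of the correction applied
    double output = 0.0;
    for (int step = 1; step <= 10; step++) {
        output += damping * (target - output);
        printf("step %2d: %6.2f\n", step, output);
    }
    return 0;
}[/code]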
Edit: I just looked at the damping for 126/504, and since each block passes through 4 times, the feedback effect is not just 4 times as I put in the file, but much greater (as it feeds back over the previous feedback effect). I don’t remember the exact formula for that (sorry, I was doing control theory 20 years ago, but not anymore); it’s a differential equation, but its result is something like f(x) = x^n+… . So the 0.25 (1/4) should probably be more like 1/16. Not sure I want to go back to those maths.
As for the Nutnut scenario, it happens pretty much every time LTC retargets :( but it can go either way depending on the LTC change.
-
[quote name=“groll” post=“28001” timestamp=“1379015883”]
What concerns me is that the hash rate changes more than the difficulty in percentage terms, so we diverge towards the maximum change.
[/quote]I’m afraid we have to live with it. It is possible in theory to make difficulty follow the network hash rate by doing near real time retargets with a very small averaging window and no difficulty limiter, or a very relaxed one. However, it is not feasible for security reasons. We have profitability on one side and security on the other; the more we gain in one, the more we lose in the other.
9% over 126 blocks with 504-block averaging and a future limit of 30 minutes brings us a significant security improvement, to the level where time warp attacks become nearly inefficient. I think it is an important achievement. 126-block averaging brings no such improvement.
[quote name=“groll” post=“28001” timestamp=“1379015883”]
As for the Nutnut scenario, it happens pretty much every time LTC retargets :( but it can go either way depending on the LTC change.
[/quote]We attract too many coin hoppers currently. More frequent retargets will help us get rid of some. As we grow and our base of loyal miners grows with us, we become less affected by the LTC dynamics.
-
I am posting here the chart of each algorithm for those who have not looked at or do not understand the spreadsheet. It shows the current 504 diverging a lot, and 126/504 diverging almost as much, but as peaks with points in between.
126 also diverges, just over a smaller range, as does (126/504 + 126)/2.
126, (126/504 + 126)/2 and 126/504 with squared weighting, all with a damping factor of 0.25, converge mostly nicely. I tried many formulas and all three seem to do a pretty good job. 126 with 0.25 damping is the best, but it is more vulnerable to time manipulation. The other two are okay.
The source is in this Google Docs version of the Excel file, on sheet 2:
[url=https://docs.google.com/spreadsheet/ccc?key=0ApYFJvIJozEwdDBkZjZibm5LQ0JXYnJ0RmdseGtGUVE&usp=sharing]https://docs.google.com/spreadsheet/ccc?key=0ApYFJvIJozEwdDBkZjZibm5LQ0JXYnJ0RmdseGtGUVE&usp=sharing[/url]
Edit: I added another run that starts out nearly stable and shows that, without damping, the diff naturally diverges when the hash rate changes faster than the difficulty.
[attachment deleted by admin]
-
Groll, try to set up a 51% attack on your models. No time warps, a simple one for a day: 3 GH/s before the attack (columns B to E, or keep current), 12 GH/s during the attack (columns F to I), 3 GH/s after the attack (column J). We get a difficulty trap with 126-block averaging, whether damped or not. No trap with 126/504 or (126/504 + 126)/2, non-damped or damped.
Actually, (126/504 + 126)/2 works well with no damping (+/- 27% block target variance). 0.25 damping makes it even better.
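For anyone who wants to try this outside the spreadsheet, here is a crude standalone model of the scenario (my own simplification: block time taken as proportional to difficulty over hash rate, no 9% limiter, constants illustrative):
[code]#include <cstdio>

// Crude model of the attack: hash rate steps 3 -> 12 -> 3 GH/s, and
// the (126/504 + 126)/2 rule retargets every 126 blocks with 0.25
// damping. Block time is modelled as difficulty over hash rate,
// normalised so that diff 100 at 3 GH/s is exactly on target.
int main() {
    const double targetTimespan = 126 * 150.0;  // seconds per 126 blocks
    const double damping = 0.25;
    double diff = 100.0;
    double hist[4];                             // last four 126-block spans
    for (int i = 0; i < 4; i++) hist[i] = targetTimespan;

    for (int period = 0; period < 16; period++) {
        double ghs = (period >= 4 && period < 8) ? 12.0 : 3.0;  // attack
        // Actual timespan of this 126-block window.
        double actual = targetTimespan * (diff / 100.0) * (3.0 / ghs);
        for (int i = 3; i > 0; i--) hist[i] = hist[i - 1];  // 504 history
        hist[0] = actual;
        double avg504 = (hist[0] + hist[1] + hist[2] + hist[3]) / 4.0;
        double mix    = (avg504 + actual) / 2.0;    // (126/504 + 126)/2
        double damped = damping * mix + (1.0 - damping) * targetTimespan;
        diff *= targetTimespan / damped;            // retarget
        printf("period %2d: %4.1f GH/s, diff %7.2f\n", period, ghs, diff);
    }
    return 0;
}[/code]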
-
Taking a look at your scenario, I can see that sampling over 504 blocks with adjustments every 126 blocks stops our difficulty getting too high if we come under a hash rate attack. I am fairly sure we have seen such attacks before, used to get us into a state ready for further attacks. The attackers would need to spend more time driving our difficulty up.
We need to find the right balance between protection and responsiveness.
Personally I like the idea of the average between a 504 and a 126 block sample.
Considering all the data we now have available, can you please make a recommendation, Ghostlander?
-
[quote name=“Bushstar” post=“28039” timestamp=“1379073383”]
We need to find the right balance between protection and responsiveness. Personally I like the idea of the average between a 504 and a 126 block sample.
Considering all the data we now have available, can you please make a recommendation, Ghostlander?
[/quote]Yes, an average of the 504 and 126 block windows with 0.25 damping seems a very good trade-off and gets my vote.
-
The timing of the attack is important: the 504 was going down from columns B-C-D-E, so the next retarget was in range. Putting difficulty 200 and 6 GH/s makes a very different graph (I put 1 GH/s rather than 3 GH/s in column J and anywhere we are below 1 GH/s, as we are likely to have mostly only hardcore miners left at the high diff). What is interesting is that none is worse than the current 504, but 126, 126/504 and (126/504 + 126)/2 all reach roughly the same high diff and come back. I don’t have time to simulate other scenarios, feel free to try.
But I agree with Ghostlander: using the 126-504 sampling mix (which I refer to as (126/504 + 126)/2) with 0.25 damping seems to be the best trade-off.
Making the damping stronger also seems to help, as changes (including warps) have less impact, so it is more resilient to attack. See the 1/16 at the bottom of the sheet, not in the graph. The drawback is that, with the 144% range, the adjustment can be too slow (put 1000 as the hash rate to get a real time warp for this one).
Note: time warps can’t be solved completely, as the block timestamp comes from the client, so we need to live with them. NTP doesn’t solve it either, as clients can set their timestamps to whatever they want. To change that, the structure of the chain and the protocol would need to be reinvented so that the block time is attested to the miner by an external source.
[attachment deleted by admin]
-
Well, this looks to work out well then. As I said, I like the 126 and 504 average, and we need that 0.25 damping, so this is something that we agree upon.
Let’s get this implemented and running on a testnet. I am going to review the code with Ghostlander.
-
Interesting discussion, I’ve not been keeping up with developments. Can I also say that it was very heartening to see the progress in the difficulty adjustment discussion. I have been closing my eyes and crossing my fingers for the past week and hoping for the best.
We are living in a quantum world of the Crypto Currency Big Bang, where just discussing possible changes affects the system. I believe the quality of discussion makes that effect positive. I have already shown that the changes only have to be correct to an order of magnitude to work (in an unknown and varying / human / robot controlled / future environment).
I can’t predict the future, but from my analysis Feathercoin is over the bump and starting to accelerate downhill; it’s going to take a lot to knock us off course now. Well done.
-
Many altcoins get screwed up when their developers try to come up with “the best, the fastest, the most advanced” settings. Most of these folks don’t even know the story behind Geist Geld.
Meanwhile, [url=https://github.com/ghostlander/FeatherCoin/commit/1f7a9e74895eaa723daea351b6e1a2294b864330]a small patch[/url] to fix broken getnetworkhashps.
-
Thanks for the patch, Ghostlander. Can you make the changes on the Feathercoin/Feathercoin repo? You can edit it on the webpage; this commits to a temporary branch with a pull request for me to accept. Currently I have to pull everything on your fork just to get your last commit.
-
Done. Our master branches are a bit out of sync since I’ve committed some of your recent changes myself.
-
This GME code only changes the calculation of the hash rate approximation; it is [b]not[/b] involved in the retarget calculation. It represents the average hash rate over 30 blocks, as on the FTC stats page, and is used to determine what you get in the client when requesting the network hash rate in its console.
FTC was using 120 blocks, and Ghostlander just made a patch to set it to 30, like the stats page (see the change linked five posts ago).
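Roughly speaking, the estimate works like this (my own sketch of the general approach, not the actual getnetworkhashps code):
[code]#include <cstdint>
#include <cstdio>

// Sketch of a network hash rate estimate over the last N blocks: each
// block at difficulty D represents about D * 2^32 expected hashes, so
// summing the work of the blocks produced during the elapsed interval
// and dividing by that interval gives hashes per second. Off-by-one
// details are glossed over.
double EstimateNetworkHashPS(const double* difficulty,
                             const int64_t* blockTime, int nBlocks) {
    if (nBlocks < 2)
        return 0.0;
    double work = 0.0;
    for (int i = 1; i < nBlocks; i++)        // blocks after the first
        work += difficulty[i] * 4294967296.0; // 2^32 hashes per diff unit
    int64_t elapsed = blockTime[nBlocks - 1] - blockTime[0];
    return elapsed > 0 ? work / (double)elapsed : 0.0;
}

int main() {
    const double diff[3]  = { 118.0, 120.0, 122.0 };
    const int64_t time[3] = { 0, 160, 310 };  // block timestamps, seconds
    printf("%.0f H/s\n", EstimateNetworkHashPS(diff, time, 3));
    return 0;
}[/code]
With a difficulty around 120 and 2.5 minute blocks this lands in the few-GH/s range, which matches the figures discussed above.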
-
[quote] Well Gamecoin (GME) is a lesson in how not to do diff adjustments. [/quote]
There is no other coin in our position. We now have a significant hash rate, which makes attacks expensive.
We have held a consistent value on the exchange for a significant time, so even scammers have built up an investment in FTC.
The software changes have been successful and have worked to stabilise the coin. Future changes look technically feasible with few side effects, and look good to further increase network security. Plus, it’s still a community driven, open source coin…
Can I say thanks for that. Well done.
-
I have pulled the commit from Ghostlander for better default values in the getnetworkhashps function.
[url=https://github.com/FeatherCoin/FeatherCoin/pull/20]https://github.com/FeatherCoin/FeatherCoin/pull/20[/url]
I spoke to Ghostlander last week about his branch for 0.6.4.4; there was an issue that he was ironing out. One question he brought up was how to implement the 0.25 damping. I have not spent any time on this yet, though I have been very busy all the same on behalf of our project. If anyone has code they would like to put forward for this, then please do and we can incorporate it.
-
Once you have nActualTimespan calculated (in an if block for this patch, around line 930), you just add three full targets and then divide back by four. So if you get 4.25 (in hours), it becomes (4.25 + (3 * 5.25)) / 4 = 5:
[code]nActualTimespan += 3 * nTargetTimespanCurrent;
nActualTimespan = nActualTimespan / 4;[/code]You can have an nSample count in it as well, depending on how you calculate nActualTimespan. As I have done in the spreadsheet with eight 126-block samples to construct the 504/126 mix: nActualTimespan = (sum of 8 samples + (24 * nTargetTimespanCurrent)) / 32.
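As a standalone check of that arithmetic (my own snippet, in hours rather than the seconds the client works with):
[code]#include <cstdio>

// Verifies the damping arithmetic above with the example numbers.
int main() {
    const double nTargetTimespanCurrent = 5.25; // expected hours / 126 blocks
    double nActualTimespan = 4.25;              // measured hours

    // 0.25 damping: keep 1/4 of the measurement, 3/4 of the target.
    nActualTimespan += 3 * nTargetTimespanCurrent;
    nActualTimespan /= 4;
    printf("damped timespan: %.2f hours\n", nActualTimespan);  // 5.00

    // Same idea with eight 126-block samples forming the 504/126 mix:
    // (sum of 8 samples + 24 * target) / 32.
    const double sum8 = 8 * 4.25;
    const double damped8 = (sum8 + 24 * nTargetTimespanCurrent) / 32;
    printf("per-window damped: %.2f hours\n", damped8);        // 5.00
    return 0;
}[/code]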
-
[url=https://github.com/ghostlander/FeatherCoin/commit/69ec264e6c2405d4a2048267c2c3221990c25368]Update for 0.6.4.4 beta 2[/url]
Implements a new retarget strategy at block #87948, supporting combined averaging windows of 126 and 504 blocks with 0.25 damping. Tested on the livenet to report statistics in 504/2016 mode; seems fine. Ready for testnet.