Move FTC to factor four diff swing over 7 days
-
[quote name=“groll” post=“27000” timestamp=“1378094049”]
you need to know that the higher the median number, the further back in time a block can be warped. 49 would mean a block is accepted with the timestamp of the 25th block back, which in normal conditions is a bit more than 1h, but would be >4h with the 10 minute block times we sometimes get.
[/quote]Although we are unlikely to run into actual 10 minute block times often with more frequent retargeting every 126 blocks, we can probably keep the median of the 11 past blocks with a 30 minute future limit. On the other hand, a larger past median makes it more difficult for attackers to manipulate median time stamps, as they have to maintain superior hash power over a larger time frame. It is also interesting whether a more complex rule can be developed, such as MAX(median of the 11 last blocks, current time - 30 minutes), to make both ends symmetric in the worst case, since we are not likely to see actual block times above 30 minutes.
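Something along these lines, as a rough sketch only (illustrative names, assuming the usual median of the last 11 blocks and the proposed 30 minute future limit; not actual FTC code):
[code]
#include <stdint.h>
#include <algorithm>

static const int64_t MAX_FUTURE_DRIFT = 30 * 60;   // proposed 30 minute future limit

// nBlockTime:      timestamp of the block being checked
// nMedianTimePast: median timestamp of the last 11 blocks
// nAdjustedTime:   network-adjusted "current" time
bool IsTimestampAcceptable(int64_t nBlockTime, int64_t nMedianTimePast, int64_t nAdjustedTime)
{
    // Upper bound: at most 30 minutes into the future
    if (nBlockTime > nAdjustedTime + MAX_FUTURE_DRIFT)
        return false;
    // Lower bound: MAX(median of the last 11 blocks, current time - 30 minutes),
    // so both ends of the window are symmetric in the worst case
    if (nBlockTime <= std::max(nMedianTimePast, nAdjustedTime - MAX_FUTURE_DRIFT))
        return false;
    return true;
}
[/code]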
[quote author=groll link=topic=3447.msg27000#msg27000 date=1378094049]
this number of blocks would be very, very dangerous with a lower sampling range.
[/quote]That’s why I suggest keeping the 504-block averaging window, as it reduces this effect much better than the smaller 126-block one.
-
[quote]Can you explain the last part about using a .25 factor of the diff adjust?
[quote]with the time warp corrected: I would say 7% over 126 with a .25 factor of the actual diff adjustment calculation, so an actual >28% would become 7% and 8% becomes 2%[/quote][/quote]
This is a reduction of the applied difficulty change. The actual calculation works out what the diff should be to get the 2.5 minute blocks at the current hash rate. In reality it overshoots, because the hash rate also increases or decreases with the difficulty change as profitability changes. We currently have 15% loyal miners and 85% hoppers, so the ratio is in the range of 0.17.
The .25 is there to take hoppers coming in into account. As we have seen in the past, if we are far off we are very far off, and if we are near we overshoot and the whole train switches over. The .25 (it can be something else) makes the adjustment less aggressive, so we account for the expected hash rate change that the diff change will itself cause.
It is important to note that for extreme cases the adjustment will still hit the max of the limiter, so a way-off adjustment is not changed. When we are nearer the correct time we just change the diff more slowly.
In reality hashrate * time = k * difficulty, where k is around 4G for FTC. The actual adjustment links the diff to the time and takes the hash rate as constant, but we know a change in diff means the hash rate will change, so if we are near the target we just overshoot. Using 0.25 makes a smaller step and the hash rate change should compensate for the rest. If the hash rate does not change enough, we will adjust a bit more at the next retarget, but we would be within 10% of the target, so not a bad place to be.
Edit: note that the price greatly influences the hash rate and varies by ±10% in a day, so this changes the equation for the next retarget as well. Price can’t be factored in directly, but the hash rate already factors it in.
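As a rough sketch of how the .25 factor could be applied (illustrative names, not the actual FTC retarget code), the measured timespan would only move a quarter of the way towards its measured value before the usual limiter is applied:
[code]
#include <stdint.h>

static const int64_t nTargetTimespan = 126 * 150;        // 126 blocks * 2.5 minutes
static const int64_t nDampingNum = 1, nDampingDen = 4;   // the .25 factor

int64_t GetDampedTimespan(int64_t nActualTimespan)
{
    // Apply only 1/4 of the measured deviation from the target timespan
    int64_t nDamped = nTargetTimespan
                    + (nActualTimespan - nTargetTimespan) * nDampingNum / nDampingDen;
    // Keep a per-retarget limiter, e.g. the 7% mentioned in the quote above
    if (nDamped < nTargetTimespan * 100 / 107)
        nDamped = nTargetTimespan * 100 / 107;
    if (nDamped > nTargetTimespan * 107 / 100)
        nDamped = nTargetTimespan * 107 / 100;
    return nDamped;   // then: new_target = old_target * nDamped / nTargetTimespan
}
[/code]
With this, a measured deviation of 28% or more becomes the capped 7% change and 8% becomes 2%, matching the figures in the quote.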
-
[quote name=“d2” post=“27098” timestamp=“1378183323”]
Just like you can’t take the price into account, you also cannot assume that any additional hash rate will enter the network if the difficulty drops by X amount or to X.XX level as the profit miners only jump onto the coin if a profitability website says to do it.
[/quote]
It’s a possibility, and the next retarget will just put us closer to the correct value if nothing else changes. But the actual formula assumes no change in hash rate, and this has been proven very wrong for changes of more than 1-2%. As we can see now, passing from 228 to 211 just made the hash rate jump from 5G to 8G. Today at diff 228 I made 6-7 checks at random intervals with the price mostly stable for some blocks (I discarded many ranges with an unstable price on BTC-e), and the price went from 0.00107 at 5Gh/s to 0.00113 at 7Gh/s. The funny part was that the hash rate over 30 blocks mostly followed the price, so the 5% change in price gave a 2Gh/s change in hash rate. Now we have retargeted to 211 (8%) with the price at 0.00108 and we hit 8.5Gh/s. So we overshoot; if current conditions continue we will retarget to over 260 (near 300 if it is a full swing :( ) and be on a long run again, overshooting the other way afterwards.
The 0.25 “damping” factor would just slow the change when we are not too far from the stable value, where we know we should already have some variable hash power mining. So it expects, but does not need, hoppers to come in or out; we will adjust more slowly, possibly over several retargets for small changes, which is better than the current full gas/full brake mode. If we are far from the target the max will be used anyway, so we can follow in any direction as necessary; the damping reduces the impact, not the range, of the retarget limits.
I have noticed over the past 5-6 weeks that going more than 5% over LTC profitability gives us more than 2.5Gh/s compared to being under LTC, and it slowly increases with a bigger percentage (~5.5Gh/s at ~15%). This is true even if many coins are above us in profitability; you can see the change in the LTC hash rate almost directly.
1-2Gh/s moves around constantly with the profitability of the other coins. How much that changes things for us depends on the coin.
-
It’s being worked on as we speak, Erk. The issue is that it requires a hard fork, which should not be taken lightly. We need to ensure that we have everything we need in the new client and that we haven’t made any mistakes. We promise - it is coming.
Remember, we have many more wallets in the wild than most other alts, so a hard fork is a big step.
It would also appear that there’s a high percentage of clients that haven’t upgraded to the ACP wallet yet.
Btw, as I was reading through that thread, I noticed you use Coinotron’s PPS? If you still want a PPS pool, try ours. Same fee but higher share value. ;)
-
[quote name=“Nutnut” post=“27307” timestamp=“1378363382”]
It’s being worked on as we speak, Erk. The issue is that it requires a hard fork, which should not be taken lightly. We need to ensure that we have everything we need in the new client and that we haven’t made any mistakes. We promise - it is coming. Remember, we have many more wallets in the wild than most other alts, so a hard fork is a big step.
It would also appear that there’s a high percentage of clients that haven’t upgraded to the ACP wallet yet.
Btw, as I was reading through that thread, I noticed you use Coinotron’s PPS? If you still want a PPS pool, try ours. Same fee but higher share value. ;)
[/quote]Woot? did we make em lower their fees ;D love it!
-
9/126 has been implemented in the 0.6.4.4 branch of Feathercoin.
https://github.com/FeatherCoin/FeatherCoin/tree/0.6.4.4
We need to code in the specific changes to increase the cost of a time warp.
If people want to contribute code, a commit, or explicit details of proposed code changes, now is the time to come forward :)
-
I see you’ve implemented 9% over 126 blocks without a large averaging window of 504 blocks or the like.
[code]-    static const int64 nTargetTimespan = (7 * 24 * 60 * 60) / 8;  // Feathercoin: 7/8 days
+    static const int64 nTargetTimespan = (7 * 24 * 60 * 60) / 32; // Feathercoin: 7/32 days[/code]
In fact, this is the recent Phenixcoin patch with no block target/reward changes. It doesn’t work well for them and I see no reason why it would work well for us. We have discussed so much and finished exactly where we started.
See below what PXC has got with such a patch. We shall fare no better, unfortunately.
[attachment deleted by admin]
-
I’ve said before that we cannot compare ourselves to PXC. They have no base hashrate and need to address that issue. We are an active healthy coin with a loyal miner base which we want to look after :)
However, I think that there is some misunderstanding between us.
Can you provide some code, pseudocode, or an explicit explanation of what you think we need?
EDIT: I think I have it, a diff change every 126 blocks (max 9%) but sampling from 504 blocks ago?
Please PM me whatever IM details you have and we can make some quick progress to report here. Thanks.
-
[quote name=“Bushstar” post=“27756” timestamp=“1378840332”]
I’ve said before that we cannot compare ourselves to PXC. They have no base hashrate and need to address that issue. We are an active healthy coin with a loyal miner base which we want to look after :)
[/quote]Indeed, we are much larger than PXC. Those who are after us are also much more serious. PXC is hunted by Multipool only; the others simply ignore it. Too small a game for them. We are large enough to catch their attention, and our block target is 3.3x slower, which also works in their favour. Alright, let’s continue in private until we figure out something constructive.
-
[quote name=“erk” post=“27765” timestamp=“1378845560”]
Gamecoin (GME) just implemented a 10% difficulty change every 12 blocks; they also have 2.5 min blocks. It seems to be working OK, feels a little slow to respond when a pool hops, but it gets there. They also had issues previously with a 51% attack and being locked at a high diff. One thing they did was patch the client so no older versions were valid on the chain after the update.
[/quote]Thanks for that erk, I have taken a look at GameCoin and will keep an eye on them. I do intend to change the protocol version as well.
-
[quote]EDIT: I think I have it, a diff change every 126 blocks (max 9%) but sampling from 504 blocks ago?[/quote]
I like the idea of protecting against short-term manipulation, but this often has a repeater effect from past changes, as you resample them 4 times. The problem is that it doesn’t converge. Example: you want 4, and you get 5 when you are under 3.7 and 3 when you are above 4.3, so you get 4-3-5-3, next is 5-4-3-5-3-5-4-3… and it repeats. Yes, the actual single sampling doesn’t converge either, so we get 3-5-3-5-3-5-3 :(
Resampling 4 times is not too bad and with some variation nearly converges, better than the actual sampling. With more resampling the repeater can become a repeater-amplifier. A 12-block retarget over a 504-block resample makes past events too important, so it reacts to older trends, not the current one. Example: say we get 10-x with a 9% change, so we should stabilise at 5, and the previous history is something like 7-8-8-8-8-8-8-8; the next would be 7-6-5-4-3-2-1-2. So we go way too low and then way too high because of the past history.
An exponential moving average can help, but needs to be analysed a bit more. Gamecoin is not on CoinChoose or CoinWarz at the moment, so their problem seems very different. 10% is way too much for such a small interval (it is more than 600% over 504 blocks when compounded) and can be abused with time warp, as we can’t completely eliminate it. The allowed time difference should be less than the difficulty adjustment when added to or subtracted from the sampling period. The actual 2h over 504 blocks is ~10%. 30 min over 126 would also be 10%, so borderline abusable. 30 min over 12 blocks is 100%, so someone can make 5-6 diff changes one way and one the other way to reset the time, then repeat, to abuse the diff. Zetacoin gets this kind of stuff. Longer sampling helps solve this.
-
[quote name=“ghostlander” post=“27755” timestamp=“1378839706”]
I see you’ve implemented 9% over 126 blocks without a large averaging window of 504 blocks or the like.[code]-    static const int64 nTargetTimespan = (7 * 24 * 60 * 60) / 8;  // Feathercoin: 7/8 days
+    static const int64 nTargetTimespan = (7 * 24 * 60 * 60) / 32; // Feathercoin: 7/32 days[/code]
In fact, this is the recent Phenixcoin patch with no block target/reward changes. It doesn’t work well for them and I see no reason why it would work well for us. We have discussed so much and finished exactly where we started.
See below what PXC has got with such a patch. We shall fare no better, unfortunately.
[/quote]I agree with this assessment. A large averaging window is critical for smoothing out temporary hash rate spikes. If you change your retarget time to every 126 blocks with a 9% cap, but make your sample window 500+ blocks, you’ll get a much slower adjustment rate that won’t spike when hash rates do. We call this a “lagging indicator” in trading terminology.
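As a rough sketch (illustrative names, not a proposed patch): retarget every 126 blocks, but measure the actual timespan over the last 504 blocks and scale it down before applying the 9% cap.
[code]
#include <stdint.h>

static const int64_t nTargetSpacing    = 150;   // 2.5 minute blocks
static const int     nRetargetInterval = 126;   // retarget every 126 blocks
static const int     nAveragingWindow  = 504;   // but sample 504 blocks back
static const int64_t nTargetTimespan   = nRetargetInterval * nTargetSpacing;

// nFirstBlockTime: timestamp of the block 504 blocks back
// nLastBlockTime:  timestamp of the last block
int64_t GetScaledActualTimespan(int64_t nFirstBlockTime, int64_t nLastBlockTime)
{
    // Time the last 504 blocks actually took, scaled down to a 126-block span
    int64_t nActualTimespan = (nLastBlockTime - nFirstBlockTime)
                              * nRetargetInterval / nAveragingWindow;
    // 9% cap per retarget
    if (nActualTimespan < nTargetTimespan * 100 / 109)
        nActualTimespan = nTargetTimespan * 100 / 109;
    if (nActualTimespan > nTargetTimespan * 109 / 100)
        nActualTimespan = nTargetTimespan * 109 / 100;
    return nActualTimespan;   // new_target = old_target * result / nTargetTimespan
}
[/code]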
-
For the time warp, the first part is to reduce the 2h window. This is checked in CheckBlock() where the height is not available, so changing it there can be dangerous during the transition. So I made it a check only for when the block is set in the chain: an orphan can be in the future, but will not enter the chain if it is more than 30 min ahead. This is also where the past time validation is done. The second change enforces a max time in the past from the last block of 2 * 30 min, since a block at the max future time would otherwise not prevent a node with the max time adjustment from entering blocks into the chain. The last one is to change the max time adjustment to 13 minutes (a bit less than half of the 30 minute window) to remove splits of the network caused by the time adjustment being + on one side and - on the other.
In main.cpp at line 1819, add:
[code]
// Limit blocks in the future accepted into the chain to a window of 30 min
if (GetBlockTime() > GetAdjustedTime() + 30 * 60)
    return error("CheckBlock() : block timestamp too far in the future");

// Check timestamp against prev; it should not be more than 2 times the window
if (GetBlockTime() < pindexPrev->GetBlockTime() - 2 * 30 * 60)
    return error("CheckBlock() : block timestamp too far in the past");
[/code]
-
[quote name=“Kevlar” post=“27781” timestamp=“1378852749”]
[quote author=ghostlander link=topic=3447.msg27755#msg27755 date=1378839706]
I see you’ve implemented 9% over 126 blocks without a large averaging window of 504 blocks or the like.[code]-    static const int64 nTargetTimespan = (7 * 24 * 60 * 60) / 8;  // Feathercoin: 7/8 days
+    static const int64 nTargetTimespan = (7 * 24 * 60 * 60) / 32; // Feathercoin: 7/32 days[/code]
In fact, this is the recent Phenixcoin patch with no block target/reward changes. It doesn’t work well for them and I see no reason why it would work well for us. We have discussed so much and finished exactly where we started.
See below what PXC has got with such a patch. We shall fare no better, unfortunately.
[/quote]I agree with this assessment. A large averaging window is critical for smoothing out temporary hash rate spikes. If you change your retarget time to every 126 blocks with a 9% cap, but make your sample window 500+ blocks, you’ll get a much slower adjustment rate that won’t spike when hash rates do. We call this a “lagging indicator” in trading terminology.
[/quote]Will a larger averaging window not nullify the effect of the faster readjusts though?
Keeping the numbers simple…
We start at block 1 and maintain a steady 5Gh/s rate to block 500. Then for whatever reason the price drops or the diff increases. Or another coin gets really profitable… Either way, we lose half the hashing power. At the next retarget we can’t drop the full 9% per 126 blocks because we are still held back by the average over the 504 blocks.
Or do I need more coffee?? :-[
-
Nut, you are correct, the difficulty will be less responsive to sudden changes in hash power. There are dangers of abuse with a smaller window, and this proposal tries to reduce them, for example the effect of a two hour time warp on a 126-block sample. A 504-block sample would help with this.
Let’s look at the time warp.
The attackers can swing us two hours into the future on a 21 hour block time difficulty window (504 blocks).
With groll’s proposal we will have half an hour in a 5.25 hour window (126 blocks).
2/21 = 9.5% time added from attackers on 504 blocks
0.5/5.25 = 9.5% time added from attackers on 126 blocks
If we do not limit the extremes of block times then a 2 hour swing on a 5.25 hour window is very wild.
2/5.25 = 38% time added from attackers on 126 blocks
So if we then sample from 504 blocks and limit the time to half an hour we would end up with.
0.5/21 = 2.38% time added from attackers on 504 blocks every 126 blocks
This greatly limits the effect of a time warp, to the point where it is ineffective.
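A quick stand-alone check of those ratios (purely illustrative):
[code]
#include <cstdio>

// Fraction of a sampling window an attacker can add through the timestamp
// limit, for each combination discussed above.
int main()
{
    struct Case { const char* desc; double limitHours; double windowHours; };
    const Case cases[] = {
        { "2h limit over 504 blocks (21h)",      2.0, 21.0 },
        { "30min limit over 126 blocks (5.25h)", 0.5, 5.25 },
        { "2h limit over 126 blocks (5.25h)",    2.0, 5.25 },
        { "30min limit over 504 blocks (21h)",   0.5, 21.0 },
    };
    for (unsigned i = 0; i < sizeof(cases) / sizeof(cases[0]); i++)
        printf("%s: %.2f%%\n", cases[i].desc,
               100.0 * cases[i].limitHours / cases[i].windowHours);
    return 0;
}
[/code]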
As Sunny King is the chap who spotted and reported the flaw in TRC’s design I am asking him to review the proposed changes to see if he can find anything wrong.
This thread is a great help and I feel that we are making good progress. Keep it up please :)
-
[quote name=“erk” post=“27805” timestamp=“1378894013”]
Why don’t we just remove the code that works out the time by consensus and use NTP servers? Toughen up the rules for time stamps to be valid. This whole time warp attack thing should never have been possible; it’s a design flaw in the Bitcoin code.
[/quote]This is a very interesting point that you bring up and is part of a larger debate over centralisation. Bitcoin tried to invent decentralised time. We can see that there are dangers in this and have to suffer the consequences. I believe that Bitcoin may not have the proper balance of trade-offs between centralisation and decentralisation largely due to concerns of resilience against hostile entities. Bitcoin was designed to survive if the lead developer disappeared :)
Moving to NTP servers is outside the scope of the upcoming patch. A change of that magnitude needs a patch all of its own. If you want to discuss this further please do so in another thread to keep this one focused.
[url=http://forum.feathercoin.com/index.php?topic=3077.0]Further NTP discussion for those interested. [/url]
-
[url=https://github.com/ghostlander/FeatherCoin/commit/ef61714942aeabc1a954a77089c6371585c8faf2]Update for 0.6.4.4 beta 1[/url]
Runs on the livenet as usual until block #87948, then switches to 9% over 126 blocks with 504-block averaging. I shall import the time warp related changes tomorrow.
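A minimal sketch of that switch-over (illustrative names, not the actual commit):
[code]
#include <stdint.h>

static const int nDiffSwitchHeight = 87948;   // livenet switch-over block

int64_t GetRetargetTimespan(int nHeight)
{
    // 7/8 days (504 blocks) before the switch, 7/32 days (126 blocks) after
    return (nHeight < nDiffSwitchHeight) ? (int64_t)(7 * 24 * 60 * 60) / 8
                                         : (int64_t)(7 * 24 * 60 * 60) / 32;
}
[/code]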
-
With Nutnut’s premise of blocks 0-500 at 2.5 minutes and then losing half the hash rate so the block time becomes 5 minutes, I made a spreadsheet that shows, for each 126-block range, the time between blocks on the first line and the diff on the second. It starts at a diff of 1 and then uses a variable hash rate given by a simple formula, 5*diff^3, so the hash rate grows faster than the diff change in %. This makes 5 minutes the max block time, and the target should be a diff of 0.7937 for the 2.5 minutes.
The second scenario uses 400 blocks at 2.5 minutes and 104 at 5 minutes (putting 256 at 2.5 and the last 256 at 5 minutes gives very similar results).
The third one uses mostly no change in hash rate and shows that the 504 with damping and the 126 over 504 are not very good: the first is too slow, the second echoes past events too much, so it converges slowly.
https://docs.google.com/file/d/0B5YFJvIJozEwYmM3eGtRemdkb3M/edit?usp=sharing
For the first two 126-block ranges after the initial 504, I marked as good the ones that react at or near the maximum possible speed, so they should be fast enough for fast changes. For the rest, between 2.4 and 2.6 is good (green), between 2 and 3 is neutral, and below 2 or above 3 is bad (red).
So the code I posted, with a 126-block retarget over a 504-block weighted average and a damping factor of .25, seems to be doing pretty fine. It is second to the plain 126 with .25 damping, but that one would be less resilient to a time attack. Pure 126 and 504 just diverge to the min-max of their range; the 9% cap just makes the swings smaller, as seen with PXC. A simple longer average also adds variation from the past that is much greater than the 9%, so it just oscillates as well.
Feel free to test other scenarios and add other changes.
-
Spotted a mistake in the 504 formula, so here is a new link, as I can’t edit the previous one:
https://docs.google.com/file/d/0B5YFJvIJozEwdi1KY3dMeEpTNmM/edit?usp=sharing
-
Ghostlander, thanks for doing that work, it looks excellent.
What is concerning are groll’s simulations.
The 504 we have now looks very poor.
126 blocks over a 504 average is poor. The time ends up fluctuating wildly and does not look good.
Change at 126 blocks without the history oscillates at the extremes but is much more palatable.
Change at 126 blocks with .25 damping looks the best of the bunch. Regarding the time warp, bringing the block time limit down to 30 mins leaves this with the same vulnerability we have now.
The damage from the last time warp attack was pools not being able to mint blocks. According to Coinotron this was a bug in the stratum implementation. It may be possible to move to 126 blocks with a 30 min block time difference.