Kimoto's Gravity Well
-
[quote name=“zerodrama” post=“56596” timestamp=“1391344444”]
Kimoto’s Gravity Well
I think it tries to address the wrong problem.
[/quote]I agree; community monitoring (condition-based maintenance) will be an important factor in preventing and detecting attacks.
Particularly in the future, when a number of coins are established as world currencies. Until then, it is obvious to me, if not to the Bitcoin “heads in the sand”, that attacks on the altcoins will evolve into, and have already led to, successful attacks on Bitcoin and Litecoin.
We have a defence against many attacks that they don’t, and they are relying on network dominance, which is a mistake.
However, ACP has its own drawbacks, and we would like to keep trying ideas to replace it, particularly as it will be less necessary if we take alternative approaches, e.g. Kickstarting cheap ASICs for merchants. We need to concentrate on the attacks that are actually happening, e.g. evil multipools.
-
I think it would be a great idea to implement Kimoto’s Gravity Well.
-
If someone supplies a Kimoto’s Gravity Well patch, we can set up a test.
-
[quote name=“wrapper” post=“56620” timestamp=“1391349307”]
If someone supplies a Kimoto’s Gravity Well patch, we can set up a test.
[/quote]The problem is that you can’t explain Kimoto to the average person. How are you going to explain it to regulators? It’s black magic for a problem caused by pools. Kimoto won’t kill an attack by zealots.
The more black magic we have, the less participation there is in the software, and eventually in the policies.
-
[quote name=“zerodrama” post=“56689” timestamp=“1391373448”]
[quote author=wrapper link=topic=7305.msg56620#msg56620 date=1391349307]
If someone supplies a Kimoto’s Gravity Well patch, we can set up a test.
[/quote]The more black magic we have, the less participation there is in the software, and eventually in the policies.
[/quote]I’ve now looked into the Gravity Well quite a bit. It very much fulfils our requirement to re-target based on the rate of change between the short and long block-time averages. I actually think it is brilliant.
A number of other coins are employing it specifically against multipool leeching, and they don’t seem to have forking or other validation issues, which would have concerned us more (with a larger network and more miners to update).
The only point against it is that it effectively re-targets at each block, so it might be worth trying that (the easy solution) first.
It effectively damps the block difficulty changes, so our current granulated difficulty change may become superfluous.[code]
unsigned int static KimotoGravityWell(const CBlockIndex* pindexLast, const CBlockHeader *pblock, uint64 TargetBlocksSpacingSeconds, uint64 PastBlocksMin, uint64 PastBlocksMax) {
    /* current difficulty formula, megacoin - kimoto gravity well */
    const CBlockIndex *BlockLastSolved = pindexLast;
    const CBlockIndex *BlockReading = pindexLast;
    const CBlockHeader *BlockCreating = pblock;
    BlockCreating = BlockCreating; // self-assignment silences the unused-parameter warning
    uint64 PastBlocksMass = 0;
    int64 PastRateActualSeconds = 0;
    int64 PastRateTargetSeconds = 0;
    double PastRateAdjustmentRatio = double(1);
    CBigNum PastDifficultyAverage;
    CBigNum PastDifficultyAveragePrev;
    double EventHorizonDeviation;
    double EventHorizonDeviationFast;
    double EventHorizonDeviationSlow;

    if (BlockLastSolved == NULL || BlockLastSolved->nHeight == 0 || (uint64)BlockLastSolved->nHeight < PastBlocksMin) { return bnProofOfWorkLimit.GetCompact(); }

    for (unsigned int i = 1; BlockReading && BlockReading->nHeight > 0; i++) {
        if (PastBlocksMax > 0 && i > PastBlocksMax) { break; }
        PastBlocksMass++;
        // Running average of difficulty over the blocks scanned so far.
        if (i == 1) { PastDifficultyAverage.SetCompact(BlockReading->nBits); }
        else { PastDifficultyAverage = ((CBigNum().SetCompact(BlockReading->nBits) - PastDifficultyAveragePrev) / i) + PastDifficultyAveragePrev; }
        PastDifficultyAveragePrev = PastDifficultyAverage;
        // Actual vs. target solve time over the window scanned so far.
        PastRateActualSeconds = BlockLastSolved->GetBlockTime() - BlockReading->GetBlockTime();
        PastRateTargetSeconds = TargetBlocksSpacingSeconds * PastBlocksMass;
        PastRateAdjustmentRatio = double(1);
        if (PastRateActualSeconds < 0) { PastRateActualSeconds = 0; }
        if (PastRateActualSeconds != 0 && PastRateTargetSeconds != 0) {
            PastRateAdjustmentRatio = double(PastRateTargetSeconds) / double(PastRateActualSeconds);
        }
        // The "event horizon": how far the ratio may stray before scanning stops.
        EventHorizonDeviation = 1 + (0.7084 * pow((double(PastBlocksMass)/double(144)), -1.228));
        EventHorizonDeviationFast = EventHorizonDeviation;
        EventHorizonDeviationSlow = 1 / EventHorizonDeviation;

        if (PastBlocksMass >= PastBlocksMin) {
            if ((PastRateAdjustmentRatio <= EventHorizonDeviationSlow) || (PastRateAdjustmentRatio >= EventHorizonDeviationFast)) { assert(BlockReading); break; }
        }
        if (BlockReading->pprev == NULL) { assert(BlockReading); break; }
        BlockReading = BlockReading->pprev;
    }

    // Scale the averaged difficulty by actual/target time to get the new target.
    CBigNum bnNew(PastDifficultyAverage);
    if (PastRateActualSeconds != 0 && PastRateTargetSeconds != 0) {
        bnNew *= PastRateActualSeconds;
        bnNew /= PastRateTargetSeconds;
    }
    if (bnNew > bnProofOfWorkLimit) { bnNew = bnProofOfWorkLimit; }

    return bnNew.GetCompact();
}
[/code]
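To get a feel for those 0.7084 / 144 / -1.228 constants, here is a quick stand-alone calculation (my own illustration, not part of the patch) of the acceptance envelope at a few window sizes. The envelope is very wide for small samples and tightens to roughly [0.59, 1.71] once 144 blocks have been scanned:
[code]
#include <cmath>
#include <cstdio>

int main() {
    // Window sizes (PastBlocksMass) at which to evaluate the event horizon.
    const int masses[] = { 14, 72, 144, 288 };
    for (int i = 0; i < 4; i++) {
        double dev = 1 + 0.7084 * pow(double(masses[i]) / 144.0, -1.228);
        // The KGW loop keeps scanning while the adjustment ratio
        // stays inside [1/dev, dev].
        printf("N=%3d  ratio accepted while in [%.3f, %.3f]\n",
               masses[i], 1.0 / dev, dev);
    }
    return 0;
}
[/code]
-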
[quote name=“FrankoIsFreedom” post=“57385” timestamp=“1391653264”]
It’s pretty easy to explain. There is a block target, an upper limit and a lower limit. The more you deviate from the target towards the limits, the greater the change in difficulty. As you can see from http://www.coinwarz.com/difficulty-charts/franko-difficulty-chart
Franko was suffering really badly from the multipools; we had a handful of dedicated miners and a ton of other miners who wanted to mine but couldn’t afford to. Since the update, all those little spikes on the difficulty chart are multipools being forced to mine at a fairer rate according to their hashing power. Even with spikes to 4+ difficulty, we have maintained our block target average of 30 seconds. An amazing change for us; there were times when FRK blocks were taking over an hour. I’m personally glad those days are over, and it seems other FRK supporters are too.
[/quote]Well, your previous re-target algorithm was not good (a 4.0 difficulty limiter like Litecoin’s). After looking at the graphs below, I must say Kimoto’s Gravity Well isn’t as good as it may seem. Your difficulty median over the last week is 1.8, but there are many spikes, a couple even over 4.0. PXC and FTC share the same basic re-target algorithm with some differences in parameters. Neither deviates from its median difficulty by more than 20%. There is nothing to worry about.
[img]http://phoenixcoin.org/archive/frk_pxc_ftc_diff.png[/img]
-
Oh guys, firstly, this thing is overcomplicated.
Why use this sophisticated approach when we already have a very responsive and smooth diff adjustment algorithm designed by PPC’s developers?
The second point is more technical.
NEVER EVER use floating point math in decentralized software.
That’s because of the floating-point accuracy problem.
It means that different hardware may perform the same calculations with (slightly) different results.
This will permanently destroy the network consensus in a way similar to Bitcoin’s March fork due to BDB limitations.
The larger the network grows, the more likely this is to happen. Please consider this carefully.
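For reference, the PPC-style re-target being suggested is only a few lines of pure integer math. This is a sketch following the PPC source (names like nTargetSpacing and nTargetTimespan are the coin’s own parameters), not a drop-in patch:
[code]
// Per-block re-target in the style of PPC: nudge the target towards
// the actual spacing of the last two blocks, weighted by nInterval.
int64 nActualSpacing = pindexPrev->GetBlockTime() - pindexPrevPrev->GetBlockTime();
int64 nInterval = nTargetTimespan / nTargetSpacing;

CBigNum bnNew;
bnNew.SetCompact(pindexPrev->nBits);
bnNew *= ((nInterval - 1) * nTargetSpacing + nActualSpacing + nActualSpacing);
bnNew /= ((nInterval + 1) * nTargetSpacing);

if (bnNew > bnProofOfWorkLimit)
    bnNew = bnProofOfWorkLimit;
[/code]
No doubles anywhere, so every node computes bit-identical results.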
-
[quote name=“RoadTrain” post=“58819” timestamp=“1392253809”]
The second point is more technical.
NEVER EVER use floating point math in decentralized software.
That’s because of the floating-point accuracy problem.
It means that different hardware may perform the same calculations with (slightly) different results.
This will permanently destroy the network consensus in a way similar to Bitcoin’s March fork due to BDB limitations.
The larger the network grows, the more likely this is to happen.
[/quote]He’s 110% correct.
Floating point arithmetic is exact when the integers involved have fewer bits than the mantissa, but that’s about the only guarantee you can realistically make. You can’t, for instance, say that you’ll get the same result as using fixed-precision math: if you divide one floating point value by another, and then add the result multiple times, you will likely obtain different results than if you had used fixed-point arithmetic (unless it was an exact division).
With all the new devices coming online, most won’t even have an FPU, and expecting them to follow IEEE-754 is just going to disappoint you. But even on the same architecture, your choice of compiler flags can result in different operations, like -fast with the Sun Studio compiler, or -ffast-math in gcc. Did you know that many Gentoo Linux users use -ffast-math by default? #truestory
Don’t believe me? Try this code out:
[code]
float a = 1.f / 81;
float b = 0;
for (int i = 0; i < 729; ++i)
    b += a;
printf("%.7g\n", b); // prints 9.000023
[/code]…and yet…
[code]
double a = 1.0 / 81;
double b = 0;
for (int i = 0; i < 729; ++i)
    b += a;
printf("%.15g\n", b); // prints 8.99999999999996
[/code]Which one is correct? The answer is they both are, in accordance with the standard.
Fortunately, that particular bit of code never compares == 0.0; if it had, I can tell you the comparison generally won’t hold even when you think it should.
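A classic two-liner shows how untrustworthy == is with doubles; the printed values are exactly what IEEE-754 gives you:
[code]
#include <cstdio>

int main() {
    double x = 0.1 + 0.2;      // the nearest double is 0.30000000000000004...
    printf("%d\n", x == 0.3);  // prints 0: the "obvious" equality fails
    printf("%.17g\n", x);      // prints 0.30000000000000004
    return 0;
}
[/code]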
+1 rep for you.
-
Thank you for the thorough explanation.
I hope it helps coin developers realize their responsibility when changing things. ;)
-
[quote name=“RoadTrain” post=“58819” timestamp=“1392253809”]
Oh guys, firstly, this thing is overcomplicated.
Why use this sophisticated approach when we already have a very responsive and smooth diff adjustment algorithm designed by PPC’s developers?
[/quote]Sunny King’s approach doesn’t work well for many pure PoW coins. There is quite a big difference between PPC, which uses PoW mostly for bootstrapping, and most of the other coins, which work through PoW only. I’m not impressed with how it works for XPM. Good to see it working well for CNC re-loaded so far, but FTC is much larger and abused by coin hoppers much more.
[quote]
This will permanently destroy the network consensus in a way similar to Bitcoin’s March fork due to BDB limitations.
The larger the network grows the more likely this will happen.
[/quote]Had they known about this and increased dbenv.set_lk_max_locks in v0.7 prior to the v0.8 release, the March fork wouldn’t have happened. Updated BTC v0.7 can still work with v0.8. The same applies to FTC v0.6 and v0.8.
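For the record, the workaround circulated at the time (BIP 50) was simply a larger lock budget for the BDB environment. Roughly, in v0.7-era db.cpp terms (an illustration, not the exact patch):
[code]
// Raise Berkeley DB's lock/object budget before DbEnv::open(), so
// connecting a large v0.8-style block no longer dies with
// "Lock table is out of available lock entries".
DbEnv dbenv(0);
dbenv.set_lk_max_locks(537000);
dbenv.set_lk_max_objects(537000);
// ... then dbenv.open(...) as before ...
[/code]
The same effect is available without rebuilding, via a DB_CONFIG file in the data directory containing "set_lk_max_locks 537000".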
-
[quote name=“ghostlander” post=“58874” timestamp=“1392305288”]
[quote author=RoadTrain link=topic=7305.msg58819#msg58819 date=1392253809]
Oh guys, firstly, this thing is overcomplicated.
Why use this sophisticated approach when we already have a very responsive and smooth diff adjustment algorithm designed by PPC’s developers?
[/quote]Sunny King’s approach doesn’t work well for many pure PoW coins. There is quite a big difference between PPC, which uses PoW mostly for bootstrapping, and most of the other coins, which work through PoW only. I’m not impressed with how it works for XPM. Good to see it working well for CNC re-loaded so far, but FTC is much larger and abused by coin hoppers much more.
[/quote]
What’s wrong with XPM?
Sunny King’s algo is simple and can be tweaked to be more responsive or to filter variance better.
We’ve applied the same algo to GLC and it works nicely as well, even during periods when the hashrate increases sixfold.
[quote author=ghostlander link=topic=7305.msg58874#msg58874 date=1392305288]
[quote]
This will permanently destroy the network consensus in a way similar to Bitcoin’s March fork due to BDB limitations.
The larger the network grows the more likely this will happen.
[/quote]Had they known about this and increased dbenv.set_lk_max_locks in v0.7 prior to the v0.8 release, the March fork wouldn’t have happened. Updated BTC v0.7 can still work with v0.8. The same applies to FTC v0.6 and v0.8.
[/quote]
That’s not the point. I only meant the consequences this kind of fork can cause: the inability of the network to converge on one chain.
-
Thanks for that information, it’s a deep subject.
I can now understand better why the Gravity Well appeals to me as a physicist, but also why it has some possible technical drawbacks in implementation.
Would it be possible to re-design the Gravity algorithm so that it used integer maths?
Why hasn’t the floating point problem been an issue with Megacoin?
Are there any other issues that could cause conflicts, like clients disagreeing about when the difficulty should change, if it is variable?
If so, why not make the checks fuzzy?
Does anyone have a spreadsheet formula yet to model the Gravity Well?
-
[quote name=“RoadTrain” post=“58883” timestamp=“1392307210”]
[quote author=ghostlander link=topic=7305.msg58874#msg58874 date=1392305288]
[quote author=RoadTrain link=topic=7305.msg58819#msg58819 date=1392253809]
Oh guys, firstly, this thing is overcomplicated.
Why use this sophisticated approach when we already have a very responsive and smooth diff adjustment algorithm designed by PPC’s developers?
[/quote]Sunny King’s approach doesn’t work well for many pure PoW coins. There is quite a big difference between PPC, which uses PoW mostly for bootstrapping, and most of the other coins, which work through PoW only. I’m not impressed with how it works for XPM. Good to see it working well for CNC re-loaded so far, but FTC is much larger and abused by coin hoppers much more.
[/quote]What’s wrong with XPM?
Sunny King’s algo is simple and can be tweaked to be more responsive or to filter variance better.
We’ve applied the same algo to GLC and it works nicely as well, even during periods when the hashrate increases sixfold.
[/quote]Nothing wrong in general, but their implementation is slow. It doesn’t matter much since it cannot be mined with GPUs and ASICs like most SHA-256 and Scrypt coins. Botnets don’t switch mining targets often.
-
[quote name=“wrapper” post=“58929” timestamp=“1392319890”]
Thanks for that information, it’s a deep subject. I can now understand better why the Gravity Well appeals to me as a physicist, but also why it has some possible technical drawbacks in implementation.
Would it be possible to re-design the Gravity algorithm so that it used integer maths?
Why hasn’t the floating point problem been an issue with Megacoin?
Are there any other issues that could cause conflicts, like clients disagreeing about when the difficulty should change, if it is variable?
If so, why not make the checks fuzzy?
Does anyone have a spreadsheet formula yet to model the Gravity Well?
[/quote]
Yes, the bad thing is that it lacks explanation.
Basically, almost everything can be implemented using integer math, but considering the use of pow(), it will be a lot of code.
The use of floating point math in consensus-sensitive code is like a time bomb. But no one knows when it’s gonna detonate and split the network.
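To sketch what an integer version could look like: the adjustment ratio itself is easy in fixed point (FIXED_ONE and the 32.32 scale below are my own choices for illustration); it’s the event-horizon pow() that would need a fixed-point exp/log or a lookup table, which is where “a lot of code” comes in:
[code]
#include <stdint.h>

// 32.32 fixed-point: the top 32 bits are the integer part.
static const int64_t FIXED_ONE = 1LL << 32;

// PastRateTargetSeconds / PastRateActualSeconds without doubles.
// KGW-sized inputs keep targetSeconds well under 2^31, so the
// product below fits in a signed 64-bit integer.
int64_t AdjustmentRatioFixed(int64_t targetSeconds, int64_t actualSeconds) {
    if (actualSeconds <= 0 || targetSeconds <= 0)
        return FIXED_ONE;  // same guard as the floating-point version
    return (targetSeconds * FIXED_ONE) / actualSeconds;
}
[/code]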
PS. I looked at a bunch of cryptos which have adopted this algorithm.
E.g. Franko’s diff doesn’t even seem to have become less volatile.
http://www.coinwarz.com/difficulty-charts/franko-difficulty-chart