While helping a friend of mine with programming assignments, I stumbled upon a question about two different algorithms for computing the **Pythagorean sum** (Pythagorean addition), through a function called `pythag`.

One algorithm is straightforward: simply compute `sqrt(a**2 + b**2)`.

The other, however, is very intriguing: it uses the absolute value, min, max, and several lines of seemingly arcane arithmetic to arrive at the same result.

Not knowing what the second method does or why it even exists, I turned to my friend Google.

There are not many resources on the topic of the Pythagorean sum, but I managed to find implementations of both algorithms in various programming languages:

**Python (1st algorithm):**

```python
from math import sqrt

def hypotenuse(a, b):
    c = sqrt(a**2 + b**2)
    return c
```

**Haskell (1st algorithm):**

```haskell
pythag a b = sqrt $ a^2 + b^2
```
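As a quick aside (my own check, not from any of the sources): the naive formula can fail for inputs whose squares exceed the double-precision range, even though the true result is perfectly representable. In Python this shows up as an `OverflowError`:

```python
import math

def hypotenuse(a, b):
    # naive version: squares a and b directly
    return math.sqrt(a**2 + b**2)

# fine for small inputs
print(hypotenuse(3.0, 4.0))  # 5.0

# a**2 overflows the double range, even though the
# answer (about 1.414e200) is itself representable
try:
    print(hypotenuse(1e200, 1e200))
except OverflowError as e:
    print("OverflowError:", e)
```

This is exactly the failure mode the second algorithm is designed to avoid.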

**C (2nd algorithm):**

```c
/*< double precision function pythag(a,b) >*/
doublereal pythag_(doublereal *a, doublereal *b)
{
    /* System generated locals */
    doublereal ret_val, d__1, d__2, d__3;

    /* Local variables */
    doublereal p, r__, s, t, u;

/*< double precision a,b >*/
/* finds dsqrt(a**2+b**2) without overflow or destructive underflow */
/*< double precision p,r,s,t,u >*/
/*< p = dmax1(dabs(a),dabs(b)) >*/
/* Computing MAX */
    d__1 = abs(*a), d__2 = abs(*b);
    p = max(d__1,d__2);
/*< if (p .eq. 0.0d0) go to 20 >*/
    if (p == 0.) {
        goto L20;
    }
/*< r = (dmin1(dabs(a),dabs(b))/p)**2 >*/
/* Computing MIN */
    d__2 = abs(*a), d__3 = abs(*b);
/* Computing 2nd power */
    d__1 = min(d__2,d__3) / p;
    r__ = d__1 * d__1;
/*< 10 continue >*/
L10:
/*< t = 4.0d0 + r >*/
    t = r__ + 4.;
/*< if (t .eq. 4.0d0) go to 20 >*/
    if (t == 4.) {
        goto L20;
    }
/*< s = r/t >*/
    s = r__ / t;
/*< u = 1.0d0 + 2.0d0*s >*/
    u = s * 2. + 1.;
/*< p = u*p >*/
    p = u * p;
/*< r = (s/u)**2 * r >*/
/* Computing 2nd power */
    d__1 = s / u;
    r__ = d__1 * d__1 * r__;
/*< go to 10 >*/
    goto L10;
/*< 20 pythag = p >*/
L20:
    ret_val = p;
/*< return >*/
    return ret_val;
/*< end >*/
} /* pythag_ */
```

**Fortran (2nd algorithm):**

```fortran
      double precision function pythag(a,b)
      double precision a,b
c
c     finds dsqrt(a**2+b**2) without overflow or destructive underflow
c
      double precision p,r,s,t,u
      p = dmax1(dabs(a),dabs(b))
      if (p .eq. 0.0d0) go to 20
      r = (dmin1(dabs(a),dabs(b))/p)**2
   10 continue
      t = 4.0d0 + r
      if (t .eq. 4.0d0) go to 20
      s = r/t
      u = 1.0d0 + 2.0d0*s
      p = u*p
      r = (s/u)**2 * r
      go to 10
   20 pythag = p
      return
      end
```

After digging a little deeper on Google, I managed to find the origin of the 2nd algorithm: it was in fact invented by **Cleve Moler** and **Donald Morrison**.

The original paper is titled *Replacing Square Roots by Pythagorean Sums*, published in the *IBM Journal of Research and Development*.

In this paper, it is highlighted that this algorithm has several advantages over the simple one (quoted directly from the paper):

- No destructive floating point overflows or underflows are possible
- Can be extended to compute Euclidean norm of a vector
- Particularly attractive for computers where space and reliability are more important than speed
- Potentially faster than a square root
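To see the first advantage concretely, here is my own direct Python port of the Fortran routine above (a sketch for illustration, not from the paper), compared against Python's built-in `math.hypot`:

```python
import math

def pythag(a, b):
    """Moler-Morrison pythagorean sum: sqrt(a**2 + b**2),
    computed without ever squaring a or b directly."""
    p = max(abs(a), abs(b))
    if p == 0.0:
        return 0.0
    # r is the squared ratio of the smaller to the larger magnitude,
    # so it always lies in [0, 1] and cannot overflow
    r = (min(abs(a), abs(b)) / p) ** 2
    while True:
        t = 4.0 + r
        if t == 4.0:           # r is now insignificant; p holds the answer
            return p
        s = r / t
        u = 1.0 + 2.0 * s
        p = u * p              # p converges rapidly toward the result
        r = (s / u) ** 2 * r   # r shrinks toward zero each iteration

print(pythag(3.0, 4.0))           # approximately 5.0
print(pythag(1e200, 1e200))       # about 1.414e200, with no overflow
print(math.hypot(1e200, 1e200))   # the library routine agrees
```

Because `r` is bounded by 1 and `p` never exceeds the final answer, the intermediate values stay in range for any inputs whose true Pythagorean sum is representable.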

This is an interesting discovery, because it is an example of a case where the obvious, simple solution may not be the best one.