$10
Write a simple C/C++ program (not C#) that performs two calculations. The first is the single division 1.0/3.0. The second is a loop that adds 0.000001 to a running sum one million times, so the exact result should be 1. Output the results of both calculations with many digits of precision. Also note that the results differ between single precision and double precision, so this is really four cases: do both calculations once with floats and once with doubles.
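
A minimal sketch of one way this could look in C (the variable names and the number of printed digits are my own choices, not part of the original request):

    #include <stdio.h>

    int main(void)
    {
        /* Case 1 and 2: one division, in single and double precision */
        float  f_third = 1.0f / 3.0f;
        double d_third = 1.0 / 3.0;

        /* Case 3 and 4: sum 0.000001 one million times; the exact answer is 1 */
        float  f_sum = 0.0f;
        double d_sum = 0.0;
        for (int i = 0; i < 1000000; i++) {
            f_sum += 0.000001f;
            d_sum += 0.000001;
        }

        /* Print with many digits so the rounding error becomes visible */
        printf("float  1.0/3.0 = %.20f\n", f_third);
        printf("double 1.0/3.0 = %.20f\n", d_third);
        printf("float  sum     = %.20f\n", f_sum);
        printf("double sum     = %.20f\n", d_sum);
        return 0;
    }

With a typical IEEE 754 implementation, the float sum drifts noticeably away from 1 because 0.000001 is not exactly representable and the rounding error accumulates over a million additions, while the double version stays much closer to 1.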