While the main purpose of significant figures is to avoid false precision, in certain calculations they seem to both discard real precision and introduce false precision.

Loss of Precision

According to the rules of addition and subtraction, the result should be as precise (in decimal places) as the least precise number entering the calculation. For example, in the following calculation, even though .007 contains a single significant figure, the answer contains two significant figures:

.007 + .007 = .014

Under the rules for multiplication and division, however, the result should have as many significant figures as the input with the fewest. In the following calculation, both 2 and .007 contain a single significant figure, limiting the result to one significant figure:

2 x .007 = .01

Rounding the exact product .014 down to .01 introduces a relative error of about 29%.
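A quick sketch (my own check, not from any textbook rule) of where that 29% figure comes from:

```python
# Relative error introduced by rounding 2 x .007 to one significant figure.
exact = 2 * 0.007          # 0.014
rounded = 0.01             # one significant figure, per the mult/div rule
rel_error = (exact - rounded) / exact
print(f"{rel_error:.0%}")  # prints "29%"
```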

False Precision

Significant figures can also introduce false precision, i.e. imply more precision in the result than the measurements support:

10. x .0010 = .010

In the above operation, a comparatively crude measurement is multiplied by a number precise to 1/10,000th to produce a result presumed to be more precise (to 1/1,000th) than the first measurement.
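One way to see what the crude measurement actually allows is to propagate the implied half-unit ranges of each factor through the multiplication (a rough interval sketch of my own, not a standard rule):

```python
# "10." implies a value somewhere in 10. +/- 0.5;
# ".0010" implies a value somewhere in .0010 +/- .00005.
a_lo, a_hi = 9.5, 10.5
b_lo, b_hi = 0.00095, 0.00105
print(a_lo * b_lo, a_hi * b_hi)  # product lies roughly between 0.0090 and 0.0110
```

So the true spread covers roughly 0.0090 to 0.0110, which makes the last digit of the reported .010 genuinely uncertain.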

Am I overlooking something? It seems that, as long as you are dealing with numbers in the same units, you should be able to retain digits just as you would with addition and subtraction. Also, significant figures alone handle asymmetric uncertainty ranges poorly (e.g. 5.5 +3/−0), so are significant figures abandoned once uncertainty can be expressed explicitly as ± (i.e. are they used only when the uncertainty is unknown)?
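To make the asymmetric case concrete, here is a small sketch of propagating the 5.5 +3/−0 range through an addition as an interval; the second measurement and all names are hypothetical, chosen just for illustration:

```python
# Measurement with asymmetric uncertainty: 5.5, +3 above, -0 below.
center, up, down = 5.5, 3.0, 0.0
other = 2.0                        # hypothetical exact second measurement
base = center + other              # 7.5
lo = base - down                   # 7.5
hi = base + up                     # 10.5
print(f"{base} (+{hi - base}/-{base - lo})")  # prints "7.5 (+3.0/-0.0)"
```

A single significant-figure count has no way to carry that +3.0/−0.0 asymmetry, which is the point of the question.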