Statistical Tests, Tests of Significance, and Tests of a Hypothesis Using Excel


Journal of Modern Applied Statistical Methods
Volume 5 | Issue 2, Article 29
11-1-2005

Statistical Tests, Tests of Significance, and Tests of a Hypothesis Using Excel
David A. Heiser, United States Air Force, Retired, [email protected]

Follow this and additional works at: http://digitalcommons.wayne.edu/jmasm
Part of the Applied Statistics Commons, the Social and Behavioral Sciences Commons, and the Statistical Theory Commons

Recommended Citation
Heiser, David A. (2005). "Statistical Tests, Tests of Significance, and Tests of a Hypothesis Using Excel," Journal of Modern Applied Statistical Methods: Vol. 5: Iss. 2, Article 29. Available at: http://digitalcommons.wayne.edu/jmasm/vol5/iss2/29

This Statistical Software Applications and Review is brought to you for free and open access by the Open Access Journals at DigitalCommons@WayneState. It has been accepted for inclusion in Journal of Modern Applied Statistical Methods by an authorized administrator of DigitalCommons@WayneState.

Copyright © 2006 JMASM, Inc. 1538-9472/06/$95.00

Journal of Modern Applied Statistical Methods November, 2006, Vol. 5, No. 2, 551-566

Statistical Software Applications and Review

Statistical Tests, Tests of Significance, and Tests of a Hypothesis Using Excel

David A. Heiser
Environmental Management, United States Air Force, Retired

Microsoft's spreadsheet program Excel has many statistical functions and routines. Over the years there have been criticisms of the inaccuracies of these functions and routines (see McCullough 1998, 1999). This article reviews some of the statistical methods used to test for differences between two samples. In practice, the analysis is done by a software program, often with the actual method used unknown: the user has to select the method and its variations without full knowledge of just what calculations are performed, and usually there is no convenient trace back to textbook explanations. This article describes the Excel algorithms and gives textbook-related explanations to supplement Microsoft's Help explanations.

Key words: Excel, spreadsheets, statistical functions, hypothesis testing, t test

The question here is how much of Excel's computed output should be believed to be correct, and just what is correct?

Introduction

Testing any commercial/academic statistically oriented computer program for correctness and accuracy runs directly into the questions of what correctness is and what accuracy is. Unfortunately, the answers are user dependent, in the sense that each user has a different answer. The fact is that all commercial/academic software at some time gives incorrect values, but that does not stop users from using it. "There's a credibility gap: We don't know how much of the computer's answers to believe. Novice computer users solve this problem by implicitly trusting in the computer as an infallible authority; they tend to believe that all digits of a printed answer are significant. Disillusioned computer users have just the opposite approach; they are constantly afraid that their answers are almost meaningless" (Knuth, 1998, p. 229).

The EXCEL Spreadsheet Program

Microsoft's Excel spreadsheet program is an inexpensive program for doing many kinds of calculations in business, engineering, and science. Excel has functions and data analysis routines for doing statistical calculations, and there are many introductory statistics books that include instructions for solving problems using Excel. Excel also has basic chart and graph capabilities for displaying data and results. Excel remains very popular because it allows easy integration with Microsoft's Word and with Microsoft's Access (large business databases). Results in the form of tables and charts can be easily integrated with Microsoft's PowerPoint presentation software. The pivot table feature is also very popular as a means of analyzing data.

Excel's capabilities are limited by the fact that it only does simple statistics. It does not include many of the additional functions and routines that reflect current commonly used statistical procedures. It was programmed prior to 1992, and version 4.0 in 1994 was the first fully documented version (Excel, 1992). It has had essentially no major improvements in statistical capabilities since then.

David A. Heiser, B.S., University of Wisconsin, M.S., California Institute of Technology, both in Chemical Engineering. He maintains the web page on using Excel in statistics at http://www.daheiser.info/excel/frontpage.html. Email: [email protected]


Significant changes (corrections and improvements) were made for the Excel 1997 and Excel 2003 versions, but the basic module remained the same.

The Computer Environment

It is important for people who deal with numerical computations to understand that the computer works with only a subset of the real numbers {IR}. {IR} is a special kind of mathematical object, a field. Computer software uses a different object, {IF}, to simulate {IR} objects; these objects are called floating point numbers. The object defined by {IF} is a finite subset of {IR}; it is not, however, a field (nor any other object that mathematicians commonly define and study) (Gentle, 2004).

In computer software, addition and multiplication of {IF} objects are not associative. Summation in {IF} is not well defined, and a sum is usually taken as the number at which the value no longer changes. This no-further-change limit is referred to as being {IF}-convergent, which is different from {IR}-convergent. The harmonic series (the sum of 1/i) is divergent in {IR}, but in {IF} it is {IF}-convergent. The {IF}-convergent value can be different, depending on how the internal algorithm does the associations. The sum of integers is {IF}-convergent, because there is a limit on the size of integers that can be represented as {IF} objects (Gentle, 2004).

The Excel functions and routines handle numbers as IEEE-754 64-bit standard double precision floating point numbers. The following are descriptions from KBA 78113:

"A floating-point number is stored in binary in three parts within a 65-bit range: the sign, the exponent, and the mantissa.

    | 1 sign bit | 11 exponent bits | 1 hidden bit | 52 mantissa bits |

The sign stores the sign of the number (positive or negative), the exponent stores the power of 2 to which the number is raised or lowered (the maximum/minimum power of 2 is +1,023 and -1,022), and the mantissa stores the actual number. The finite storage area for the mantissa limits how close two adjacent floating point numbers can be (that is, the precision)." (KBA 78113)

"The mantissa and the exponent have fixed sizes. As a result, the amount of precision possible may vary depending on the size of the number (the mantissa) being manipulated. Whenever a computation is made (or a value input), the mantissa bits are moved left one at a time and the exponent bits are re-set until the left-most bit is a one. Then one more shift is made, transforming this one bit of information into the hidden bit. Zero bits are added on the right to fill out the 52-bit mantissa." (KBA 78113)

An augmented mantissa of 53 bits corresponds to 15.7 decimal digits. Excel displays only the rounded 15 decimal digits.

"Every decimal integer can be exactly represented by a binary integer; however, this is not true for fractional numbers. In fact, every number that is irrational in base 10 will also be irrational in any system with a base smaller than 10. For binary, in particular, only fractional numbers that can be represented in the form p/q, where q is an integer power of 2, can be expressed exactly, with a finite number of bits. Even common decimal fractions, such as decimal 0.0001, cannot be represented exactly in binary. (0.0001 is a repeating binary fraction with a period of 104 bits.)" (KBA 78113)

Errors occur during computer arithmetic {IF} operations:

Round-off error. Results when addition and subtraction are performed. It also occurs in multiplication and division when the sequences involve interchanges between the internal 80-bit registers and external 64-bit memory storage. The Excel display involves another round-off.

Overflow and underflow. Results when the sequence of instructions produces an intermediate value either exceeding 1.797693134862315E+308 (fpmax) or falling below 4.940656458412465E-324 (fpmin). An error return does not always occur, and changing the associations will produce different results.

Quantizing error. Results when the decimal number cannot be exactly represented by the IEEE-754 binary representation.
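A minimal VBA sketch of the quantizing and round-off effects just described (the subroutine name is illustrative, and the exact residual depends on the machine and version):

Sub QuantizingDemo()
    ' Accumulate 0.0001 ten thousand times; in exact arithmetic the sum is 1.
    Dim s As Double
    Dim i As Long
    s = 0#
    For i = 1 To 10000
        s = s + 0.0001      ' 0.0001 has no exact binary representation
    Next i
    Debug.Print s           ' displays a value very close to 1
    Debug.Print (s = 1#)    ' typically False: the sum is not exactly 1
    Debug.Print s - 1#      ' the small residual left by quantizing and round-off
End Sub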

The IEEE-754 standard also has an 80-bit floating-point standard. This standard retains the same bit pattern as the 64-bit standard, but extends the mantissa (to the right) an additional 16 bits, to a total of 68 bits. Microsoft uses the 80-bit standard for the machine registers that contain the floating-point numbers. At the machine level, computations are done using the 80-bit standard. If, however, in the sequence of instructions one of these registers has to be stored in memory, the 80-bit number is rounded to the 64-bit standard and transferred to memory. A multiply-divide sequence that transfers intermediate values to memory will therefore have a different result than one in which the intermediate values are held in the 80-bit floating-point registers. The round-off issue comes from this conversion of the 80-bit number to a 64-bit number.

KBAs 42980, 78113, 145889, 125056, and 214118 are good sources of information on the {IF} problem. McCullough (1998) also discussed this problem. Knuth (1998) presented the basic theoretical problems of accurately adding, subtracting, multiplying, and dividing using floating point numbers as the {IF} object. Higham (1993) also found that there is no universal way to correct for addition (and subtraction) errors in long lists in floating point form.

Algorithms and Computer Programs

This is the area where the mathematics is converted into computer instructions. The general process is to take the mathematics (the equations) and break the sequences into a series of computing blocks (i.e., subroutines).


Then, for each of the subroutines, develop (or find in the literature) algorithms made up of fundamental arithmetic operations (addition, subtraction, multiplication, division, etc.) that will perform the desired computations. Subroutines are written using a computer language such as Fortran, C++, or Visual Basic. The final step is a conversion (compiling) to a sequence of binary machine instructions (i.e., Intel chip level).

Building a robust algorithm that always gives correct values is not an easy task. For example, take the simple computation of the standard deviation of a list of numbers:

    σ = √[ Σ (xi − x̄)² / (n − 1) ]        (1)

This computation would typically be done using the calculator formula

    σ = √{ [ n Σ xi² − (Σ xi)² ] / [ n(n − 1) ] }        (2)

with internal summation loops (Knuth, 1998, p. 232). This occasionally requires taking the square root of a negative number, and the overall accuracy is poor. Excel 2000 and earlier versions used this calculator formula to calculate standard deviation values. Excel 2003 uses a two-pass method, first calculating the average, then in a second pass calculating the deviations from the average, the sum of squares of the deviations, and finally the standard deviation (KBAs 828888 and 826248).

An improved algorithm is Welford's (1962), which is recommended by Knuth (1998). Knuth's form of the algorithm, written as VBA, is provided below (CDbl converts an integer to a double; Sqr is VBA's square root function). Both the mean and the standard deviation are output values.

' X(1 To N) holds the data values; N is the number of values
Dim M1 As Double, M2 As Double, S1 As Double, S2 As Double
Dim K As Long

M1 = X(1)
S1 = 0
For K = 2 To N
    M2 = M1 + (X(K) - M1) / CDbl(K)       ' update the running mean
    S2 = S1 + (X(K) - M1) * (X(K) - M2)   ' update the running sum of squared deviations
    M1 = M2
    S1 = S2
Next K
AVERAGE = M2
STDEV = Sqr(S2 / CDbl(N - 1))
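As a usage sketch, the same recurrence can be packaged as a user-defined worksheet function; the name WELFORD_STDEV is illustrative, not a built-in function:

Function WELFORD_STDEV(rng As Range) As Variant
    ' Sample standard deviation of the numeric cells in rng, via Welford's recurrence.
    Dim c As Range
    Dim m As Double, s As Double, delta As Double
    Dim n As Long
    For Each c In rng
        If Not IsEmpty(c.Value) And IsNumeric(c.Value) Then
            n = n + 1
            delta = c.Value - m
            m = m + delta / n               ' running mean
            s = s + delta * (c.Value - m)   ' running sum of squared deviations
        End If
    Next c
    If n < 2 Then
        WELFORD_STDEV = CVErr(xlErrDiv0)    ' too few values for a sample estimate
    Else
        WELFORD_STDEV = Sqr(s / (n - 1))
    End If
End Function

It is called from a worksheet as, for example, =WELFORD_STDEV(A1:A1001).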


Use of this third algorithm substantially improves the accuracy of the result in Excel 2000, but only slightly in Excel 2003. Other statistical computer programs use other algorithms. Maechler (2005) chose West's modification of this algorithm. As he stated, "I'd conclude from Communications ACM, Vol 22, No. 9, page 531, that Welford's algorithm is a bit less accurate than the (very similar) 'West' version, and we (the R developers) should rather implement the latter."

Algorithms sometimes show strange results for an unusual set of input values. For example, enter three identical values, 1E+30, 1E+30, and 1E+30, into Excel cells and apply the STDEV function to this range. The result is 1.72368E+14, not zero as expected. Likewise, VAR on this range returns 2.97106E+28. This raises an important issue: when input of parameter values from one narrow, unusual region of the input parameter space results in a wrong output, does one conclude that the computer program should never be used because it returns wrong values?

The Display Of The Result

Within the computer program there are internal subroutines that convert the binary floating point word (64 bits) to a string of ASCII characters (text) which is displayed or printed. The user can (in Excel) choose how the text is formatted as to typeface, size, bold, italic, floating point or fixed point, and the number of decimals to the right of the decimal point. In Excel there is a default font (Arial, 10, regular), a default cell width of 8.43 points, and the default General format. For numbers from 1 to 0.0001, the General display will show 6 decimal digits. Below 0.0001, a floating point display of 3 digits (plus exponent) will be shown. There have been articles published criticizing the accuracy of computer software based solely on the default display (e.g., Altman, 2002; Hilbe, 2002; McCullough, 1998, 1999; McCullough & Wilson, 1999, 2000, 2004; Knüsel, 1998, 2003).

Methodology

McCullough (1998, 1999) pioneered some of the basic methods of conducting tests on software. He used the NIST suite of databases with known statistics to test several software programs. His two articles are good background and methodology sources.

Testing Methods

Any testing of statistical software programs involves the exercise of selection to get down to the area or routines to be tested. For Excel, these are functions and data analysis routines. For other programs, there may be all kinds of decision trees and selections to arrive at the test objective or method to be tested.

What is the function/routine actually doing? In most cases, the developer says very little regarding the specifics of what the program does, but a great deal is said on marketing (selling) how good and comprehensive the program is. For proprietary reasons, of course, very little should be said. For that reason, some testing has to be done to find out just what is being calculated, how to get as many digits as possible, and to find some boundaries on the ranges of input parameters. This is exploratory testing.

The next level is accuracy testing. For accuracy testing, the software will require a test database and a parameter and selection vector. In some cases only a test database is needed, and in some others, such as the distribution functions, only a parameter vector is needed. In all cases there has to be an output vector that can be compared to a reference standard vector, such that a difference can be obtained as a measure of the accuracy of the method. In the case of Excel functions, this output vector has only one value (the exception is the array functions, which output a range, matrix, or table of values). The Excel Data Analysis routines also may output a table, which is the output vector formatted to be readable.

Standard values of summary statistics from a data set may come from several sources:

1. Theoretical values manually calculated or selected (by theory) that are valid, accurate reference values. For example, one can construct a list of data values that has a theoretically precise mean and a precise standard deviation. (Method A)

2. Values calculated by an external software program chosen to be the reference. (Method B)

3. Data and values published as part of a standard. (Method C)

4. Comparing the results from many different software programs on the same data set and deciding on "correctness" (Altman and McDonald, 2000). (Method D)

The NIST Tests

The National Institute of Standards and Technology (NIST) established datasets for software tests, the StRD series (NIST, n.d.). "For all datasets multiple precision calculations (accurate to 500 digits) were made using the post-processor and FORTRAN subroutine package of Bailey (1995, available from NETLIB). Data were read in exactly as multiple precision numbers and all calculations were made with this very high precision. The results were output in multiple precision, and only then rounded (without error) to fifteen significant digits. These multiple precision results are an idealization. They represent what would be achieved if calculations were made without round-off or other errors. Any typical numerical algorithm (i.e. not implemented in multiple precision) will introduce round-off error, and will produce results that differ slightly from these certified values." (NIST, n.d.)

The NIST data sets covered univariate analysis, linear regression, non-linear equation fitting, ANOVA, and correlations. This has been the essential test method (Method C) used to test Excel. McCullough (1998, 1999) pioneered the basic method of conducting tests on software using the


NIST test sets. McCullough and Wilson (1999, 2000, 2004) also presented a series of papers on tests made on Excel using the NIST and other test data.

Other Previous Excel Tests

Some of the early testing (Excel 1995) was done by the Center for Information Systems Engineering (Britain) in 1999 (CISE 27/99). They used the IMSL Fortran 90 Math/Library (version 3.0), provided by the Digital Equipment Corporation, to do the testing (Method B).

A number of email messages, web site reports (papers), and discussions on the newsgroups and statistical lists (since 1998) have described tests on some of the Excel functions and routines. These included cases where a particular (real) data set, when analyzed using Excel, gave results different from some other software package. Most of these were casual tests based on a particular data set.

Significance Test Methods

The NIST data sets and their computed statistics were not usable on the family of significance tests in Excel: NIST did not provide paired or dual data sets for testing significance test functions/routines. The literature does not report specific testing of Excel's significance test functions and routines either. Therefore, test data sets for testing the Excel family of significance tests had to be built, and ways to arrive at accurate statistical values had to be found.

Because the outputs from some of the significance tests are p values, a set of Visual Basic statistical distribution functions provided by Smith (2002) was used to calculate accurate reference p values. The Excel distribution functions are not accurate enough to be used to obtain accurate p values.

Two approaches were taken. One was exploratory testing, to identify just what the function was returning (e.g., the proper tail area). The other was accuracy testing, which required the development of more extensive data sets to stress the functions/routines.

The NIST approach was to use several types of test data sets. One of these types was to build patterned tables of data. A patterned number can be considered as having a whole number part and a fractional part, where the


numbers to the right of the decimal point are the fractional part. A patterned data table has patterned numbers all with the same whole number, but with different fractional values. For the NIST SmLs01 to SmLs09 data sets, the fractional part had specific alternating values (0.3 and 0.5, or 0.2 and 0.4); with one odd value for each set, this gave a data set with theoretical, precise means, variances, and standard deviation values. By increasing the magnitude of the whole number from 1 to 1E+09, and by changing the size of the set, the overflow effect on floating point number computations and algorithms could be determined.

The NIST approach to the SmLs sets suggested ways to build test data sets with accurate statistics to test the Excel family of significance tests. The theory behind it comes from the basic way numbers are represented in Excel. In terms of floating point numbers, a larger whole number part of the patterned number pushes the mantissa bits (which are in base 2, not base 10) off the right end, which is characteristic of overflow. This overflow of floating point numbers is one of the causes of errors. However, there are other causes of errors that are not brought out by the use of patterned numbers, and other methods have to be used for them. Good algorithms are those that minimize the overflow effect. The charts in Heiser (2005) show the loss of accuracy of many Excel functions due to this type of overflow.

Measures Of Accuracy - Log Relative Error (LRE)

The measure of the accuracy of the information from a computed value is a calculation called the Log Relative Error, or LRE, introduced by McCullough in his 1998 paper. The LRE value represents a measure of how many significant (accurate) decimal digits there are in the output parameter values:

    LRE = -LOG10( ABS(CV - RV) / ABS(RV) )

where CV is the computed value and RV is the reference or true value. LRE values vary from 0 to 15 on the McCullough scale; 15 can be considered an exact match.
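A small VBA sketch of this measure, with results clamped to the 0-15 McCullough scale (the function name and the treatment of an exact match are illustrative; RV is assumed to be nonzero):

Function LRE(CV As Double, RV As Double) As Double
    ' Log Relative Error of a computed value CV against a reference value RV.
    Dim e As Double
    If CV = RV Then
        e = 15#                                       ' exact match
    Else
        e = -Log(Abs(CV - RV) / Abs(RV)) / Log(10#)   ' VBA's Log is the natural log
    End If
    If e > 15# Then e = 15#
    If e < 0# Then e = 0#
    LRE = e
End Function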

LRE values from the statistical distributions present problems because of the 9's problem. Here, a leading sequence of 9's really plays the role of leading zeros, and these digits should not be considered significant, but mathematically they are. Excel computes p values above 0.5 as 1 minus the corresponding below-0.5 value, for all symmetric probability distributions. Consequently, p values above 0.5 have uncertain accuracies, depending on the user's view. Smith's (2002) distribution functions calculate p and q values by separate algorithms.

The LRE values approximate the number of accurate digits in the Excel cell value, independent of how it is displayed. For the floating point form (select Format → Cells → Number → Scientific → Decimal Places → 14), it approximately represents the number of accurate digits, including the digits to the left of the decimal point and the digits to the right of the decimal point.

Results of Tests

This study examined the errors from the Excel VAR algorithm and Welford's algorithm on a patterned data set. In this case, two sets of random fractional numbers, one uniform u(0-1) and the other normal n(0,1), with 1001 values in each set, were generated in a column. (Note that for all test data sets with random numbers, Marsaglia's MWC256 RNG (Marsaglia, 1995, 2002) was used; for random normals, Smith's (2002) precise inverse normal function was used.) The variance value of the base case from either of the two functions was identical. Whole number sets (from 1 to 1E+15) were then added, forming 15 additional columns (a VBA sketch of this construction is given below). Variances from each function were then calculated. Figure 1 shows the result.

Given the nature of the input data and the basic structure of a patterned number in terms of the decimal system, the data from a good algorithm should closely follow a straight line from 16 on the y axis to 16 on the x axis. The Excel 2003 algorithm, although an exact algorithm, shows some unexpected behavior in the region below an exponent of 8. This behavior generally occurs for other Excel functions as well when the whole number is less than 1E+08.


are expected. Welfords’s algorithm in general is close to the expected line and shows consistent behavior, typical of a good algorithm.

[Figure 1: Comparison of Algorithms. The chart, titled "Variance Accuracies," plots LRE Value (y axis, 0 to 18) against Whole Number Exponent (x axis, 0 to 16) for four series: VAR Uniform, Welford Uniform, VAR Normal, and Welford Normal.]

The Excel Significance Test Functions And Routines

Excel 2003 provides 80 direct functions and 19 Data Analysis routines that can be used in statistical data analysis. Only a part of the available functions and routines are directly applicable to tests of significance and hypothesis testing. The functions and routines useful for significance testing are:

CHITEST - This is a Chi-square goodness-of-fit test for grouped data. It does not support general Chi-square tests on variances, works only on two-way contingency tables, and cannot be applied to single lists of observed and expected values. The first input, actual_range, is the range of the observed values as a two-way contingency table. The second input, expected_range, is the range of a separate contingency table giving the expected values.

FTEST - Returns the one-tailed probability value of an F test on two separate ranges of data. The ranges may be of different lengths.

TTEST - Returns the probability value of a t test on two separate data sets. The function allows for one- or two-tailed tests, paired data, and equal or unequal variances. It has two parts internally: one to calculate a t value from the two separate data sets, and the other to calculate a p value from that t value.

ZTEST - Returns the two-tailed probability of a normal distribution z test on a range of data with respect to a known population mean and standard deviation. If the standard deviation field is left blank, the routine uses the standard deviation of the data. The function has three parts internally:

1. Calculate a mean value (and a standard deviation) from the input data set.
2. Calculate z = [ (input mean value) − (data set mean) ] / [ (data set standard deviation or input standard deviation) / √n ], where n is the size of the data set.
3. Calculate a p value from NORMSDIST(z).

All of the other Excel functions can be used to build up intermediate values for significance test inputs. They can also be used, along with new VBA functions and subroutines, to build new significance tests beyond the limited capability of Excel.
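The three internal steps can be mirrored in VBA as a sketch (the function name Z_TEST_SKETCH is hypothetical; it follows the steps listed above rather than reproducing Excel's internal code):

Function Z_TEST_SKETCH(data As Range, mu As Double, Optional sigma As Variant) As Double
    Dim n As Double, m As Double, s As Double, z As Double
    ' Step 1: mean (and, if no sigma is supplied, standard deviation) of the data
    n = Application.WorksheetFunction.Count(data)
    m = Application.WorksheetFunction.Average(data)
    If IsMissing(sigma) Then
        s = Application.WorksheetFunction.StDev(data)
    Else
        s = CDbl(sigma)
    End If
    ' Step 2: the z statistic, as described above
    z = (mu - m) / (s / Sqr(n))
    ' Step 3: a p value from the standard normal distribution
    Z_TEST_SKETCH = Application.WorksheetFunction.NormSDist(z)
End Function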


Data Analysis Routines

These are routines called by selecting the Tools menu, then Data Analysis, and then one of the listed routines:

F-Test Two-Sample for Variances
t-Test: Paired Two Sample for Means
t-Test: Two-Sample Assuming Equal Variances
t-Test: Two-Sample Assuming Unequal Variances
z-Test: Two-Sample for Means

After the requested data are input, they return a table.

Tests on the Accuracies of Functions and Data Analysis Routines

The CHIDIST, FTEST, and TTEST functions were tested. Differences were found between the results of these tests for Excel 2000 and Excel 2003. The Excel 2000 tests show relatively low LRE values; as explained by Microsoft in KBA 828888, the problem was the low accuracy of the VAR and STDEV functions that were used inside the routines. Rather than take up a great deal of space showing both 2000 and 2003 outputs, only the Excel 2003 values are shown in the following tables.

Four data sets were used for testing:

Set 1 (columns A and B) represented paired data: integers with blank spaces.
Set 2 (columns C and D) represented unequal-length data from two different populations; integers.
Set 3 (columns E and F) represented patterned data of two samples from one population with equal sample sizes. The whole number was 1000, and the fractional numbers were uniformly distributed (0-1) random numbers.
Set 4 (columns G and H) represented a variable-length set (up to 2000). The first column represented the control data set, and the second column the treatment data set. The base case was where the numbers in both columns were all random normal (0,1) z values from one population. Whole numbers were added as described previously.

Testing The Difference Between Variances

CHITEST
Tests indicated that the Excel algorithm in the CHITEST function is the correct one. Errors arise from errors in the input Expected Values table and in the CHIDIST function; CHITEST returns correct values if the Expected Values table is correct.

FTEST
The function description (Excel, 1992) suggests that the FTEST function simply computes the ratio of two variances, where the variances come from the VAR function. Neither Excel Help nor the KBAs provide any additional information. The VAR function holds up well against overload, as shown in Figure 1, but does introduce some error. Given the ratio, the FINV function is then used to arrive at a p value. FINV generally has p value accuracies above an LRE value of 8 over the entire range of input parameters (see Heiser, 2005, for specific details). The output of FTEST should therefore be an accurate p value with at least 8 accurate decimal digits. The actual output for data set 2 indicates that FTEST returns wrong values.

Table 1: FTEST Function Response

Cell Entry      Returned Value        Correct Value
=FTEST(C,D)     0.9425381810184540    0.481410961628470

FTEST outputs an incorrect p value, corresponding to a two-tailed test. The problem lies with Microsoft's documentation. In Excel (1992), the function description says, "Returns the results of an F-test. An F-test returns the one-tailed probability that the variances in array 1 and array 2 are not significantly different." In Excel Help (2006): "Returns the result of an F-test. An F-test returns the two-tailed probability that the variances in array1 and array2 are not significantly different. Use this function to determine whether two samples have different variances."

The standard for the F test on a ratio of variances is the one-tailed test. It is a test on all values of the ratio from 0 to the critical value, and on this basis the only valid test is the one-tailed test. The workaround is to always divide the FTEST p value by 2 to get the correct q value of the right tail (see the worksheet sketch below). This has been reported before.

Test On The Data Analysis F-Test: Two-Sample For Variances
Here Excel returns an accurate value.

Table 2: Excel Data Analysis Routine Output, Actual Excel Output
F-Test Two-Sample for Variances

                C
Mean            1000.503767
Variance        0.092055155
Observations    30
df              29
F               1.017614821
P(F
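A worksheet-level sketch of the halving workaround described above (array1 and array2 follow the parameter names in Excel Help; substitute the actual data ranges):

=FTEST(array1, array2) / 2

The halved value is the one-tailed right-tail q value for the variance-ratio test.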