
Thread: Regression Analysis

  #1 MagiMaster
    Spawned from another thread, I'd like to work through an example of regression analysis and some of the complications that come with it. So that everyone's on the same page, I'm using the data freely available at http://data.giss.nasa.gov/gistemp/graphs_v3/. For the moment I want to use the Global Monthly Mean Surface Temperature Change data. There are two columns of data within that set. For the purposes of argument, let's use the Land-Ocean temperature index since it best demonstrates the problems with regression analysis.

    So, the question is, given that the data set is quite noisy, what conclusions can you draw from it? Does it show a trend? Does it show no trend? (Note that those could both be false.)

    More specifically, if you fit a simple linear model to the data, you get that the best-fit line has a slope of 0.01 and that slope's standard error is 0.0015, which gives it a p-value of 3.5 * 10^-10. Now, that's not much of a slope, but that's also a tiny p-value. In case you don't know, a p-value estimates how likely it is that a coefficient at least that large would show up by chance alone if there were no real effect. If the assumptions underlying the model are sound, it's a pretty good estimate. If you're using a statistical package to do this analysis (such as R, which is free), you can also look at some diagnostic graphs to check whether those assumptions hold, and in this case, those graphs look pretty reasonable to me.
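    For concreteness, here is a minimal sketch of that analysis in R. The file name and column layout are assumptions (the real GISTEMP text file carries header lines that would need skipping); the calls themselves are standard base R.

      # Sketch only: assumes a two-column text file (month index, anomaly)
      # saved from the GISTEMP page; adjust the read step to the real layout.
      d <- read.table("land_ocean_index.txt", col.names = c("month", "anomaly"))
      fit <- lm(anomaly ~ month, data = d)
      summary(fit)         # slope, its standard error, and the p-value
      par(mfrow = c(2, 2))
      plot(fit)            # standard diagnostic plots for checking assumptions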

    Now, the standard deviation of the temperature is 0.14, and the contention is that such a huge deviation relative to the slope means that you can't draw any conclusions at all from the data set.

    I think that there are two separate things getting confused here. There's the error associated with the regression line and its coefficients, and there's the error associated with any value predicted using that line, and those aren't the same thing. As you get more and more data points, the errors associated with the line itself go to zero since the individual errors average out. On the other hand, the error associated with any prediction is limited by the errors within the data itself. You can't make a guess more accurate than the data you started with.
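    In R, that distinction is exactly the difference between the two interval types of predict(). A minimal sketch, reusing the hypothetical fit from above:

      # Confidence interval (mean response) vs. prediction interval (new value).
      nd <- data.frame(month = 224)              # one step past the data
      predict(fit, nd, interval = "confidence")  # narrows as n grows
      predict(fit, nd, interval = "prediction")  # floored by the noise in the data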

    What I think this means in this case is that we can be confident that a small but real upwards trend exists within the data, but that trend is small enough that trying to say any more than that (for example, what the temperature will be next year) is pointless as the 0.01 degrees from the trend would be lost in the 0.14 degree error within the data.

    As a starting point for references: Mean and predicted response - Wikipedia, the free encyclopedia and http://www.stat.cmu.edu/~roeder/stat707/lectures.pdf. I'm looking for more and better references, but this isn't the kind of stuff you find in papers. It's textbook stuff, and I don't have any good stat textbooks on me. (If anyone has any better links, feel free to post them.)



  #2 GiantEvil
    Probability & Statistics - Free E-Books


    I was some of the mud that got to sit up and look around.
    Lucky me. Lucky mud.
    -Kurt Vonnegut Jr.-
    Cat's Cradle.

  #4 MagiMaster
    Quote Originally Posted by GiantEvil View Post
    Thanks. I'll try and see if any of those discuss this issue, but it might be a day or two before I can sift through that much stuff.

    Those are very interesting links (and do answer an unrelated question I had) but I don't see what they have to do with the problem at hand.

  #5 Forum Cosmic Wizard
    Kernel density estimation can be used to smooth noisy data.
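    Strictly speaking, kernel density estimation estimates a distribution; for smoothing a y-versus-time series like this one, the closely related kernel regression is the usual tool. A minimal sketch in base R, with hypothetical vector names:

      # Nadaraya-Watson kernel regression via base R's ksmooth().
      sm <- ksmooth(month, anomaly, kernel = "normal", bandwidth = 12)
      plot(month, anomaly)
      lines(sm, col = "red")   # smoothed curve drawn over the raw points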

  #6 Howard Roark
    Quote Originally Posted by MagiMaster View Post
    Spawned from another thread, I'd like to work through an example of regression analysis and some of the complications that come with it. ...
    There are several issues with your approach. What I am going to say will parallel the criticism of the Dayton Miller experiment in the other thread.

    1. The most important one is that you ALWAYS need to start with the RAW data. Not averages, not means.
    2. The second most important thing is that the only manipulation on the raw data is to CALCULATE (let Excel or R do that for you) the standard deviation.
    3. The standard deviation gives you the error bars (in general, the error bars are taken as 2x the standard deviation).
    4. If the error bars are of the order of the signal itself, your raw data (point 1) is useless. No amount of filtering will make it useful.
    5. If you try to filter the data from point 4, or if you try to draw any conclusions from such data, you are not doing science anymore; you are looking for "faces in the clouds".
    At least, this is how it is in physics. One thing that confuses me is that, in the other thread, you claimed that your thesis was not about climate change. Yet all I am seeing in this thread is global warming analysis. How do you explain that?
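    As an R sketch of points 2-3 above, assuming the raw values sit in a vector y (the name is hypothetical):

      s <- sd(y)       # standard deviation of the raw data
      bars <- 2 * s    # error-bar width, taken as 2x the standard deviation
      range(y)         # spread of the data, to compare against the bars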
    Last edited by Howard Roark; September 21st, 2014 at 07:20 PM.

  #9 MagiMaster
    Quote Originally Posted by Howard Roark View Post
    1. The most important one is that you ALWAYS need to start with the RAW data. Not averages, not means. ...
    At least, this is how it is in physics. One thing that confuses me is that, in the other thread, you claimed that your thesis was not about climate change. Yet all I am seeing in this thread is global warming analysis. How do you explain that?
    I can't make my dissertation data available online, at least not right now. It's also much less clean and pedagogically useful. So I'm just using the climate data since it's free, available online and what started this discussion. I'm also not really interested in analyzing this particular data set in terms of global warming, just as an example data set.

    I'd be interested in some citations for your bullet points, but I have a question about 4. What signal are you talking about here? The error bars are always going to be of similar order of magnitude to the data itself, but do you mean the trend line or are you talking about a mean difference comparison?

  #10 Howard Roark
    Quote Originally Posted by MagiMaster View Post
    I'd be interested in some citations for your bullet points, but I have a question about 4. What signal are you talking about here? The error bars are always going to be of similar order of magnitude to the data itself, but do you mean the trend line or are you talking about a mean difference comparison?
    OK, I understand.
    The error bars for a set of good measurements tend to be much smaller than the signal itself. This is what I was talking about at point 4.

  #11 MagiMaster
    I think my question is more what you mean by signal in this context. I can think of a few ways to interpret that and your description of error bars only makes sense for some of them.

  #12 Howard Roark
    Quote Originally Posted by MagiMaster View Post
    I think my question is more what you mean by signal in this context. I can think of a few ways to interpret that and your description of error bars only makes sense for some of them.
    What you measure is signal+error.
    The error is systematic (due to something that you are doing wrong, due to the principles of measurement that you set up being incorrect) + random.

  #13 GTCethos
    In terms of qualifying the data and the subsequent error bars: there is no way to know what has been done to the data in this case. It is probably averaged over many data points; who knows. Given this type of data set, it is what it is.

    As you suggested in the last thread, I did run a normal Q-Q plot. Visually, it did seem consistent with a normal probability distribution. For what that is worth.



    [normal Q-Q plot of the data, generated with http://www.wessa.net/rwasp_varia1.wasp]
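    The same check takes two lines in base R; a sketch, assuming the values are in a vector y (for a regression fit, the residuals would be the better target):

      qqnorm(y)   # sample quantiles against theoretical normal quantiles
      qqline(y)   # points hugging this line suggest approximate normality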


  #14 MagiMaster
    Quote Originally Posted by Howard Roark View Post
    What you measure is signal+error.
    The error is systematic (due to something that you are doing wrong, due to the principles of measurement that you set up being incorrect) + random.
    Actually, I mean something more specific than that. In the case of one continuous dependent variable and one continuous independent variable, what is the signal? What part of the graph are you looking at to compare the signal to the error bars?

  #15 Howard Roark
    Quote Originally Posted by MagiMaster View Post
    Actually, I mean something more specific than that. In the case of one continuous dependent variable and one continuous independent variable, what is the signal? What part of the graph are you looking at to compare the signal to the error bars?
    What do you mean? The measurement consists of a discrete dataset: the temperature values as a function of time. The signal (+error) is the temperature, the error bars are calculated from the standard deviation; we have been over this twice already.

  #16 GTCethos
    Quote Originally Posted by MagiMaster View Post
    Actually, I mean something more specific than that. In the case of one continuous dependent variable and one continuous independent variable, what is the signal? What part of the graph are you looking at to compare the signal to the error bars?
    I believe the error bars would apply to the dependent data set, in this case. Like Howard Roark stated, the error bars would apply to raw data…
    I cannot imagine these data points are raw…

  #17 MagiMaster
    Quote Originally Posted by Howard Roark View Post
    What do you mean? The measurement consists of a discrete dataset: the temperature values as a function of time. The signal (+error) is the temperature, the error bars are calculated from the standard deviation; we have been over this twice already.
    Sorry, I don't seem to be getting my question across very well. You say the temperature is the signal plus the error. I completely agree with that as well as how the error bars were calculated, but you also took a look at the data and said that the error bars were too wide for the signal. Since all you can see on the graph is the signal and error as one, what are you looking at to make that comparison?

  #18 Howard Roark
    Quote Originally Posted by MagiMaster View Post
    Sorry, I don't seem to be getting my question across very well. ... Since all you can see on the graph is the signal and error as one, what are you looking at to make that comparison?
    The signal+error is of the same order as the error bars. This is what I am seeing. When one sees that, it signifies a bad dataset. You do not have to separate the signal from the measurement errors to decide that the dataset is bad. Do you understand now?
    Last edited by Howard Roark; September 22nd, 2014 at 08:51 AM.

  #19 GTCethos
    Quote Originally Posted by Howard Roark View Post
    The signal+error is of the same order as the error bars. This is what I am seeing. When one sees that, it signifies a bad dataset. You do not have to separate the signal from the measurement errors to decide that the dataset is bad. Do you understand now?



    I am sure this would be bad news to Nasa….


    Even though MagiMaster may have overstepped the statistical line and drawn too many conclusions, I still believe the slope is valid.

    Like I said, I did do a normal Q-Q plot to check the normal probability distribution of the data. It looks OK.

    The slope is intuitively plausible here; it matches the claims of others about a gentle warming period.

  #20 Howard Roark
    Quote Originally Posted by GTCethos View Post
    I am sure this would be bad news to Nasa…. Even though MagiMaster may have overstepped the statistical line and drawn too many conclusions, I still believe the slope is valid.
    What you believe is irrelevant. What the scientific analysis shows contradicts what you believe.




    Quote Originally Posted by GTCethos View Post
    The slope is intuitively plausible here; it matches the claims of others about a gentle warming period.
    We are not discussing global warming. Pay attention.

  #21 GTCethos
    Quote Originally Posted by Howard Roark View Post
    What you believe is irrelevant. What the scientific analysis shows contradicts what you believe.



    Nonsense… you know nothing about the treatment of this data. Your conclusion could be, and probably is, influenced by artifacts of its treatment.


    It is probably not raw data… you pay attention.


    It is what it is and passes the QQ…


    I must defend MagiMaster here… as hard as it is for me to do that. He can and did find the best fit… that is all.

  #22 Howard Roark
    Quote Originally Posted by GTCethos View Post
    It is probably not raw data… you pay attention.
    Actually, it is.



    Quote Originally Posted by GTCethos View Post
    I must defend MagiMaster here…
    I don't think he needs any defense. Especially from cranks.

  #23 GTCethos
    I only try and defend the truth… Don't let arrogance get in the way of reasoning… It always ends badly.


    It is certainly not raw data… I am sure Nasa is not in the habit of taking one data point a month over several years…. Like your assessment of the data, it is not reasonable.


    The only thing you have proven here is that you draw bad conclusions.

  #24 Howard Roark
    Quote Originally Posted by GTCethos View Post
    I only try and defend the truth… Don't let arrogance get in the way of reasoning… It always ends badly.
    Absolutely, apply the advice to yourself.


    Quote Originally Posted by GTCethos View Post
    It is certainly not raw data… I am sure Nasa is not in the habit of taking one data point a month over several years….
    It isn't NASA data. This may explain why it is bad. Because, despite your protestations, it is bad.

  #25 GTCethos
    You have me interested… Nasa is presenting the data from the following web site.


    http://data.giss.nasa.gov/gistemp/graphs_v3/Fig.C.txt


    Without any SE or SD numbers. Who is the source of the data then?


    I never said you are not better at this than I am, I only found your reasons in this case unacceptable.

  #26 Howard Roark
    Quote Originally Posted by GTCethos View Post
    You have me interested… Nasa is presenting the data from the following web site.
    http://data.giss.nasa.gov/gistemp/graphs_v3/Fig.C.txt
    Without any SE or SD numbers. Who is the source of the data then?
    NASA did not exist in the 1800s. We are discussing 1800s data.

  #27 MagiMaster
    Quote Originally Posted by Howard Roark View Post
    The signal+error is of the same order as the error bars. This is what I am seeing. When one sees that, it signifies a bad dataset. You do not have to separate the signal from the measurement errors to decide that the dataset is bad. Do you understand now?
    Sorry, no. Are you saying that if the data hovers around, say, 5 instead of around 0, that would make a difference in how it's analyzed?

  #28 Howard Roark
    Quote Originally Posted by MagiMaster View Post
    Sorry, no. Are you saying that if the data hovers around, say, 5 instead of around 0, that would make a difference in how it's analyzed?
    No, this is not what I am saying. If the data were around 10 and the standard deviation were around 1, then the experimental measurement would be relatively free of systematic errors. The best explanation is found in the analysis of the Dayton Miller experiment I linked earlier. In short, here is what happened. Dayton Miller did not compensate his setup for diurnal temperature changes. Instead, he decided to average the readings, thus rolling his (systematic) error into the signal (the fringe displacement) he was supposed to measure. At the time he ran his experiment, error theory did not exist, so he did not calculate the resulting error bars. Had he done that, he would have realized his mistakes and he would not have claimed that he measured a sinusoidal variation of the fringes (what he measured, in reality, was the diurnal temperature change's effect on the arms of his interferometer). When the author of the paper I linked, Tom Roberts, got hold of the RAW data (not averaged) and did a simple calculation of the error bars, two things emerged:

    1. Dayton Miller did not do a proper temperature compensation of his setup, thus he measured the diurnal temperature changes effect on dilation/contraction of the interferometer arms.

    2. Dayton Miller compounded his error at point 1 by averaging the measurements over a day, thus rolling in the systematic error of his incorrect experimental setup into the signal he was supposed to measure

    The way Tom Roberts figured this out was by observing the inordinately large error bars.

    The exact same thing can be surmised about the dataset we are discussing: the error bars calculated from the dataset are of the order of 0.26, while the signal is of the order of 0.3-0.4. When I calculate the error bars of the experiments I am running for my own papers, the error bars are of the order of 1/10 of the measured data. If they are bigger, this lets me know that my setup is not good enough, that I have systematic errors that need to be eliminated by improving the equipment.

  #29 MagiMaster
    Quote Originally Posted by Howard Roark View Post
    The exact same thing can be surmised about the dataset we are discussing: the error bars calculated from the dataset are of the order of 0.26, while the signal is of the order of 0.3-0.4. ...
    I follow how you're calculating your error bars, but again, where are you getting that the signal is of the order 0.3-0.4? Looking at the graph I cannot see where that number is coming from.

    (Also, can you relink the experiment you're talking about in this thread for those that aren't following both?)

  #30 Howard Roark
    Quote Originally Posted by MagiMaster View Post
    I follow how you're calculating your error bars, but again, where are you getting that the signal is of the order 0.3-0.4? Looking at the graph I cannot see where that number is coming from.
    From the raw dataset that was used to create the graph.

    Quote Originally Posted by MagiMaster View Post
    (Also, can you relink the experiment you're talking about in this thread for those that aren't following both?)
    Sure

  #31 GTCethos
    Quote Originally Posted by Howard Roark View Post
    NASA did not exist in the 1800s. We are discussing 1800s data.

    Did you even look at the years…. the first year was 1996.


    You are pulling my leg, right?

    First question on a senility test... What year is this?

    Just kidding...

  #32 Howard Roark
    Quote Originally Posted by GTCethos View Post
    Did you even look at the years…. the first year was 1996.
    First year is 1800. You aren't looking at the right data. Give it a rest.
    ONLY the LAST window (1996-2013) comes from NASA. Which, incidentally, shows that NASA can produce bad data just the same.

    Quote Originally Posted by GTCethos View Post
    First question on a senility test... What year is this?


    Are you able to answer? Seriously, you need to stop trolling this thread, your only contribution is to your own embarrassment.


  #33 GTCethos
    Quote Originally Posted by Howard Roark View Post
    First year is 1800. You aren't looking at the right data. Give it a rest. ONLY the LAST window (1996-2013) comes from NASA. Which, incidentally, shows that NASA can produce bad data just the same.
    Before you make a further fool of yourself, look at post #25….


    http://data.giss.nasa.gov/gistemp/graphs_v3/Fig.C.txt


    This is the data we used in the global warming linear regression to show a pause in global warming.


    If you did not pick this up from MagiMaster you should have picked this up from me….

  #34 MagiMaster
    Quote Originally Posted by Howard Roark View Post
    From the raw dataset that was used to create the graph.
    Can you explain how you calculated that number?


    Quote Originally Posted by Howard Roark View Post
    (Also, can you relink the experiment you're talking about in this thread for those that aren't following both?)
    Sure
    Thanks.

    Quote Originally Posted by GTCethos
    This is the data we used in the global warming linear regression to show a pause in global warming.


    If you did not pick this up from MagiMaster you should have picked this up from me….
    For the record, I have been trying to show that there is a statistically significant upwards trend even within these few years. Unless I've misunderstood something, Howard's point is that this data cannot be used to support either view.

  #35 GTCethos
    I could see why Howard would be so far off base if he is using a 100-year time span.

  #36 Howard Roark
    Quote Originally Posted by MagiMaster View Post
    Can you explain how you calculated that number?
    What number? Where is this thing going?

  #37 MagiMaster
    Quote Originally Posted by Howard Roark View Post
    ... the error bars calculated from the dataset are of the order of 0.26, while the signal is of the order of 0.3-0.4.
    I follow how you are calculating the error here. I am not following how you are calculating the signal. Where is 0.3-0.4 coming from? How did you arrive at that number? This may not be going anywhere, but either way I want to understand what you're talking about here.

    (To be clear, I have been talking about the data set specified in the OP, not the extended data set Cogito Ergo Sum posted in the other thread.)

  #38 John Galt
    Moderator Request: GTCethos, I wish to ask you a favour. Your current participation in this thread is clouding the discussion at the heart of the issue. I ask that you voluntarily absent yourself from this thread until the question being pursued by MagiMaster is answered to his satisfaction.

  #39 Howard Roark
    Quote Originally Posted by MagiMaster View Post
    I follow how you are calculating the error here. I am not following how you are calculating the signal. Where is 0.3-0.4 coming from? How did you arrive at that number?
    From the raw dataset used by Cogito Ergo Sum to plot his graphs.
    You know, this is getting really ridiculous. You asked for help with your thesis; why don't you use the data you are familiar with, the data coming from your work? We do not need to know what it is; remove all labels and it will be just a bunch of numbers. Please do so, OK?


    Quote Originally Posted by MagiMaster View Post
    (To be clear, I have been talking about the data set specified in the OP, not the extended data set Cogito Ergo Sum posted in the other thread.)
    Yes, we are talking about this dataset. See the numbers in the range 0.3-1.17?
    How about you start using the numbers from your thesis? You should be more familiar with them, so we won't waste time with these kinds of questions. Please do so, if you want to continue the conversation.
    Last edited by Howard Roark; September 23rd, 2014 at 09:20 AM.

  #40 MagiMaster
    I'm pretty sure I need permission from my university's IRB before releasing any of those numbers (human subjects and all) and that takes time. Also, I did not ask for help on my dissertation. I asked for help understanding this specific bit of statistics, which will help me later when I'm interpreting the results from my study. (I do intend to contact the IRB about releasing my data as a public dataset since it contains much more information than what I ended up using.)

    Anyway, now that you've clarified I can see where that number is coming from. You're saying the signal size/strength is approximately the lowest value in the data set, correct? (Or is it the minimum discounting outliers?)

    Also, I specified the other temperature column within that data set in the OP, but whatever, this works too.
    Last edited by MagiMaster; September 23rd, 2014 at 03:48 PM.

  #41 MagiMaster
    Well, assuming I've understood you correctly, I have to say, your rules of thumb are only useful for one very specific statistical question: whether or not the mean of the data is non-zero (not that that isn't a very common question), and even then they're overly simplistic. If you just add a constant to all the data points, the mean (and the likelihood that the mean is non-zero) will change, but the slope in a regression analysis won't; only the intercept moves. If you assume that your data points are signal + random noise, and that the noise is zero-centered, then averaging many points will reduce the noise, so no matter how noisy the data is, if you have enough of it, you can work out the underlying signal. (That's also a simplification though. The standard one-sample t-test takes all this into account already.)
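    Here's a quick R illustration of that averaging point; the numbers are synthetic, chosen to echo the 0.01 signal and 0.14 noise discussed above:

      # The standard error of the mean shrinks as 1/sqrt(n).
      set.seed(1)
      x <- 0.01 + rnorm(10000, sd = 0.14)   # tiny signal buried in large noise
      sd(x)                    # ~0.14: each observation is very noisy
      sd(x) / sqrt(length(x))  # ~0.0014: the mean is pinned down far more tightly
      t.test(x)                # the one-sample t-test does exactly this bookkeeping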

    In your link, that specific experiment had a particular feature (the periodic nature of the signal) that allowed them to separate the signal and the error much more readily. The temperature data does not have that feature.

    Now, I'm no expert in statistics, so it's certainly possible I've misunderstood something somewhere.

    Can anyone else comment on any of this?

  #42 Howard Roark
    Quote Originally Posted by MagiMaster View Post
    Well, assuming I've understood you correctly, your rules of thumb are only useful for one very specific statistical question, whether or not the mean of the data is non-zero
    No, you didn't "understand correctly".

    1. These rules are not "rules of thumb", they are well established in statistical analysis.
    2. The rules are general, they apply for any data. Pretty standard in the analysis of experimental data in physics.

    Quote Originally Posted by MagiMaster View Post
    In your link, that specific experiment had a particular feature (the periodic nature of the signal) that allowed them to separate the signal and the error much more readily. The temperature data does not have that feature.
    False. The experiment I cited showed precisely the influence of (diurnal/lunar/yearly) variation of temperature on the fringe displacement in the Michelson-Morley experiment.
    Last edited by Howard Roark; September 26th, 2014 at 10:10 AM.

  #43 MagiMaster
    Quote Originally Posted by Howard Roark View Post
    1. These rules are not "rules of thumb", they are well established in statistical analysis.
    2. The rules are general, they apply for any data. Pretty standard in the analysis of experimental data in physics.
    Can you provide a citation for them, then? Because I cannot see how looking at the minimum value has anything to do with the signal strength for any zero-centered signal, such as any data that compares differences from a baseline or any probability data converted to log-odds, or with the strength of the slope of a regression line. (Again, adding a constant value changes the minimum, but does nothing to the slope.)

    Also, the sarcasm quotes are not appreciated. I'm trying to discuss this seriously.

    Quote Originally Posted by Howard Roark
    False. The experiment I cited showed precisely the influence of (diurnal/lunar/yearly) variation of temperature on the fringe displacement in the Michelson-Morley experiment.
    In that experiment the signal they are looking for is of the form k*sin(t). In the climate data, the signal of interest is of the form k*t. That's not an insignificant difference. As they say in the paper, they subtracted the measurements from each cycle to get just the error. You cannot do that with the climate data.

  #44 Howard Roark
    Quote Originally Posted by MagiMaster View Post
    Can you provide a citation for them, then? Because I cannot see how looking at the minimum value has anything to do with the signal strength for any zero-centered signal... (Again, adding a constant value changes the minimum, but does nothing to the slope.)
    What, in any of the stuff I explained to you, gives you the notion that the methods are "looking at the minimum value"? Where did I say anything about "the signal strength for any zero-centered signal" ? Where did I mention "data that compares differences from a baseline or any probability data converted to log-odds, or for the strength of the slope of a regression line"?

    Quote Originally Posted by MagiMaster View Post
    Also, the sarcasm quotes are not appreciated. I'm trying to discuss this seriously.
    Actually, I do not think that you are discussing in earnest, your agenda is getting more and more apparent.

    Quote Originally Posted by MagiMaster View Post
    In that experiment the signal they are looking for is of the form k*sin(t).
    Nope, there is no "signal" in a well-executed MMX. The fringe shift is zero, as predicted by SR: because light speed is isotropic, there should be no effects from the Earth's rotation.


    Quote Originally Posted by MagiMaster View Post
    In the climate data, the signal of interest is of the form k*t.
    What gives you this idea? Can you provide the theoretical proof? I can provide the mathematical/physical proof that the signal in MMX is NOT of the form k*sin(t), contrary to your claim.


    Quote Originally Posted by MagiMaster View Post
    That's not an insignificant difference. As they say in the paper, they subtracted the measurements from each cycle to get just the error. You cannot do that with the climate data.
    I don't think you understood anything in my previous explanation. I was explaining to you how improperly compensated temperature variations can masquerade as a signal in a badly prepared experiment, the same way a bad data set for the global warming analysis can masquerade as a linear regression with a positive slope.
    Last edited by Howard Roark; September 26th, 2014 at 03:53 PM.

  #45 MagiMaster
    Ok. It's obvious there are some wires crossed somewhere here. Your argument about the climate data was that the standard deviation was within an order of magnitude of the data itself and therefore no conclusions can be drawn from it. You provided the link to the MMX paper as an example of how an error analysis should be done. Then you gave a step by step guide for how to analyze data with the implication that it applied to the original climate data. Did I at least get that much correct?

    Quote Originally Posted by Howard Roark View Post
    What, in any of the stuff I explained to you, gives you the notion that the methods are "looking at the minimum value"?
    Quote Originally Posted by Howard Roark
    ...the error bars calculated from the dataset are of the order of 0.26 , the signal is of the order of 0.3-0.4.
    ...
    Yes, we are talking about this the dataset See the numbers in the range 0.3-1.17?
    Your own words give me that idea. Where else is the 0.3 number coming from? Please explain. If it's not the minimum, what is it?

    Quote Originally Posted by Howard Roark View Post
    Where did I say anything about "the signal strength for any zero-centered signal" ? Where did I mention "data that compares differences from a baseline or any probability data converted to log-odds, or for the strength of the slope of a regression line"?
    Those are examples of statistical questions where your method (from post #6) does not make sense. In particular, you began by arguing that the data was too noisy for the slope of the regression line to be meaningful.

    Quote Originally Posted by Howard Roark View Post
    Also, the sarcasm quotes are not appreciated. I'm trying to discuss this seriously.
    Actually, I do not think that you are discussing in earnest, your agenda is getting more and more apparent.
    I'm really curious what you think my agenda is. Whatever you think it is, my agenda is to understand what you're claiming so that I can either learn from it or form a mathematically sound counterargument. If you think this has anything to do with the climate, feel free to suggest a different publicly available data set that you think has the same problem. (I will not post my human subjects data without IRB consent, so don't bother asking.)

    Quote Originally Posted by Howard Roark
    Nope, there is no "signal" in a well-executed MMX. The fringe shift is zero, as predicted by SR: because light speed is isotropic, there should be no effects from the Earth's rotation.
    What gives you this idea? Can you provide the theoretical proof? I can provide the mathematical/physical proof that the signal in MMX is NOT of the form k*sin(t), contrary to your claim.
    If a signal existed in the MMX data it would be of the form k*sin(t). That no signal was found doesn't change the form the signal would have taken. (Also, there's a nice picture of a sine wave in the paper you linked. Figure 5, page 15.)

    In the climate data, I implicitly specified that the signal I was looking for was of the form k*t when I did the linear regression.

    Quote Originally Posted by Howard Roark View Post
    I don't think you understood anything in my previous explanation. I was explaining to you how improperly compensated temperature variations can masquerade as a signal in a badly prepared experiment, the same way a bad data set for the global warming analysis can masquerade as a linear regression with a positive slope.
    No, apparently I haven't understood you at all. None of what you just said is apparent in anything you said previously.

  #46 Howard Roark
    Quote Originally Posted by MagiMaster View Post
    Ok. It's obvious there are some wires crossed somewhere here. Your argument about the climate data was that the standard deviation was within an order of magnitude of the data itself and therefore no conclusions can be drawn from it. You provided the link to the MMX paper as an example of how an error analysis should be done. Then you gave a step by step guide for how to analyze data with the implication that it applied to the original climate data. Did I at least get that much correct?
    Yes, this is the only thing that you got right, the rest, you made up yourself.
    Quote Originally Posted by MagiMaster View Post
    Your own words give me that idea. Where else is the 0.3 number coming from? Please explain. If it's not the minimum, what is it?

    It came from the data set. This is the third time I linked it for you.

    Quote Originally Posted by MagiMaster View Post
    Those are examples of statistical questions where your method (from post #6) does not make sense. In particular, you began by arguing that the data was too noisy for the slope of the regression line to be meaningful.
    These are examples of words that I never used, yet you insist on putting them into my mouth.

    Quote Originally Posted by MagiMaster View Post
    I'm really curious what you think my agenda is. ... If you think this has anything to do with the climate, feel free to suggest a different publicly available data set that you think has the same problem. (I will not post my human subjects data without IRB consent, so don't bother asking.)
    You can post the data by stripping the labels. I have asked you that before, so please stop playing games. Or, you can enter by hand the data from the Dayton-Miller experiment. Your choice.

    Quote Originally Posted by MagiMaster View Post
    If a signal existed in the MMX data it would be of the form k*sin(t). That no signal was found doesn't change the form the signal would have taken.
    If pigs had wings, they would fly. There is no "signal" in MMX. I explained why. I also asked you to explain how you get the k*t signal in the climate data.

    Quote Originally Posted by MagiMaster View Post
    In the climate data, I implicitly specified that the signal I was looking for was of the form k*t when I did the linear regression.
    In other words, like Dayton Miller, you are looking for a signal and you are trying to manipulate the data to support your prejudices. This is not how science is done. You post-process the data; if the error bars are of the order of your data, your data collection is no good, so you need to go searching for the source of your systematic errors.


    Quote Originally Posted by MagiMaster View Post
    No, apparently I haven't understood you at all. None of what you just said is apparent in anything you said previously.

    This is very basic stuff:

    -you measure the data
    -you calculate the error bars
    -if the error bars are of the order of magnitude of your data, then your experiment is corrupted by systematic errors
    -you find and eliminate your systematic errors
    -you do the experiment again
    -if the error bars have been diminished significantly with respect to the magnitude of the data, your data is now valid and you can finally try to extract the trends from it (linear, sinusoidal, whatever) through your favorite regression scheme (least squares, Moore-Penrose pseudoinverse, etc.).
    Last edited by Howard Roark; September 26th, 2014 at 06:37 PM.

  #47 MagiMaster
    It is fairly obvious that there are some communication difficulties here and that they aren't all on my end.

    For the moment I want to focus on one thing: your 0.3 estimate of the signal in the climate data. Now, it's blatantly obvious that "it came from the data set." I'm asking how you took the 223 numbers and arrived at 0.3 for the magnitude of the signal. The closest thing you've given to an explanation is saying that the range of numbers was 0.3 to 1.17. Now, that seems to imply that you are taking the minimum of the data points (minus outliers) as an estimation of the magnitude of the signal. However, you deny this. So I ask you once again, explain to me, and by explain I don't just mean point at the entire 223 numbers yet again, how you arrived at the 0.3-0.4 estimate for the magnitude of the signal. Give me an algorithm or a mathematical formula.

  #48 Howard Roark
    Quote Originally Posted by MagiMaster View Post
    It is fairly obvious that there are some communication difficulties here and that they aren't all on my end.
    You are playing some kind of silly game, not clear what the endpoint is.

    Quote Originally Posted by MagiMaster View Post
    For the moment I want to focus on one thing: your 0.3 estimate of the signal in the climate data. Now, it's blatantly obvious that "it came from the data set." I'm asking how you took the 223 numbers and arrived at 0.3 for the magnitude of the signal.
    I didn't; I pointed out to you that the measurements recorded in the dataset range from 0.3 to 1.17. Are you still unable to see that? Do you think that the numbers are not in the range 0.3-1.17?
    Now, the standard deviation is 0.175, which makes the error bars 0.35. So, the error bars are larger than some of the measurements (0.35 > 0.3, I hope that you can follow that). Even when they are smaller, they are a very significant percentage of the measurements. For example, in the best case, 0.35/1.17 represents 30%. This is huge, making the set of measurements totally worthless. Do you get that? Do they teach error analysis at your college?
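    In R, the computation being described is only a few lines; a sketch, assuming the 223 Fig.C values sit in a vector y:

      sd(y)       # ~0.175 on this slice, per the numbers above
      2 * sd(y)   # ~0.35, the error-bar width being compared to the data
      range(y)    # roughly 0.3 to 1.17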
    Reply With Quote  
     

  50. #49  
    Forum Radioactive Isotope MagiMaster's Avatar
    Join Date
    Jul 2006
    Posts
    3,440
    Quote Originally Posted by Howard Roark View Post
    For the moment I want to focus on one thing: your 0.3 estimate of the signal in the climate data. Now, it's blatantly obvious that "it came from the data set." I'm asking how you took the 223 numbers and arrived at 0.3 for the magnitude of the signal.
    I didn't, I pointed out to you that the measurements recorded in the dataset range from 0.3 to 1.17. Are you still unable to see that? Do you think that the numbers are not in the range 0.3-1.17?
    Now, the standard deviation is 0.175, which makes the error bars 0.35. So, the error bars are larger than some of the measurements (0.35 > 0.3, I hope that you can follow that). Even when they are smaller, they are a very significant percentage of the measurements. For example, in the best case, 0.35/1.17 represents 30%. This is huge, making the set of measurements totally worthless. Do you get that? Do they teach error analysis at your college?
    So yes, you were in fact estimating the signal strength using the minimum data points. Thank you for finally clarifying that instead of just repeating yourself yet again. Now we can move forward a bit.

    Now, my contention is that that estimate is useless if the question you're asking is anything other than "is the mean different from 0." The simple way to demonstrate this is to take the climate data and just add, say, 3 to all the data points. Suddenly two standard deviations is nowhere near the range of the data. Yet the slope and p-value of any regression line will remain exactly the same. Even in the case where you do want to ask that specific question, it's still overly simplistic in that it doesn't take the number of observations into account. Every statistics article I've looked at shows the formulas for p-values with a square-root-n term in the denominator of the various standard errors, meaning that more data eventually averages out the noise. Large error bars only mean that you need more data to get a reasonably small error.
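
    Here's a quick R sketch of exactly that experiment (simulated numbers standing in for the climate series, since the particular values don't matter for the point):
    Code:
    set.seed(1)
    t <- 1:223                                # 223 time points, like the data set
    y <- 0.01 * t + rnorm(223, sd = 0.14)     # small trend plus noise
    fit1 <- lm(y ~ t)                         # original data
    fit2 <- lm(I(y + 3) ~ t)                  # same data shifted up by 3
    coef(summary(fit1))["t", ]                # slope, std. error, t value, p-value
    coef(summary(fit2))["t", ]                # identical row; only the intercept moves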

    If this is, as you said, a standard statistical pre-test, give me a citation for it.

    Now, you seem to be trying to make a point about how noise can fake a signal. (You haven't done a good job of making that point so far though.) If the noise is independently and identically distributed, then that's exactly what a p-value represents. A small p-value means that it is unlikely the coefficient is due to chance. Almost all statistical methods I know of take it as assumed that noise is i.i.d., but that's the whole point of the diagnostic graphs: you can see most violations. In this slice of climate data, there are no obvious violations of those assumptions. Show me how noise can look i.i.d. and yet alter the slope of a regression line. (Also note that I have made no causal claims, only that a trend exists.)
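
    (In R, those diagnostic graphs are one line on the fit from the sketch above:)
    Code:
    par(mfrow = c(2, 2))   # residuals vs fitted, normal Q-Q, scale-location, leverage
    plot(fit1)             # visible patterns here would signal violations of i.i.d.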

    I'd also like you to explain how wide error bars automatically mean there are critical systematic errors instead of just showing that you have noisy data, either inherently or from noisy measurements. If it's only noise, then it can be accounted for regardless of its source.

    And no, I don't think there is a class titled "Error Analysis" here. Is it known by any other names? (Though perhaps it's covered in some of the statistics courses.) Maybe you can provide a link to a good introduction?

    Quote Originally Posted by Howard Roark View Post
    Quote Originally Posted by MagiMaster View Post
    It is fairly obvious that there are some communication difficulties here and that they aren't all on my end.
    You are playing some kind of silly game, not clear what the endpoint is.
    There is no game. I have no idea how to prove that to you, but then again, I don't really care either. I want to learn how to do this right, but you obviously have no idea how to teach it, and I'm starting to doubt whether or not you actually know what you're talking about. Your assertions seem to be in conflict with all of the textbooks, tutorials and articles I've seen and all of the classes I've taken. Not one has mentioned that the error bars must be smaller than the data points, implicitly or explicitly. Perhaps you have some citations to prove me wrong?

    The one paper you've linked to so far does not support your argument. In that paper they analyze what any signal should look like and then subtract that out. What they do not do is just calculate the standard deviation and compare that to the minimum values.

    Edit: Here's an introductory page on error analysis: http://teacher.pas.rochester.edu/PHY...AppendixB.html. It mentions nothing similar to your method in post #6.
    Last edited by MagiMaster; September 27th, 2014 at 06:14 AM.
    Reply With Quote  
     

  51. #50  
    Suspended
    Join Date
    Feb 2013
    Posts
    1,774
    Quote Originally Posted by MagiMaster View Post
    Quote Originally Posted by Howard Roark View Post
    For the moment I want to focus on one thing: your 0.3 estimate of the signal in the climate data. Now, it's blatantly obvious that "it came from the data set." I'm asking how you took the 223 numbers and arrived at 0.3 for the magnitude of the signal.
    I didn't, I pointed out to you that the measurements recorded in the dataset range from 0.3 to 1.17. Are you still unable to see that? Do you think that the numbers are not in the range 0.3-1.17?
    Now, the standard deviation is 0.175, which makes the error bars 0.35. So, the error bars are larger than some of the measurements (0.35 > 0.3, I hope that you can follow that). Even when they are smaller, they are a very significant percentage of the measurements. For example, in the best case, 0.35/1.17 represents 30%. This is huge, making the set of measurements totally worthless. Do you get that? Do they teach error analysis at your college?
    So yes, you were in fact estimating the signal strength using the minimum data points. Thank you for finally clarifying that instead of just repeating yourself yet again.
    No, I wasn't estimating any "signal strength". Stop putting words in my mouth; it is a dishonest way of debating.
    Let me make it even simpler for you, since you have so much difficulty with the basics:
    -the measurements in the dataset in discussion range from 0.3 to 1.17.
    -the error bars, calculated based on the STD, are of the order of 0.35.
    -this means that the error bar is a huge percentage (30% to 117%) of the measured data, and this FACT renders the measured data worthless. It is perfectly OK to take ONE STD (the average one) and divide it into the whole range of measured data and get meaningful information. In my experience, from the papers PUBLISHED in my field, error bars in the 1% range mean invalid data; we are talking percentages 30 times bigger. Feel free to take the STDs associated with each one of the data points in the set (I believe there are about 60 of them) and divide them into the respective data. See what percentages you get; I bet they are in the high teens.

    Here is a link to a basic class on error analysis. In the simple exercise, the measured values are around 3.6 seconds and the error bars are 0.4 seconds. How valid would your measured data be if the error bars were 3.7 sec? Just think about it for a while before you shoot your mouth off again.

    The simple way to demonstrate this is to take the climate data and just add, say, 3 to all the data points. Suddenly two standard deviations is nowhere near the range of the data.
    What you are arguing for is using the standard error instead of the standard deviation for calculating the error bars. But this is not how things are done: the error bar is related to the standard deviation, not to the standard error. This is where your erroneous thinking comes from: while adding more data points reduces the standard error, it does little to nothing to the standard deviation. Therefore it does NOT affect the error bars.

    The one paper you've linked to so far does not support your argument.
    Another lie that is so easy to disprove. From the intro to the paper:

    "This paper first discusses Miller's data reduction algorithm, including an error analysis of that algorithm, showing that the error bars are enormous and his stated results are not statistically significant."

    ...and later on, towards the end of paragraph II:

    "So the error bars on X and Y are huge. This is just one run out of hundreds, and some have smaller error bars, some have larger error bars. But all runs in the data sample have the property that the error bars exceed the variation in the final 1/2 turn plot , as in Fig.5"
    Last edited by Howard Roark; September 27th, 2014 at 04:28 PM.
    Reply With Quote  
     

  52. #51  
    Forum Radioactive Isotope MagiMaster's Avatar
    Join Date
    Jul 2006
    Posts
    3,440
    I may have some difficulties getting my points across, but I'm not the only one.

    Quote Originally Posted by Howard Roark
    What you are arguing for is using the standard error instead of the standard deviation for calculating the error bars. But this is not how things are done: the error bar is related to the standard deviation, not to the standard error. This is where your erroneous thinking comes from: while adding more data points reduces the standard error, it does little to nothing to the standard deviation. Therefore it does NOT affect the error bars.


    Are you trying to tell me that adding a constant to all the values changes the sample standard deviation? What I said was, using your steps, adding a constant to all the values changes the outcome regardless of whether or not it makes a difference to your question of interest.

    Quote Originally Posted by Howard Roark
    Another lie that is so easy to disprove. From the intro to the paper:

    "This paper first discusses Miller's data reduction algorithm, including an error analysis of that algorithm, showing that the error bars are enormous and his stated results are not statistically significant."
    So you are going to continue to ignore how they calculated the error bars in that paper? From your own quote "...the error bars exceed the variation..." they are not using the standard deviation as the error bars. They also aren't comparing the error bars to the data itself but to the range/variance of the data.

    Quote Originally Posted by Howard Roark
    No, I wasn't estimating any "signal strength". Stop putting words in my mouth; it is a dishonest way of debating.
    Perhaps if you want me to stop "putting words in your mouth" you should make your points clearer and with less attitude. And yes, you did claim that 0.3 was the signal (bottom of post #28). You've since amended that from 0.3-0.4 to 0.3-1.17. That may not change either of our points this time, but that's also a dishonest way of debating.

    Quote Originally Posted by Howard Roark
    Let me make it even simpler for you, since you have so much difficulty with the basics:
    -the measurements in the dataset in discussion range from 0.3 to 1.17.
    -the error bars, calculated based on the STD, are of the order of 0.35.
    -this means that the error bar is a huge percentage (30% to 117%) of the measured data, and this FACT renders the measured data worthless


    Citation needed.

    Quote Originally Posted by Howard Roark
    -if the error bars are of the order of magnitude of your data then your experiment is corrupted by systematic errors


    Citation needed.

    Quote Originally Posted by Howard Roark
    Here is a link to a basic class on error analysis. In the simple exercise, the measured values are around 3.6 seconds and the error bars are 0.4 seconds. How valid would your measured data be if the error bars were 3.7 sec? Just think about it for a while before you shoot your mouth off again.


    Your link again does not support your arguments. Show me a page from that site with your list of steps to validate data. Show me where it says that a large standard deviation implies systematic errors.
    Reply With Quote  
     

  53. #52  
    Suspended
    Join Date
    Feb 2013
    Posts
    1,774
    Quote Originally Posted by MagiMaster View Post
    I may have some difficulties getting my points across, but I'm not the only one.
    You are the only one. Try to understand the simple exercise I gave you. Do you understand what happens when you change the STD from 0.2s to, say, 2.0s? How confident are you that the measurements are valid? I know I asked you before but you chose to ignore the question since it contradicts your prejudices, so I am asking you again.

    What I said was, using your steps, adding a constant to all the values changes the outcome regardless of whether or not it makes a difference to your question of interest.


    No one adds any constant. Try to understand what is being explained to you rather than keep adding your unrelated comments and building strawmen.
    Last edited by Howard Roark; September 27th, 2014 at 05:44 PM.
    Reply With Quote  
     

  54. #53  
    Forum Radioactive Isotope MagiMaster's Avatar
    Join Date
    Jul 2006
    Posts
    3,440
    Quote Originally Posted by Howard Roark
    You are the only one. Try to understand the simple exercise I gave you. Do you understand what happens when you change the STD from 0.2s to, say, 2.0s? I know I asked you before but you chose to ignore the question since it contradicts your prejudices, so I am asking you again.


    Yes, I understand it. All of the error bars get wider. Those wider error bars increase the p-values and widen the confidence intervals and may make some coefficients statistically insignificant. (Do you want me to actually step you through a calculation of a p-value?)

    What does not happen is that the data suddenly becomes invalid.
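
    To make that concrete, here's a small R sketch (simulated data again; the only change between the two fits is the noise level):
    Code:
    set.seed(2)
    t <- 1:100
    y_quiet <- 0.01 * t + rnorm(100, sd = 0.2)   # modest noise
    y_noisy <- 0.01 * t + rnorm(100, sd = 2.0)   # same trend, ten times the noise
    summary(lm(y_quiet ~ t))$coefficients["t", "Pr(>|t|)"]   # small p-value
    summary(lm(y_noisy ~ t))$coefficients["t", "Pr(>|t|)"]   # much larger p-value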

    Quote Originally Posted by Howard Roark
    No one adds any constant. Try to understand what is being explained to you rather than keep adding your unrelated comments.
    Really? Not even in a hypothetical example meant to illustrate a particular feature of a problem? (You know, like that frictionless plane physicists talk about.)

    Or you know, when converting Celsius to Kelvin.

    Now, how about you address some of the points I raised. Perhaps you can start by giving a citation for your method from post #6?
    Reply With Quote  
     

  55. #54  
    Suspended
    Join Date
    Feb 2013
    Posts
    1,774
    Quote Originally Posted by MagiMaster View Post
    Quote Originally Posted by Howard Roark
    You are the only one. Try to understand the simple exercise I gave you. Do you understand what happens when you change the STD from 0.2s to, say, 2.0s? I know I asked you before but you chose to ignore the question since it contradicts your prejudices, so I am asking you again.


    Yes, I understand it. All of the error bars get wider. Those wider error bars increase the p-values and widen the confidence intervals and may make some coefficients statistically insignificant.

    What does not happen is that the data suddenly becomes invalid.


    Err, your measurements are garbage at this point. The real time is 3.6s; you are measuring anywhere from 1.6 through 5.6. You never set foot in the lab. I give up; there is no way of getting through your prejudices.

    Last edited by Howard Roark; September 27th, 2014 at 06:12 PM.
    Reply With Quote  
     

  56. #55  
    Forum Radioactive Isotope MagiMaster's Avatar
    Join Date
    Jul 2006
    Posts
    3,440
    You could provide citations. (You've cited two things, neither of which support your argument.)

    Or you could listen to what I'm saying and actually address my points.

    For example, if I measure the water temperature around the arctic ice to be between -2.1 and 1.9 degrees Celsius with a standard deviation of 1.0, what does that mean? How do you compare those numbers?

    Or if I convert those temperatures to Kelvin and now the range is between 271 and 275, does that change anything about the data?

    Or maybe the data I'm measuring is just that noisy. That doesn't automatically mean that it's garbage. You seem to think the whole world is just like the inside of a physics lab. Your rules of thumb may work for you within your field, but they are not as universally applicable as you think.
    Reply With Quote  
     

  57. #56  
    Suspended
    Join Date
    Feb 2013
    Posts
    1,774
    Quote Originally Posted by MagiMaster View Post

    For example, if I measure the water temperature around the arctic ice to be between -2.1 and 1.9 degrees Celsius with a standard deviation of 1.0, what does that mean?
    It means that you are a lousy experimenter, you do not know what you are doing, your measured data is garbage. You have demonstrated this repeatedly.

    Or if I convert those temperatures to Kelvin and now the range is between 271 and 275, does that change anything about the data?
    ...convert the 1 degree Celsius into kelvins and your "measurement" has a standard deviation of 274 K. You will be laughed out of the lab. For good reason: you clearly do not know what you are doing. What is your major? Economics? Social sciences?

    Or maybe the data I'm measuring is just that noisy.
    Where does the "noise" come from in the NASA measurements of ocean/land temperature? Aliens? Martians?
    Reply With Quote  
     

  58. #57  
    Forum Radioactive Isotope MagiMaster's Avatar
    Join Date
    Jul 2006
    Posts
    3,440
    Quote Originally Posted by Howard Roark View Post
    Where does the "noise" come from in the NASA measurements of ocean/land temperature? Aliens? Martians?
    How about the simple fact that weather is chaotic? For example, there are more clouds some days than others.

    Quote Originally Posted by Howard Roark
    It means that you are a lousy experimenter, you do not know what you are doing, your measured data is garbage. You have demonstrated this repeatedly.
    So any measurement near zero is automatically garbage?

    Quote Originally Posted by Howard Roark
    What is your major? Economics? Social sciences?
    Not that it matters, but my major is computer science plus a lesser degree in mathematics. I'm not going to attempt to prove that to you.

    Quote Originally Posted by Howard Roark
    ...convert the 1 degree Celsius in degrees Kelvin and your "measurement" has a standard deviation of 274K. You will be laughed out of the lab. For good reason, you clearly do not know what you are doing.
    You need to go take a statistics course or three. The standard deviation will not change at all. It will still be 1.0 degrees. Here, I'll even walk you through the computation:

    Let's define \(X = \{x_1, x_2, \dots, x_n\}\) as the set of measurements and \(n = |X|\). We'll assume they're temperatures, measured in Celsius. Then we define \(\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i\), which is just the mean of \(X\). Now the sample standard deviation is \(s_x = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2}\).

    Now, we convert to Kelvin (maybe we want to merge or compare two data sets that were in different units). Define \(y_i = x_i + 273.15\). Then carry out the same computations. First, the mean:

    \(\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i = \frac{1}{n}\sum_{i=1}^{n}(x_i + 273.15) = \bar{x} + 273.15\)

    Next, the standard deviation:

    \(s_y = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(y_i - \bar{y})^2} = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\big((x_i + 273.15) - (\bar{x} + 273.15)\big)^2} = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2} = s_x\)

    Therefore, converting from Celsius to Kelvin has no effect on the standard deviation of the data. (For completeness, converting to Fahrenheit would multiply the standard deviation by 9/5.)
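
    You can check this numerically in R in a few lines (the temperatures here are made-up placeholders):
    Code:
    x <- c(-2.1, -0.8, 0.3, 1.1, 1.9)   # some temperatures in Celsius
    sd(x)                # sample standard deviation in Celsius
    sd(x + 273.15)       # identical after converting to Kelvin
    sd(x * 9/5 + 32)     # Fahrenheit: exactly 9/5 times sd(x)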

    Now, to speed things up a bit, let me put a few more words in your mouth, or at least predict what you are about to say. I suspect you will claim that you thought I was saying to convert the standard deviation itself directly to Kelvin (adding 273.15 to the 1.0 deviation). You know very well that wasn't what I said, but it's an easier thing to argue against. Next, I'll point out that that wasn't what I said or meant and as much should have been obvious. Then you'll get defensive and give up again, saying that it's clear I'm the one making the error here. And now that I've laid it all out, you will of course claim that isn't what you were going to say.

    So, with all that out of the way, perhaps you can instead actually back up your assertions. Provide a citation that shows where a large standard deviation implies systematic errors, or that the data is invalid. Provide a citation showing why comparing the standard deviation to the raw data makes any sense beyond the simple question of "is the mean different from zero." And don't just point at the paper you've already cited unless you want to give a page number or quote that shows where they compare the raw data's standard deviation to the raw data itself.

    At this point though, I'm pretty sure you can't actually do that since you've avoided that simple solution for a good 30 or 40 posts now.
    Last edited by MagiMaster; September 28th, 2014 at 05:26 AM.
    Reply With Quote  
     

  59. #58  
    Suspended
    Join Date
    Feb 2013
    Posts
    1,774
    Quote Originally Posted by MagiMaster View Post
    I suspect you will claim that you thought I was saying to convert the standard deviation itself directly to Kelvin (adding 273.15 to the 1.0 deviation).
    That's precisely what your sloppy post says. That, and claiming that you are measuring temperatures in the range of -2.1 to 1.9 with a STD of 1.0 makes you the laughing stock of any physics lab.

    So any measurement near zero is automatically garbage?
    No, you twist things again: this has nothing to do with "measurement near zero" and everything to do with sloppy measurements. Let me spell it out for you: doing measurements with a 50% STD (1/2 = 50%) makes you a laughing stock as a physicist. No matter how you express your so-called "measurements", they are a disaster, whether you use the Celsius scale or the Kelvin scale. Your attempt at misdirection by changing the scale from Celsius to Kelvin doesn't change the fact that you have demonstrated that you are unable to take a set of valid measurements. Changing the scale doesn't change garbage measurements into valid ones.
    Last edited by Howard Roark; September 28th, 2014 at 09:41 AM.
    Reply With Quote  
     

  60. #59  
    Forum Radioactive Isotope MagiMaster's Avatar
    Join Date
    Jul 2006
    Posts
    3,440
    Yet again you prove you have no idea what you are talking about. And now you're straight up lying to "prove" your point. At no point did my "sloppy post" make any such claim and you know it.

    You do understand that the range and the standard deviation are related, right? If you have a wider range of values, you automatically have a wider standard deviation. If you have numbers between -10 and 10 and a standard deviation of 1, it means you have some serious outliers. If you have numbers between -1 and 1 and a standard deviation of 1, your data might not be normally distributed (normal data should span roughly plus or minus 3 standard deviations), but it doesn't really mean anything else.

    (BTW, to have a range of -10 to 10 and a standard deviation of 1, you'd need a set of numbers with a 10, a -10 and 199 zeros.)
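
    (Easy to check in R:)
    Code:
    x <- c(10, -10, rep(0, 199))
    range(x)   # -10 10
    sd(x)      # exactly 1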

    Seriously, go roll some dice. Collect a few numbers. The measurements in this case are perfectly accurate, but the process itself is noisy. You'll find that the values range from 1 to 6 and that your standard deviation is approximately 1.7. Are these values somehow garbage despite having zero measurement error? Where is the systematic error in this?

    Or if you want something on a bell curve, take the sum of 3 dice. The values there range from 3 to 18 and the standard deviation should be close to 3. Are these numbers garbage?

    Or roll 10 dice. Then you get a theoretical range of 10 to 60, but good luck actually rolling a 10. Your first 100 rolls will very likely be in the 20 to 50 range. Either way, those numbers will have a standard deviation of around 5.4. Are those numbers garbage? Does it matter which range you're looking at? 10 is still less than twice the standard deviation.
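
    If you'd rather not roll dice by hand, here's a quick R simulation of all three cases (seeds and sample sizes are arbitrary):
    Code:
    set.seed(3)
    one   <- sample(1:6, 1e5, replace = TRUE)                      # single die
    three <- replicate(1e4, sum(sample(1:6, 3, replace = TRUE)))   # sum of 3 dice
    ten   <- replicate(1e4, sum(sample(1:6, 10, replace = TRUE)))  # sum of 10 dice
    sd(one); sd(three); sd(ten)   # roughly 1.7, 3.0 and 5.4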

    What if you measure the temperature of something to be between 80 and 84 degrees? Just looking at the range, you can tell that the standard deviation will be less than 2. Assuming there are no outliers, it'll probably be in the 0.6 to 1.0 range. Would those numbers be garbage?

    What if you measured numbers between 85 and 95 with a standard deviation of 1? Would those numbers be garbage? Why are some of your measurements 5 standard deviations away from the mean?

    What if you had an incredibly precise thermometer and you measured something to be between -0.001 degrees and 0.001 degrees with a standard deviation of 0.0005 degrees? Would those numbers be garbage?

    What if you're measuring a binary value, one that can only take the values of yes or no, such as "did the electron hit the detector?" or "did a person walking through this area turn left?" Such numbers, if you don't average them, will have a range of 0 to 1 and, assuming the odds are close to 50/50, a standard deviation of around 0.5. Are those numbers garbage?

    If your data has a mean of zero, what kind of standard deviation is reasonable? Is it garbage if the standard deviation is 1? What if it's 0.1? What if it's 0.001? Does it matter that none of those numbers have units attached and that you can't actually infer anything about them without more information?
    Last edited by MagiMaster; September 28th, 2014 at 02:50 PM.
    Reply With Quote  
     

  61. #60  
    Suspended
    Join Date
    Feb 2013
    Posts
    1,774
    If you didn't get it by now, you will never get it. It goes like this:

    MM: I measured the ocean temperature to be

    The physics community: This is laughable. Your error is 50% of your measured data; either you do not know how to do measurements or you have chosen a measuring device with too coarse a resolution for the data you need to measure. Change the device for one with a higher resolution or learn how to measure.

    MM: Ok, I changed the scale from Celsius to Kelvin, everything should be fine.

    The physics community: Show us the raw data from your measurements, if you continue to have a measurement error that is 50% of your data, you STILL don't know jack about doing measurements.

    If your data has a mean of zero, what kind of standard deviation is reasonable? Is it garbage if the standard deviation is 1? What if it's 0.1? What if it's 0.001?

    Very rarely is the mean 0 in physics, because that means that you are measuring something that is NOT there. No self-respecting physicist does such a thing.
    What physicists DO is tighten the error bars further and further with each experiment. A good example is the set of experiments constraining light-speed anisotropy. The error bars on the parameter which symbolizes the amount of anisotropy, and which has a theoretical value of 0.5 as predicted by SR, have been marching steadily tighter over the years. THESE are examples of how science is being done. Feel free to google "One-Way Tests of Light-Speed Isotropy" on your own. Lots of papers will show up. The summary is that an STD of 1, 0.1, or 0.0001 does not mean anything, either good or bad, but an STD of 50%-100% of the measured data (i.e. error bars of 100%-200%), as you are producing, means that you do not know what you are doing.
    Last edited by Howard Roark; September 28th, 2014 at 06:06 PM.
    Reply With Quote  
     

  62. #61  
    Forum Radioactive Isotope MagiMaster's Avatar
    Join Date
    Jul 2006
    Posts
    3,440
    My point, if you were capable of reading, was that changing from Celsius to Kelvin would have no impact on most statistical questions. I even went so far as to show, mathematically, exactly what effect such a change of units would have on the mean and standard deviation.

    I also stated that the temperature was between -2.1 and 1.9 with a standard deviation of 1, which would give an average of -0.1 degrees plus or minus 2 degrees (at roughly 95% confidence).

    Quote Originally Posted by Howard Roark
    Very rarely the mean is 0 , in physics, because that means that you are measuring something that is NOT there.
    This is a large part of your inability to understand statistics. Not everything takes place in a physics lab and not every statistical question is a physics question. Nor is every statistical question limited to "is the mean different from zero" or "what is the true mean," which are the only questions your rules of thumb sort-of apply to. ("Is the mean different from zero" is basically the same question as "what is the true mean." You perform the same computation for both, but interpret the results slightly differently.)

    What do you suppose the mean temperature of water under ice is if you're measuring in Celsius?

    What do you think the log-odds of an atom decaying within one half-life are? (Analyzing the log-odds is a common approach to dealing with probability data as raw probabilities violate many of the assumptions of many methods of analysis.)

    What do you think the average profit of a company near the break-even point is?

    What do you think the average result of one standard six-sided die minus a second standard six-sided die is?

    Do you think it's impossible to answer statistical questions about dice since they have such high variability?

    Do you think it's impossible to fit a slope through zero mean data?

    Do you think it's impossible to answer statistical questions about wind data, since it's a vector quantity and often hovers around zero magnitude?

    Do you really think there is no data anyone might want to measure that is just inherently highly variable, such as almost anything involving human trials?

    Are you incapable of providing citations to back up your assertions?
    Reply With Quote  
     

  63. #62  
    Suspended
    Join Date
    Feb 2013
    Posts
    1,774
    Quote Originally Posted by MagiMaster View Post

    I also stated that the temperature was between -2.1 and 1.9 with a standard deviation of 1, which would give an average of -0.1 degrees plus or minus 2 degrees (at roughly 95% confidence).
    Not even wrong. You should be getting a series of measurements that oscillate around -2.1, other measurements oscillating around 1.9, and all the values in between. Since your standard deviation is 1, you are creating a disaster reflecting your inept handling of the measuring instruments, since you are recording values ranging from -3.1 all the way to +2.9. Like I said, you have never done a measurement.
    Reply With Quote  
     

  64. #63  
    Forum Radioactive Isotope MagiMaster's Avatar
    Join Date
    Jul 2006
    Posts
    3,440
    And you have apparently never done math. Nor apparently ever taken a measurement of anything outside of a physics lab.


    Quote Originally Posted by Howard Roark
    MM: I measured the ocean temperature to be
    Here is a set of numbers with roughly the distribution I mentioned: {-2.1, -1.4, -0.8, -0.2, 0.0, 0.3, 0.4, 0.5, 1.1, 1.9}. Why do you think this is best represented as 1.9 plus or minus 1?

    Or here's another set: {-1.6, -1.2, -0.2, -0.1, 0.0, 0.7, 0.8, 2.0, 3.4}. Just from looking at those numbers can you say whether or not this data set is any good?

    And how about you start addressing some of my points? Or are you incapable of arguing against what I'm actually saying rather than some strawman version of it?

    Quote Originally Posted by MagiMaster
    What do you suppose the mean temperature of water under ice is if you're measuring in Celsius?

    What do you think the log-odds of an atom decaying within one half-life are? (Analyzing the log-odds is a common approach to dealing with probability data as raw probabilities violate many of the assumptions of many methods of analysis.)

    What do you think the average profit of a company near the break-even point is?

    What do you think the average result of one standard six-sided die minus a second standard six-sided die is?

    Do you think it's impossible to answer statistical questions about dice since they have such high variability?

    Do you think it's impossible to fit a slope through zero mean data?

    Do you think it's impossible to answer statistical questions about wind data, since it's a vector quantity and often hovers around zero magnitude?

    Do you really think there is no data anyone might want to measure that is just inherently highly variable, such as almost anything involving human trials?

    Are you incapable of providing citations to back up your assertions?
    Last edited by MagiMaster; September 28th, 2014 at 10:14 PM.
    Reply With Quote  
     

  65. #64  
    Suspended
    Join Date
    Feb 2013
    Posts
    1,774
    Quote Originally Posted by MagiMaster View Post
    And you have apparently never done math. Nor apparently ever taken a measurement of anything outside of a physics lab.


    Quote Originally Posted by Howard Roark
    MM: I measured the ocean temperature to be
    Here is a set of numbers with roughly the distribution I mentioned: {-2.1, -1.4, -0.8, -0.2, 0.0, 0.3, 0.4, 0.5, 1.1, 1.9}. Why do you think this is best represented as 1.9 plus or minus 1?

    It isn't, the complete representation is, as explained (multiple times already) despite your inability to comprehend:



    STD being 1, the error bars are 2.

    It is not my problem that you don't know jack about measurement.


    Or here's another set: {-1.6, -1.2, -0.2, -0.1, 0.0, 0.7, 0.8, 2.0, 3.4}. Just from looking at those numbers can you say whether or not this data set is any good?
    What is the standard deviation associated with the above dataset?
    Asking the above question without specifying the standard deviation renders your question meaningless, reflecting once again your lack of understanding of the basics. When Cogito Ergo Sum put up the graphs in the other thread, I immediately asked for the STD and he immediately posted them. This was a model of rational, scientific interaction. It is not my problem that you don't know jack about measurement.
    Last edited by Howard Roark; September 29th, 2014 at 06:47 PM.
    Reply With Quote  
     

  66. #65  
    Forum Radioactive Isotope MagiMaster's Avatar
    Join Date
    Jul 2006
    Posts
    3,440
    Do you have any idea how a standard deviation is calculated?
    Reply With Quote  
     

  67. #66  
    Suspended
    Join Date
    Feb 2013
    Posts
    1,774
    Quote Originally Posted by MagiMaster View Post
    Do you have any idea how a standard deviation is calculated?
    Unlike you, I do. I make use of it on a daily basis and it plays a major role in most if not all of my peer-reviewed published papers. Do yourself (and the world of physics) a favor: stay away from the labs.
    Reply With Quote  
     

  68. #67  
    Forum Radioactive Isotope MagiMaster's Avatar
    Join Date
    Jul 2006
    Posts
    3,440
    Prove it. Calculate the standard deviation for that set of numbers. (To be sure you understand, I'll repeat, this set of numbers: {-1.6, -1.2, -0.2, -0.1, 0.0, 0.7, 0.8, 2.0, 3.4})
    Reply With Quote  
     

  69. #68  
    Suspended
    Join Date
    Feb 2013
    Posts
    1,774
    Quote Originally Posted by MagiMaster View Post
    Prove it. Calculate the standard deviation for that set of numbers. (To be sure you understand, I'll repeat, this set of numbers: {-1.6, -1.2, -0.2, -0.1, 0.0, 0.7, 0.8, 2.0, 3.4})
    I don't have to prove anything to ignorant trolls. 1.546591. Go find a different hobby and remember, stay away from the labs, the physicists will apply many boots to your arrogant rear end.
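
    For anyone following along, that number is one line in R:
    Code:
    sd(c(-1.6, -1.2, -0.2, -0.1, 0.0, 0.7, 0.8, 2.0, 3.4))   # 1.546591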
    Reply With Quote  
     

  70. #69  
    Forum Radioactive Isotope MagiMaster's Avatar
    Join Date
    Jul 2006
    Posts
    3,440
    If you're capable of that, why did you feel the need to ask this:

    Quote Originally Posted by Howard Roark
    What is the standard deviation associated with the above dataset?
    Either way, now that you have answered your own question, maybe you can answer mine? Is that set of numbers garbage?
    Reply With Quote  
     

  71. #70  
    Suspended
    Join Date
    Feb 2013
    Posts
    1,774
    Quote Originally Posted by MagiMaster View Post
    If you're capable of that, why did you feel the need to ask this:

    Quote Originally Posted by Howard Roark
    What is the standard deviation associated with the above dataset?
    There is a very good reason. If you weren't such an arrogant prick, listening only to the droning of your own voice like Sheldon Cooper from The Big Bang Theory, and if you paid attention to some of my prior posts, you would have understood why. Occasionally, I get a student like you, an utterly obnoxious mister know-it-all. So, I let mister know-it-all fall flat on his face.
    Reply With Quote  
     

  72. #71  
    New Member
    Join Date
    Sep 2014
    Posts
    2
    Hi all awesome science specialists,

    I found a great tutorial online by Princeton University on the regression analysis framework. I'll give you a link to view it.

    Just google 'regression analysis' and thousands of cool websites turn up online.
    cool.......
    Reply With Quote  
     

  73. #72  
    Forum Radioactive Isotope MagiMaster's Avatar
    Join Date
    Jul 2006
    Posts
    3,440
    Quote Originally Posted by Howard Roark View Post
    There is a very good reason. If you weren't such an arrogant prick, listening only to the droning of your own voice like Sheldon Cooper from The Big Bang Theory, and if you paid attention to some of my prior posts, you would have understood why. Occasionally, I get a student like you, an utterly obnoxious mister know-it-all. So, I let mister know-it-all fall flat on his face.
    All you've managed to do throughout this entire thread is show how much of a jerk you are. You've completely failed to actually support your arguments.

    So I take it you choose to bad mouth me because you can't actually answer my questions? You could have cited your arguments or offered some formal proof and shut me up a long time ago. Or you could have put your attitude aside and we could have discussed this civilly, regardless of who's right or wrong.

    So again, is the set of numbers {-1.6, -1.2, -0.2, -0.1, 0.0, 0.7, 0.8, 2.0, 3.4} garbage? Its mean is about 0.4 and its standard deviation is about 1.5. Does that mean that it's useless and can't be used to show any results?
    Reply With Quote  
     

  74. #73  
    Suspended
    Join Date
    Feb 2013
    Posts
    1,774
    Like I said, I let little arrogant pricks like you fall flat on their faces.
    Reply With Quote  
     

  75. #74  
    Suspended
    Join Date
    Apr 2007
    Location
    Pennsylvania
    Posts
    8,822
    Quote Originally Posted by Howard Roark View Post
    Like I said, I let little arrogant pricks like you fall flat on their faces.
    Ad hominem attacks = 3 days off for Howard
    Reply With Quote  
     

  76. #75  
    Forum Radioactive Isotope MagiMaster's Avatar
    Join Date
    Jul 2006
    Posts
    3,440
    Well, since Howard is currently unable (and apparently unwilling) to defend his assertions, can anyone else point out what I might be misunderstanding?

    To summarize, here's what I think he was trying to say:
    - Take your data
    - Take its standard deviation
    - If any data point is within 2 standard deviations of zero, the data is too noisy/errorful (is that a word?) to be used

    It seemed to me that he was applying this to any and all statistical questions.

    My contention with this was:
    - Zero mean data (and I gave a few examples) would always fail that test
    - Adding or subtracting a constant from each data point would alter the outcome of that test, yet there are several statistical questions you can ask that that wouldn't affect
    - Even for the question of "is the mean different from zero" or "what is the true mean" if you have enough data points, you can overcome any amount of random noise in the data (though that's not always practical)

    Now, I don't want to claim his method is completely invalid. If you have strictly-positive data and the question you want to ask is "is the mean different from zero" or "what is the true mean" (mathematically, those are nearly the same question) then it's a decent rule of thumb, and that's not an uncommon situation. And I suppose in a physics lab, you really shouldn't be getting huge deviations in your measurements, but not all statistics takes place in a physics lab.

    So, like I mentioned near the beginning, I would actually like to learn something here, or at least validate my existing knowledge. So can anyone tell me whether or not I'm getting something wrong here?
    Reply With Quote  
     

  77. #76  
    Forum Professor river_rat's Avatar
    Join Date
    Jun 2006
    Location
    South Africa
    Posts
    1,497
    The 2 standard deviation trick assumes fairly thin tails, which is seldom true for most interesting time series nowadays. For example, you would throw away many of the crashes and rallies in the market if that were your criterion. But even assuming thin tails, the degree of filtering required depends on the question being asked. For example, if your series happens to be generated by i.i.d. normal variables, then we know the error in the mean scales as the standard deviation divided by the square root of the sample size, so given sufficiently many sample points we can estimate the mean of the series quite well.
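
    A quick R sketch of that scaling (the mean and SD here are arbitrary; the empirical standard error of the sample mean tracks sigma/sqrt(n)):
    Code:
    set.seed(4)
    sigma <- 1.5
    for (n in c(100, 1600, 25600)) {
      means <- replicate(500, mean(rnorm(n, mean = 0.4, sd = sigma)))
      cat("n =", n, " empirical SE:", round(sd(means), 4),
          " sigma/sqrt(n):", round(sigma / sqrt(n), 4), "\n")
    }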
    As is often the case with technical subjects we are presented with an unfortunate choice: an explanation that is accurate but incomprehensible, or comprehensible but wrong.
    Reply With Quote  
     

  78. #77  
    Forum Radioactive Isotope MagiMaster's Avatar
    Join Date
    Jul 2006
    Posts
    3,440
    Thank you for the reply. That all sounds pretty reasonable given what I'd previously learned. What's your opinion of the data set mentioned in the OP and the idea of fitting a simple regression line to it?
    Reply With Quote  
     
