
Thread: Possible Fine Structure Large-Scale Variance

  #1 Michael_Roberts (Suspended)
    Here is my article looking at the recent fine-structure variance findings by a research group of English astrophysicists from the University of Cambridge and Australian astrophysicists based at the Swinburne University of Technology.

    Article: http://hicexsistoeverto.wordpress.co...ture-variance/

    Let the debate begin!

    Paper: http://arxiv.org/PS_cache/arxiv/pdf/...008.3907v1.pdf

    Enjoy!



  #2 Bunbury
    The fact that the result is obtained by comparing data from two instruments of completely different type makes me skeptical that a difference of 1 in 100,000 is a real difference and not just an instrument error.

    On the other hand, if true, what would it mean? Why do things still hold together when alpha is not quite "right"? Maybe alpha is the same and pi is different.



  #3 Michael_Roberts (Suspended)
    The Keck and VLT are essentially collecting the same physical information, photons of electromagnetic radiation.

    Calibration would have been carried out; otherwise they wouldn't have gone to print with their findings.

    I don't really think they are of "different type" :-D

    Dishmaster might correct me on this!

  #4 Bunbury
    No, you're right, since they are both optical they are the same type in principle. Somehow I had the idea that the VLT was not an optical instrument, probably from looking at pictures of it that look nothing like a traditional observatory.

    By the way, how would you calibrate two instruments that are looking in opposite directions? Wouldn't calibration require them both to observe the same object?

  #5 Dishmaster (Moderator)
    Interesting topic; it has come up already a while ago, but I thought that the previous report on such a variation had been falsified. Nevertheless, looking at the current attempted publication, I want to raise some caution. I haven't read it entirely, but I find it quite peculiar that the authors defer the most crucial discussion, i.e. the methods and the immediate accuracy estimates, to a different paper. I find this very awkward given that the quoted significance is only just above 4 sigma, a value at which any decent astrophysicist begins to be very sceptical.

    Figs. 2 and 3 are pretty revealing, because they tell us something about the significance of the derived results - even if the calibration is assumed to be optimal. They show that the central conclusion is based on only two data points (Fig. 2); if you discard these, the relation breaks down. This is conspicuous.

    The authors also say in their discussion of statistical and systematic errors that they have done everything to push them down. They even discarded 15% of the data to derive a standard deviation. They call it "Least Trimmed Square ... where only 85% of data, those points with the smallest squared residuals, are fitted". This appears fishy to me.
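    For those who have not come across it, this is roughly what such a least-trimmed-squares fit does - my own toy sketch in Python, not their actual procedure:

[code]
import numpy as np

def lts_line_fit(x, y, keep_frac=0.85, n_iter=20):
    """Toy least-trimmed-squares straight-line fit on numpy arrays x, y.

    Iteratively fits a line, then refits using only the fraction
    keep_frac of points with the smallest squared residuals.
    """
    n_keep = max(2, int(keep_frac * len(x)))
    mask = np.ones(len(x), dtype=bool)                # start with all points
    for _ in range(n_iter):
        slope, intercept = np.polyfit(x[mask], y[mask], 1)
        resid2 = (y - (slope * x + intercept)) ** 2
        new_mask = np.zeros(len(x), dtype=bool)
        new_mask[np.argsort(resid2)[:n_keep]] = True  # keep the best 85%
        if np.array_equal(new_mask, mask):            # converged
            break
        mask = new_mask
    return slope, intercept, mask
[/code]

    The obvious worry is that the 15% you throw away are, by construction, exactly the points that disagree most with the fit you want to make.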

    Furthermore, what I am missing is a discussion of the relation between the fitting and the significance of the results. For instance, they say that they kept the redshift constant. I assume that this is needed to derive the Δα/α value. But for this, you first have to derive a precise redshift estimate for each object, which is also something that is not apparent to me. Is there a degeneracy in the fitting of redshifts and Δα/α? If you assume a redshift, how does that affect the value of Δα/α that you derive? There is not even an object list. What is the covered redshift range?
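    Just to sketch the degeneracy I am worried about (this is my own back-of-the-envelope version of the many-multiplet idea, not something taken from their manuscript): for a transition with laboratory wavenumber ω_0 and sensitivity coefficient q, the wavenumber in the absorber and the observed wavenumber go roughly as

[code]
\omega_z \simeq \omega_0 + q \left[ \left( \frac{\alpha_z}{\alpha_0} \right)^{2} - 1 \right]
         \approx \omega_0 + 2 q \, \frac{\Delta\alpha}{\alpha},
\qquad
\omega_{\mathrm{obs}} = \frac{\omega_z}{1 + z}
[/code]

    A change in redshift rescales every line by the same factor (1+z), whereas a change in α shifts each line by a different amount set by its q coefficient, so in principle the two can be separated when many lines with different q are fitted together - but only if the redshifts and the wavelength calibration are good enough, which is exactly what is not spelled out.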

    So, let's see what the referees will conclude.

    As to the technical questions: The Keck and the VLT are very similar in their technology. But since the authors don't say anything about the instrument used here, I can't comment more on that. I suppose it is UVES (they mention a UVES POPPLER) which is a UV to optical high resolution spectrograph.

    Calibration is similar at all telescopes. It does not matter where you point your telescope or what you are looking at. All you want is that the instrument provides correct wavelengths and intensities for the spectral lines. The very idea of calibration is that the results are independent of the observer and the instrument used.
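    To make the wavelength side of this concrete, the usual recipe is to take an arc-lamp exposure, measure the pixel positions of emission lines with precisely known laboratory wavelengths, and fit a smooth pixel-to-wavelength relation. A toy version (my own sketch with made-up numbers, not the actual pipeline):

[code]
import numpy as np

# Measured pixel centroids of arc-lamp lines (made-up numbers) and the
# corresponding laboratory wavelengths in Angstroms.
pixels = np.array([112.3, 587.9, 1033.4, 1498.2, 1950.7])
lab_wavelengths = np.array([5015.7, 5187.7, 5341.1, 5495.9, 5656.7])

# Fit a low-order polynomial mapping pixel position to wavelength.
coeffs = np.polyfit(pixels, lab_wavelengths, deg=2)
wavelength_of = np.poly1d(coeffs)

# The residuals tell you how well the solution reproduces the known lines;
# for a 1-in-100,000 measurement they must be a tiny fraction of a pixel.
print(lab_wavelengths - wavelength_of(pixels))
[/code]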

    The VLT is only peculiar in the sense that it actually consists of 4 large telescopes and a series of smaller ones. The large telescopes are mostly used as individual facilities covering a wavelength range from the UV to the thermal IR, but they can also be operated as an optical interferometer, in which all the large and small units are combined.

  #6 Michael_Roberts (Suspended)
    Quote Originally Posted by Dishmaster
    Furthermore, what I am missing is a discussion of the relation between the fitting and the significance of the results. For instance, they say that they kept the redshift constant. I assume that this is needed to derive the Δα/α value. But for this, you first have to derive a precise redshift estimate for each object, which is also something that is not apparent to me. Is there a degeneracy in the fitting of redshifts and Δα/α? If you assume a redshift, how does that affect the value of Δα/α that you derive? There is not even an object list. What is the covered redshift range?
    The redshift values they obtained have to be the most accurate available, since they are using the VLT.

    They have also stated that they have taken the most conservative estimation of the error, taking into account both statistical and systematic errors that they spoke about.
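    I presume (my own guess at what "conservative" means here, not something spelled out in the preprint) that the statistical and systematic pieces are combined in quadrature, something like

[code]
\sigma_{\mathrm{total}} = \sqrt{\sigma_{\mathrm{stat}}^{2} + \sigma_{\mathrm{sys}}^{2}}
[/code]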

    A little part of me wants the peer review to accept these findings. :-D

  #7 Dishmaster (Moderator)
    Quote Originally Posted by Michael_Roberts
    The redshift values they obtained have to be the most accurate available, since they are using the VLT.
    But my question is: Can they actually use the same data to independently derive the redshift and alpha at the same time? And as I wrote, there is no word about the accuracy and precision of the redshifts involved here. Have they all been obtained using the same technique? There is no information given.

    Quote Originally Posted by Michael_Roberts
    They have also stated that they have taken the most conservative estimation of the error, taking into account both statistical and systematic errors that they spoke about.
    And then they discard the 15% of the data they don't like?

  #8 Michael_Roberts (Suspended)
    Quote Originally Posted by Dishmaster
    Quote Originally Posted by Michael_Roberts
    The redshift values they obtained have to be the most accurate available, since they are using the VLT.
    But my question is: Can they actually use the same data to independently derive the redshift and alpha at the same time? And as I wrote, there is no word about the accuracy and precision of the redshifts involved here. Have they all been obtained using the same technique? There is no information given.

    Quote Originally Posted by Michael_Roberts
    They have also stated that they have taken the most conservative estimation of the error, taking into account both statistical and systematic errors that they spoke about.
    And then they discard the 15% of the data they don't like?
    I guess they can use the same data to derive both z and alpha, as they have surely taken out all sources of bias in the results. We have to presume that they will answer this once the peer review comes out, though.

    I guess that is pretty bad practice; we were always told not to disregard anything, but to analyse why it could be disregarded. I suppose if they give a good enough reason as to why they can disregard it and maintain reliability in the results, then it might possibly be OK.

  #9 Dishmaster (Moderator)
    The problem is that any publication must be written in a way that everybody can fully follow how one comes to a conclusion. This publication is missing even the most fundamental requirements in this respect. Assuming or being sure of the authors' honesty is just not enough: they have to demonstrate from beginning to end how they finally came to their claim. That is not the case here. If I were the referee, I would rip this manuscript into pieces. Peer review usually does not include an intimate chat with the authors about the details; in many cases, the authors don't even know who the referee is. The manuscript must be the standalone reference for all enquiries and details of the analysis.

    I am not saying that the final result is wrong. It is just that, based on this manuscript, it is impossible to judge whether it is.

  #10 Michael_Roberts (Suspended)
    I am going to suppose that they have explained all statistical analysis and data collection in the paper they are due to bring out.

    I'll keep my eyes peeled for any news of the peer review Dishmaster. :-D