Similar documents
20 similar documents found (search time: 15 ms)
2.
It is assumed that O − C ('observed minus calculated') values of periodic variable stars are determined by three processes, namely measurement errors, random cycle-to-cycle jitter in the period, and possibly long-term changes in the mean period. By modelling the latter as a random walk, the covariances of all O − C values can be calculated. The covariances can then be used to estimate unknown model parameters, and to choose between alternative models. Pseudo-residuals which could be used in model fit assessment are also defined. The theory is illustrated by four applications to spotted stars in eclipsing binaries.
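
The covariance structure implied by this model is short to write down: per-cycle jitter accumulates as a random walk, so Cov(x_i, x_j) = σ_jit² · min(i, j), with independent measurement errors adding to the diagonal. A minimal sketch (the function name and this simplified parametrization are ours, not the paper's):

```python
def oc_covariance(n_cycles, sigma_meas, sigma_jitter):
    """Covariance matrix of O-C values under a random-walk period model.

    Accumulated cycle-to-cycle jitter gives Cov(x_i, x_j) growing as
    sigma_jitter**2 * min(i, j); measurement noise adds to the diagonal.
    """
    cov = [[sigma_jitter ** 2 * min(i, j) for j in range(1, n_cycles + 1)]
           for i in range(1, n_cycles + 1)]
    for i in range(n_cycles):
        cov[i][i] += sigma_meas ** 2
    return cov
```

With the full covariance matrix in hand, unknown parameters can be estimated by generalized least squares, which is where the model comparison described above enters.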

3.
The estimation of the frequency, amplitude and phase of a sinusoid from observations contaminated by correlated noise is considered. It is assumed that the observations are regularly spaced, but may suffer missing values or long time stretches with no data. The typical astronomical source of such data is high-speed photoelectric photometry of pulsating stars. The study of the observational noise properties of nearly 200 real data sets is reported: noise can almost always be characterized as a random walk with superposed white noise. A scheme for obtaining weighted non-linear least-squares estimates of the parameters of interest, as well as standard errors of these estimates, is described. Simulation results are presented for both complete and incomplete data. It is shown that, in finite data sets, results are sensitive to the initial phase of the sinusoid.
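
At a fixed trial frequency, the amplitude and phase estimates reduce to linear least squares on cosine and sine regressors, and gaps in the data simply contribute no terms to the normal equations. A minimal unweighted sketch (mean-subtracted data assumed; the nonlinear frequency search and the noise weighting of the scheme above are omitted):

```python
import math

def fit_sinusoid(t, y, freq):
    """Least-squares amplitude and phase of y ~ A*cos(2*pi*freq*t - phi).

    Works on irregularly sampled, mean-subtracted data: missing epochs
    are simply absent from the sums below.
    """
    c = [math.cos(2 * math.pi * freq * ti) for ti in t]
    s = [math.sin(2 * math.pi * freq * ti) for ti in t]
    scc = sum(ci * ci for ci in c)
    scs = sum(ci * si for ci, si in zip(c, s))
    sss = sum(si * si for si in s)
    scy = sum(ci * yi for ci, yi in zip(c, y))
    ssy = sum(si * yi for si, yi in zip(s, y))
    det = scc * sss - scs * scs
    a = (sss * scy - scs * ssy) / det  # coefficient of cos term
    b = (scc * ssy - scs * scy) / det  # coefficient of sin term
    return math.hypot(a, b), math.atan2(b, a)  # (amplitude, phase)
```

Weighted estimation replaces each sum by a sum over the inverse covariance of the noise model, which is where the random-walk-plus-white-noise characterization matters.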

4.
The theory of low-order linear stochastic differential equations is reviewed. Solutions to these equations give the continuous time analogues of discrete time autoregressive time-series. Explicit forms for the power spectra and covariance functions of first- and second-order forms are given. A conceptually simple method is described for fitting continuous time autoregressive models to data. Formulae giving the standard errors of the parameter estimates are derived. Simulated data are used to verify the performance of the methods. Irregularly spaced observations of the two hydrogen-deficient stars FQ Aqr and NO Ser are analysed. In the case of FQ Aqr the best-fitting model is of second order, and describes a quasi-periodicity of about 20 d with an e-folding time of 3.7 d. The NO Ser data are best fitted by a first-order model with an e-folding time of 7.2 d.
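
For the first-order case the covariance function has the closed form γ(τ) = σ² e^(−|τ|/τ_e), with e-folding time τ_e, and the process can be sampled exactly on an irregular time grid, which is what makes such models natural for unevenly spaced observations. A sketch (function names ours):

```python
import math
import random

def car1_autocovariance(lag, variance, efold):
    """Autocovariance of a first-order continuous-time autoregression:
    exponential decay with the e-folding time as the decay constant."""
    return variance * math.exp(-abs(lag) / efold)

def car1_simulate(times, variance, efold, rng):
    """Exact sampling of the first-order process at arbitrary sorted times:
    each step decays the previous value and adds just enough fresh noise
    to keep the process stationary."""
    x = [rng.gauss(0.0, math.sqrt(variance))]
    for dt in (b - a for a, b in zip(times, times[1:])):
        rho = math.exp(-dt / efold)
        x.append(rho * x[-1]
                 + rng.gauss(0.0, math.sqrt(variance * (1.0 - rho * rho))))
    return x
```

With τ_e = 7.2 d, for example, this is the structure the best-fitting first-order model for NO Ser describes.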

5.
One of the tools used to identify the pulsation modes of stars is a comparison of the amplitudes and phases as observed photometrically at different wavelengths. Proper application of the method requires that the errors on the measured quantities, and the correlations between them, be known (or at least estimated). It is assumed that contemporaneous measurements of the light intensity of a pulsating star are obtained in several wavebands. It is also assumed that the measurements are regularly spaced in time, although there may be missing observations. The amplitude and phase of the pulsation are estimated separately for each of the wavebands, and amplitude ratios and phase differences are calculated. A general scheme for estimating the covariance matrix of the amplitude ratios and phase differences is described. The first step is to fit a time series to the residuals after pre-whitening the observations by the best-fitting sinusoid. The residuals are then cross-correlated to study the interdependence between the errors in the different wavebands. Once the multivariate time-series structure can be modelled, the covariance matrix can be found by bootstrapping. An illustrative application is described in detail.
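
The final bootstrapping step can be illustrated generically: resample the paired statistics, recompute each per replicate, and take the covariance across replicates. A plain nonparametric bootstrap is shown for brevity (the scheme described above instead resamples from the fitted multivariate time-series model; the function name is ours):

```python
import random

def bootstrap_cov(pairs, n_boot, rng):
    """2x2 bootstrap covariance of the means of paired statistics,
    e.g. (amplitude ratio, phase difference) replicates."""
    reps = []
    for _ in range(n_boot):
        sample = [rng.choice(pairs) for _ in pairs]  # resample with replacement
        reps.append((sum(p[0] for p in sample) / len(sample),
                     sum(p[1] for p in sample) / len(sample)))
    ma = sum(r[0] for r in reps) / n_boot
    mb = sum(r[1] for r in reps) / n_boot
    caa = sum((r[0] - ma) ** 2 for r in reps) / (n_boot - 1)
    cbb = sum((r[1] - mb) ** 2 for r in reps) / (n_boot - 1)
    cab = sum((r[0] - ma) * (r[1] - mb) for r in reps) / (n_boot - 1)
    return [[caa, cab], [cab, cbb]]
```

The off-diagonal term is the point of the exercise: it captures the correlation between the amplitude-ratio and phase-difference errors that mode identification must account for.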

6.
A statistical model is formulated that enables one to analyse jointly the times between maxima and minima in the light curves of monoperiodic pulsating stars. It is shown that the combination of both sets of data into one leads to analyses that are more sensitive. Illustrative applications to the American Association of Variable Star Observers data for a number of long-period variables demonstrate the usefulness of the approach.
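
The gain from combining the two data sets can be sketched with a toy epoch model in which maxima fall at integer cycles and minima half a cycle later, so both enter a single straight-line fit t = t₀ + P·n (an illustrative simplification; the paper's statistical model is more general):

```python
def fit_period(t_max, t_min):
    """Joint least-squares fit of epoch t0 and period P, using times of
    maxima (cycles 0, 1, ...) and minima (cycles 0.5, 1.5, ...)."""
    events = [(float(n), t) for n, t in enumerate(t_max)]
    events += [(n + 0.5, t) for n, t in enumerate(t_min)]
    n_mean = sum(n for n, _ in events) / len(events)
    t_mean = sum(t for _, t in events) / len(events)
    period = (sum((n - n_mean) * (t - t_mean) for n, t in events)
              / sum((n - n_mean) ** 2 for n, _ in events))
    return t_mean - period * n_mean, period  # (t0, P)
```

Doubling the number of timed events roughly halves the variance of the period estimate, which is the sensitivity gain the joint analysis exploits.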

7.
A probabilistic technique for the joint estimation of background and sources with the aim of detecting faint and extended celestial objects is described. Bayesian probability theory is applied to gain insight into the co-existence of background and sources through a probabilistic two-component mixture model, which provides consistent uncertainties of background and sources. A multiresolution analysis is used for revealing faint and extended objects in the frame of the Bayesian mixture model. All the revealed sources are parametrized automatically providing source position, net counts, morphological parameters and their errors.
We demonstrate the capability of our method by applying it to three simulated data sets characterized by different background and source intensities. Results obtained with two different choices of prior for the source signal distribution are shown. The probabilistic method allows for the detection of bright and faint sources independently of their morphology and of the kind of background. The results of our analysis of the three simulated data sets are compared with those of other source detection methods. Additionally, the technique is applied to ROSAT All-Sky Survey data.
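
The heart of a two-component mixture can be illustrated per pixel: with Poisson likelihoods for "background only" versus "background plus source" and a prior source probability, Bayes' theorem yields a consistent source probability for each pixel. The rates, prior and function name below are illustrative assumptions, not the paper's parametrization:

```python
import math

def source_probability(counts, bkg_rate, src_rate, prior_src=0.1):
    """P(source | observed counts) for a two-component Poisson mixture."""
    def poisson(k, lam):
        return math.exp(-lam) * lam ** k / math.factorial(k)
    # Posterior weights of the two mixture components for this pixel.
    p_src = prior_src * poisson(counts, bkg_rate + src_rate)
    p_bkg = (1.0 - prior_src) * poisson(counts, bkg_rate)
    return p_src / (p_src + p_bkg)
```

Aggregating such probabilities over multiresolution scales is what lets faint, extended objects emerge even when no single pixel is individually significant.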

8.
We present a detrending algorithm for the removal of trends in time series. Trends in time series can be caused by various systematic and random noise sources, such as cloud passages, changes in airmass, telescope vibration, CCD noise or defects in the photometry. Such trends obscure the intrinsic signals of stars and should be removed. We determine the trends from subsets of stars that are highly correlated among themselves. These subsets are selected by a hierarchical tree clustering algorithm. A bottom-up merging algorithm, based on the departure from normality of the correlation distribution, is developed to identify the subsets, which we call clusters. After identifying the clusters, we determine one trend per cluster as a weighted sum of the normalized light curves. We then use quadratic programming to detrend all individual light curves based on these trends. Experimental results with synthetic light curves containing artificial trends and events are presented, and results from other detrending methods are compared. The algorithm can be applied to time series for trend removal in both narrow- and wide-field astronomy.
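
The trend-removal step can be sketched as: centre each cluster light curve, average the centred curves into a master trend, and subtract each curve's least-squares projection onto that trend. Equal weights and a single pre-identified cluster are assumed here; the clustering and quadratic-programming stages described above are omitted:

```python
def detrend(light_curves):
    """Remove a common trend from co-trending light curves.

    The master trend is the mean of the mean-subtracted curves; each curve
    then has its best-fitting multiple of that trend subtracted.
    """
    centred = [[v - sum(c) / len(c) for v in c] for c in light_curves]
    master = [sum(col) / len(centred) for col in zip(*centred)]
    norm = sum(m * m for m in master)
    out = []
    for c in centred:
        coeff = sum(v * m for v, m in zip(c, master)) / norm
        out.append([v - coeff * m for v, m in zip(c, master)])
    return out
```

When the curves share a single common trend, as in the synthetic tests above, this projection removes it entirely while leaving uncorrelated intrinsic events untouched.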

9.
In two recent papers a new method for searching for periodicity in time series was introduced. It uses the Shannon entropy to measure the amount of information contained in the light curve of a given signal as a function of a trial period p. The basic result is that, if the signal is T-periodic, the entropy is minimized when p = T. There is also theoretical and numerical evidence that the minimum entropy method is more sensitive to the presence of periodicity, and has a higher resolving power, than other classical techniques. In the present work the discussion focuses on how the observational errors should be included in the method. The application of the resulting modified algorithm to real data, and a performance comparison with the original algorithm, are presented. The dependence of both periodograms on the size of the partition is also investigated. Analytical estimates are given only for the limiting case of small errors. The numerical results show that the new algorithm leads to a smoother periodogram and provides a higher significance for the minimum than the original algorithm.
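The entropy computation itself is short: fold the data at the trial period p, drop the (phase, normalized magnitude) pairs into a grid of cells, and evaluate the Shannon entropy of the cell occupation; it is smallest when the fold aligns the points along a curve. A sketch without the error treatment discussed above (grid size and names are ours):

```python
import math

def phase_entropy(times, values, period, n_bins=8):
    """Shannon entropy of the (phase, magnitude) cell occupation after
    folding at a trial period; minimized near the true period."""
    lo, hi = min(values), max(values)
    counts = {}
    for t, v in zip(times, values):
        phase = (t / period) % 1.0
        u = min((v - lo) / (hi - lo), 1.0 - 1e-12)  # normalize to [0, 1)
        cell = (int(phase * n_bins), int(u * n_bins))
        counts[cell] = counts.get(cell, 0) + 1
    n = len(times)
    return -sum(c / n * math.log(c / n) for c in counts.values())
```

Scanning `period` over a range of trial values and locating the minimum of this quantity gives the minimum-entropy periodogram; the partition size `n_bins` is the parameter whose influence the paper investigates.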

10.
The auroras on Jupiter and Saturn can be studied with high sensitivity and resolution by the Hubble Space Telescope (HST) ultraviolet (UV) and far-ultraviolet Space Telescope Imaging Spectrograph (STIS) and Advanced Camera for Surveys (ACS) instruments. We present results of automatic detection and segmentation of Jupiter's auroral emissions, as observed by the HST ACS instrument, using VOronoi Image SEgmentation (VOISE). VOISE is a dynamic algorithm for partitioning the underlying pixel grid of an image into regions according to a prescribed homogeneity criterion. The algorithm consists of an iterative procedure that dynamically constructs a tessellation of the image plane based on a Voronoi diagram, until the intensity of the underlying image within each region is classified as homogeneous. The computed tessellations allow the extraction of quantitative information about the auroral features, such as mean intensity, latitudinal and longitudinal extents, and length-scales. These outputs represent a more automated and objective method of characterizing auroral emissions than manual inspection.
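
The core partitioning step is a plain Voronoi assignment of pixels to seeds; VOISE then iterates, adding and moving seeds until every region passes the homogeneity test. A sketch of the assignment step alone (names and the small grid are ours):

```python
def voronoi_partition(width, height, seeds):
    """Assign each pixel (x, y) to its nearest seed by squared Euclidean
    distance; this is the tessellation that VOISE refines iteratively."""
    label = {}
    for x in range(width):
        for y in range(height):
            label[(x, y)] = min(
                range(len(seeds)),
                key=lambda k: (x - seeds[k][0]) ** 2 + (y - seeds[k][1]) ** 2)
    return label
```

Region statistics such as mean intensity follow by grouping pixel values by label; a region whose internal variance exceeds the homogeneity criterion is split by inserting new seeds and repeating the assignment.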

16.
We present a new algorithm, the Eclipsing Binary Automated Solver (EBAS), to analyse light curves of eclipsing binaries. The algorithm is designed to analyse large numbers of light curves, and is therefore based on the relatively fast EBOP code. To facilitate the search for the best solution, EBAS uses two parameter transformations: instead of the radii of the two stellar components, it uses the sum of the radii and their ratio, while the inclination is transformed into the impact parameter. To replace human visual assessment, we introduce a new 'alarm' goodness-of-fit statistic that takes into account the correlation between neighbouring residuals. Extensive tests and simulations show that our algorithm converges well, finds a good set of parameters and provides reasonable error estimates.
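
The radius transformation is simple to write down, and the idea behind the 'alarm' statistic can be illustrated with a lag-1 residual correlation. Note the real alarm statistic is defined over runs of same-sign residuals; the `neighbour_correlation` function below is a simplified stand-in, and all names are ours:

```python
def to_search_params(r1, r2):
    """EBAS-style transform: individual radii -> (sum of radii, ratio)."""
    return r1 + r2, r2 / r1

def from_search_params(r_sum, k):
    """Inverse transform back to the individual radii."""
    r1 = r_sum / (1.0 + k)
    return r1, r1 * k

def neighbour_correlation(residuals):
    """Mean product of neighbouring residuals: positive when consecutive
    residuals share a sign (a systematically poor, 'alarming' fit),
    negative when they alternate like white noise around a good model."""
    pairs = list(zip(residuals, residuals[1:]))
    return sum(a * b for a, b in pairs) / len(pairs)
```

The motivation for the transforms is that the light curve constrains the sum of radii and their ratio far more directly than either radius alone, so the search space becomes better conditioned.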

17.
A new fast Bayesian approach is introduced for the detection of discrete objects immersed in a diffuse background. The new method, called PowellSnakes, speeds up traditional Bayesian techniques by (i) replacing the standard form of the likelihood for the parameters characterizing the discrete objects with an alternative exact form that is much quicker to evaluate; (ii) using a simultaneous multiple-minimization code based on Powell's direction-set algorithm to locate rapidly the local maxima in the posterior; and (iii) deciding whether each located posterior peak corresponds to a real object by performing Bayesian model selection using an approximate evidence value based on a local Gaussian approximation to the peak. The construction of this Gaussian approximation also provides the covariance matrix of the uncertainties in the derived parameter values for the object in question. The new approach is faster by a factor of roughly 100 than existing Bayesian source-extraction methods that use Markov chain Monte Carlo (MCMC) sampling to explore the parameter space, such as that presented by Hobson & McLachlan. The method can be implemented in either real or Fourier space. In the case of objects embedded in a homogeneous random field, working in Fourier space provides a further speed-up that takes advantage of the fact that the correlation matrix of the background is circulant. We illustrate the capabilities of the method by applying it to some simplified toy models. Furthermore, PowellSnakes has the advantage of consistently defining the acceptance/rejection threshold in terms of priors, which cannot be said of frequentist methods. We present here the first implementation of this technique (version I). Further improvements to this implementation are currently under investigation and will be published shortly. The application of the method to realistic simulated Planck observations will be presented in a forthcoming publication.
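
The model-selection step in (iii) rests on a Laplace (local Gaussian) approximation to each posterior peak: log Z ≈ log P(θ̂) + (k/2) log 2π − ½ log det H, where H is the Hessian of the negative log-posterior at the peak and k the number of parameters. A minimal sketch for a single peak (the function name is ours):

```python
import math

def laplace_log_evidence(log_post_peak, hessian_det, n_params):
    """Gaussian (Laplace) approximation to the log-evidence of one
    posterior peak; the Hessian's inverse doubles as the parameter
    covariance matrix mentioned above."""
    return (log_post_peak
            + 0.5 * n_params * math.log(2.0 * math.pi)
            - 0.5 * math.log(hessian_det))
```

For an unnormalized one-dimensional Gaussian exp(−x²/2σ²) the approximation is exact, recovering log(σ√(2π)); for a genuine posterior peak it is accurate to the extent that the peak is locally Gaussian.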

18.
The principles of measuring the shapes of galaxies by a model-fitting approach are discussed in the context of shape measurement for surveys of weak gravitational lensing. It is argued that such an approach should be optimal, allowing measurement with maximal signal-to-noise ratio, coupled with estimation of measurement errors. The distinction between likelihood-based and Bayesian methods is discussed. Systematic biases in the Bayesian method may be evaluated as part of the fitting process, and overall such an approach should yield unbiased shear estimation without requiring external calibration from simulations. The principal disadvantage of model fitting for large surveys is the computational time required, but here an algorithm is presented that enables large surveys to be analysed in feasible computation times. The method and algorithm are tested on simulated galaxies from the Shear TEsting Programme (STEP).
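
As a minimal stand-in for full model fitting, the quantity the shear analysis ultimately targets can be read off from quadrupole moments of the pixel data: e₁ = (Q_xx − Q_yy)/(Q_xx + Q_yy) and e₂ = 2 Q_xy/(Q_xx + Q_yy). This is a sketch of the measured quantity only, not the paper's PSF-convolved model fit, and the function name is ours:

```python
import math

def ellipticity(image):
    """Ellipticity components from unweighted quadrupole moments of a
    2D pixel grid image[y][x]; real pipelines fit a noise-weighted,
    PSF-convolved galaxy model instead."""
    total = sum(sum(row) for row in image)
    xc = sum(x * v for row in image for x, v in enumerate(row)) / total
    yc = sum(y * v for y, row in enumerate(image) for v in row) / total
    qxx = qyy = qxy = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            qxx += v * (x - xc) ** 2
            qyy += v * (y - yc) ** 2
            qxy += v * (x - xc) * (y - yc)
    qxx, qyy, qxy = qxx / total, qyy / total, qxy / total
    return (qxx - qyy) / (qxx + qyy), 2.0 * qxy / (qxx + qyy)
```

Model fitting improves on such raw moments precisely because it weights pixels by their information content and propagates the noise into an error estimate on (e₁, e₂), which is the optimality argument made above.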
