# Quelle:Rh/Chen et al. 2006


Details of the source

Author: Tao Chen, Julian Morris, Elaine Martin
Title: Probability Density Estimation via Infinite Gaussian Mixture Model: Application to Statistical Process Monitoring
Journal: Journal of the Royal Statistical Society. Series C. Applied Statistics
Place: Oxford
Publisher: Wiley-Blackwell
Year: 2006
Number: 55
Pages: 699-715
DOI: 10.1.1.122.8834
URL: http://epubs.surrey.ac.uk/6482/2/tchen06-jrscc.pdf (preprint)
Bibliography: no
Footnotes: no
Fragments: 4

Fragments of the source:
Last edited: 2012-08-06 16:18:12 by Hindemith

Type: Verschleierung (disguised copying) · Editor: Hindemith · Reviewed
Examined work:
Page: 144, lines: 33-35
Source: Chen et al. 2006
Page(s): 4, lines: 17-21
Given the set y the classical approach to estimate the parameters (μl, Ωl), is to maximize the likelihood by using the EM algorithm (Dempster et al. (1977)). The EM algorithm guarantees convergence to a local maximum, with the quality of the maximum being heavily [dependent on the random initialization of the algorithm.]

Given a set of training data with N observations, x={x1,...,xN}, the classical approach to estimating the Gaussian mixture model parameters, (μ,τ,π), is to maximize the likelihood using the expectation-maximization (EM) algorithm (Dempster et al., 1977). The EM algorithm guarantees convergence to a local maximum, with the quality of the maximum being heavily dependant [sic] on the random initialization of the algorithm.
Remarks: A reference to the source is missing. The copying continues on the next page: Rh/Fragment_145_01. Reviewers: (Hindemith), Plagiatsfischer
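For readers unfamiliar with the technique discussed in the quoted passage, the EM procedure for a Gaussian mixture can be sketched numerically. The following is a minimal illustration for a one-dimensional, two-component mixture; the data, names, and initialization are assumptions for illustration, not code from the source or the dissertation:

```python
import numpy as np

# Synthetic 1-D data from two Gaussians (illustrative stand-in data)
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])

# Random initialization: the local maximum EM converges to depends on this
mu = np.array([-1.0, 1.0])      # component means
sigma = np.array([1.0, 1.0])    # component standard deviations
pi = np.array([0.5, 0.5])       # mixing weights

def component_densities(x, mu, sigma, pi):
    # N x K matrix of pi_k * Normal(x_i; mu_k, sigma_k^2)
    return pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (
        sigma * np.sqrt(2 * np.pi))

ll_trace = []
for _ in range(50):
    dens = component_densities(x, mu, sigma, pi)
    r = dens / dens.sum(axis=1, keepdims=True)    # E-step: responsibilities
    nk = r.sum(axis=0)                            # M-step: effective counts
    mu = (r * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / len(x)
    # Log-likelihood after the update; EM guarantees this never decreases
    ll_trace.append(np.log(component_densities(x, mu, sigma, pi)
                           .sum(axis=1)).sum())
```

Restarting with a different random initialization can land in a different, poorer local maximum, which is exactly the sensitivity both quoted passages point out.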

Last edited: 2012-08-06 16:18:23 by Hindemith

Type: Verschleierung · Editor: Hindemith · Reviewed
Examined work:
Page: 145, lines: 1-8
Source: Chen et al. 2006
Page(s): 4, lines: 22-28
Alternatively, a Bayesian approach can be used to combine the prior distribution for the parameters and the likelihood, resulting in a joint posterior distribution:

[EQUATION, equivalent to equation in the source] (5.5.2)

However, the joint posterior takes a highly complicated form. Thus, it is generally not feasible to perform analytical inference based on the above posterior distribution. The MCMC approaches have typically been used to calculate the joint posterior and, of the approaches that have been proposed in the literature, Gibbs sampling is suitable for mixture models.

Alternatively, a Bayesian approach can be used to combine the prior distribution for the parameters and the likelihood, resulting in a joint posterior distribution:

[EQUATION] (3)

However the joint posterior takes a highly complicated form. Thus it is generally not feasible to perform any analytical inference based on the above posterior distribution. MCMC approaches have typically been used to calculate the joint posterior and of the approaches proposed in the literature, Gibbs sampling is suitable for mixture models

Remarks: No reference to the source is given. The equation not documented above is, in the dissertation, equivalent to the equation in the source; only the variables and distributions have different names. Reviewers: (Hindemith), Plagiatsfischer

Last edited: 2014-01-07 15:29:45 by Schumann

Type: Verschleierung · Editor: Hindemith · Reviewed
Examined work:
Page: 148, lines: 23-27, 29-32
Source: Chen et al. 2006
Page(s): 7, 8, lines: 7: 1-5; 8: 14-17
Let k- denote the number of represented mixtures. For represented mixtures, the previously derived conditional posteriors of (μl, Ωl) in (5.4.4) and (5.4.5) still hold. In contrast, in the absence of training data, the parameters in unrepresented mixtures are solely determined by their priors f(μl | ml, Σ/κ) and f(Ω | ν1, ψ-1). Thus the inference of the indicators, ci, must incorporate the effect of infinite mixtures. [...]

The conditional posteriors of (μl, Ωl) are Gaussian and Wishart distributions respectively, from which samples can be generated by using standard procedures. The sampling of the indicators requires the evaluation of the integral in equation (5.5.7), which is only analytically feasible if the conjugate prior is used.

Let krep denote the number of represented mixtures. For represented mixtures, the previously derived conditional posteriors of μj and τj still hold (Eq. (4) and (5)). On the other hand, in the absence of training data, the parameters in unrepresented mixtures are solely determined by their priors (p(μj | λ, γ) and p(τj | β, ω)). Thus the inference of the indicators, c, has to incorporate the effect of infinite mixtures.

[Page 8]

The conditional posteriors of μj and τj are Gaussian and Gamma distributions respectively, from which samples can be generated using standard procedures. The sampling of the indicators requires the evaluation of the integral in Eq. (12), which is only analytically feasible if the conjugate prior is used [...].

Remarks: A reference to the source is missing, although this is an almost verbatim copy (only the notation was adapted). Reviewers: (Hindemith), Plagiatsfischer
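For background on the represented/unrepresented distinction in the quoted passage: under a Dirichlet-process (infinite) mixture prior, an indicator is assigned to an existing, represented component with probability proportional to that component's occupancy, and to a new, previously unrepresented component with probability proportional to a concentration parameter α. A small simulation of this predictive rule (the value of α and the loop structure are illustrative assumptions, not the source's algorithm):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, N = 1.0, 1000
counts = []                      # occupancy of each represented component

for i in range(N):
    # Existing component j: probability counts[j] / (i + alpha)
    # New (unrepresented) component: probability alpha / (i + alpha)
    probs = np.array(counts + [alpha], dtype=float) / (i + alpha)
    j = rng.choice(len(probs), p=probs)
    if j == len(counts):
        counts.append(1)         # open a previously unrepresented mixture
    else:
        counts[j] += 1

k_rep = len(counts)              # number of represented mixtures
```

The number of represented mixtures grows only logarithmically with the number of observations, which is why the sampler can work with finitely many represented components while still accounting for the infinitely many unrepresented ones through their priors.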

Last edited: 2012-07-30 23:32:08 by Hindemith

Type: Verschleierung · Editor: Graf Isolan · Reviewed
Examined work:
Page: 150, lines: 10-18
Source: Chen et al. 2006
Page(s): 8 (preprint), lines: 26-36
5.5.5. Prediction. The calculation of the predictive probability of new data will be averaged over a number of MCMC samples, which are selected from those samples where the algorithm tends to stabilize. Stabilization will be assessed heuristically based on the value of the log-likelihood. Additionally to eliminate the auto-correlation, one sample will be selected from each consecutive set of 10 iterations. For a particular MCMC sample, the predictive probability is attained from two components: the represented and the unrepresented mixtures. In a similar manner to that adopted in the sampling stage, the probability from unrepresented mixtures will be approximated by a finite mixture of Gaussians, whose parameters (μl, Ωl) are drawn from the prior.

2.4 Prediction

The calculation of the predictive probability of new data will be averaged over a number of MCMC samples, which are selected from those where the algorithm tends to stabilize. Stabilization will be assessed heuristically based on the value of the log-likelihood. Additionally to eliminate the auto-correlation, one sample will be selected from each consecutive set of 10 iterations.

For a particular MCMC sample, the predictive probability is attained from two components: the represented and the unrepresented mixtures. In a similar manner to that adopted in the sampling stage, the probability from unrepresented mixtures will be approximated by a finite mixture of Gaussians, whose parameters, (μj, τj) are drawn from the prior.

Remarks: identical; without any marking and without a reference to the source. Reviewers: (Graf Isolan), KnallErbse, Hindemith
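The prediction recipe in the fragment above (keep only post-stabilization samples, thin by retaining one sample per 10 iterations, then average the predictive density) can be sketched as follows. The synthetic "MCMC output" below is a stand-in assumption, not output of the source's sampler:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in MCMC output: 300 samples of (means, std devs, weights)
# for a two-component Gaussian mixture, jittered around fixed values
mcmc = [(rng.normal([-2.0, 3.0], 0.05),   # sampled means
         np.array([1.0, 1.0]),            # sampled std devs
         np.array([0.5, 0.5]))            # sampled weights
        for _ in range(300)]

stabilized = mcmc[100:]       # heuristic burn-in cut (log-likelihood plateau)
thinned = stabilized[::10]    # one sample per 10 iterations, vs. auto-correlation

def mixture_pdf(xnew, mu, sigma, pi):
    # Density of a Gaussian mixture at the point xnew
    return float((pi * np.exp(-0.5 * ((xnew - mu) / sigma) ** 2)
                  / (sigma * np.sqrt(2 * np.pi))).sum())

# Predictive density: average over the retained MCMC samples
xnew = 0.5
pred = np.mean([mixture_pdf(xnew, *s) for s in thinned])
```

Averaging over retained samples rather than using a single parameter draw is what makes the estimate a (Monte Carlo approximation of the) posterior predictive density.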