Positron emission tomography (PET) is an imaging technique which can be used to investigate chemical changes in human biological processes such as cancer development or neurochemical reactions. The methodology is both robust to typical brain imaging noise levels and computationally efficient. The new methodology is investigated through simulations on both one-dimensional functions and 2D images, and is also applied to a neuroimaging study whose goal is the quantification of opioid receptor concentration in the brain. This quantity can be used to determine the receptor density of the underlying neurotransmitter (Innis et al. 2007). As advocated by O'Sullivan et al. (2009), it will be approximated by the integral of the deconvolved response function generated from the observed data, which is in itself a more meaningful measure as it is less dependent on the particular compartmental model fit assumed.

The article proceeds as follows. In the next section, the general methodology, inspired by PET data, is introduced for deconvolution of multiply observed functions through the use of FPCA. In Section 3, the methods are assessed through simulation, not only on 1D functions, but also on moderately realistic 2D image slices where both spatial correlations and nonhomogeneous noise models, typical of those found in PET studies, are used. In Section 4, the methods are applied to measured [11C]-diprenorphine scans taken from healthy volunteers and are used to provide voxelwise quantification of receptor concentration without resorting to compartmental assumptions. The final section discusses some of the possible extensions of this work.

2. METHODOLOGY

Let Y(t; v) denote the observed concentration time course in PET analysis, where v is a generic index representing a spatial location. The conventional assumption is that the observed data have been corrected for radioactive decay using the known decay constant of the radioisotope (in the case of 11C, for example), with a given number of voxels and a given number of observations per voxel.
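The model-free summary advocated above, the integral of the deconvolved response function, can be approximated by simple numerical quadrature. The sketch below is illustrative only: the time grid, the toy mono-exponential response, and the function name are assumptions, not part of the original analysis.

```python
import numpy as np

def integrated_response(t, r):
    """Trapezoidal approximation to the integral of a sampled response
    function r over the scan times t; used as a model-free summary of
    a deconvolved response rather than a compartmental model fit."""
    return float(np.sum((r[1:] + r[:-1]) * np.diff(t)) / 2.0)

# Toy example: a mono-exponential response sampled on a hypothetical
# 90-minute grid; the exact integral is 10 * (1 - exp(-9)).
t = np.linspace(0.0, 90.0, 181)
r = np.exp(-0.1 * t)
summary = integrated_response(t, r)
```

Because the summary depends only on the sampled response curve, it avoids committing to a particular compartmental structure, in the spirit of O'Sullivan et al. (2009).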
Hence, the observations for each voxel are modeled as the underlying function plus independent noise. The expansion is not taken to infinity (as this would require a parametric model), but this finite truncated version could well be preferred in many situations (O'Sullivan et al. 2009), particularly given the known difficulties of function extrapolation.

2.1 Spatial Curve Pre-Regularization

Given the presence of noise in the output data across all time points, the data are pre-smoothed over both time and space; the spatial domain is three-dimensional, so a four-dimensional smoother is employed. This may seem a formidable task, given the large amount of available data (32 time points and 150,784 brain voxels), but it is feasible if one adopts a computationally efficient approach. For those interested in the theoretical aspects of this step, the specific assumptions are as follows. We assume that the bandwidths are all of the same order, tending to 0 as the number of observations grows. The smoothed estimate is defined as the minimizer of a kernel-weighted least-squares criterion, where the weights come from a four-dimensional kernel function (an Epanechnikov kernel was used in the data analysis) evaluated at each voxel's spatial location, with a variable temporal bandwidth and a calibration coefficient; the kernel is assumed to be a symmetric probability density function with bounded support. Note that constant bandwidths are employed for the spatial coordinates (in the application, one bandwidth is chosen for all three dimensions), but an adaptive local bandwidth for the time dimension is applied (see Section 2.2 for details). The reconstructed concentration function is evaluated at the time points where the time-course data were observed. For the adaptive temporal bandwidth, a fixed number of nearest time points were selected (we used 13 in the application, which was approximately 1/3 of the time points in the time course). At each location, the bandwidth was chosen such that the interval around the target time point contained the selected number of observed time points. A fourth-order polynomial was then fitted to the resulting pairs of time points and bandwidths, and the fitted bandwidth function (shown in Figure 1) was further multiplied by a constant.
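The pre-regularization step can be sketched in simplified form. The code below assumes a local-constant (Nadaraya-Watson) smoother rather than the full estimator of the paper, with a product Epanechnikov kernel over the three spatial dimensions (one shared bandwidth) and over time, where the local temporal bandwidth is chosen by a nearest-neighbor rule analogous to the 13-point device described above. All array and function names are invented for illustration.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel: a symmetric density with bounded support."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def knn_time_bandwidth(times, t0, k=13):
    """Adaptive local bandwidth: distance from t0 to its k-th nearest
    observed time point (a simplified stand-in for the interval rule)."""
    d = np.sort(np.abs(np.asarray(times) - t0))
    return float(d[min(k, len(d) - 1)])

def smooth_at(Y, coords, times, s0, t0, h_space, h_time):
    """Local-constant 4D smooth of noisy voxel time courses Y at spatial
    location s0 and time t0, with one spatial bandwidth shared by all
    three coordinates and a local temporal bandwidth."""
    w_s = np.prod(epanechnikov((coords - s0) / h_space), axis=1)  # (n_vox,)
    w_t = epanechnikov((times - t0) / h_time)                     # (n_time,)
    W = np.outer(w_s, w_t)                                        # 4D product kernel
    return float(np.sum(W * Y) / np.sum(W))

# Toy 3 x 3 x 3 voxel grid with a flat signal: the smooth must return 5.
coords = np.array([[i, j, k] for i in range(3)
                   for j in range(3) for k in range(3)], dtype=float)
times = np.linspace(0.0, 1.0, 5)
Y = np.full((27, 5), 5.0)
h_t = knn_time_bandwidth(times, 0.5, k=2)
estimate = smooth_at(Y, coords, times, coords[13], 0.5, 2.0, h_t)
```

Because the kernel weights are computed only over the voxels and time points within the bandwidth windows, the cost per evaluation stays modest even for the full 32-time-point, 150,784-voxel data set described above.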
The constant serves to facilitate calibration of the final local bandwidths. Its choice can be made by cross-validation: for each voxel, use the observations of the remaining voxels to estimate the mean, and compare that estimate with the held-out observations.
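The leave-one-voxel-out calibration idea can be sketched as follows. This is a deliberately simplified version: the smoother is a one-dimensional Epanechnikov smooth of the across-voxel mean rather than the full 4D estimator, and the candidate constants, base bandwidth, and toy data are all assumptions for illustration.

```python
import numpy as np

def cv_score(Y, c, base_bw, times):
    """Leave-one-voxel-out error for a candidate calibration constant c:
    each voxel's time course is predicted by an Epanechnikov smooth
    (bandwidth c * base_bw) of the mean of the remaining voxels."""
    n_vox, _ = Y.shape
    h = c * base_bw
    U = (times[None, :] - times[:, None]) / h
    K = np.where(np.abs(U) <= 1.0, 0.75 * (1.0 - U**2), 0.0)
    err = 0.0
    for v in range(n_vox):
        mean_others = Y[np.arange(n_vox) != v].mean(axis=0)
        pred = K @ mean_others / K.sum(axis=1)   # Nadaraya-Watson smooth
        err += np.mean((Y[v] - pred) ** 2)
    return err / n_vox

# Toy calibration: pick the constant with the smallest held-out error.
rng = np.random.default_rng(0)
times = np.linspace(0.0, 1.0, 20)
Y = np.sin(2 * np.pi * times)[None, :] + 0.1 * rng.standard_normal((30, 20))
cands = [0.5, 1.0, 2.0]
best_c = min(cands, key=lambda c: cv_score(Y, c, 0.2, times))
```

Averaging over the held-out voxels keeps the criterion stable even when individual time courses are noisy, which is the situation the pre-regularization step is designed for.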