## Protocol publication

[…] T1w and T2w images were brain‐extracted (FSL's Brain Extraction Tool; FSL 5.0.8; http://fsl.fmrib.ox.ac.uk/fsl) and corrected for bias field inhomogeneities. Each subject's T1w image was aligned to an age‐appropriate template using nonlinear registration. Voxel‐wise maps of volume change induced by the transformation were characterized by the determinant of the Jacobian operator, referred to here as the Jacobian map. Each map was log‐transformed so that values greater than 0 represent local expansion in the subject relative to the target and values less than 0 represent local contraction. Before transformation into template space, T2w tissue intensities were first matched to the population‐based T2w template using a piece‐wise linear transform to allow quantitative comparison across subjects. T2w images were then aligned to the corresponding T1w images with rigid‐body registration and transformed into template space using the previously calculated deformations. T1w‐derived Jacobian maps and T2w intensity images were iteratively smoothed to a full width at half maximum (FWHM) of 8 mm (AFNI's 3dBlurToFWHM; http://afni.nimh.nih.gov/afni) before linked independent component analysis (ICA).

Diffusion data were visually assessed, and gradient volumes were removed if affected by motion‐induced slice dropout artefacts. In total, 31.8% (143 of 449) of subjects had at least one gradient volume removed (mean, 2.35; range, 1–9). Motion and eddy current correction was then performed by aligning all diffusion volumes to the reference b = 0 image and rotating the corresponding b‐vectors accordingly. Diffusion tensors were modeled at each voxel using a weighted least squares fit to derive maps of fractional anisotropy (FA) and mean diffusivity (MD) for each subject.

Skeletonized white matter FA maps were produced following the tract‐based spatial statistics protocol. FA maps were aligned to a study‐specific template and averaged to create a mean FA map.
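The scalar maps derived from the tensor fit can be illustrated with a small stand-alone sketch: given the three eigenvalues of a fitted diffusion tensor at a voxel, MD is their mean and FA is the normalized dispersion of the eigenvalues. This is an illustrative computation only; the function name is hypothetical, and the study itself used a dedicated weighted least squares fitting pipeline rather than this code.

```python
import numpy as np

def md_fa_from_eigenvalues(evals):
    """Compute mean diffusivity (MD) and fractional anisotropy (FA)
    from the three eigenvalues of a diffusion tensor at one voxel.
    (Illustrative helper; not part of the study's pipeline.)"""
    evals = np.asarray(evals, dtype=float)
    md = evals.mean()                          # mean diffusivity
    num = np.sqrt(((evals - md) ** 2).sum())   # dispersion of eigenvalues
    den = np.sqrt((evals ** 2).sum())          # overall magnitude
    fa = np.sqrt(1.5) * num / den if den > 0 else 0.0
    return md, fa

# Perfectly isotropic diffusion yields FA = 0
md, fa = md_fa_from_eigenvalues([1e-3, 1e-3, 1e-3])
```

FA ranges from 0 (isotropic) to 1 (diffusion restricted to a single axis), which is why it is a useful scalar summary for skeletonized white matter analysis.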
The mean map was skeletonized, and maximal FA values from nearby voxels in the individual, aligned maps were projected onto the skeleton. An analogous approach (gray‐matter–based spatial statistics) was used to create skeletonized cortical mean diffusivity maps. MD maps were aligned to a study‐specific T2w template, alongside probabilistic cortical segmentations derived from the corresponding T2w images. Mean diffusivity values from voxels with maximal cortical probability in the aligned cortical segmentation maps were projected onto a mean cortical skeleton to create skeletonized MD maps. In addition, to include microstructural measures from the deep gray matter, a set of atlas‐defined deep gray matter labels was applied to the mean diffusivity maps after transformation into template space. MD within the masks was smoothed to 5 mm FWHM and added to the cortical skeleton to provide spatial maps of both cortical and deep gray matter MD for analysis.

[...] To investigate the relationship between clinical and imaging data, we performed canonical correlation analysis (CCA). CCA seeks to maximize the correlation between successive linear transformations of two variable sets, X and Y (Fig B). The result is a canonical correlation between two variates, U = aX and V = bY, where a and b are the canonical vectors, or weights, sought by the model. Once a pair of canonical variates is found, a successive pair is sought subject to the constraint that it is uncorrelated with the first pair, and so on.

Here, we entered the component weights derived from linked ICA (Fig A) alongside a set of clinical and environmental variables (Table ) into CCA to identify multivariate clinical‐imaging pairs. The statistical significance of the correlation between canonical pairs was assessed sequentially with a permutation test, swapping the rows of one feature matrix with respect to the other 10,000 times and recording the maximum correlation between pairs.
Canonical correlation analysis was performed using **Scikit-learn** 0.17.
To determine the relationship between each clinical variable in X and the model, we calculated the loading, or correlation, between the original variables (e.g., X₁,…,Xₙ) and their respective canonical variate (U₁,…,Uₘ) in each pair, where n is the number of original variables in X and m is the number of canonical pairs. In each case, loading strength was assessed with permutation testing (10,000 permutations). Note that the canonical weights for each pair show the unique contribution of each variable to the synthetic canonical variate, whereas loadings show the overall correlation; hence, variables with positive canonical loadings may still have negative coefficients that reflect a dependence on, or interaction with, other contributing variables. To estimate confidence intervals (CIs) for canonical correlations, weights, and loadings, we implemented a bootstrapping procedure, resampling our data with replacement 10,000 times and fitting the CCA model to each sample.

To visualize the imaging phenotype associated with each clinical covariate, we performed an analogous procedure, estimating voxel‐wise loadings by calculating correlations between the original imaging data sets and each canonical variate (V₁,…,Vₘ). Loading maps were assessed for significance using permutation testing and corrected for multiple comparisons across voxels using FSL's randomise tool (http://fsl.fmrib.ox.ac.uk/fsl). Associations between the canonical variates in each pair and neurodevelopmental outcome were assessed using linear regression (SPSS v21; IBM Corp., Armonk, NY). [...] To allow detailed exploration of all significant imaging‐clinical pairs, we have made interactive statistical maps for each imaging modality available to view online at **NeuroVault** (http://neurovault.org/collections/2178). […]

## Pipeline specifications

| Software tools | Scikit-Learn, NeuroVault |
| --- | --- |
| Application | Neuroimaging analysis |
| Organisms | Homo sapiens |
| Diseases | Central Nervous System Diseases, Cardiovascular Abnormalities, Cerebrovascular Trauma, Leukoencephalopathies |