[MR Research] RAKI
The two most common parallel imaging approaches are
- SENSE (sensitivity encoding)
- GRAPPA (generalized autocalibrating partially parallel acquisition)
SENSE
SENSE is a reconstruction approach that is formulated in the image domain as a linear inverse problem with a priori knowledge of the receiver sensitivities.
GRAPPA
GRAPPA: the reconstruction is an interpolation in k-space using shift-invariant convolutional kernels to estimate missing k-space lines from acquired lines, which constitutes a linear reconstruction method.
- These convolutional kernels are determined from a small amount of fully sampled reference data, referred to as the autocalibration signal (ACS); a minimal calibration sketch follows below.
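To make this concrete, here is a minimal NumPy sketch of GRAPPA-style calibration on a fully sampled ACS block; the helper name `grappa_calibrate`, the kernel geometry, and the two-neighbor (R = 2) offsets are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def grappa_calibrate(acs, kx=5, ky_offsets=(-1, 1)):
    """Fit one set of linear, shift-invariant kernels from ACS data.

    acs: (ncoil, ny, nx) complex, fully sampled calibration block.
    Each row of A holds the acquired neighbors (all coils, kx-wide patch)
    around a point that is treated as "missing"; b holds that target point.
    Hypothetical helper for illustration only.
    """
    ncoil, ny, nx = acs.shape
    half = kx // 2
    src, tgt = [], []
    for y in range(1, ny - 1):                 # row treated as missing
        for x in range(half, nx - half):
            patch = [acs[:, y + dy, x - half:x + half + 1] for dy in ky_offsets]
            src.append(np.concatenate([p.ravel() for p in patch]))
            tgt.append(acs[:, y, x])
    A = np.asarray(src)                        # calibration matrix
    b = np.asarray(tgt)                        # target k-space points
    w, *_ = np.linalg.lstsq(A, b, rcond=None)  # linear least-squares fit
    return w
```

At reconstruction time the same weights are slid across the undersampled k-space: each missing point is a dot product between its acquired neighbors and `w`.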
SPIRiT
SPIRiT (iterative self-consistent parallel imaging reconstruction), which alternates local k-space interpolation with enforcing data consistency, has also been proposed, facilitating the use of regularization in the reconstruction process.
Clinical Radiologists
- Although the inherent denoising properties of CS-based techniques are desirable, the need for incoherently undersampled acquisitions, as well as residual artifacts that are not readily accepted by clinical radiologists, has hindered their deployment.
- Therefore, even though noise amplification remains a challenge for the linear reconstruction algorithms used in parallel imaging, these approaches continue to be the prominent technique for accelerated imaging in clinical MRI.
The focus of the MRI reconstruction community
- The focus of the MRI reconstruction community has been on generating more advanced regularizers by training on large amounts of data sets, along with both random and uniform undersampling patterns.
- Commonly, this training is performed in a manner that is not patient- or scan-specific, and therefore requires large databases of MR images.
RAKI
- RAKI is a k-space reconstruction method that uses deep learning on a small amount of scan-specific ACS data.
- This study focuses on uniformly undersampled acquisitions, both to improve the quality of parallel imaging reconstructions and to integrate with existing data acquisition approaches.
- However, extensions to other sampling patterns, such as random or non-Cartesian sampling, are possible within the described k-space method.
- Unlike most recent deep learning approaches, the proposed scheme does not require any training database of images.
- Instead, the neural networks are trained for each specific scan, or set of scans, with a limited amount of ACS data, similar to GRAPPA.
RAKI contribution
- Instead of generating linear convolutional kernels from the ACS data as in k-space interpolation-based methods such as GRAPPA, RAKI learns nonlinear convolutional neural networks from the ACS data.
- Using scan- and subject-specific deep learning, and learning all the necessary neural network parameters from the ACS data, any dependence on training databases or assumptions about image compressibility is avoided.
- RAKI is a non-linear reconstruction approach.
- Phantom experiments and in vivo imaging are performed, comparing RAKI to the GRAPPA parallel imaging approach.
Redundancies in the multi-coil acquisition
- Because the encoding process in a multi-coil MRI acquisition is linear, the reconstruction for sub-sampled data is also expected to be linear.
- This is the premise of linear parallel imaging methods, which aim to estimate the underlying reconstruction process that exploits redundancies in the multi-coil acquisition, using a linear system with a few degrees of freedom.
- These degrees of freedom are captured in
    - the small convolutional kernel sizes for GRAPPA
    - or the smooth coil sensitivity maps for SENSE
- In essence, linear functions with such limited degrees of freedom form a subset of all linear functions.
- In linear parallel imaging, the underlying reconstruction is approximated with a function from this restricted subset.
RAKI uses a non-linear mapping
- We hypothesize that this restricted, yet non-linear, function space defined through CNNs enables non-linear learning of the redundancies among coils, without learning specific k-space characteristics, which could lead to overfitting.
- This is similar to the GRAPPA reconstruction, which successfully extracts such redundancies with a linear procedure, without adapting to k-space signal characteristics.
- The idea of using a larger class of nonlinear functions in modeling redundancies among coils for GRAPPA has also been explored previously, although a fixed nonlinear kernel mapping was used instead of the deep learning strategies explored here.
- Therefore, modeling the multi-coil system with a non-linear approximation with few degrees of freedom, instead of a linear approximation with similarly few degrees of freedom, is hypothesized to extract comparable coil information without overfitting, while offering improved noise resilience.
Noise in the Calibration data & non-linear estimation
- Another reason for using non-linear estimation relates to the presence of noise in the calibration data.
- In the GRAPPA calibration formulation (written out as a least-squares problem below), there is noise in both the target data, $b$, and the calibration matrix, $A$.
- When both sources of noise are present, it has been shown that linear convolutional kernels incur a bias when estimated via least squares, and this bias leads to non-linear effects on the estimation of missing k-space lines from acquired ones.
- In the presence of such data imperfections, which are in addition to the model mismatches related to the degrees of freedom described earlier, a non-linear approximation procedure may improve reconstruction quality and reduce noise amplification.
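As a sketch of this point (with notation assumed here rather than quoted from the paper), the GRAPPA kernel $w$ is calibrated by a least-squares fit to the ACS data,

$$
\hat{w} = \arg\min_{w} \lVert A w - b \rVert_2^2 = (A^{H} A)^{-1} A^{H} b,
$$

where the rows of $A$ contain acquired source points and $b$ the corresponding target points from the ACS region. When $A$ itself is noisy (an errors-in-variables problem), $\hat{w}$ is biased, and this bias propagates non-linearly into the estimated missing k-space lines.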
CNN for MRI reconstruction in k-space
- RAKI replaces the linear estimation in GRAPPA that uses convolutional kernels with a non-linear estimation that uses CNNs.
- RAKI is designed to calibrate the CNN from ACS data without necessitating use of any external training database for learning.
CNNs operate only on real-valued numbers
- CNN literature and the associated deep learning procedures are commonly based on mappings over the real field.
- Therefore, before any processing, we map all the complex k-space data to real-valued numbers.
In RAKI, every layer uses kernel dilation so that only acquired k-space lines are processed; a sketch of the complex-to-real mapping follows below.
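A minimal sketch of one possible complex-to-real mapping, assuming the real and imaginary parts of each coil are stacked as separate channels (the exact channel layout used in the paper is not reproduced here):

```python
import numpy as np

def complex_to_real(kspace):
    """Map (ncoil, ny, nx) complex k-space to (2*ncoil, ny, nx) real channels."""
    return np.concatenate([kspace.real, kspace.imag], axis=0).astype(np.float32)

def real_to_complex(channels):
    """Inverse mapping back to complex k-space."""
    ncoil = channels.shape[0] // 2
    return channels[:ncoil] + 1j * channels[ncoil:]
```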
The main difference from GRAPPA
- GRAPPA: solve a linear least squares problem to calculate one set of convolutional kernels.
- RAKI: solve a non-linear least squares problem to calculate three sets of convolutional kernels. -> In other words, the parameters of three convolutional filters are simply found by gradient descent; a network sketch follows below.
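A minimal PyTorch sketch of such a scan-specific three-layer network; the layer widths, kernel sizes, and the choice to reconstruct all coils with a single network are simplifying assumptions (the paper calibrates a separate network for each coil):

```python
import torch
import torch.nn as nn

class RakiNet(nn.Module):
    """Three convolutional layers, ReLU after the first two, linear output layer.

    Input : (1, 2*ncoil, ny, nx) real-valued, undersampled k-space.
    Output: (1, (R-1)*2*ncoil, ny, nx) estimates of the missing lines.
    A dilation of R along the phase-encode axis keeps every kernel tap
    on acquired k-space lines only.
    """
    def __init__(self, ncoil, R):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * ncoil, 32, kernel_size=(3, 5), dilation=(R, 1), padding="same"),
            nn.ReLU(),
            nn.Conv2d(32, 8, kernel_size=(1, 1)),
            nn.ReLU(),
            nn.Conv2d(8, (R - 1) * 2 * ncoil, kernel_size=(3, 3), dilation=(R, 1), padding="same"),
        )

    def forward(self, x):
        return self.net(x)
```

The last layer has no ReLU, mirroring GRAPPA's role of outputting the interpolated k-space values directly.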
ACS data
- ACS data consisting of the 40 central k-space lines was used for training the CNNs in the RAKI reconstruction and for determining the convolutional kernels in the GRAPPA reconstruction.
- RAKI trains a 3-layer CNN, whereas GRAPPA determines linear convolutional kernels.
- Specifically, the CNNs in RAKI and the convolutional kernels in GRAPPA were estimated using ACS data from only one T1-weighted image.
- These CNNs and convolutional kernels were then used to reconstruct all 11 images with varying T1 weights.
- Therefore, this tested whether RAKI captured the coil information, as in GRAPPA, or over-fitted to specific image or k-space features.
GRAPPA (linear) vs. RAKI (non-linear)
- GRAPPA performs the parallel imaging reconstruction with subject-specific linear convolutional kernels.
- RAKI performs the k-space interpolation with a non-linear function generated by CNNs.
- Among the reduction factors $R \in \{1,2,3,4,5,6\}$, RAKI showed a noticeable advantage over GRAPPA in noise performance at $R = 4, 5, 6$ (in both phantom and in vivo imaging).
- In other words, RAKI is strong at noise reduction at high acceleration rates.
Immense MRI data
- Recent works have focused on training reconstruction algorithms on large amounts of data sets.
- Big data training has led to different regularization approaches for MR images, most often as regularizers without an explicit formula and based on CNNs, or as unsupervised learning of parameters for variational networks.
- RAKI took an MRI-focused approach and considered the problem as a k-space estimation problem instead of an image-domain regularization problem.
- This approach avoids dependency on large amounts of training data, and instead uses a small amount of ACS data, similar to existing parallel imaging reconstructions that are used in clinical practice.
- More than a decade ago, neural networks were used to improve scan-specific SENSE reconstruction, but those works did not use CNNs.
Sparsity
- The nature of the ReLU function enables sparsity in representations (see the definition below).
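For reference, the activation in question is

$$
\mathrm{ReLU}(x) = \max(0, x),
$$

so every negative pre-activation is mapped to exactly zero, which is what makes the intermediate feature maps sparse.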
Nonlinear methods for GRAPPA
- The use of nonlinear methods to improve GRAPPA reconstruction has been explored before but without the use of concepts from neural networks.
- A fixed set of virtual channels forming a higher-dimensional feature space, or transform-domain sparsity, has also been proposed before.
- RAKI differs from these techniques by refraining from any explicit assumptions on k-space or transform domains.
- RAKI instead uses neural networks to effectively represent a nonlinear function for estimating missing k-space data, successfully facilitating model-free reconstruction.
ACS data shared across multiple scans (multi-contrast MR reconstruction)
- In most applications where one is interested in these higher rates (e.g., diffusion, perfusion, or quantitative MRI), it is not difficult to obtain one set of high-quality ACS data of sufficient size, which can be shared across multiple scans.
- A sufficiently large set of ACS data from an image with a different contrast can therefore be acquired and reused (extendable to multi-contrast studies).
- However, if the CNN that processes the ACS data includes bias terms, the calibration becomes difficult to reuse across multi-contrast data.
- Because of the additive nature of the bias terms in CNNs, their inclusion increases the dependence on how the $l_2$ norm of the k-space is scaled.
- This creates major limitations when processing data with multiple varying contrasts, as in quantitative MR parameter mapping, when training is done on one set of ACS data with a specific contrast weighting.
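One way to see this scaling dependence (a sketch of the argument, not a quotation from the paper): a ReLU CNN $f$ without bias terms is positively homogeneous,

$$
f(\alpha x) = \alpha f(x) \quad \text{for } \alpha > 0,
$$

because convolutions are linear and $\mathrm{ReLU}(\alpha z) = \alpha\,\mathrm{ReLU}(z)$ for $\alpha > 0$. Once bias terms are added this no longer holds, so the learned interpolation becomes tied to the absolute $l_2$ scale of the ACS data it was calibrated on.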
Difference in computation between GRAPPA and RAKI
- GRAPPA: the calibration of the GRAPPA kernels can be done with a simple linear least-squares solve.
- RAKI: the CNN needs to be trained with gradient descent, as sketched below.
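A minimal sketch of that gradient-descent calibration, reusing the hypothetical `RakiNet` from above; the optimizer, learning rate, and epoch count are assumptions rather than the paper's settings:

```python
import torch

def train_raki(net, acs_in, acs_tgt, epochs=1000, lr=3e-3):
    """Scan-specific calibration: fit the CNN to the ACS data only.

    acs_in : (1, 2*ncoil, ny, nx) ACS input with the target lines zeroed out.
    acs_tgt: (1, (R-1)*2*ncoil, ny, nx) the held-out "missing" ACS lines.
    """
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = torch.mean((net(acs_in) - acs_tgt) ** 2)  # MSE over the ACS region
        loss.backward()
        opt.step()
    return net
```

By contrast, the GRAPPA kernels come from a single closed-form solve, e.g. `np.linalg.lstsq(A, b)` as in the earlier calibration sketch.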
RAKI summary
- a scan-specific deep learning approach with convolutional neural networks trained on limited ACS data to improve on the reconstruction quality of linear k-space interpolation-based methods (such as GRAPPA).