Shape analysis has attracted much attention for assessing the morphometry of human organs under pathological conditions, such as neurodegenerative diseases [1,2] or lung cancer [3], owing to its potential to detect and localize structural abnormalities of human organs. A prerequisite for surface-based morphometry studies is to represent the anatomical shape characteristics of human organs in a computational shape model. In regard to shape modeling for morphometry studies, the following issues arise: 1) how to robustly restore the individual shape details against large variations of shape and size, and 2) how to guarantee the anatomical point correspondences across many subjects' data for comparisons.
For modeling the individual shape of human organs, several approaches have been proposed during the past decade. The point distribution model (PDM) for shape analysis was first investigated in [4], and many modeling methods based on the PDM have been proposed [5]. For example, [6] introduced a 3D atlas-based PDM construction using volumetric non-rigid registration. In addition, [7,8] used a closest point propagation algorithm to locate the anatomical landmarks on the surface boundary of a target structure in segmented images. However, these approaches usually focus on optimizing the PDM by selecting an adequate number of landmarks and their positions for statistical shape modeling. From the viewpoint of a clinical morphometry study, the individual shape of the target structure needs to be regularly and consistently sampled across the group of subjects to accurately capture the shape characteristics and to make point-wise comparisons among the subjects. For this reason, surface parameterization methods, such as spherical harmonics mapping (SPHARM) [9], have been applied in shape analysis approaches [3,9,10]. However, most of these parameterization methods are limited to genus-zero topology [5].
In this paper, we propose a progressive weighting scheme for the rigidity parameter within the Laplacian deformation framework for robust shape modeling of human organs. In our modeling method, a template mesh model represents the generic shape of the organ of interest. Each vertex of the template model is iteratively attracted to the image boundaries of the target organ by deforming the template model under the Laplacian deformation framework [11,12]. Within this framework, we implement a weighting scheme for the model parameters to flexibly control the rigidity of the deformable model in a step-wise manner during the shape recovery process. The purpose of this weighting scheme is to preserve the proportional distances between vertices of the template model against shape and size variations. Our deformation framework helps to build inter-subject shape correspondences by maintaining a consistent point (vertex) distribution in the template model.
In our approach, the template model is used as a shape prior representing the anatomical shape characteristics of the target structure for individual shape reconstruction. In addition, it also serves as the PDM containing the anatomical landmarks for shape analysis. For the construction of the template model, we employed the strategy of mean shape image construction using non-rigid image registration between the segmented images [13,14]. From the mean shape image, the template model can be constructed using the marching cubes and mesh decimation algorithms. To obtain a more uniform distribution of points in the template model for shape analysis, a surface re-tiling algorithm [15] or mesh regularization [16] can be applied after the construction of the template model.
In the following sections, we first describe our shape modeling method using the Laplacian deformation framework and its characteristics. Next, we present how the template model is deformed under the flexible control of the modeling parameters, termed the progressive weighting scheme, for robust shape modeling with area-preserving deformation.
A. Laplacian Deformation Framework
After the template model is placed close to the target structure in the image space, the template model is deformed toward the image boundary while preserving the proportional distances to neighboring vertices. The model placement in the image space can be achieved by matching the center of mass and the principal axes between the template model and the target in the segmented images. Our model deformation is based on a Laplacian surface representation, which encodes the local shape features of the template model as the difference between the position of each vertex and the center of mass of its neighboring vertices.
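The placement step can be sketched as a simple PCA-based alignment. The following is an illustrative implementation, not the authors' code; the function name and the SVD-based axis extraction are our assumptions, and the sign ambiguity of principal axes is only partially handled:

```python
import numpy as np

def align_by_mass_and_axes(template_pts, target_pts):
    """Place the template onto the target by matching the center of mass
    and the principal axes of the two point sets (PCA alignment).

    template_pts: (N, 3) template vertex positions
    target_pts:   (M, 3) points sampled from the target segmentation
    Returns the transformed template points.
    """
    mu_s, mu_t = template_pts.mean(axis=0), target_pts.mean(axis=0)
    # Principal axes from the SVD of the centered point sets.
    _, _, Vs = np.linalg.svd(template_pts - mu_s, full_matrices=False)
    _, _, Vt = np.linalg.svd(target_pts - mu_t, full_matrices=False)
    R = Vt.T @ Vs                 # rotate template axes onto target axes
    if np.linalg.det(R) < 0:      # enforce a proper rotation (no reflection)
        Vs[-1] *= -1.0
        R = Vt.T @ Vs
    return (template_pts - mu_s) @ R.T + mu_t
```

In practice the sign of each principal axis is ambiguous, so implementations typically disambiguate the axes (e.g., using higher-order moments) before applying the rotation.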
Let $M = (V, E)$ be a surface representation as a graph, where $V = \{v_1, \ldots, v_n\}$ is the set of vertices on the surface and $E$ is the connectivity between the vertices and their adjacent vertices. The differential coordinates of a vertex $v_i$ are formulated as a form of a discrete Laplacian operator,

$$\delta_i = v_i - \sum_{j \in N(i)} w_{ij}\, v_j,$$

where $N(i)$ is the set of the connected vertices in the N-ring neighborhood of $v_i$, and $w_{ij}$ are weighting parameters with $\sum_{j \in N(i)} w_{ij} = 1$; a common choice is the uniform weight $w_{ij} = 1/d_i$, where $d_i$ is the valence of $v_i$. The least-squares solution of this deformable model smoothly distributes the geometric errors across the entire domain. Fig. 1 demonstrates how the geometric errors are distributed across the surface model with respect to the model rigidity parameter $\lambda$, which weights the Laplacian (shape-preserving) term against the positional constraints in the least-squares system.
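The per-coordinate least-squares system can be sketched as follows. This is a dense, illustrative implementation with uniform weights and soft positional constraints; the names and the exact weighting are our assumptions, not the authors' code:

```python
import numpy as np

def solve_laplacian_deform(verts, neighbors, handles, lam):
    """Solve min_x ||lam * (L x - delta)||^2 + sum_c ||x_c - u_c||^2.

    verts:     (n, 3) rest positions of the template vertices
    neighbors: list of neighbor index lists (the connectivity E)
    handles:   dict {vertex index: target position} acting as soft
               positional constraints (e.g., detected image boundaries)
    lam:       rigidity weight; larger values favor preserving the
               differential coordinates (the original local shape)
    """
    n = len(verts)
    L = np.eye(n)
    for i, nbrs in enumerate(neighbors):
        L[i, nbrs] = -1.0 / len(nbrs)   # uniform weights w_ij = 1/d_i
    delta = L @ verts                    # differential coordinates
    # Positional constraint rows: one row per handle vertex.
    C = np.zeros((len(handles), n))
    u = np.zeros((len(handles), 3))
    for r, (i, pos) in enumerate(handles.items()):
        C[r, i] = 1.0
        u[r] = pos
    # Stack Laplacian and constraint rows; solve in the least-squares sense.
    A = np.vstack([lam * L, C])
    b = np.vstack([lam * delta, u])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With a large rigidity weight the solution stays close to the template's local shape; with a small weight the positional constraints dominate and local details are recovered. A production implementation would use sparse matrices rather than dense ones.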
B. Model Deformation using Progressive Weighting Scheme
Our strategy for preserving the point distribution in the template model is to start the model deformation with a large value of the model rigidity parameter and to decrease it progressively as the iterations proceed, so that global shape differences are resolved first under strong rigidity and local details are recovered later under weaker rigidity.
Fig. 2 shows an example of model deformation with the progressive weighting scheme. Note that we used a sphere as a template model to demonstrate how the shape features are recovered under the progressive weighting scheme in this snowman case. At earlier iterations, the template model is transformed to fit the larger shape features of the target structure under a large value of the rigidity parameter; as the parameter is progressively reduced, the model deforms to capture the shape details in smaller regions. The plots of model rigidity weights and vertex displacement in Fig. 2 show that the reduced rigidity weight permits larger local vertex displacements, allowing the model to fit fine shape features.
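The progressive weighting idea can be illustrated with a simple geometric decay schedule. The schedule form and the constants below are our illustrative assumptions, not the paper's exact formula:

```python
def rigidity_schedule(lam_start=20.0, lam_min=1.0, decay=0.8, n_iters=15):
    """Yield a monotonically decreasing rigidity weight per iteration.

    Early iterations use a large weight so the template deforms almost
    rigidly toward the target's global shape; later iterations relax the
    weight so vertices can move locally to capture fine details.
    """
    lam = lam_start
    for _ in range(n_iters):
        yield lam
        lam = max(lam_min, lam * decay)  # geometric decay, clamped below
```

Each yielded weight would be used for one round of the Laplacian least-squares solve before the boundary attractions are re-estimated.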
In our approach, we also apply a strong constraint on the image boundary search along each vertex normal to prevent self-intersections in the template model, which can be caused by improper boundary decisions. In particular, when the target structure has a non-zero genus topology, the distance threshold for the boundary search is not easy to determine, because the size of the holes in the target structure varies across subjects. Moreover, self-intersections can occur at holes when the distance threshold is larger than the hole size, since the search then refers to the image boundary on the opposite side of a hole. To prevent this problem, we first perform an intersection test between the template model and the vertex-wise search lines cast from each vertex along its normal direction.
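A simplified sketch of the normal-direction boundary search with a distance threshold follows. The nearest-voxel lookup and the step size are illustrative assumptions, and the intersection test against the template mesh itself is omitted here:

```python
def find_boundary_along_normal(vol, origin, normal, max_dist, step=0.25):
    """March from `origin` along `normal` (both in voxel coordinates) and
    return the first position where the binary volume value changes,
    or None if no boundary is found within `max_dist`.

    vol: 3D nested list (or array) of 0/1 voxel values, indexed [x][y][z].
    """
    def sample(p):
        x, y, z = (int(round(c)) for c in p)
        if 0 <= x < len(vol) and 0 <= y < len(vol[0]) and 0 <= z < len(vol[0][0]):
            return vol[x][y][z]
        return 0  # outside the image is treated as background

    inside = sample(origin)
    t = step
    while t <= max_dist:
        p = [o + t * n for o, n in zip(origin, normal)]
        if sample(p) != inside:   # value flipped: crossed the boundary
            return p
        t += step
    return None                   # respect the distance threshold
```

Limiting `max_dist` to less than the expected hole size is what prevents a vertex from latching onto the boundary on the far side of a hole.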
We performed experiments using synthetic data to evaluate our shape modeling framework with respect to the two goals of this research: 1) to accurately restore the individual shape of targets and 2) to build correct shape correspondences across subjects. In this study, we applied our modeling method to the second cervical vertebra (C2) of humans to validate the robustness of our method on human organs of non-spherical topology, because C2 has an anatomically complex shape with three holes. In the experiment, we first extracted the surface mesh of C2 from a manually segmented CT image. We then built synthetic binary images that include different shapes and sizes of C2. To construct the binary images, we applied uniform scale transformations in cases 1 and 2 and non-uniform scale transformations in cases 3 and 4. In cases 5 and 6, we additionally rotated the models about the x- and y-axis, respectively, after applying the non-uniform scale transformations. Then, we converted the transformed models into binary images (voxel spacing: 0.2 × 0.2 × 0.2) to perform individual shape modeling. We designed this configuration of the binary image construction to evaluate our non-rigid modeling method over graded size and shape differences. For example, the binary images of cases 1 and 2 include only size differences of C2, whereas the images of cases 5 and 6 include both size and shape differences of C2 due to non-uniform scaling and rotation. Table 1 describes the scale and rotation factors applied in each case; the volume of the target surface in each case is presented in the same table.

Table 1. Description of the affine transformations used to construct the synthetic volume data of the second cervical vertebra (C2).

With the synthetic binary images, we performed the non-rigid deformation of the template model using our method.
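The synthetic cases can be generated with simple affine matrices; the scale factors and angles below are placeholders, not the values from Table 1:

```python
import numpy as np

def make_affine(scale, rot_axis=None, angle_deg=0.0):
    """Build a 4x4 homogeneous affine matrix: a per-axis scaling followed
    by an optional rotation about the x- or y-axis, as used to construct
    the synthetic C2 volumes."""
    sx, sy, sz = scale
    S = np.diag([sx, sy, sz, 1.0])
    R = np.eye(4)
    if rot_axis is not None:
        c = np.cos(np.radians(angle_deg))
        s = np.sin(np.radians(angle_deg))
        if rot_axis == "x":
            R[1:3, 1:3] = [[c, -s], [s, c]]
        elif rot_axis == "y":
            R[0, 0], R[0, 2] = c, s
            R[2, 0], R[2, 2] = -s, c
    return R @ S   # scale first, then rotate
```

The resulting matrix would be applied to the template surface before voxelization into a binary image.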
The first row in Fig. 3 shows the initial state of the template model and the target surfaces in each case, and the second row shows the results of the individual shape modeling using the progressive weighting scheme. Owing to the scaling and rotation, the size and shape of C2 were largely altered from those of the template model (see the first row in Fig. 3). To visually evaluate the anatomical shape correspondences, we also assigned anatomical landmarks to selected vertices in the template model and displayed the positions of the same vertices in the result models reconstructed from the synthetic images. The anatomical landmarks were correctly aligned at the corresponding regions of each result model by our method.
We also compared the modeling results obtained with the progressive weighting scheme and with fixed weight parameters on the binary image of case 6, which has the largest differences in the size and shape of C2. Fig. 4 shows the results obtained using fixed weight parameters (1.0 and 20.0 for the model rigidity parameter).
To quantitatively compare the results in all cases in terms of shape restoration accuracy, we measured the volume overlap between the results and the binary images. In addition, the edge length and the variation of the point distribution along the x-, y-, and z-axis were measured in the modeling results to assess the inter-subject shape correspondences. Table 2 shows the results of the volume overlap, edge length, and point distribution measurements. First, we calculated the volume overlap using the similarity index between the binary images and the result models. The similarity index (SI) is defined as $SI = 2|A \cap B| / (|A| + |B|)$, where $A$ and $B$ denote the sets of foreground voxels in the binary image and the voxelized result model, respectively.
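The similarity index is the standard Dice overlap between two binary volumes; a minimal sketch:

```python
def similarity_index(a, b):
    """Dice similarity index SI = 2|A ∩ B| / (|A| + |B|) for two binary
    volumes given as flat iterables of 0/1 voxel labels."""
    a, b = list(a), list(b)
    inter = sum(1 for x, y in zip(a, b) if x and y)
    return 2.0 * inter / (sum(a) + sum(b))
```

An SI of 1.0 means the voxelized result model exactly covers the target segmentation.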
Table 2. Edge length and point distribution along the x-, y-, and z-axis in the result models. Volumes are given in mm³ and edge lengths in mm. The relative standard deviation (SD) is the ratio of the SD of each vertex coordinate to that of the template model.
Despite the size and shape differences in each case, we found that the deformed models covered the regions of C2 in the binary images very well (the minimum similarity index across cases is reported in Table 2).
We investigated the edge length between all connected vertices; the average and standard deviation (SD) of the edge length changed according to the volume variation between cases. Our non-rigid deformation method preserved the relative distances between vertices well against shape and size variations. We also verified the preservation of the proportional point distribution in our method by the relative SD of the vertex positions along each axis. The relative SD is defined as

$$\mathrm{SD}_{\mathrm{rel}} = \sigma(V) / \sigma(\hat{V}),$$

computed independently along the x-, y-, and z-axis, where $V$ denotes the positions of the vertices in the modeling results and $\hat{V}$ denotes the positions of the vertices in the template model. Since we applied non-uniform scale factors in the latter cases, we used the relative SD to check the point distribution along each axis rather than explicitly comparing the vertex positions between the non-rigid modeling results and the transformed template model. As a result, the values of the relative SD in each case were very similar to the scale factors applied to the template model for binary image construction. This indicates that our non-rigid modeling method preserved the original point distribution of the template model against large changes in the size and shape of C2 in each case.
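The relative SD can be computed per axis as a simple ratio of standard deviations (a NumPy sketch; the vertex-array layout is our assumption):

```python
import numpy as np

def relative_sd(result_verts, template_verts):
    """Per-axis ratio of the SD of the result vertex coordinates to the
    SD of the corresponding template vertex coordinates.

    result_verts, template_verts: (n, 3) arrays with matching vertex order.
    Returns a length-3 array (x, y, z); values close to the applied scale
    factors indicate that the point distribution was preserved.
    """
    return result_verts.std(axis=0) / template_verts.std(axis=0)
```

For a model that is a purely scaled copy of the template, the relative SD equals the per-axis scale factors exactly.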
For surface-based morphometry studies, it is important to robustly restore the individual shape details of the target structure in each subject and to build the anatomical point correspondences across subjects for shape comparisons between them. For these purposes, we propose an organ shape modeling method based on the Laplacian deformation framework. In particular, we implemented a progressive weighting scheme that controls the rigidity parameter of the deformable model during the deformation process to capture the shape details while preserving the relative distances of neighboring vertices in the deformable model as much as possible. With our method, the geometric errors between the template model and the target surface in the initial state are smoothly distributed across the entire domain during the model deformation process. This allows us to obtain robust shape modeling results that represent the accurate shape features of the target structure with correct shape correspondences between subjects. Moreover, our modeling method is not limited to organs of spherical topology. This feature promises to extend the range of applications for morphometry studies.