Organ Shape Modeling Based on the Laplacian Deformation Framework for Surface-Based Morphometry Studies

  • ABSTRACT

    Recently, shape analysis of human organs has attracted much attention, owing to its potential to localize structural abnormalities. For a group-wise shape analysis, it is important to accurately restore the shape of the target structure in each subject and to build inter-subject shape correspondences. To accomplish this, we propose a shape modeling method based on the Laplacian deformation framework. We deform a template model of the target structure in the segmented images while restoring subject-specific shape features by using the Laplacian surface representation. In order to build inter-subject shape correspondences, we implemented a progressive weighting scheme for adaptively controlling the rigidity parameter of the deformable model. This weighting scheme helps to preserve the relative distance between the points of the template model as much as possible during model deformation. This area-preserving deformation allows each point of the template model to be located at an anatomically consistent position on the target structure. Another advantage of our method is its applicability to human organs of non-spherical topology. We present experiments evaluating the robustness of the shape modeling against large variations in shape and size, using synthetic datasets of the second cervical vertebra (C2), which has a complex shape with holes.


  • KEYWORD

    Shape modeling, Laplacian deformation framework, Progressive weighting scheme, Surface-based shape analysis

  • I. INTRODUCTION

    Shape analysis has attracted much attention as a means of assessing the morphometry of human organs affected by pathological conditions, such as neurodegenerative diseases [1,2] or lung cancer [3], owing to its potential to detect and localize structural abnormalities of human organs. Surface-based morphometry studies require the anatomical shape characteristics of human organs to be represented in a computational shape model. In regard to shape modeling for morphometry studies, the following issues arise: 1) robustly restoring the individual shape details against large variations of shape and size, and 2) guaranteeing anatomical point correspondences across many subjects' data for comparison.

    For modeling the individual shape of human organs, several approaches have been proposed during the past decade. The point distribution model (PDM) for shape analysis was first investigated in [4], and many modeling methods based on the PDM have been proposed [5]. For example, [6] introduced a 3D atlas-based PDM construction using volumetric non-rigid registration. In addition, [7,8] used the closest-point propagation algorithm to locate anatomical landmarks on the surface boundary of a target structure in segmented images. However, these approaches usually focus on optimizing the PDM by selecting an adequate number of landmarks and their positions for statistical shape modeling. From the viewpoint of a clinical morphometry study, the individual shape of the target structure needs to be regularly and consistently sampled across the group of subjects to accurately capture the shape characteristics and to enable point-wise comparisons among the subjects. For this reason, surface parameterization methods, such as spherical harmonics mapping (SPHARM) [9], have been applied in shape analysis approaches [3,9,10]. However, most of these parameterization methods are limited to genus-zero topology [5].

    In this paper, we propose a progressive weighting scheme for the rigidity parameter within the Laplacian deformation framework for the robust shape modeling of human organs. In our modeling method, a template mesh model represents the generic shape of the organ of interest. Each vertex of the template model is iteratively attracted toward the image boundaries of the target organ by deforming the template model under the Laplacian deformation framework [11,12]. Within this framework, we implement a weighting scheme for the model parameters to flexibly control the rigidity of the deformable model in a step-wise manner during the shape recovery process. The purpose of this weighting scheme is to preserve the proportional distance between vertices of the template model against shape and size variations. Our deformation framework helps to build inter-subject shape correspondences by maintaining a consistent point (vertex) distribution of the template model.

    II. METHOD

    In our approach, the template model is used as a shape prior representing the anatomical shape characteristics of the target structure for individual shape reconstruction. It also serves as the PDM containing the anatomical landmarks for shape analysis. For the construction of the template model, we employed the strategy of mean shape image construction using non-rigid image registration between the segmented images [13,14]. From the mean shape image, the template model can be constructed by the marching cubes algorithm followed by mesh decimation. To obtain a more uniform distribution of points in the template model for shape analysis, a surface re-tiling algorithm [15] or mesh regularization [16] can be applied after the construction of the template model.
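    As a concrete illustration of this construction step, the sketch below extracts a surface from a binary mean shape image with marching cubes and simplifies it by quadric decimation. It is only a minimal sketch: the scikit-image and Open3D calls, the 0.5 iso-level, and the target triangle count are assumptions standing in for whichever implementations and settings were actually used.

```python
import numpy as np
from skimage import measure
import open3d as o3d

def build_template(mean_shape_volume, spacing=(1.0, 1.0, 1.0), target_triangles=5000):
    """Extract a surface from the mean shape image and simplify it."""
    # Marching cubes on the (binary) mean shape image, iso-surface at 0.5.
    verts, faces, _, _ = measure.marching_cubes(
        mean_shape_volume.astype(float), level=0.5, spacing=spacing)

    # Wrap the result as an Open3D mesh for decimation.
    mesh = o3d.geometry.TriangleMesh(
        o3d.utility.Vector3dVector(verts),
        o3d.utility.Vector3iVector(faces.astype(np.int32)),
    )
    # Quadric decimation reduces the triangle count while keeping the overall shape.
    mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=target_triangles)
    mesh.remove_degenerate_triangles()
    return np.asarray(mesh.vertices), np.asarray(mesh.triangles)
```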

    In the following sections, we first describe our shape modeling method using the Laplacian deformation framework and its characteristics. Next, we present how the template model is deformed under the flexible control of the modeling parameters, called the progressive weighting scheme, for robust shape modeling with area-preserving deformation.

      >  A. Laplacian Deformation Framework

    After the template model is placed close to the target structure in the image space, the template model is deformed toward the image boundary while preserving the proportional distance to neighboring vertices. The model placement in the image space can be achieved by matching the center of mass and the principal axes between the template model and the target in the segmented images. Our model deformation is based on the Laplacian surface representation, which encodes the local shape features of the template model as the difference between the position of each vertex and the centroid of its neighboring vertices.
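    The following sketch illustrates one simple way to perform such an initial placement by matching centroids and principal axes (taken here as the eigenvectors of the point covariance); the function name, the reflection handling, and the use of raw boundary points as the target are illustrative assumptions, not the exact procedure used in the paper.

```python
import numpy as np

def initial_placement(template_verts, target_points):
    """Roughly align the template to the target by matching centroids and principal axes."""
    def principal_axes(points):
        centered = points - points.mean(axis=0)
        # Eigenvectors of the covariance matrix give the principal axes (columns, ascending).
        _, vecs = np.linalg.eigh(np.cov(centered.T))
        return vecs

    c_src, c_dst = template_verts.mean(axis=0), target_points.mean(axis=0)
    R = principal_axes(target_points) @ principal_axes(template_verts).T
    if np.linalg.det(R) < 0:          # avoid a reflection from the axis sign ambiguity
        R[:, 0] *= -1.0
    return (template_verts - c_src) @ R.T + c_dst
```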

    Let M = (V, E) be a surface representation as a graph, where V = {v1, ..., vn} is the set of vertices on the surface and E is the connectivity between the vertices and their adjacent vertices. The differential coordinates of a vertex are formulated as a form of the discrete Laplacian operator,

    \delta_i = v_i - \frac{1}{d_i} \sum_{v_j \in N(v_i)} v_j,     (1)

    where N(vi) = {vj | (vi, vj) ∈ E} is the set of connected vertices in the N-ring neighborhood of vi, and di = |N(vi)| is the number of adjacent vertices.
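    A minimal sketch of the uniform discrete Laplacian and the differential coordinates of Equation (1), assuming the template mesh is given as NumPy vertex and triangle arrays; the function name and the SciPy sparse construction are illustrative choices.

```python
import numpy as np
import scipy.sparse as sp

def uniform_laplacian(n_verts, faces):
    """Uniform discrete Laplacian L = I - D^{-1} A built from triangle connectivity."""
    i = np.concatenate([faces[:, 0], faces[:, 1], faces[:, 2]])
    j = np.concatenate([faces[:, 1], faces[:, 2], faces[:, 0]])
    rows, cols = np.concatenate([i, j]), np.concatenate([j, i])   # undirected edges

    A = sp.coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n_verts, n_verts)).tocsr()
    A.data[:] = 1.0                                 # collapse duplicate edge entries to 1
    d = np.asarray(A.sum(axis=1)).ravel()           # vertex degrees d_i
    return sp.identity(n_verts) - sp.diags(1.0 / d) @ A

# Differential coordinates (Eq. 1): delta_i = v_i - (1/d_i) * sum of neighbouring vertices.
# verts: (n, 3) float array, faces: (m, 3) int array of the template mesh.
# L = uniform_laplacian(len(verts), faces)
# delta = L @ verts
```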

    The shape restoration in the target images is accomplished by finding the optimal vertex positions that minimize the following energy function of two error terms,

    E(V') = \alpha^2 \left\| L V' - \Delta \right\|^2 + \beta^2 \sum_{i=1}^{m} \left\| v'_i - (v_i + c_i) \right\|^2,     (2)

    with weighting parameters denoted by α and β. α is a parameter controlling the model rigidity, and β is a parameter weighting the external reference, ci, which is the vector from each vertex to the closest image boundary of the target structure in the images. The first term in Equation (2) represents the error of the local shape changes, caused by vertex translation, in differential coordinates. The second term is the geometric error between the current positions of the vertices and the closest image boundary. L is the n × n matrix of the discrete Laplacian operator, where n is the number of vertices, Δ = [δ1, ..., δn]T is the matrix of the differential coordinates, and V = [v1, ..., vn]T is the vector of all vertices. m is the number of vertices at which the closest boundary is found in the vertex normal or opposite direction. In practical applications, the optimal V' can be obtained by solving the linear system AV' = B,

    A = \begin{bmatrix} \alpha L \\ \beta I \end{bmatrix}, \qquad B = \begin{bmatrix} \alpha \Delta \\ \beta C \end{bmatrix},

    where I is an m × n matrix in which each row has one element equal to 1 and all others equal to 0; the column index of the element equal to 1 is determined by the index of the vertex at which the closest boundary is found, and C is the m × 3 matrix of the corresponding closest boundary positions. This is an overdetermined linear system, as the system matrix is a (n + m) × n matrix, and it has a unique solution obtained by solving the associated normal equation (AT A)V' = AT B in the least-squares sense. In this deformation framework, we iteratively find the image boundary at each vertex and solve this system to fit the template model to the target structure in the images.
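    The sketch below assembles and solves the stacked system AV' = B for one iteration, assuming the Laplacian L and differential coordinates Δ from the previous sketch and a precomputed list of constrained vertices with their closest boundary positions. Solving with SciPy's lsqr per coordinate minimizes the same least-squares objective as the normal equation above; the function and argument names are illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def solve_iteration(L, delta, constrained_idx, boundary_targets, alpha, beta):
    """One Laplacian-deformation solve: least-squares minimisation of
    ||alpha (L V' - Delta)||^2 + sum_i ||beta (v'_i - t_i)||^2,
    where t_i are the closest boundary positions of the constrained vertices."""
    n = L.shape[0]
    m = len(constrained_idx)

    # Selector matrix I (m x n): one element equal to 1 per row, at the constrained vertex's column.
    I_sel = sp.coo_matrix((np.ones(m), (np.arange(m), constrained_idx)), shape=(m, n)).tocsr()

    A = sp.vstack([alpha * L, beta * I_sel]).tocsr()          # (n + m) x n system matrix
    B = np.vstack([alpha * delta, beta * boundary_targets])   # (n + m) x 3 right-hand side

    # Solve the overdetermined system column-wise (x, y, z) in the least-squares sense.
    return np.column_stack([lsqr(A, B[:, k])[0] for k in range(3)])
```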

    The least-squares solution of this deformable model smoothly distributes the geometric errors across the entire domain. Fig. 1 demonstrates how the geometric errors are distributed across the surface model with respect to the model rigidity parameter, α. We set a reference point, which drives the translation of the closest vertex. We also add positional constraints that fix the other vertices at their current positions at each iteration. When α is set to a small value (α = 1.0), the deformation occurs in a small region of the surface model, and the distribution of points (vertices) is more skewed than in the results obtained with larger values of α. As α gets larger, more vertices are translated together while preserving the relative distance to their neighbors. In other words, the geometric errors are distributed across more vertices. However, in practical applications, larger values of α can cause insufficient model propagation to the boundaries, and they can generate surfaces that are too smooth to represent the local shape characteristics of the target structure in the images. Based on this behavior of the Laplacian deformation framework, we implement a flexible weighting scheme for the model rigidity to minimize the distortion of the point distribution of the template model during the model deformation process.

      >  B. Model Deformation using Progressive Weighting Scheme

    Our strategy for preserving the point distribution in the template model is to start the model deformation with a larger value of α and to maintain that value until the template model deforms by less than a threshold. When the template model no longer deforms at a given value of α, we reduce α in a step-wise manner to drive the model deformation in smaller regions of the template model. Let Edeform be the amount of model deformation, formulated as

    E_{deform} = \frac{1}{n} \sum_{i=1}^{n} \left\| v'_i - v_i \right\|,

    where v'i is the position of each vertex transformed after each iteration. When Edeform is smaller than a threshold ε, we reduce α by the following equation,

    \alpha = \alpha_{max} - \frac{\kappa}{\kappa_{max}} \left( \alpha_{max} - \alpha_{min} \right),

    where αmax and αmin are the maximum and minimum values of α, κmax is a scalar that subdivides the range of α into smaller intervals, and κ is a counter increased by 1 at each reduction step. αmin determines the surface quality of the resulting models.
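    Putting the pieces together, the loop below sketches the progressive weighting schedule: α starts at αmax and is reduced step-wise whenever the mean vertex displacement falls below ε. It reuses solve_iteration from the earlier sketch and takes the closest-boundary search as a user-supplied callable (a possible form of which is sketched later in this section); all default parameter values are illustrative assumptions.

```python
import numpy as np

def fit_with_progressive_weights(L, delta, verts, find_boundary_targets,
                                 alpha_max=50.0, alpha_min=1.0, kappa_max=10,
                                 beta=1.0, eps=1e-3, max_iters=500):
    """Deform the template, lowering the rigidity alpha step-wise whenever deformation stalls."""
    alpha, kappa = alpha_max, 0
    for _ in range(max_iters):
        # Closest-boundary search (returns constrained vertex indices and target positions).
        idx, targets = find_boundary_targets(verts)
        # delta stays fixed: it encodes the local shape features of the original template.
        new_verts = solve_iteration(L, delta, idx, targets, alpha, beta)

        e_deform = np.mean(np.linalg.norm(new_verts - verts, axis=1))
        verts = new_verts

        if e_deform < eps:                 # deformation has stalled at the current alpha
            if alpha <= alpha_min:
                break                      # finished at the smallest rigidity
            kappa += 1
            # Step-wise reduction of alpha from alpha_max towards alpha_min.
            alpha = max(alpha_min, alpha_max - (kappa / kappa_max) * (alpha_max - alpha_min))
    return verts
```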

    Fig. 2 shows an example of model deformation with the progressive weighting scheme. Note that we used a sphere as the template model to demonstrate how shape features are recovered under the progressive weighting scheme in this snowman example. At earlier iterations, the template model is transformed to fit the larger shape features of the target structure with a large value of α (α = 50). At the same time, our deformable model keeps the relative geometric position between each vertex and its neighbors. When the model deforms by less than the threshold after some iterations, α is reduced by our weighting scheme, which drives model deformation in smaller regions. The plots of the model rigidity weight and the vertex displacement in Fig. 2 show that the reduced α causes additional model deformation (see the peaks after the dotted lines in Fig. 2). This progressive adjustment of α is repeated until the end of the model deformation. Our progressive weighting scheme thus yields more robust shape reconstruction and inter-subject shape correspondences.

    In our approach, we also apply a strong constraint on the image boundary search along the vertex normal to prevent self-intersection of the template model, which can be caused by improper boundary decisions. In particular, when the target structure has a non-zero genus topology, the distance threshold for the boundary search is not easy to determine, because the size of the holes in the target structure varies across subjects. In addition, self-intersection can occur at holes when the distance threshold is larger than the size of a hole, since the search may then refer to the image boundary on the opposite side of the hole. To prevent this problem, we first perform an intersection test between the template model and the vertex-wise line from vi + ni·tsampling to vi + ni·tboundary, where ni is the vertex normal of vertex vi, and tboundary and tsampling are the distance threshold and the sampling rate for the boundary search, respectively. When this line crosses the template model at a vertex, we do not apply the external reference, ci in Equation (2), at that vertex. During model deformation, tboundary is reduced according to the model rigidity to resume the boundary search and to recover local shape variations in smaller regions.
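    A sketch of this guarded boundary search is given below, assuming the target is available as a binary volume and using trimesh for the segment-versus-mesh intersection test; only the outward normal direction is shown, and the voxel indexing convention, sampling loop, and function name are simplifying assumptions rather than the paper's implementation. Its output (constrained vertex indices and boundary target positions) matches what the earlier solve_iteration sketch expects.

```python
import numpy as np
import trimesh

def boundary_targets_with_guard(verts, faces, normals, binary_image, spacing,
                                t_boundary=5.0, t_sampling=0.2):
    """Search for the target boundary along each vertex normal, skipping vertices whose
    search segment would cross the template mesh itself (self-intersection guard)."""
    mesh = trimesh.Trimesh(vertices=verts, faces=faces, process=False)
    steps = np.arange(t_sampling, t_boundary + 1e-6, t_sampling)
    spacing = np.asarray(spacing, dtype=float)

    def inside(point):
        # Nearest-voxel lookup; assumes the array is indexed (x, y, z) in world/spacing units.
        q = np.round(np.asarray(point) / spacing).astype(int)
        if np.any(q < 0) or np.any(q >= binary_image.shape):
            return False
        return bool(binary_image[tuple(q)])

    idx, targets = [], []
    for i, (v, n) in enumerate(zip(np.asarray(verts), np.asarray(normals))):
        # Guard: does the segment [v + t_sampling*n, v + t_boundary*n] hit the template itself?
        locs, _, _ = mesh.ray.intersects_location(
            ray_origins=[v + t_sampling * n], ray_directions=[n])
        if len(locs) and np.linalg.norm(locs - v, axis=1).min() <= t_boundary:
            continue  # drop the external reference c_i for this vertex

        # Sample the binary volume along the normal; the first in/out transition is the boundary.
        prev = inside(v)
        for t in steps:
            cur = inside(v + t * n)
            if cur != prev:
                idx.append(i)
                targets.append(v + t * n)
                break
            prev = cur
    return np.asarray(idx), np.asarray(targets)
```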

    III. EXPERIMENTAL RESULTS

    We performed experiments using synthetic data to evaluate our shape modeling framework with respect to the two goals of this research: 1) to accurately restore the individual shape of targets and 2) to build correct shape correspondences across subjects. In this study, we applied our modeling method to the second cervical vertebra (C2) of humans to validate the robustness of our method on human organs of non-spherical topology, because C2 has an anatomically complex shape with three holes. In the experiment, we first extracted the surface mesh of C2 from a manually segmented CT image. We then built synthetic binary images that include different shapes and sizes of C2. To construct the binary images, we applied uniform scale transformations in cases 1 and 2 and non-uniform scale transformations in cases 3 and 4. In cases 5 and 6, we additionally rotated the models along the x- and y-axis, respectively, after applying the non-uniform scale transformations. Then, we converted the transformed template models into binary images (voxel spacing: 0.2 × 0.2 × 0.2) for individual shape modeling. We designed this configuration of binary image construction to evaluate our non-rigid modeling method under increasing degrees of size and shape difference. For example, the binary images of cases 1 and 2 include only size differences of C2, whereas the images of cases 5 and 6 include both size and shape differences of C2 through non-uniform scaling and rotation. Table 1 describes the scale and rotation factors applied in each case, together with the volume of the target surface in each case. With the synthetic binary images, we performed the non-rigid deformation of the template model using our method.
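    For illustration, the snippet below generates a comparable synthetic binary volume by applying the scale and rotation directly to a binary segmentation with scipy.ndimage, which is a volumetric alternative to the mesh-transform-then-voxelize route described above; the Euler-angle convention and centre-of-volume pivot are assumptions.

```python
import numpy as np
from scipy.ndimage import affine_transform
from scipy.spatial.transform import Rotation

def make_synthetic_volume(binary, scale=(1.0, 1.0, 1.0), rot_deg=(0.0, 0.0, 0.0)):
    """Scale and rotate a binary segmentation about its centre to create a synthetic target."""
    c = (np.array(binary.shape) - 1) / 2.0
    R = Rotation.from_euler('xyz', rot_deg, degrees=True).as_matrix()
    # affine_transform maps output voxels back to input voxels, so use the inverse transform.
    M = np.diag(1.0 / np.asarray(scale)) @ R.T
    offset = c - M @ c
    return affine_transform(binary.astype(np.uint8), M, offset=offset, order=0).astype(bool)
```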

    The first row in Fig. 3 demonstrates the initial state of the template model and the target surfaces in each case. The second row in Fig. 3 shows the results of the individual shape modeling using the progressive weighting scheme. Owing to the scaling and rotation, the size and shape of C2 were substantially altered relative to those of the template model (see the first row in Fig. 3). In order to visually evaluate the anatomical shape correspondences, we also assigned anatomical landmarks to some vertices of the template model and displayed the positions of the same vertices in the result models reconstructed from the synthetic images. As a result, the anatomical landmarks were correctly aligned at the corresponding regions of each result model by our method.

    We also compared the modeling results obtained with the progressive weighting scheme and with fixed weight parameters on the binary image of case 6, which has the largest differences in the size and shape of C2. Fig. 4 shows the results using fixed weight parameters (α = 1.0 and α = 20.0) and the progressive weighting scheme (α = 20→1). In this experiment, we applied the same values for the other modeling parameters. With the small rigidity (α = 1), the vertices of the model tended to stick to the closest boundary too strongly, which caused insufficient model deformation toward the target surface; that is, the template model did not fit the anatomically corresponding boundary. This small rigidity parameter also provoked a distortion of the point distribution (see Fig. 4b). With the large rigidity (α = 20), the relative distance between vertices was preserved well during model deformation, but the template model did not properly fit the target boundary. In contrast, our progressive weighting scheme successfully restored the shape of the target surface with correct shape correspondences to the template model.

    To quantitatively compare the results in all cases with regard to the accuracy of shape restoration, we measured the volume overlap between the results and the binary images. In addition, the edge length and the variation of the point distribution along the x-, y-, and z-axis were measured in the modeling results with regard to the inter-subject shape correspondences. Table 2 shows the results of the volume overlap, edge length, and point distribution measurements. First, we calculated the volume overlap using the similarity index between the binary images and the result models. The similarity index (SI) between two objects, A and B, is defined as

    SI(A, B) = \frac{2\,|A \cap B|}{|A| + |B|}.
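    A minimal sketch of this overlap measure, assuming SI is the standard Dice-style index computed on boolean volumes:

```python
import numpy as np

def similarity_index(a, b):
    """Similarity index between two binary volumes: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```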

    Despite the size and shape differences in each case, we found that the deformed models covered the C2 regions in the binary images very well (minimum SI: 0.94, maximum SI: 0.97).

    We investigated the edge lengths between all connected vertices; the average and standard deviation (SD) of the edge length changed in accordance with the volume variation across the cases. Our non-rigid deformation method thus preserved the relative distance between vertices well against shape and size variations. We also verified the preservation of the proportional point distribution by the relative SD of the vertex positions along each axis. The relative SD is defined as

    SD_{rel} = \frac{SD(V)}{SD(\bar{V})},

    where V denotes the positions of the vertices in the modeling results, \bar{V} denotes the positions of the vertices in the template model, and the SD is computed separately along each axis. Since we applied non-uniform scale factors in the latter cases, we used the relative SD to check the point distribution along each axis rather than explicitly comparing the vertex positions between the non-rigid modeling results and the transformed template model. As a result, the values of the relative SD in each case were very similar to the scale factors applied to the template model for the binary image construction. This indicates that our non-rigid modeling method preserved the original point distribution of the template model against large changes in the size and shape of C2 in each case.
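    A one-line sketch of this measure, assuming the relative SD is simply the per-axis ratio of coordinate standard deviations between the result and the template:

```python
import numpy as np

def relative_sd(result_verts, template_verts):
    """Per-axis ratio of the standard deviation of vertex coordinates (result / template)."""
    return result_verts.std(axis=0) / template_verts.std(axis=0)
```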

    IV. CONCLUSION

    For surface-based morphometry studies, it is important to robustly restore the individual shape details of the target structure in each subject and to build anatomical point correspondences across subjects for shape comparison. For these purposes, we proposed an organ shape modeling method based on the Laplacian deformation framework. In particular, we implemented a progressive weighting scheme that controls the rigidity parameter of the deformable model during the deformation process, capturing shape details while preserving the relative distance between neighboring vertices of the deformable model as much as possible. With our method, the geometric errors between the template model and the target surface in the initial state are smoothly distributed across the entire domain during the deformation process. This allows us to obtain robust shape modeling results that represent the accurate shape features of the target structure with correct shape correspondences between subjects. Moreover, our modeling method is not limited to organs of spherical topology, which promises to extend the range of applications for morphometry studies.

  • 1. Frisoni G. B., Ganzola R., Canu E., Rub U., Pizzini F. B., Alessandrini F., Zoccatelli G., Beltramello A., Caltagirone C., Thompson P. M. 2008 "Mapping local hippocampal changes in Alzheimer's disease and normal ageing with MRI at 3 Tesla" [Brain] Vol.131 P.3266-3276
  • 2. Patenaude B., Smith S. M., Kennedy D. N., Jenkinson M. 2011 "A Bayesian model of shape and appearance for subcortical brain segmentation" [Neuroimage] Vol.56 P.907-922
  • 3. El-Baz A., Nitzken M., Khalifa F., Elnakib A., Gimel'farb G., Falk R., El-Ghar M. A., Szekely G., Hahn H. K. 2011 "3D shape analysis for early diagnosis of malignant lung nodules" Information Processing in Medical Imaging, Lecture Notes in Computer Science vol. 6801 P.772-783
  • 4. Cootes T. F., Taylor C. J., Cooper D. H., Graham J. 1995 "Active shape models: their training and application" [Computer Vision and Image Understanding] Vol.61 P.38-59
  • 5. Heimann T., Meinzer H. P. 2009 "Statistical shape models for 3D medical image segmentation: a review" [Medical Image Analysis] Vol.13 P.543-563
  • 6. Frangi A. F., Niessen W. J., Rueckert D., Schnabel J. A., Insana M. F., Leahy R. M. 2001 "Automatic 3D ASM construction via atlas-based landmarking and volumetric elastic registration" Information Processing in Medical Imaging, Lecture Notes in Computer Science vol. 2082 P.78-91
  • 7. Souza A., Udupa J. K. 2005 "Automatic landmark selection for active shape models" [Proceedings of SPIE] Vol.5747 P.1377-1383
  • 8. Rueda S., Udupa J. K., Bai L. 2010 "Shape modeling via local curvature scale" [Pattern Recognition Letters] Vol.31 P.324-336
  • 9. Styner M., Oguz I., Xu S., Brechbuhler C., Pantazis D., Levitt J. J., Shenton M. E., Gerig G. 2006 "Framework for the statistical shape analysis of brain structures using SPHARM-PDM" [Insight Journal] P.242-250
  • 10. Kim H., Mansi T., Bernasconi A., Bernasconi N. 2011 "Vertex-wise shape analysis of the hippocampus: disentangling positional differences from volume changes" [Proceedings of the 14th International Conference on Medical Image Computing and Computer-Assisted Intervention] P.352-359
  • 11. Alexa M. 2003 "Differential coordinates for local mesh morphing and deformation" [The Visual Computer] Vol.19 P.105-114
  • 12. Sorkine O. 2006 "Differential representations for mesh processing" [Computer Graphics Forum] Vol.25 P.789-807
  • 13. Guimond A., Meunier J., Thirion J. P. 2000 "Average brain models: a convergence study" [Computer Vision and Image Understanding] Vol.77 P.192-210
  • 14. Heitz G., Rohlfing T., Maurer Jr C. R. 2005 "Statistical shape model generation using nonrigid deformation of a template mesh" [Proceedings of SPIE] Vol.5747 P.1411-1421
  • 15. Turk G. 1992 "Re-tiling polygonal surfaces" [ACM SIGGRAPH Computer Graphics] Vol.26 P.55-64
  • 16. Nasri A. H., Kim T., Lee K. 2003 "Polygonal mesh regularization for subdivision surfaces interpolating meshes of curves" [The Visual Computer] Vol.19 P.80-93
  • [Fig. 1.] Model deformation with the different values for the model rigidity parameter, α. (a) Initial state, (b) deformed model - rigidity 1.0, (c) rigidity 5.0, and (d) rigidity 10.0. SD: standard deviation.
  • [Fig. 2.] Model deformation with the progressive weighting scheme. In the first row, the target structure and plots of the rigidity weight and the average vertex displacement are presented. The initial model (yellow sphere) and the deformed models at each period (a-f) are demonstrated in the second row. The colors of the deformed models represent the displacement of each vertex during each period. (The readers are referred to the online version of the paper, as the figures are more meaningfully displayed and interpreted in color than in black-and-white.)
  • [Table 1.] Description of affine transformations for constructing the synthetic volume data of the second cervical vertebra (C2)
  • [Fig. 3.] Shape modeling with synthetic data of the second cervical vertebra (C2). Green objects in the first row represent the surfaces of C2 in the target binary images; they are reconstructed by the marching cubes algorithm. The template model is represented as a wireframe mesh in the first row. In the far-left column, the template model and the anatomical landmarks on the model are presented. Purple objects represent the results of shape modeling, and the yellow bubble dots indicate the positions of the anatomical landmarks corresponding to the landmarks on the template model.
  • [Fig. 4.] Comparison of the progressive weighting scheme and fixed weight parameters for the model rigidity. (a) shows the target surface (red) and the modeling results (purple) overlaid on the target surface. The arrows in the first row indicate the unexpected results of surface modeling, which include insufficient boundary fitting and creases. (b) shows the correspondences of the anatomical landmarks (bubble dots) between the template model and the modeling results. The arrows in the second row indicate landmarks located at anatomically inconsistent positions in the modeling results.
  • [Table 2.] Edge length and point distribution along the x-, y-, and z-axis in the result models. The unit of volume is mm³, and that of edge length is mm. The relative standard deviation (SD) is the ratio of the SD of each vertex coordinate to that of the template model