Model Construction for the OPAL Complex

The OPAL complex.

We are creating a comprehensive, adaptable and physically based three-dimensional (3D) computer model of the human oral, pharyngeal and laryngeal (OPAL) complex, directed toward physiological research and clinical applications. We believe that physical simulation will become increasingly important in medical technology, permitting researchers to estimate quantities that are difficult to measure in vivo, such as neuromotor activation levels or internal forces, and allowing clinicians to gauge the outcome of surgical procedures or to formulate entirely new treatments. For obstructive sleep apnea, modeling could improve our understanding of the biomechanics of airway collapse and help in planning or developing treatments. For stroke-induced dysphagia, modeling could help us better understand how the neuromotor control of swallowing is affected by stroke, and could suggest therapies involving the stimulation of alternate neural pathways. Cancer-related surgical deficits arise when the removal of tissue for cancer treatment restricts a patient's ability to chew and swallow; here, simulation could permit better surgical planning and facilitate the design of prostheses. We anticipate other uses for our model as well: as a training device, it will offer an immersive virtual environment in which students can observe both static structures and dynamic behaviors (such as what happens when a particular muscle group is activated).

In our modeling efforts, we are developing a generic reference model of the OPAL complex anatomy using a mixture of medical image data and semi-automated segmentation methods. These models will be available to the research and medical communities through the ArtiSynth open-source software platform, providing a tool that can be extended or modified to suit different applications. Much as we created a dynamic physical model of the jaw-tongue-hyoid complex, we will continue to add and refine components, building up a library of upper-airway anatomy. In our imaging efforts, we consider how to register the reference models to fit specific patients.

3D Segmentation of the Tongue in MRI: A Minimally Interactive Model-Based Approach

Demo video for the proposed segmentation

Static MRI partially resolves soft-tissue details of the oropharynx, which are crucial in swallowing and speech studies. However, delineating tongue tissue remains a challenge due to the lack of definitive boundary features. We propose a real-time, force-based user-interaction platform coupled with the mesh-to-image registration technique proposed by Gilles and Pai (2008), expanding the application of that methodology from musculoskeletal structures to highly deformable soft tissues such as the tongue. Both shape and intensity priors are incorporated in the form of a source image volume and its corresponding surface mesh, delineated by a dental expert; the choice of the source dataset is arbitrary. We use a discrete surface-mesh representation to handle the regularity and shape constraints. The overall pipeline of the proposed method is shown below.
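The per-iteration surface update at the heart of such a mesh-to-image registration can be sketched schematically: each vertex is displaced along its normal toward the best local intensity match. This is only an illustrative toy, not the authors' implementation; the function names, the candidate-offset search and the `similarity` callback are all placeholder assumptions.

```python
import numpy as np

# Schematic registration step in the spirit of Gilles and Pai (2008):
# each vertex samples a few candidate positions along its normal and
# moves to the one with the highest source-to-target intensity similarity.
# (A real pipeline would follow this with a shape-matching regularization.)
def register_step(verts, normals, similarity, step=0.5, offsets=(-2, -1, 0, 1, 2)):
    """Move each vertex along its normal to the best-scoring offset."""
    new = verts.copy()
    for i in range(len(verts)):
        scores = [similarity(verts[i] + d * step * normals[i]) for d in offsets]
        best = offsets[int(np.argmax(scores))]
        new[i] = verts[i] + best * step * normals[i]
    return new

# Toy demo: one vertex at the origin, with a similarity peaked at the plane x = 1.
verts = np.array([[0.0, 0.0, 0.0]])
normals = np.array([[1.0, 0.0, 0.0]])
toy_similarity = lambda p: -abs(p[0] - 1.0)
verts = register_step(verts, normals, toy_similarity)
```

In the actual method the similarity is computed from image intensities around the deforming source mesh, and user-supplied boundary labels bias the search where automatic matching fails.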

The mesh is deformed according to local intensity similarity between the source and the target volumes, and the deformation is regularized using an extended version of shape matching (Gilles and Pai 2008). To help attain higher clinical acceptance, we also enable effective, minimal user interaction: additional boundary labels can be placed in areas where the automatic segmentation is deemed inadequate, with real-time visualization of the surface evolution. We validate our method on 18 normal subjects using expert manual delineation as the ground truth. Results indicate an average Dice segmentation accuracy of 0.904 ± 0.004, achieved within an expert interaction time of 2 ± 1 minutes per volume. Our method was fully implemented in the Simulation Open Framework Architecture (SOFA), an open-source modular framework based on C++. This allows the registration algorithm to be interpreted as a real-time simulation process during which the source model, starting from its initial position, iteratively deforms to match the target configuration.
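The Dice coefficient used for validation is a standard overlap measure between two binary segmentation masks; a minimal NumPy sketch (array contents are illustrative):

```python
import numpy as np

def dice(seg, ref):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    seg = seg.astype(bool)
    ref = ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    return 2.0 * inter / (seg.sum() + ref.sum())

# Two overlapping 1D masks: 2 shared voxels, 3 voxels each -> 2*2/(3+3)
a = np.array([0, 1, 1, 1, 0])
b = np.array([0, 0, 1, 1, 1])
d = dice(a, b)  # ≈ 0.667
```

A score of 1.0 means perfect overlap with the manual ground truth, so the reported 0.904 indicates close agreement with the expert delineation.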

Subject-Specific Model of Tongue-Jaw-Hyoid

Generic vs. subject-specific coupled models of the tongue, jaw and hyoid.

Speech, chewing and swallowing are critical and complex neuromuscular functions. Various associated disorders result in medical complications that, if not properly treated, may significantly degrade the quality of life of those afflicted. The tongue is the primary organ in the oropharynx and plays an essential role in oropharyngeal functions. It consists of interwoven muscle fibres that undergo a wide range of muscular contractions and relaxations whose exact timings and levels of activation are still unknown.

Computer-aided modelling and simulation of the oropharyngeal structures is beneficial for 3D visualization and for understanding the associated physiology. Generic biomechanical models of the Oral, Pharyngeal and Laryngeal (OPAL) structures have been adopted into the ArtiSynth framework. Forward-dynamics tracking of a finite-element (FE) tongue model was previously addressed by solving the inverse problem, and the estimated biomechanics were evaluated against either the average motion reported in the literature or the motion of a different subject. We are expanding the existing generic platform to allow subject-specific simulations, in order to (1) better evaluate the simulated biomechanics, (2) investigate inter-subject variability and (3) provide additional insight into speech production.
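The inverse problem mentioned above amounts to finding non-negative muscle activations that reproduce a target motion. As a heavily simplified toy (not ArtiSynth's actual solver, which works on the linearized dynamics at each time step), one can pose it as a regularized non-negative least-squares problem; the matrix `H` and all parameters below are illustrative assumptions:

```python
import numpy as np

# Toy inverse problem: find activations a >= 0 with H a ≈ v_target,
# where H (assumed known) maps muscle activations to induced velocity.
def estimate_activations(H, v_target, reg=1e-3, iters=500, lr=0.1):
    """Projected gradient descent on ||H a - v||^2 + reg ||a||^2, a >= 0."""
    a = np.zeros(H.shape[1])
    for _ in range(iters):
        grad = H.T @ (H @ a - v_target) + reg * a
        a = np.maximum(a - lr * grad, 0.0)  # project onto the feasible set a >= 0
    return a

# Two muscles pulling a point along x and y; the target motion is mostly along x.
H = np.array([[1.0, 0.0],
              [0.0, 1.0]])
v_target = np.array([0.8, 0.1])
a = estimate_activations(H, v_target)  # ≈ [0.8, 0.1]
```

The regularization term resolves muscle redundancy (many activation patterns can yield the same motion) by preferring low-effort solutions, which is the usual rationale in tracking-based inverse simulation.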

Airway Model Construction

Airway reconstruction from MBSImP animations

Airway extracted from dynamic CT images.

Artistic rendering of the OPAL complex for MBSImP.

Currently, MBSImP uses 2D video animation to differentiate between impairment profiles and to train clinicians to identify different impairments. We are creating a 3D biomechanical model for each impairment profile so that clinicians will have better visualization and understanding of the problem, easing the identification process. The end goal is a 3D swallowing model that allows the clinician to tweak the physical parameters of the complex biomechanics of the swallowing components, assisting in diagnosis and treatment planning for dysphagia. We have the standardized animation for each impairment, along with the videofluoroscopic images that were used to create it; the animations were produced by computer graphics professionals and iterated extensively by Bonnie Martin-Harris and her research team. From the 2D/3D animations and videofluoroscopic images, a 3D biomechanical model based on smoothed-particle hydrodynamics (SPH) simulation will be created.
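In SPH, the fluid (here, the bolus) is represented by particles, and field quantities are interpolated with a smoothing kernel. A minimal sketch of the core density-summation step, using the standard poly6 kernel of Müller et al. (2003); particle positions, masses and the smoothing radius below are illustrative values, not taken from our simulations:

```python
import numpy as np

def poly6(r, h):
    """Poly6 smoothing kernel (3D): W(r) = 315/(64 pi h^9) (h^2 - r^2)^3 for r < h."""
    w = np.zeros_like(r)
    mask = r < h
    w[mask] = 315.0 / (64.0 * np.pi * h**9) * (h**2 - r[mask]**2) ** 3
    return w

def densities(pos, mass, h):
    """Density at each particle: rho_i = sum_j m_j * W(|x_i - x_j|, h)."""
    diff = pos[:, None, :] - pos[None, :, :]   # pairwise displacement vectors
    r = np.linalg.norm(diff, axis=-1)          # pairwise distances
    return (mass[None, :] * poly6(r, h)).sum(axis=1)

# Three nearby particles (positions in metres, masses in kg, illustrative only)
pos = np.array([[0.0, 0.0, 0.0], [0.05, 0.0, 0.0], [0.0, 0.05, 0.0]])
mass = np.full(3, 0.02)
rho = densities(pos, mass, h=0.1)
```

A full solver would use these densities to compute pressure and viscosity forces and integrate the particles in time, with the airway surfaces imposed as boundary conditions.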

Airway segmentation from dynamic CT images

We are currently modelling the airway based on dynamic CT images of human swallowing. The resulting model will allow us to use fluid simulation techniques (see Mastication & Swallowing modelling) to simulate the bolus and to study the effects of boundary conditions and bolus viscosities.

Relevant Publications

3D Segmentation of the Tongue in MRI: A Minimally Interactive Model-Based Approach. Negar M. Harandi, Rafeef Abugharbieh and Sidney Fels. Journal of Computer Methods in Biomechanics and Biomedical Engineering:Imaging & Visualization, Published online: 05 Feb 2014. (BIB)

Minimally Interactive MRI Segmentation for Subject-Specific Modelling of the Human Tongue. Negar M. Harandi, Rafeef Abugharbieh and Sidney Fels. In Proceedings of MICCAI workshop on Bio-Imaging and Visualization for Patient-Customized Simulations (BIVPCS), Nagoya-Japan, September 2013. (BIB)

A fast and robust patient specific Finite Element mesh registration technique: application to 60 clinical cases. Marek Bucki, Claudio Lobos and Yohan Payan. Medical Image Analysis, 13(3):303-317, 2010. (BIB)

Active Learning for Interactive 3D Image Segmentation. Andrew Top, Ghassan Hamarneh and Rafeef Abugharbieh. In Proceedings of Medical Image Computing and Computer-Assisted Intervention (MICCAI), pages 603-610. Springer, 2011. (BIB)

A biomechanical model of cardinal vowel production: Muscle activations and the impact of gravity on tongue positioning. Stephanie Buchaillard, Pascal Perrier and Yohan Payan. The Journal of the Acoustical Society of America, 126, 2009. (BIB)