The Future of Computing
March 1, 2003 Volume 33 Issue 1

Entering the Brain: New Tools for Precision Surgery


Author: Eric Grimson

Noninvasive technology allows surgeons to see beneath the surface of the brain.

Standard approaches to removing a brain tumor today are not necessarily refined. Suppose you have the misfortune of having a brain tumor near the motor cortex. You'd like the surgeon to remove it without paralyzing you, but with the standard approach, the surgeon is only able to see the surface of the brain. He has to cut through to find the piece he wants to remove. I would just as soon not have someone blindly rooting around in my brain looking for the right spot. I’d like him to be able to see it.

Imagine, instead, that we are able to show the neurosurgeon an augmented-reality visualization - in effect, letting him or her look at a patient and see through skin and bone to the interior structure in exact alignment, so that the surgeon can home in on a precise target. Suppose, moreover, that we can enable the surgeon to see a variety of internal structures, a patient-specific anatomical reconstruction showing the exact location of all key structures. Even better, we can provide the surgeon during the surgical procedure with a visualization of the patient's anatomy that highlights where the tumors lie, as well as where the critical structures are, including vessels, motor cortex, and other connections, and how their positions change during surgery. That is our goal. We want to build models of all of these structures to identify surgical targets, so that surgery will become minimally invasive.


Our approach marries computer vision and machine learning techniques. The result, I like to say, is to give the surgeon the capabilities of Superman - the ability to sense critical information that is not normally visible. The goal is to build a 3-D model of the individual patient, rather than a generalized prototype, because an individual model provides much more information to the surgeon. On top of that, we want to fold in data on tissue properties and use our adaptable patient model for surgical visualization, surgical planning, and actual surgical navigation.

Our research has concentrated primarily on neurosurgery, but this technology has applications for many other kinds of surgery. For instance, it would be enormously useful in treating prostate cancer. Brachytherapy involves implanting small radioactive seeds around the cancer. First, however, the patient is scanned to determine the ideal locations for the seeds; then needles are driven in to place the seeds correctly. To do this, it is necessary to know where the target tumor is. Currently, an open magnetic resonance system is used to guide the placement of the needles. This new technology could use a preoperative scan of the patient to build a model of the prostate; during the procedure, the surgeon would then use the model as a guide to placing the seeds.


The technology could also be used for vascular surgery. For instance, a patient with vessel problems in the chest could undergo a CT scan; then a computer algorithm could be used to create a detailed model showing what the structure looks like and, ideally, would automatically measure - without any invasion - the width of the vessel, enabling the surgeon to identify trouble spots.

In magnetic resonance imaging, different tissue types respond differently to the magnetic field, producing different intensities. We are striving for a technology that automatically identifies where particular tissue structures are in the individual brain. We have begun by modeling prototypes, but we would ultimately like to be able to identify automatically where the tissue structures are in each individual patient.


In the standard approach to brain surgery, a radiologist produces a set of visual slices of the brain, then manually traces out the significant structures - an incredibly time-consuming process. It must be done one slice at a time, and most radiologists don't have the time to do it. In addition, thin structures that pass through the slice - vessels or nerves - are very hard to spot, so the approach does not always work well. An alternative is to let the radiologist, by clicking with the mouse, highlight a small set of recognized points, building a little statistical distribution of what the magnetic resonance intensity response looks like for each tissue type. Ideally, this statistical distribution could then be used to label all the other elements in the medical scan, using the nearest known intensity to determine the label.
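
To make the idea concrete, here is a minimal sketch of that kind of nearest-intensity labeling; the array names, sample values, and the use of class-mean intensities are illustrative assumptions, not the actual clinical software.

```python
import numpy as np

# Hypothetical sketch of nearest-intensity labeling from radiologist-clicked samples.
# `samples` maps each tissue label to a few MR intensities the radiologist clicked on;
# `scan` is an array of voxel intensities.

def label_by_nearest_intensity(scan, samples):
    scan = np.asarray(scan, dtype=float)
    labels = list(samples.keys())
    # Mean intensity of the clicked points for each tissue class.
    means = np.array([np.mean(samples[lab]) for lab in labels])
    # Distance from every voxel intensity to every class mean.
    dist = np.abs(scan[..., None] - means)          # shape: scan.shape + (n_classes,)
    nearest = np.argmin(dist, axis=-1)              # index of the closest class mean
    return np.array(labels, dtype=object)[nearest]  # tissue label per voxel

# Example: a tiny synthetic "slice" and a few clicked samples per tissue type.
scan = [[30.0, 35.0, 80.0], [85.0, 140.0, 150.0]]
samples = {"csf": [28, 33], "gray": [78, 86], "white": [142, 155]}
print(label_by_nearest_intensity(scan, samples))
```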


But problems can arise. When a patient is put into the magnetic field, there is a very significant nonlinear gain artifact inside the field that affects the intensity responses of different tissues, so that the same type of matter may appear different in different locations, thus shifting the statistics in a nonstationary way. If we knew the gain field, we could correct for it and measure the tissue type. But we do not. If we knew the tissue type, we could predict the image and use the difference to solve for the gain field. But we do not.

Fortunately, a statistical tool called expectation maximization enables us to solve both problems. It allows us to go through the iterative sequence of estimating one, solving for the best estimate of the other, and coming back. That produces a nice segmentation that allows us to identify tissue types very well. To verify our results, we applied our algorithm to 50 scans of 35 patients, with repeat scans taken two or three weeks apart and labeled by a radiologist each time. Although there were significant variations in labeling over time, even by the same radiologist, our algorithm produced solid, repeatable measurements.
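
The following is a rough sketch of that alternating estimation, under simplifying assumptions that are mine rather than the system's: the multiplicative gain field becomes additive after taking the log of the intensities, tissue classes are modeled as Gaussians in corrected log intensity, and the gain field is recovered by heavily smoothing the residual between observed and predicted intensities.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Illustrative EM-style loop: class means, variances, and smoothing width are
# made-up parameters, not those of the actual segmentation system.

def em_segment(log_intensity, class_means, class_vars, n_iter=20, smooth=10.0):
    class_means = np.asarray(class_means, dtype=float)
    class_vars = np.asarray(class_vars, dtype=float)
    gain = np.zeros_like(log_intensity)            # initial gain-field estimate
    for _ in range(n_iter):
        corrected = log_intensity - gain
        # E-step: posterior probability of each tissue class at every voxel.
        lik = np.exp(-(corrected[..., None] - class_means) ** 2 / (2 * class_vars))
        lik /= np.sqrt(2 * np.pi * class_vars)
        post = lik / lik.sum(axis=-1, keepdims=True)
        # M-step: predicted intensity under those posteriors, then re-estimate
        # the gain field as the smoothed residual.
        predicted = post @ class_means
        gain = gaussian_filter(log_intensity - predicted, sigma=smooth)
    return post.argmax(axis=-1), gain              # tissue labels and gain field
```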


But we are still addressing the wrong question. True, the algorithm finds the same answer every time, but is it the correct answer? To find the right answer requires ground truth.

We asked six Harvard radiologists to segment each scan by hand and noted how much the labeling by two radiologists varied and how the algorithm compared to the ground truth. We found that two radiologists differ by 22 percent and that the variations occur at the boundaries of tissues, where it is very difficult to see differences. When the algorithm is thrown into the mix, its performance is similar to a radiologist’s. To get ground truth, we said that if five out of six radiologists labeled a voxel the same way, we would be confident that the tissue identification was correct, and we could compare it to the algorithm’s identification. Within the intracranial cavity, the algorithm’s identification was correct about 98 percent of the time.
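
A small sketch of that consensus test, with illustrative array names: voxels where at least five of the six radiologists assigned the same label define the ground truth, and the algorithm is scored only on those voxels.

```python
import numpy as np

# Assumes each labeling is an integer array of tissue labels over the same voxel grid.
def consensus_accuracy(radiologist_labels, algorithm_labels, min_votes=5):
    stack = np.stack([np.asarray(r) for r in radiologist_labels])  # (n_raters, ...) labels
    algo = np.asarray(algorithm_labels)
    # Majority label and how many radiologists chose it, at every voxel.
    majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, stack)
    votes = (stack == majority).sum(axis=0)
    confident = votes >= min_votes                 # consensus ("ground truth") voxels only
    return (algo[confident] == majority[confident]).mean()
```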


However, there were two other key considerations: (1) we were working in the intracranial cavity; and (2) we were working on normal brains. Brains with abnormal structures, such as tumors, could cause tissue-response problems. There are two quick methods of addressing these issues.


The first is to do what a radiologist does. Radiologists know where structures belong in the brain, and if they see something in the wrong place, they know there is a problem. The same principle applies to this modeling technology. The system starts with a small amount of anatomical knowledge and builds a very detailed atlas of the brain. We then elastically compare new scans to the atlas, taking into account the different sizes and shapes of heads, and we use that information to set a prior probability on tissue types. The system can take over from there. Over a period of eight months, we built a very detailed neuroatlas that is now available on the web from Brigham and Women’s Hospital.

When we get a new scan, we run a rough classification using our first method, then warp it to the atlas using an elastic registration to set a guideline for what things should look like. We then work through the scan, segmenting out certain structures, such as skin or ventricles, to gradually map the location of the tumor. We are basically bringing anatomical knowledge to bear so the system can find the pieces it wants and highlight the location of the tumor.
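
In spirit, the registered atlas acts as a spatial prior that is combined with the intensity evidence at every voxel. The sketch below assumes the atlas probability maps have already been warped into the scan's coordinate frame; the simple Bayesian combination shown is a simplification of the actual pipeline.

```python
import numpy as np

# Per voxel: posterior is proportional to intensity likelihood times atlas prior.
def atlas_guided_labels(intensity_likelihood, atlas_prior):
    # Both arrays have shape scan.shape + (n_classes,).
    posterior = intensity_likelihood * atlas_prior
    posterior = posterior / posterior.sum(axis=-1, keepdims=True)  # normalize over classes
    return posterior.argmax(axis=-1)                               # most probable tissue per voxel
```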


This works well for most structures, but a few are still difficult to locate, especially in places where the boundaries between the edges of objects are very subtle. To deal with that, we build a machine learning system into the algorithm so it can learn the differences in shapes across populations; then we bias the segmentation toward a likely shape. For instance, by giving it a set of corpora callosa from a sequence of subjects, the algorithm learns the average shape of a person's corpus callosum and, more importantly, the standard modes of variation. Then we fold the information into a system that automatically performs the segmentations. We use statistical information to tell us where to look, use the atlas to refine that, then use shape to fill in across the narrow boundaries. As a consequence, this system is able to find very subtle boundaries.
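
One compact way to capture an average shape and its standard modes of variation is principal component analysis over corresponding boundary points. The sketch below assumes each training shape is represented as a flattened vector of such points; it illustrates the idea rather than the actual shape model used.

```python
import numpy as np

def learn_shape_model(shapes, n_modes=3):
    X = np.asarray(shapes, dtype=float)          # (n_subjects, n_points * dims)
    mean_shape = X.mean(axis=0)
    _, sing, vt = np.linalg.svd(X - mean_shape, full_matrices=False)
    modes = vt[:n_modes]                         # principal modes of variation
    stddevs = sing[:n_modes] / np.sqrt(len(X) - 1)
    return mean_shape, modes, stddevs

def plausible_shape(mean_shape, modes, stddevs, coeffs):
    # A candidate shape is the mean plus a weighted sum of the modes; keeping the
    # coefficients within a few standard deviations biases toward "likely" shapes.
    coeffs = np.clip(coeffs, -3 * stddevs, 3 * stddevs)
    return mean_shape + coeffs @ modes
```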

What is the result? We can now let the system segment out structures and build models. Its performance is indistinguishable from the performance of a human, and it generates plenty of information for the surgeon to use. We build models like this for our surgeon friends.


In the case of neurosurgery, we now have the ability to find all of the structures, including white matter, gray matter, cerebrospinal fluid, and all of the other structural pieces. In some cases, that is sufficient. In others, however, it is also important to know the connections because they could be sites of the malfunction. To do that, we are building on a new technique from medical imaging called diffusion-tensor MRI, a way of taking a quick sequence of images to measure how water diffuses in different directions. This is useful because in brain cellular tissue water is equally likely to diffuse in any direction; but if there are fiber bundles, the water is likely to diffuse along them. In other words, by treating this as a fluid-flow problem, we can build a surgical model that uses the diffusion to show connections between different brain structures, thus enabling the surgeon to identify disruptions and determine where to start.
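
At each voxel, diffusion-tensor MRI yields a small symmetric matrix describing how readily water diffuses in each direction; its dominant eigenvector points along the local fiber bundle. The sketch below illustrates that computation with a made-up tensor, not real patient data.

```python
import numpy as np

def principal_direction(tensor):
    vals, vecs = np.linalg.eigh(tensor)          # eigenvalues in ascending order
    direction = vecs[:, -1]                      # direction of fastest diffusion
    # Fractional anisotropy: near 0 when diffusion is the same in every
    # direction, near 1 when it is strongly confined to one direction.
    fa = np.sqrt(1.5 * np.sum((vals - vals.mean()) ** 2) / np.sum(vals ** 2))
    return direction, fa

# Example: diffusion much faster along x than y or z, as inside a fiber tract.
tensor = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
print(principal_direction(tensor))
```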

Once we have built models of the patient’s anatomy, we want to apply them in actual surgery. To do this, we gather information about the position of the patient in the operating room and use that to align our model with the actual patient position. We can then use augmented-reality visualization to present information to the surgeon. We render a synthetic view of the patient’s interior anatomy, then create a video mix that shows the surgeon the interior structures exactly overlaid on a live view of the surgical cavity. We then put markers on the surgical instruments to track their position relative to the model as the surgeon inserts them. That way, we can show the surgeon in real time exactly where the tip of his instrument is in relation to the reconstructed model of the patient, and in relation to the preoperative scans.
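
The alignment step can be illustrated with the classic least-squares rigid registration of a few corresponding landmarks: points touched with a tracked probe on the patient and the same points identified in the model. The fiducials and variable names below are illustrative, not the actual operating-room procedure.

```python
import numpy as np

def rigid_align(model_pts, patient_pts):
    P = np.asarray(model_pts, float)             # (n, 3) landmarks in model frame
    Q = np.asarray(patient_pts, float)           # (n, 3) same landmarks on the patient
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)        # remove centroids
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)          # decompose the cross-covariance
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T                           # rotation: model -> patient
    t = Q.mean(0) - R @ P.mean(0)                # translation: model -> patient
    return R, t

# Example with four made-up fiducials (millimeters); a tracked instrument tip
# can then be mapped back into model coordinates with R.T @ (tip - t).
model = [[0, 0, 0], [50, 0, 0], [0, 50, 0], [0, 0, 50]]
patient = [[10, 20, 5], [10, 70, 5], [-40, 20, 5], [10, 20, 55]]
R, t = rigid_align(model, patient)
```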


We use this technology for visualizations, for surgical guidance, and for simulation. In a colonoscopy, for example, we use the technology to show the surgeon precisely where worrisome sites are located, so he can take a look and decide whether they require surgery. I hope by the time I have my next colonoscopy, this technique will be ready for general use.

These reconstructions are useful for planning surgeries, but the real question is how the model can be used during surgery as a guide. We can create wonderful 3-D models, incredibly accurate, registered to the patient to submillimeter accuracy. But the minute the surgeon picks up the scalpel, the model is no longer accurate because the tissue moves, and sometimes the shift is huge. For the models to remain accurate, we need intraoperative sensing to guide the surgeon.


One way to create computer-assisted surgery would be to incorporate another new technology, open MR, which was developed at Brigham and Women’s Hospital in collaboration with General Electric. Open MR is basically an MR scanner that allows a surgeon to stand in its center. The patient is placed within the scanner and surgery proceeds normally. At any point in time, however, the surgeon can obtain an image of what is going on inside the patient - not a full scan, which would take 20 minutes, but a little slab of data. Starting with one of our preoperative models, the surgeon works with a trackable probe and, at regular intervals, acquires a new image - a little slab of data - within a few seconds to monitor tissue changes; the new data are then warped into the original model to reflect the changes in what the surgeon sees.
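
Conceptually, the update replaces the freshly imaged region of the model with the new slab while the rest is carried over from the preoperative scan. The sketch below makes the simplifying assumptions that the slab is already registered to the model and spans whole slices; the real system warps the data non-rigidly rather than overwriting it.

```python
import numpy as np

# Overwrite the slices of the volumetric model covered by a newly acquired,
# already-registered intraoperative slab. Slice indexing is illustrative.
def update_model_with_slab(model, slab, z_start):
    updated = np.asarray(model).copy()
    updated[z_start:z_start + slab.shape[0]] = slab   # replace the imaged slices
    return updated
```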


In another case, we created a visualization; the surgeon planned the surgical path, then, during surgery, used this technology to acquire images on the fly and warp them into the model. Part of the image came from the original model, part came from intraoperative acquisitions. He was able to use the plan to get to the exact location he wanted and extract the piece he wanted.

How well does this work? Several years ago we built a system like this, called a neuronavigator system, for use at the Brigham and Women’s Hospital. The system was retired after it had been used in several hundred neurosurgical cases with no side effects. Only a formal outcomes analysis can accurately assess the impact, but anecdotal information can highlight the utility of the system. Surgeons using the system report that they now take half the time to complete many surgeries. In neurosurgery, a lot of time is spent getting to the target without damaging anything along the way, so it makes perfect sense that, if you can see through tissue and see where the structures you want to avoid are, you can get there much more quickly. Perhaps the most striking thing the surgeons note is that they are now doing surgeries that three years ago they would have considered inoperable.


I’d like to close with a comment from a surgeon colleague of mine, who thinks the next generation of surgeons will be accustomed to hand-eye coordination, to dealing with graphics, and to manipulating things while looking at screens. The surgeon of the future, he said, will be sort of a "Nintendo surgeon." I particularly like this comment, because now, when my sons, ages 12 and 14, spend too much time in front of the PlayStation or the Nintendo, I can assure my wife, "They’re not wasting time. They’re practicing to be surgeons. Really."

About the Author: Eric Grimson is the Bernard Gordon Professor of Medical Engineering at Massachusetts Institute of Technology and a lecturer on radiology at Brigham and Women’s Hospital.