Using a technique known as autopatching, MIT engineers have devised a way to monitor neurons in a living brain: a computer algorithm analyzes microscope images and guides a robotic arm to the target cell. By combining several image-processing techniques, the researchers developed an algorithm that guides the pipette to within about 25 microns of the target cell. At that point, the system relies on a combination of imagery and impedance, which detects contact between the pipette and the target cell more accurately than either signal alone. The system's success rate is comparable to that of highly trained scientists performing the process manually.
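The two-stage guidance described above can be sketched roughly as follows. This is a hypothetical illustration, not the researchers' actual code: all function names, the step logic, and the contact test are assumptions; only the idea of switching from image-only guidance to a combined image-and-impedance signal near the 25-micron threshold comes from the article.

```python
# Hypothetical sketch of two-stage pipette guidance (not the actual MIT system).
COARSE_THRESHOLD_UM = 25.0  # inside this range, switch to the combined signal


def distance_um(pipette_pos, cell_pos):
    """Euclidean distance between pipette tip and target cell, in microns."""
    return sum((p - c) ** 2 for p, c in zip(pipette_pos, cell_pos)) ** 0.5


def guide_pipette(pipette_pos, cell_pos, image_step, contact_detected):
    """Drive the pipette toward the cell in two stages.

    Stage 1: image-based guidance alone, until within ~25 microns.
    Stage 2: keep advancing while a detector fusing imagery and
    impedance decides whether the tip has contacted the cell.
    """
    # Stage 1: vision-only approach to the coarse threshold.
    while distance_um(pipette_pos, cell_pos) > COARSE_THRESHOLD_UM:
        pipette_pos = image_step(pipette_pos, cell_pos)

    # Stage 2: advance until the combined image/impedance signal
    # reports contact with the target cell.
    while not contact_detected(pipette_pos):
        pipette_pos = image_step(pipette_pos, cell_pos)

    return pipette_pos
```

In a real rig the `image_step` and `contact_detected` callbacks would wrap the microscope's autofocus, cell tracking, and an impedance measurement from the pipette; here they are left abstract so the control flow of the two stages stays visible.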
In this image, a pipette guided by a robotic arm approaches a neuron identified with a fluorescent stain.
“It’s almost like trying to hit a moving target inside the brain, which is a delicate tissue,” Suk says. “For machines it’s easier because they can keep track of where the cell is, they can automatically move the focus of the microscope, and they can automatically move the pipette.”