InitRech 2015/2016, topic 4


Summary

This scientific paper deals with Augmented Reality (AR) for liver surgery. Considerable advances in medicine have led to new surgical techniques such as Minimally Invasive Surgery (MIS), which allow surgeons to reduce several risks. This technique is less invasive than traditional open surgery: only two or three small incisions are made, and the instruments are inserted through these openings. Its main purpose is to shorten the patient's recovery time while also reducing bleeding, pain and the risk of infection. The robotic arms carry a high-resolution stereoscopic camera that provides 3D visual support during the operation.

So far, the stereoscopic video stream only gives information about the liver's surface. More information is needed to let the instruments navigate inside the abdominal cavity. Thanks to advances in computing, it is now possible to build a real-time digital map of an organ. The technique consists in modelling the organ (here the liver) in high detail, including its internal structure, and superimposing this digital model onto the real images from the stereoscopic camera. The surgeon can then place the tools with high accuracy and use information that was previously invisible or very hard to see. Research in this field has existed for about a decade, but previous work dealt with static organs in favourable conditions. In reality, organs move (breathing, heartbeat), have elastic properties, and surgical instruments can create occlusions and smoke. These points cannot be neglected given the complexity of this kind of surgery.
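To make the superimposition step concrete, here is a minimal sketch assuming OpenCV, with placeholder black images standing in for the real stereoscopic frame and the registered model render; the actual system of course blends the tracked, deformed model rather than a static image.

 import cv2
 import numpy as np
 
 def overlay_model(camera_frame, model_render, alpha=0.4):
     # Blend a rendered view of the 3D organ model onto a live camera frame.
     # model_render is assumed to be already registered (aligned) to the
     # camera viewpoint and to have the same size as camera_frame.
     return cv2.addWeighted(model_render, alpha, camera_frame, 1.0 - alpha, 0.0)
 
 # Placeholder inputs; in the real system these would come from the
 # stereoscopic stream and from the rendering of the biomechanical model.
 frame = np.zeros((480, 640, 3), dtype=np.uint8)
 render = np.zeros((480, 640, 3), dtype=np.uint8)
 augmented = overlay_model(frame, render)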

Several mathematical strategies are combined to create this real-time digital model. It relies on a feature-based tracking algorithm: salient landmarks are detected in each image pair using the SURF descriptor (Speeded-Up Robust Features) and tracked with Lucas-Kanade optical flow. The biomechanical model takes several parameters into account, such as internal forces (deformation) and elasticity coefficients that reflect the heterogeneous vascular structure. In short, the system places points over the liver, tracks the organ in real time, recomputes its position and estimates its deformations. Superimposed on the video, this makes it possible to detect defects and localise deformations (tumours, for example). The result is an accurate real-time 3D view of the liver that lets surgeons operate the robotic arms very efficiently.
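The tracking front end can be sketched with OpenCV as below. This is a simplified, hypothetical setup: SURF lives in the opencv-contrib module and may be disabled in some builds, and the parameter values are illustrative rather than those used in the paper.

 import cv2
 import numpy as np
 
 # SURF detector (opencv-contrib); ORB could be substituted if SURF is unavailable.
 surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
 
 # Pyramidal Lucas-Kanade parameters (illustrative values).
 lk_params = dict(winSize=(21, 21), maxLevel=3,
                  criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
 
 def detect_landmarks(gray):
     # Detect salient landmarks on the liver surface with SURF.
     keypoints = surf.detect(gray, None)
     return np.float32([kp.pt for kp in keypoints]).reshape(-1, 1, 2)
 
 def track_landmarks(prev_gray, gray, prev_pts):
     # Track the landmarks from the previous frame with Lucas-Kanade optical flow.
     next_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                       prev_pts, None, **lk_params)
     good = status.ravel() == 1
     return prev_pts[good], next_pts[good]

The displacements between the previous and current point positions are what feed the biomechanical model as boundary constraints.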

Main Contribution

The objective of this paper is to develop new techniques for liver surgery. It presents how the researchers focused their experiments on the liver for augmented-reality surgery. They investigated a real-time deformable liver model based on a biomechanical approach, computed digitally and superimposed on the live stereoscopic stream from the Da Vinci camera inside the abdominal cavity. The paper studies how the liver deforms and how its internal structure changes when forces are applied to its surface (when the surgeon manipulates the liver). All this information is provided by the 3D model and lets surgeons see through the organ, like an X-ray but without its drawbacks, since the solution is purely digital. Consequently, when a surgeon operates to find a defect (commonly a tumour), he is informed where the sensitive parts are located (hepatic vein, portal vein) and can treat the problem without damaging the liver.
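To give an intuition of how a surface displacement measured by tracking constrains the rest of the model, here is a deliberately simplified 1D relaxation sketch. It is not the paper's biomechanical model (which accounts for elasticity and the heterogeneous vascular structure), only a toy illustration of how an imposed surface motion propagates to internal nodes.

 import numpy as np
 
 # Toy 1D chain of nodes: node 0 is a tracked surface point, the last node is
 # an anchored attachment, the nodes in between stand for internal structure.
 n_nodes = 10
 positions = np.arange(n_nodes, dtype=float)   # rest configuration
 
 # Boundary condition from visual tracking: the surface node moved by +0.8.
 positions[0] += 0.8
 
 # Iterative relaxation: each free node moves toward the average of its
 # neighbours, distributing the imposed displacement through the chain.
 for _ in range(500):
     for i in range(1, n_nodes - 1):
         positions[i] += 0.5 * (0.5 * (positions[i - 1] + positions[i + 1]) - positions[i])
 
 print(np.round(positions - np.arange(n_nodes), 3))  # estimated internal displacements

In the converged state the imposed 0.8 displacement decays smoothly along the chain; a real elastic model performs this kind of interpolation in 3D with proper material parameters.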

By creating an accurate map of the liver, this work moves toward automating this kind of operation: first with a surgeon controlling a robot (laparoscopic surgery), later with a completely autonomous robot. It would make it possible to treat patients efficiently and with a high success rate, while minimising recovery time and other risks for the patient.

Applications

Augmented-reality surgery has several applications, and the technique can in principle be applied to any organ in the human body. The only limit is designing models for every kind of surgery. These models must characterise not only the organ's surface but also its internal structure through mathematical parameters, and the physiological and physical quantities are estimated from these equations. The models are nowadays fairly accurate, but automation still needs to improve: on the one hand they are quite effective in the hands of a surgeon, on the other hand they remain insufficient for a fully automatic robot.

In the future, research will need to develop algorithms for every known surgical operation, as well as the robotic intelligence required to treat a patient autonomously. We can imagine, in the near future, robots saving lives on their own. But in which form? Since the early 1800s, doctors have been searching for tools and techniques that let them see and work inside the human body without slicing it wide open, and minimally invasive (laparoscopic) surgery has existed since the early 1980s. So which solution will be developed? A team of surgeons at Columbia University is working on a small robotic arm, only a few millimetres across compared with the very large Da Vinci system, that can sneak into a 15-millimetre incision. Other teams want to pass through a "natural orifice" and make only small internal incisions. But is that really an improvement?

The 3D models themselves are well understood, accurate, and will keep improving. Building on this, we can imagine that one day nano-robots, at a cellular scale, will be able to operate on any organ. Based on the mathematical models used for AR surgery, some would handle positioning while others would treat the organ. Imagine a patient with liver cancer: he goes to the nearest pharmacy and buys nano-robots to swallow. The robots travel to the affected zone; half of them spread around the liver to map it by triangulation (or another mathematical process able to provide positioning), acting like tracked points, while the others destroy the cancer cells (the nano-robots communicating like a swarm of bees). This would be a possible future solution without making a single incision. Time will tell.