Our long-term goal is a fully trained machine learning model that can detect the cartilage in knee MRI images of patients, in order to facilitate the diagnosis of osteoarthritis. While we have not yet reached that goal, one big step toward it is a model that can detect the three bones present in each MRI image: the Femur, Tibia, and Patella. As stated in previous blog posts, we have been training a model for each bone, one at a time. To train a model to detect where a bone is in an MRI image, we need to feed in the images along with corresponding bone mask images that are manually generated with software. Cases were already labeled for the Femur when I started working on the project, so I had to manually label the cases for the Tibia model so that they could be fed to the model. Once the model is fully trained on the manually labeled cases, it can generate an image mask for any new unlabeled case fed into it, without the need to label that case by hand.
I worked closely with my mentor, Dr. Shan, throughout both semesters and last year, meeting weekly to make sure that all goals were being met and that everything was getting completed efficiently. At this point in the semester we were able to get great results from our trained models. We have a working Femur model that runs at around 97.78% accuracy on any new images piped into it, and the Tibia model runs at 96.85% accuracy. Since our next goal is a combined model covering all three bones, and the Patella model isn’t completely labeled and trained yet, it was necessary to first train a model with just the Tibia and Femur bones present in the mask. I needed to write code to combine the image masks for the Tibia and the Femur in the patient cases that were manually labeled. I was then able to feed these combined masks, along with the original images, to the model and train a combined Tibia and Femur model. This model had an accuracy of 97.16% on new images, which is high for this kind of model. This means we are ready to add the Patella to the combined Femur and Tibia model once those cases are fully labeled and a model has been trained on them. The combined model, trained to detect the bones in MRI slices, will make it much easier to then detect the cartilage between the bones, which is of course our main objective. Overall, we achieved a lot of results for this project this year, and Dr. Shan and I plan to continue working on it next semester.
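A rough sketch of how two single-bone masks could be merged into one combined mask, assuming each mask is a binary NumPy array (the function name and the labeling scheme are hypothetical, not the project’s actual script):

```python
import numpy as np

def combine_masks(femur_mask, tibia_mask):
    """Merge two binary bone masks into one multi-label mask.

    Femur pixels get label 1 and Tibia pixels label 2, while the
    background stays 0. If a pixel were ever marked in both masks,
    the Tibia label would win under this ordering.
    """
    combined = np.zeros_like(femur_mask, dtype=np.uint8)
    combined[femur_mask > 0] = 1
    combined[tibia_mask > 0] = 2
    return combined
```

If the model only needs to know bone versus background, a single binary mask via `np.logical_or(femur_mask, tibia_mask)` would also work; the multi-label version keeps the two bones distinguishable in the combined model.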
We are making great progress toward our goal of a successful, fully trained model with excellent accuracy. As stated in the previous blog post, we have a model trained on the Femur dataset with very high accuracy, around 97%. We had originally planned to train a Tibia model without using all of the cases in the training division, because they had not yet been properly corrected, and our tests with 30% of the training dataset had a good outcome on the Femur data, which, as stated previously, already had a properly trained model. Since the tests with a condensed dataset worked well on the Femur, we decided to move on to the same tests on the Tibia. We trained the Tibia model using only 35 training cases, 15 validation cases, and 15 testing cases. The model produced some interesting results.
This model reached an accuracy of around 95.97% after training, which was quite a surprise to both Dr. Shan and myself considering how little training data was used. The model makes predictions on a group of testing cases to check whether it is well trained, and for this model the predictions were wildly inaccurate: many areas of the bone in the predicted image masks were incorrectly segmented. So we decided it was best if I manually corrected the entire dataset so that we could train the model with our original split of 70 training cases, 15 validation cases, and 15 testing cases. This model had an accuracy of 96.85%, which is in line with expectations and similar to the Femur tests. The predictions in this case were also accurate, with very few to no incorrectly segmented areas of the Tibia bone.
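The exact metric our training code reports isn’t shown here, but a simple pixel-wise accuracy could be computed like this sketch. It also hints at why a high score can coexist with poor-looking predictions: most pixels are background, so a model can score well while still mis-tracing the bone.

```python
import numpy as np

def pixel_accuracy(pred_mask, true_mask):
    """Fraction of pixels where the predicted label matches the manual label."""
    assert pred_mask.shape == true_mask.shape
    return float(np.mean(pred_mask == true_mask))
```

Because background dominates, overlap metrics such as the Dice coefficient are a common complement to plain pixel accuracy when judging segmentation quality.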
Now all that is left is to complete the labeling of the Patella data and train that model, which another student is currently doing. Our current goal is to combine the Tibia and Femur models into one and see what accuracy this combined model can achieve. This involves writing a script to properly combine the data, which can then be used in the code that creates and trains the model.
Our project has been successful so far, and we have been able to make a lot of progress over the last few months. As stated in the previous blog post, we first have to train the model to detect which parts of the MRI images are the Tibia, Femur, and Patella. The Femur bone dataset had been labeled previously, and a model has already been trained on it. This makes things easier because we already know its performance and have a good idea of how well formed the model is. We are aiming for a Tibia model that runs at around 96% accuracy or better in tracing the bone. Over the past few months, other students and I have been labeling the Tibia database, which consists of 100 cases divided into three subfolders: 70% of the data goes to the training folder, with the remaining 30% split between the testing and validation folders. The model uses the training data to learn, then checks how accurate it is against the validation data. The testing data can then be used to make predictions and further evaluate the model. There were some complications with the labeled Tibia dataset, and some of the cases labeled by me and by other students needed to be relabeled, as they weren’t as accurate as they could have been.
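The 70/15/15 division of the 100 cases could be scripted along these lines; this is a minimal sketch, and the seeded shuffle and default fractions are assumptions rather than our project’s actual split code:

```python
import random

def split_cases(case_ids, train_frac=0.70, val_frac=0.15, seed=0):
    """Shuffle case IDs and divide them into train/validation/test lists.

    With 100 cases and the default fractions this yields a 70/15/15 split.
    A fixed seed keeps the split reproducible between runs.
    """
    ids = list(case_ids)
    random.Random(seed).shuffle(ids)
    n_train = int(len(ids) * train_frac)
    n_val = int(len(ids) * val_frac)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]
```

Each returned list can then be copied into its own subfolder so that no case ever appears in more than one of the training, validation, or testing sets.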
We have found that the model doesn’t need the full 70% of the training data to train accurately. Dr. Shan and I agreed on a plan to correct the labeling on only a certain percentage of the data (around 30%) and then have the resulting model predict the rest of the cases that we need. With this knowledge we are working on finishing up fitting the model for the Tibia. Once this model is complete, we plan to move on to the Patella model. The dataset for the Patella is not yet completely labeled, so we plan to potentially use a trained model to predict the image masks for the unlabeled Patella data as well. Once we have all three models complete, we have to combine them into one finished model that labels all three bones. This could raise further questions, because certain bones start appearing at different slices in the MRI image, and we would need to alter the code in ways that prevent any potential errors. Next semester we plan on finishing up the models as quickly as possible so that we can complete the main model and move on to detecting the cartilage, one of our final goals.
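Pre-labeling unlabeled cases with a trained model might look like the sketch below. The model loading is only indicated in comments (the file name is hypothetical), and the function shown turns assumed per-pixel sigmoid probabilities, as a Keras `model.predict(...)` call would return, into binary masks:

```python
import numpy as np

# In the real pipeline a trained Keras model would be loaded first, e.g.:
#   model = tf.keras.models.load_model("tibia_unet.h5")  # hypothetical path
#   prob_maps = model.predict(unlabeled_slices)

def probabilities_to_masks(prob_maps, threshold=0.5):
    """Convert per-pixel sigmoid outputs into binary bone masks.

    Pixels at or above the threshold are marked as bone (1), the rest
    as background (0). The resulting masks can then be hand-corrected,
    which is much faster than drawing each mask from scratch.
    """
    return (np.asarray(prob_maps) >= threshold).astype(np.uint8)
```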
The title of our project is “Bone Segmentation in 3D MRI Images,” and it is tied to the detection and diagnosis of knee osteoarthritis. The purpose of the project is to make diagnosis of the ailment easier by training a computer to read 3D MRI images and differentiate between bones and cartilage. We plan on training a machine learning model, which can be tricky if not done in stages. Before we can train the model to detect the cartilage, an important marker of osteoarthritis, we first need to train it to detect which parts of the MRI images are the bones that need to be singled out and excluded from the final result. The computer needs to be able to recognize the Femur, Tibia, and Patella and trace these bones automatically. It is a semi-complicated process, but it is a great learning experience, because machine learning is an important concept to understand.
I expect to learn a lot about machine learning and convolutional neural networks during this project, because these topics go hand in hand when training a computer model. We will be using TensorFlow as our machine learning platform because it is open source and has a lot of thorough documentation. The U-Net model we are using is written in Python, a popular programming language that I have also started to learn because of the project. We will need to start by manually labeling the MRI database of patient cases that we will use to train the model. Once the image set is manually labeled, we can use a certain portion of the cases to train the model, while another set of cases is used to test the model and validate that the computer came up with the proper results. We are hoping to achieve an accurate model that can segment the bones in the MRI images automatically, so that we can remove the segmented bones afterwards in order to detect certain biomarkers, such as the cartilage.
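To give a feel for the architecture, here is a heavily reduced U-Net sketch in TensorFlow/Keras; it has only one down/up-sampling step (a real U-Net has several), and the layer sizes and input shape are illustrative assumptions, not our project’s actual network:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_unet(input_shape=(128, 128, 1)):
    """A minimal U-Net: one encoder step, a bottleneck, and one
    decoder step with a skip connection back to the encoder."""
    inputs = tf.keras.Input(shape=input_shape)

    # Encoder: convolve, then halve the spatial resolution.
    c1 = layers.Conv2D(16, 3, activation="relu", padding="same")(inputs)
    p1 = layers.MaxPooling2D(2)(c1)

    # Bottleneck at the lowest resolution.
    b = layers.Conv2D(32, 3, activation="relu", padding="same")(p1)

    # Decoder: upsample, then concatenate the skip connection from c1
    # so fine spatial detail from the encoder is preserved.
    u1 = layers.Conv2DTranspose(16, 2, strides=2, padding="same")(b)
    u1 = layers.concatenate([u1, c1])
    c2 = layers.Conv2D(16, 3, activation="relu", padding="same")(u1)

    # One sigmoid output channel: per-pixel probability of "bone".
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c2)
    return tf.keras.Model(inputs, outputs)
```

The skip connections are what distinguish the U-Net from a plain encoder-decoder: they let the network produce masks that line up with the bone boundaries at full resolution, which is why this architecture is so widely used for medical image segmentation.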