This is the project for multi-modality infant brain segmentation at the isointense stage. We develop deep learning based methods to deal with the segmentation task.

Note, we have released the caffe3D platform for deep learning algorithms; please refer to my other repository for more details:

Crop patches from the medical image data with readMedImg4CaffeCropNie4SingleS.py; I use SimpleITK as the reader.

Prepare the network: define infant_train_test.prototxt, and decide infant_solver.prototxt. If your network is convolution-free (by which I mean an MLP), you can refer to my other project:

Train the network with caffe (actually, I also suggest you try Keras; it is the simplest deep learning tool I have ever seen, and if you are a beginner, Keras may be your best choice).

Evaluate the caffe model with evalCaffeModel4ImgNie.py; I store the results in NIfTI format.

If you think it is useful for you, please star it.

"Fully convolutional networks for multi-modality isointense infant brain image segmentation." Biomedical Imaging (ISBI), 2016 IEEE 13th International Symposium on. IEEE, 2016. (This one mainly discusses early fusion versus late fusion.)

Dong Nie, Li Wang, Ehsan Adeli, Cuijing Lao, Weili Lin, Dinggang Shen. "3D Fully Convolutional Networks for Multi-Modal Isointense Infant Brain Image Segmentation." IEEE Transactions on Cybernetics, 2018. (This one mainly provides a transformation module and a fusion module to better aggregate the high-resolution features with the highly semantic features.)

If you want to do projects with caffe on the server, please first refer to (if you are already familiar with caffe, please ignore it). Below I will introduce the detailed steps for developing deep learning applications in medical image analysis with our server-based caffe.

Here is a list of what the network-related files mean; for the prototxt files below, I'd like to share some basic experience about how to write them and where we need to be careful:

a. infant_solver.prototxt: defines the network hyperparameters.
The most important parameters you should take care of are the learning rate (lr) and the learning rate decay strategy (lr_policy: fixed, step, or inv; I have examples in this prototxt), together with stepsize (necessary when the learning policy is step: the learning rate is multiplied by gamma every stepsize iterations). max_iter is the maximum number of iterations for the network to train; usually you can set it about twice as large as your dataset. And if you want to use another optimizer instead of SGD, you have to set 'type', for example type: "Adam".

b. infant_train_test.prototxt: defines the network architecture.
The architecture actually is not hard to define.
Number of total layers: this can be decided easily at the start; just make the receptive field of the last layer as large as the input. Of course, you should look up what a receptive field is; I can suggest something to read.
Convolution parameter settings: just see my example (infant_train_test.prototxt) and improve on it; for example, you can use other initialization methods (I like Xavier, but you can use others).
Fully connected layer settings: refer to the prototxt in .

c. infant_deploy.prototxt: the deploy prototxt, for when you want to evaluate your trained caffe model.
You only have to make two changes based on the train_test.prototxt: replace the original input (e.g., HDF5) with the dimension-described input format (I have an example in infant_deploy.prototxt), and replace the output SoftmaxWithLoss layer with a Softmax layer.

d. infant_train_test_finetune.prototxt: the finetune prototxt, for when you want to use a previously trained model to initialize the training of a new model.
Notice that we should keep almost everything (except the learning rate) of the layers you would like to initialize the same between the old trained model and the new model; on top of that, you can add new layers in the new model. Compare infant_train_test_finetune.prototxt with infant_train_test.prototxt and you will see how it works.

e. infant_train_test_point.prototxt: defines the network architecture.

f. infant_train_test_unet.prototxt: defines a U-Net-like architecture. The architecture is a CNN, and it is not hard to define.
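The solver parameters discussed above fit together like this. A hedged sketch with illustrative values, not the repository's actual infant_solver.prototxt:

```prototxt
# Hypothetical solver sketch; values are illustrative only.
net: "infant_train_test.prototxt"
base_lr: 0.01          # learning rate (lr)
lr_policy: "step"      # fixed | step | inv
gamma: 0.1             # lr is multiplied by gamma ...
stepsize: 10000        # ... every `stepsize` iterations
max_iter: 100000       # maximum training iterations
momentum: 0.9
# type: "Adam"         # uncomment to use Adam instead of the default SGD
snapshot: 5000
snapshot_prefix: "infant"
solver_mode: GPU
```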
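The two deploy-time changes (input declaration instead of a data layer, Softmax instead of SoftmaxWithLoss) look roughly like this. A sketch with made-up blob names and dimensions, not the actual infant_deploy.prototxt:

```prototxt
# 1) Replace the HDF5 data layer with a declared input blob.
#    Dimensions are illustrative: batch, modalities, z, y, x.
input: "data"
input_shape { dim: 1 dim: 2 dim: 32 dim: 32 dim: 32 }

# 2) Replace the SoftmaxWithLoss layer with plain Softmax
#    (no label bottom at deploy time).
layer {
  name: "prob"
  type: "Softmax"
  bottom: "score"
  top: "prob"
}
```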
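The finetuning rule, keep initialized layers identical except for their learning rate, works because Caffe copies weights by layer name. A hedged sketch (layer names and sizes are invented for illustration):

```prototxt
# "conv1" keeps its name, so its weights are restored from the old caffemodel;
# "score_new" has a new name, so it is initialized from scratch.
layer {
  name: "conv1"            # same name as in the old model -> weights copied
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param { lr_mult: 0.1 }   # smaller lr for restored layers
  convolution_param { num_output: 32 kernel_size: 3 }
}
layer {
  name: "score_new"        # renamed -> re-initialized, trained with full lr
  type: "Convolution"
  bottom: "conv1"
  top: "score_new"
  param { lr_mult: 1 }
  convolution_param { num_output: 4 kernel_size: 1 }
}
```

Training is then started with the old weights, e.g. `caffe train -solver infant_solver.prototxt -weights old_model.caffemodel`.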
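The patch-cropping step can be sketched with plain index arithmetic. This is a minimal, stdlib-only illustration of the idea (the repository's readMedImg4CaffeCropNie4SingleS.py actually reads NIfTI volumes with SimpleITK); the function names and the 32-voxel patch size are illustrative, not the script's own.

```python
# Sketch of patch-cropping index logic for a 3D volume.
# Function names and default sizes are hypothetical examples.
def patch_starts(length, patch, stride):
    """Start offsets along one axis so patches of size `patch` cover [0, length)."""
    if length <= patch:
        return [0]
    starts = list(range(0, length - patch + 1, stride))
    if starts[-1] != length - patch:   # make sure the border is covered
        starts.append(length - patch)
    return starts

def crop_indices(shape, patch=(32, 32, 32), stride=(16, 16, 16)):
    """All (z, y, x) corners of overlapping patches for a volume of `shape`."""
    return [(z, y, x)
            for z in patch_starts(shape[0], patch[0], stride[0])
            for y in patch_starts(shape[1], patch[1], stride[1])
            for x in patch_starts(shape[2], patch[2], stride[2])]

print(len(crop_indices((64, 64, 64))))  # 3 starts per axis -> 27 patches
```

With a SimpleITK image you would convert to an array first (e.g. `sitk.GetArrayFromImage(sitk.ReadImage(path))`) and slice it at these corners.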
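The advice to make the last layer's receptive field as large as the input can be checked with a short calculation. A minimal sketch (not part of the repository), using the standard recurrence for stacked convolution/pooling layers; the example layer stack is hypothetical.

```python
# Receptive field of the last layer, computed layer by layer.
# Each layer is (kernel_size, stride); dilation is assumed to be 1.
def receptive_field(layers):
    rf, jump = 1, 1           # rf: receptive field; jump: cumulative stride
    for k, s in layers:
        rf += (k - 1) * jump  # each layer widens the field by (k-1)*jump
        jump *= s
    return rf

# Hypothetical stack: 3x3 convs with stride-2 poolings in between.
layers = [(3, 1), (2, 2), (3, 1), (2, 2), (3, 1)]
print(receptive_field(layers))  # 18
```

So if the input patch were 18 voxels wide, this hypothetical stack would be deep enough by the rule of thumb above.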