Multimedia information indexing and retrieval is increasingly penetrating a domain of great societal importance: healthcare. Feature-based classification approaches developed for medical image classification in computer-aided diagnosis borrow feature engineering and cascaded classification techniques from classical CBIR. Deep learning classifiers, extensively studied for concept recognition in multimedia data and for image and video understanding, are now applied to predict patient categories on the basis of physiological parameters such as gaze fixations. Information fusion approaches, necessary for understanding and content-based indexing of high-dimensional multimedia data, are used to fuse different modalities in medical image recognition. Video analysis and summarization approaches are being developed for automatic visual reporting in surgery. Similarly, content analysis and retrieval in archived surgical video is becoming increasingly important, providing the basis for later use of these valuable data in scenarios such as case comparison/similarity search, teaching of new operation techniques, and quality control/error inspection. Finally, multimedia today is increasingly multimodal: not only do image, video, text, and sound modalities supply information, but, specifically in medical and healthcare applications, a large variety of sensors measuring context or physiological parameters are also deployed. Future multimedia will be multimodal, and this is happening first in the healthcare domain.
We are now looking for papers (6 pages, IEEE style) for this exciting special session (deadline: February 1, 2016).
More information can be found here.