
The segmentation of medical scans (CT, MRI, etc.) and the subsequent identification of key features therein, such as organs and tumours, is an important precursor to many medical imaging applications. It is a difficult problem, not least because of the extent to which the shapes of organs can vary from one image to the next. One interesting approach is to start by partitioning the image into a region hierarchy, in which each node represents a contiguous region of the image. This is a well-known approach in the literature: the resulting hierarchy is variously referred to as a partition tree, an image tree, or a semantic segmentation tree. Such trees summarise the image information in a helpful way, and allow efficient searches for regions which satisfy certain criteria. However, once built, the hierarchy tends to be static, making the results very dependent on the initial tree construction process (which, in the case of medical images, is done independently of any anatomical knowledge we might wish to bring to bear). In this paper, we describe our approach to the automatic feature identification problem, in particular explaining why modifying the hierarchy at a later stage can be useful, and how it can be achieved. We illustrate the efficacy of our method with some preliminary results showing the automatic identification of ribs. Copyright 2008 ACM.
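The region-hierarchy idea described above can be sketched as a small tree structure in which each node owns a contiguous set of pixels and parents are formed by merging children. The sketch below is purely illustrative, assuming a toy 2×2 image; the class and function names are hypothetical and do not reflect the paper's actual data structures or tree-construction algorithm.

```python
# Illustrative sketch of a partition tree (hypothetical names, not the
# paper's implementation): nodes hold pixel sets, parents merge children,
# and a search collects regions satisfying a criterion.

class RegionNode:
    """A node representing a contiguous image region."""
    def __init__(self, pixels, children=None):
        self.pixels = set(pixels)
        self.children = children or []

    @classmethod
    def merge(cls, nodes):
        """Create a parent region covering all child regions."""
        pixels = set().union(*(n.pixels for n in nodes))
        return cls(pixels, children=list(nodes))

def find_regions(root, predicate):
    """Depth-first search for regions satisfying a criterion."""
    matches, stack = [], [root]
    while stack:
        node = stack.pop()
        if predicate(node):
            matches.append(node)
        stack.extend(node.children)
    return matches

# Build a tiny hierarchy over a 2x2 "image": four single-pixel leaves,
# merged pairwise, then merged into a root covering the whole image.
leaves = [RegionNode([(x, y)]) for x in range(2) for y in range(2)]
left = RegionNode.merge(leaves[:2])
right = RegionNode.merge(leaves[2:])
root = RegionNode.merge([left, right])

# Query the hierarchy: all regions of at least two pixels.
big = find_regions(root, lambda n: len(n.pixels) >= 2)
```

Because parents are built by explicit `merge` calls, later modification of the hierarchy (e.g. regrouping children under a new parent, as the paper advocates) amounts to rebuilding a small subtree rather than re-running the whole initial partitioning.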

Original publication

Conference paper

Pages: 432–437