Robust and works well in high-dimensional data sets, e.g. gene expression data.

Results: Here we propose a new robust divisive clustering algorithm, the bisecting k-spatialMedian, based on the statistical spatial depth. A new subcluster selection rule, Relative Average Depth, is also introduced. We demonstrate that the proposed clustering algorithm outperforms the componentwise-median-based bisecting k-median algorithm for high-dimension, low-sample-size (HDLSS) data via applications of both algorithms to two real HDLSS gene expression data sets. When further applied to noisy real data sets, the proposed algorithm compares favorably in terms of robustness with the componentwise-median-based bisecting k-median algorithm.

Conclusion: Statistical data depths provide an alternative way to find the "center" of multivariate data sets and are useful and robust for clustering.

Background

In gene expression studies, the number of samples in most data sets is limited, while the total number of genes assayed easily reaches ten or twenty thousand. Such high-dimension, low-sample-size data arise commonly not only in genomics but also in various other areas of science. In radiology and biomedical imaging, for example, one is typically able to collect far fewer measurements about an image of interest than the number of pixels. These HDLSS data present a substantial challenge to many methods of classical analysis, including cluster analysis. In high-dimensional data, it is not uncommon for many attributes to be irrelevant; in fact, the extraneous data can make identifying clusters very difficult [1].
Robust clustering methods are needed that are resistant to small perturbations of the data and to the inclusion of unrelated variables [2].

BMC Bioinformatics 2007, 8(Suppl 7):S http://www.biomedcentral.com/1471-2105/8/S7/S

The bisecting k-means algorithm is a hybrid of hierarchical clustering and the k-means algorithm. It proceeds top-down, splitting one cluster into two at each step, after which it selects one cluster, based on a selection rule (commonly the cluster with the largest variance), to split further. In each splitting step, it randomly picks a pair of data points that are symmetric about the "center" of the data and assigns every other data point to one cluster or the other based on its distance to the two selected points; in this respect the algorithm resembles the k-means algorithm. The center is usually the mean. The whole process continues until each point is its own cluster or a predefined number of clusters is reached. Like other commonly used mean-based methods, e.g. k-means, bisecting k-means is not robust, because the mean is susceptible to outliers and noise [3]. As a common remedy, the bisecting k-median algorithm, which replaces the mean by the componentwise median, is less sensitive to outliers. However, the componentwise median may be a very poor center representative of the data, because it is calculated separately on each component (dimension) and thus disregards the interdependence among the components. For example, the componentwise median of the points (a, 0, 0), (0, b, 0) and (0, 0, c), for arbitrary reals a, b, c, is (0, 0, 0), which does not even lie on the plane passing through the three points. A new center representative for multivariate data that is robust and takes into account the interdependence among the dimensions is clearly needed. Of the various multivariat.
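The splitting step described above can be sketched in a few lines of NumPy. This is a minimal illustration of one bisection, not the authors' implementation: it uses the componentwise median as the "center", a randomly chosen data point as one seed, and that point's mirror image about the center as the other seed.

```python
import numpy as np

def bisect_once(X, rng=None):
    """One splitting step of a bisecting k-median-style algorithm.

    Picks a random data point and its reflection about the
    componentwise median, then assigns each point to the nearer
    of the two seeds. A minimal sketch for illustration only.
    """
    rng = np.random.default_rng() if rng is None else rng
    center = np.median(X, axis=0)       # componentwise median as "center"
    seed = X[rng.integers(len(X))]      # a random data point
    mirror = 2 * center - seed          # symmetric point about the center
    d_seed = np.linalg.norm(X - seed, axis=1)
    d_mirror = np.linalg.norm(X - mirror, axis=1)
    return (d_mirror < d_seed).astype(int)

# Two well-separated blobs are split cleanly by a single bisection.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (20, 3)),
               rng.normal(10, 0.5, (20, 3))])
labels = bisect_once(X, rng)
```

In the full algorithm this step would be applied repeatedly, each time to the subcluster chosen by the selection rule, until the desired number of clusters is reached.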
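The counterexample above can be checked numerically. In the sketch below the spatial (geometric) median, the point minimizing the sum of Euclidean distances to the data, is computed with Weiszfeld's iteration; this is a standard scheme chosen here for illustration, not necessarily the computation used in the paper.

```python
import numpy as np

def spatial_median(X, n_iter=200, eps=1e-9):
    """Spatial (geometric) median via Weiszfeld's iteration."""
    y = X.mean(axis=0)                  # start from the mean
    for _ in range(n_iter):
        d = np.linalg.norm(X - y, axis=1)
        d = np.maximum(d, eps)          # guard against division by zero
        w = 1.0 / d
        y = (w[:, None] * X).sum(axis=0) / w.sum()
    return y

# The three points from the text, with a = b = c = 3.
X = np.array([[3.0, 0, 0], [0, 3.0, 0], [0, 0, 3.0]])
cw = np.median(X, axis=0)
sm = spatial_median(X)
print(cw)   # componentwise median: [0. 0. 0.] -- off the plane x+y+z = 3
print(sm)   # spatial median: lies on the plane through the three points
```

Unlike the componentwise median, the spatial median here (by symmetry, (1, 1, 1)) satisfies x + y + z = 3 and so lies on the plane through the three points, reflecting the interdependence among the coordinates.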