Most existing person re-identification methods follow a supervised learning framework and train models on a large number of labeled pedestrian images. However, directly deploying these trained models in real-world scenes leads to poor performance, because the target-domain data may differ substantially from the training data, so the learned model parameters do not fit well. Furthermore, labeling a large amount of data is time-consuming and impractical. To address these problems, we propose a simple and effective key-part-based segmentation strategy to obtain discriminative appearance features. Simultaneously, we construct a Gaussian mixture model over the time differences of pedestrian groups to acquire spatiotemporal features. Finally, a measure fusion model combines the appearance distance matrix and the spatiotemporal distance matrix, which greatly improves the performance of unsupervised person re-identification. We conduct extensive experiments on the large-scale image datasets Market-1501 and DukeMTMC-reID. The experimental results demonstrate that our algorithm is superior to state-of-the-art unsupervised re-identification approaches.
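The abstract does not specify how the two distance matrices are combined. As a minimal sketch, assuming a simple weighted sum of min-max-normalized matrices (the function name `fuse_distances` and the weight `alpha` are hypothetical, not from the paper), the fusion step could look like:

```python
import numpy as np

def fuse_distances(d_app, d_st, alpha=0.5):
    """Fuse an appearance distance matrix with a spatiotemporal one.

    Both matrices are min-max normalized to [0, 1] so their scales are
    comparable, then combined by a weighted sum; `alpha` weights the
    appearance cue. This is an illustrative assumption, not the paper's
    exact fusion model.
    """
    def normalize(d):
        rng = d.max() - d.min()
        return (d - d.min()) / rng if rng > 0 else np.zeros_like(d)
    return alpha * normalize(d_app) + (1 - alpha) * normalize(d_st)

# Toy example: distances from 3 queries to 3 gallery identities.
d_app = np.array([[0.1, 0.9, 0.4],
                  [0.8, 0.2, 0.6],
                  [0.5, 0.7, 0.1]])
d_st = np.array([[0.2, 0.8, 0.5],
                 [0.9, 0.1, 0.7],
                 [0.4, 0.6, 0.2]])
fused = fuse_distances(d_app, d_st, alpha=0.6)
# Ranking for query 0: gallery indices sorted by fused distance.
print(np.argsort(fused[0]))  # → [0 2 1]
```

Retrieval then ranks gallery candidates for each query by the fused distance, so a visually similar candidate that is implausible in space and time is pushed down the ranking.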