In this work, a new approach to gesture recognition based on the properties of the Spherical Self-Organizing Map (SSOM) is investigated. The bounded mapping of data onto an SSOM provides a powerful tool not only for visualization but also for modeling the spatiotemporal information in gesture data. The SSOM automatically decomposes a variety of gestures into a set of distinct postures and organizes this set into a spatial map that preserves associations between postures, upon which we formalize the notion of a gesture as a trajectory through the learned posture space. Trajectories from different gestures may share postures; however, the path traversed through posture space is relatively unique, and the variations in posture transitions occurring within a gesture trajectory are used to classify new, unknown gestures. Four mechanisms for detecting the trajectory of an unknown gesture are proposed and evaluated on two data sets, involving both hand gestures (a public sign language database) and full-body gestures (a Microsoft Kinect database collected in-house), demonstrating the effectiveness of the proposed approach.
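The core idea of encoding a gesture as a trajectory through a learned posture space can be sketched as follows. This is a minimal, hypothetical illustration: the posture prototypes here are a toy stand-in for trained SSOM node weights, and all names and values are assumptions for the example, not taken from the paper.

```python
# Hypothetical sketch: a gesture (sequence of feature frames) becomes a
# trajectory of best-matching posture nodes. The prototypes below are a
# toy stand-in for trained SSOM node weights.

def best_matching_node(frame, prototypes):
    """Index of the prototype closest (squared Euclidean) to this frame."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(prototypes)), key=lambda i: dist2(frame, prototypes[i]))

def gesture_trajectory(frames, prototypes):
    """Map each frame to its best-matching node, collapsing consecutive
    repeats, so a gesture becomes a sequence of posture transitions."""
    traj = []
    for frame in frames:
        node = best_matching_node(frame, prototypes)
        if not traj or traj[-1] != node:
            traj.append(node)
    return traj

# Toy posture prototypes (2-D features per posture).
prototypes = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

# A gesture: noisy frames sweeping through postures 0 -> 1 -> 2.
frames = [(0.1, 0.0), (0.2, 0.1), (0.9, 0.1),
          (1.1, 0.0), (0.9, 0.9), (1.0, 1.1)]
print(gesture_trajectory(frames, prototypes))  # -> [0, 1, 2]
```

Two gestures that pass through some of the same postures would still yield different node sequences, which is what makes the trajectory, rather than any single posture, the discriminative signature.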