Most interactive user interfaces (UIs) for virtual reality (VR) applications follow the traditional eye-centred design principle, which primarily considers the user's visual search efficiency and comfort while giving relatively little attention to hand operation performance and ergonomics. As a result, hand interaction in VR is often criticized as inefficient and imprecise. In this paper, we hypothesize that features of the user's arm movement, such as the choice of hand and the hand interaction position, influence interaction performance in VR. To verify this, we conducted a free-hand target selection experiment with 24 participants. The results showed that (a) hand choice had a significant effect on target selection: with the left hand, targets located in the left region of space were selected more efficiently and accurately than those in the right region, whereas with the right hand the result was reversed, and (b) free-hand interactions at lower positions were more efficient and accurate than those at higher positions. Based on these findings, we propose a hand-adaptive UI technique to improve free-hand interaction performance in VR. A comprehensive comparison between the hand-adaptive UI and a traditional eye-centred UI showed that the hand-adaptive UI yielded higher interaction efficiency, lower physical exertion, and lower perceived task difficulty than the traditional UI.
With the exponential growth of user-generated content, policies and guidelines are not always enforced on social media, resulting in the prevalence of deviant content that violates them. The adverse effects of deviant content are devastating and far-reaching. However, detecting deviant content in sparse and imbalanced textual data is challenging: a large number of stakeholders with different stances are involved, and the subtle linguistic cues depend heavily on complex context. To address this problem, we propose a multi-view attention-based deep learning system that combines random subspace sampling with binary particle swarm optimization (RS-BPSO) to distill content of interest (candidates) from imbalanced data, and applies context and view attention mechanisms in a convolutional neural network (dubbed SSCNN) to extract structural and semantic features. We evaluate the proposed approach on a large-scale dataset collected from Facebook and find that RS-BPSO detects whether content is associated with marijuana with an accuracy of 87.55%, while SSCNN outperforms the baselines with an accuracy of 94.50%.
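The RS-BPSO component searches over random feature subspaces with binary particle swarm optimization. As a rough illustration of the underlying search, the sketch below implements a generic binary PSO with a sigmoid transfer function over bit masks; the function names, parameter values, and the toy fitness used in the example are assumptions for illustration, not the paper's actual implementation (where the fitness would be, e.g., a classifier's validation score on the selected subspace).

```python
import math
import random

def binary_pso(fitness, n_bits, n_particles=20, n_iters=50,
               w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal binary PSO: each particle is a bit mask over features/subspaces.

    `fitness` maps a list of 0/1 bits to a score to be maximized.
    Returns the best mask found and its score.
    """
    rng = random.Random(seed)
    pos = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    vel = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best mask so far
    pbest_fit = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]  # swarm-wide best

    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(n_bits):
                # Standard velocity update: inertia + cognitive + social terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                # Sigmoid transfer: velocity sets the probability that the bit is 1.
                pos[i][d] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-vel[i][d])) else 0
            f = fitness(pos[i])
            if f > pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], f
                if f > gbest_fit:
                    gbest, gbest_fit = pos[i][:], f
    return gbest, gbest_fit

# Toy usage: recover a hypothetical "good" feature mask by rewarding bit agreement.
target = [1, 0, 1, 1, 0, 0, 1, 0]
best, score = binary_pso(lambda bits: sum(b == t for b, t in zip(bits, target)),
                         n_bits=len(target))
```

In the random-subspace setting, each bit would toggle one sampled feature subspace, and the fitness would train a base learner on the selected subspaces and return its score on held-out data.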