Powerful and affordable sensors such as high-definition cameras, together with the corresponding recognition software, e.g. for face and motion recognition, have become readily available. However, designing user interfaces for robots, phones and computers that support the kind of seamless, intuitive, and seemingly effortless communication found between humans remains highly challenging. This has shifted the focus from developing ever faster and higher-resolution sensors to interpreting the available sensor data in order to understand social signals and recognise users' intentions.

Psychologists, ethnologists, linguists and sociologists have investigated social behaviour in human-human interaction, but their findings are rarely applied in the human-robot interaction domain. Instead, robot designers tend to rely on either proof-of-concept or machine-learning-based methods. In the proof-of-concept approach, developers effectively demonstrate that users are able to adapt to robots deployed in public spaces. Typically, an initial period of collecting human-robot interaction data is used to identify frequently occurring problems, which are then addressed by adjusting the interaction policies on the basis of the collected data. However, the updated policies are strongly biased by the initial design of the robot and might not reflect natural, spontaneous user behaviour. In the machine-learning approach, learning algorithms are used to find a mapping between the sensor data space and a hypothesised or estimated set of intentions. However, this brute-force approach ignores the possibility that some signals or modalities are superfluous or even disruptive for intention recognition, and it is very sensitive to peculiarities of the training data. In sum, neither method can reliably support natural interaction, as both crucially depend on an accurate model of human intention recognition. Approaches to social robotics from engineers and computer scientists therefore urgently need to be informed by studies of intention recognition in natural human-human communication.

Combining the investigation of natural human behaviour with the design of computer and robot interfaces can significantly improve the usability of modern technology. For example, robots will be easier for a broad public to use if they can interpret the social signals that users spontaneously produce to convey their intentions anyway. If a system correctly identifies and even anticipates the user's intention, the user will perceive that it truly understands her/his needs. Conversely, if a robot produces socially appropriate signals, its users will find it easier to understand the robot's intentions. Furthermore, grounding the control of robots and other devices in studies of natural behaviour results in greater robustness, responsiveness and approachability.

We therefore welcome submissions that
(a) investigate how relevant social signals can be identified in human behaviour,
(b) investigate the meaning of social signals in a specific context or task,
(c) identify the minimal set of intentions needed to describe a context or task,
(d) demonstrate how insights from the analysis of social behaviour can improve a robot's capabilities, or
(e) demonstrate how a robot can make itself more understandable to the user by producing more human-like social signals.