The book’s third part details the key activities relevant to the ontology engineering life cycle. For each activity, a general introduction, methodological guidelines, and practical examples are provided.
In particular, it discusses the most successful application of UFO, namely, the development of the conceptual modeling language OntoUML. The paper also discusses a number of methodological and computational tools that have been developed over the years to support the OntoUML community. Examples of these methodological tools include ontological patterns and anti-patterns; examples of these computational tools include automated support for pattern-based model construction, formal model verification, formal model validation via visual simulation, model verbalization, code generation, and anti-pattern detection and rectification. In addition, the paper reports on a variety of applications in which the language and its associated tools have been employed to engineer models in several institutional contexts and domains. Finally, it reflects on some of the lessons learned from observing how OntoUML has actually been used in practice by its community, and on how these lessons have influenced both the evolution of the language and the advancement of some of the core ontological notions in UFO.
In addition, new approaches to interacting with multimedia applications have emerged, such as multi-touch interfaces, voice processing, and brain-computer interfaces, giving rise to new kinds of complex interactive systems. In this article, we pinpoint fundamental challenges in delivering multisensory effects to heterogeneous systems. We propose an interoperable mulsemedia framework that copes with these challenges and meets the emerging requirements.
Stimulations by vibration effects, however, generate more satisfaction in people with a high tactile perception level or a low visual perception level. Mulsemedia – multiple sensorial media – captures a wide variety of research efforts and applications. This paper presents a historical perspective on mulsemedia work and reviews current developments in the area. These take place across the traditional multimedia spectrum – from virtual reality applications to computer games – as well as in efforts in the arts, gastronomy, and therapy, to mention a few. We also describe standardization efforts via the MPEG-V standard, and identify future developments and exciting challenges the community needs to overcome.
The specific attention paid to the quality that customers perceive through their senses when touching a product has led to rapid growth in industrial interest in the field of haptics. Controlling the quality of products against such expectations has become a challenge for manufacturers, especially given the current lack of a generic method to standardize control specifications and provide efficient control tools, whether manual or automated control is considered.
We demonstrated the usefulness of our ontology model through scenario-based experiments. Software Engineering (SE) is a wide domain in which ontologies are useful instruments for dealing with Knowledge Management (KM) related problems.
Among the five primary senses, the sense of taste is the least explored as a form of digital media in human-computer interaction. This article presents an experimental instrument, the Digital Lollipop, for digitally simulating the sensation of taste (gustation) by applying electrical stimulation to the human tongue.
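Devices of this kind typically evoke different taste sensations by varying the stimulation current and pulse frequency applied to the tongue. The sketch below illustrates that idea as a simple parameter lookup; the taste labels follow the electrical taste-stimulation literature, but the numeric current and frequency values are placeholder assumptions, not the Digital Lollipop's published calibration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StimulationParams:
    current_uA: float    # stimulation current in microamperes (illustrative)
    frequency_hz: float  # pulse frequency in hertz (illustrative)

# Hypothetical lookup table: taste sensation -> stimulation parameters.
TASTE_PROFILES = {
    "sour":   StimulationParams(current_uA=120.0, frequency_hz=800.0),
    "salty":  StimulationParams(current_uA=40.0,  frequency_hz=50.0),
    "bitter": StimulationParams(current_uA=80.0,  frequency_hz=100.0),
}

def params_for(taste: str) -> StimulationParams:
    """Return stimulation parameters for a requested taste sensation."""
    try:
        return TASTE_PROFILES[taste]
    except KeyError:
        raise ValueError(f"unsupported taste: {taste!r}") from None
```

A real device would additionally bound these parameters for safety and calibrate them per user, since sensitivity to electrical stimulation varies between individuals.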
An experimental method was used to model the influence of exploration on perception, considering the application case. MulSeMedia refers to the combination of traditional media (e.g., text, image, and video) with other objects that aim to stimulate other human senses via mechanoreceptors, chemoreceptors, and thermoreceptors. Existing solutions embed the control of actuators in the applications, thus limiting their reuse in other types of applications or with different media players. This work presents PlaySEM, a platform that takes a new approach to simulating and rendering sensory effects: it operates independently of any media player, is compatible with the MPEG-V standard, and takes the reuse requirement into account. Conjectures regarding this architecture are tested, focusing on the decoupled operation of the renderer.
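The decoupling described above can be sketched as a renderer that only receives effect events and dispatches them to registered actuator drivers, so the media player needs no knowledge of the hardware. This is an illustrative sketch, not the actual PlaySEM protocol: the event format (simple type/attribute pairs rather than full MPEG-V Sensory Effect Metadata) and the device names are assumptions.

```python
from typing import Callable, Dict, List, Tuple

class SensoryEffectRenderer:
    """Player-agnostic renderer: dispatches effect events to actuator drivers."""

    def __init__(self) -> None:
        self._drivers: Dict[str, Callable[[dict], None]] = {}
        self.log: List[Tuple[str, dict]] = []  # rendered effects, for inspection

    def register(self, effect_type: str, driver: Callable[[dict], None]) -> None:
        """Attach a driver for one effect type (e.g. 'wind' -> fan controller)."""
        self._drivers[effect_type] = driver

    def render(self, effect_type: str, attributes: dict) -> None:
        """Dispatch one effect event to its driver; unknown types are dropped."""
        driver = self._drivers.get(effect_type)
        if driver is not None:
            driver(attributes)
            self.log.append((effect_type, attributes))

# Any media player can feed this renderer; swapping players changes nothing here.
renderer = SensoryEffectRenderer()
renderer.register("wind", lambda attr: None)       # stand-in for a fan driver
renderer.register("vibration", lambda attr: None)  # stand-in for a rumble device

events = [
    ("wind", {"intensity": 0.5}),
    ("vibration", {"intensity": 0.8}),
    ("scent", {"kind": "rose"}),  # no driver registered: silently dropped
]
for effect_type, attributes in events:
    renderer.render(effect_type, attributes)
```

Because actuator control lives entirely behind `register`, the same renderer can serve different applications and media players, which is the reuse property the platform targets.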
The use of multiple senses in interactive applications has become increasingly feasible due to the upsurge of commercial, off-the-shelf devices to produce sensory effects. Creating Multiple Sensorial Media (MulSeMedia) immersive systems requires understanding their digital ecosystem.
The fourth part then presents a detailed overview of the NeOn Toolkit and its plug-ins. Lastly, case studies from the pharmaceutical and fishery domains round out the work. The Semantic Web is characterized by the existence of a very large number of distributed semantic resources, which together define a network of ontologies. These ontologies are in turn interlinked through a variety of meta-relationships, such as versioning, inclusion, and many more.
No standardized methodology exists for conducting subjective quality assessments of multisensorial media applications. To date, researchers have applied different aspects of audiovisual standards to assess user QoE of multisensorial media applications, and thus a fragmented approach exists. In this article, the authors highlight issues researchers face from numerous perspectives, including the applicability (or lack thereof) of existing audiovisual standards for evaluating user QoE, the lack of result comparability due to varying approaches, the specific requirements of olfactory-based multisensorial media applications, and the novelty associated with these applications. Finally, based on the diverse approaches in the literature and the collective experience of the authors, this article provides a tutorial and recommendations on the key steps for conducting olfactory-based multisensorial media QoE evaluation.
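A core aggregation step borrowed from audiovisual QoE standards is summarizing per-participant ratings on a 5-point Absolute Category Rating scale (1 = bad, 5 = excellent) as a Mean Opinion Score with a confidence interval. The sketch below shows that computation; the ratings are made-up illustration data, not results from any study, and the interval uses a simple normal approximation.

```python
import math
from statistics import mean, stdev

def mos_with_ci(ratings: list) -> tuple:
    """Return (MOS, half-width of a normal-approximation 95% CI) for ACR ratings."""
    if any(r < 1 or r > 5 for r in ratings):
        raise ValueError("ACR ratings must lie in 1..5")
    m = mean(ratings)
    half_width = 1.96 * stdev(ratings) / math.sqrt(len(ratings))
    return m, half_width

# Illustrative ratings from eight hypothetical participants for one stimulus.
mos, ci = mos_with_ci([4, 5, 3, 4, 4, 5, 3, 4])
# mos -> 4.0, ci -> about 0.52, reported as "MOS 4.0 +/- 0.52"
```

Reporting the confidence interval alongside the MOS is what makes results comparable across studies, which is precisely the comparability problem the fragmented approaches above create.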
We present two realizations of TastyFloats, a novel system that uses acoustic levitation to deliver food morsels to the user's tongue. To explore TastyFloats' associated design framework, we first address the technical challenges of successfully levitating and delivering different types of food to the tongue. We then conduct a user study assessing the effect of acoustic levitation on users' taste perception, comparing three basic taste stimuli (sweet, bitter, and umami) and three droplet volumes (5 μL, 10 μL, and 20 μL). Our results show that users perceive sweet and umami easily, even in minimal quantities, whereas bitter is the least detectable taste, despite its typical association with an unpleasant taste experience.
This is followed by a discussion of current technical and design challenges that could support the implementation of this concept. This discussion has informed the VTE framework (VTEf), which integrates different layers of experiences, including the role of each user and the technical challenges involved.