is an example of how to react to changes in the physical
environment. As mentioned, the virtual part of the bridge is shown as
soon as a physical object is detected on the physical part of the
bridge. Thus, Passage needs to keep a representation of the detected
physical objects and the location (esp. bridge) where they have been
sensed (fig. ). This is part of the environment model. Additionally,
the sensors used for detecting physical objects belong to the
environment model as well.</P>
<P CLASS="western" STYLE="margin-bottom: 0.11cm">Besides the physical
environment, other contextual information – such as the current
task, project, or presence of co-workers – could influence the
behavior of the software, so long as this information is available to
the application. This part refers to the <I>logical context</I> of
the application.</P>
<P CLASS="western" STYLE="margin-bottom: 0.11cm">Software with
functionality depending on physical objects and their properties, or
other aspects of the user’s environment (req. ) is called
<I>context-aware </I>[[ContextToolkit-AppDevelopment]]. There is a
strong need for context-aware applications in ubiquitous computing
environments, as the large number of available devices, services, and
tools can be a burden for the user if the complexity for explicit
interaction becomes too high. An environment designed to support the
user's needs must aim at <I>implicit</I> interaction
[[Schmidt-ImplicitHCI]]. This can be accomplished by using changes in
the real world’s state to trigger software functionality.<SUP><A CLASS="sdfootnoteanc" NAME="sdfootnote2anc" HREF="#sdfootnote2sym"><SUP>2</SUP></A></SUP>
Therefore, the environment model must be capable of expressing
relevant information, such as spatial relationships between physical
objects.</P>
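<P CLASS="western" STYLE="margin-bottom: 0.11cm">As a minimal sketch of
such an environment model &ndash; all class and method names below are
illustrative assumptions, not Passage's actual API &ndash; one can keep a
record of which physical objects a sensor has detected at which location
(e.g., a bridge) and notify listeners, so that changes in the real
world's state can trigger software functionality:</P>

```python
# Hypothetical environment-model sketch: tracks detected physical objects
# per location and notifies listeners on changes (implicit interaction).
from dataclasses import dataclass, field

@dataclass(frozen=True)
class PhysicalObject:
    tag_id: str                    # e.g., an ID read from a transponder

@dataclass
class Location:
    name: str                      # e.g., "bridge-1"
    objects: set = field(default_factory=set)

class EnvironmentModel:
    """Keeps detected physical objects and where they were sensed."""

    def __init__(self):
        self.locations = {}
        self.listeners = []        # callbacks triggered by real-world changes

    def location(self, name):
        return self.locations.setdefault(name, Location(name))

    def object_detected(self, tag_id, location_name):
        loc = self.location(location_name)
        obj = PhysicalObject(tag_id)
        loc.objects.add(obj)
        for listener in self.listeners:
            listener("detected", obj, loc)

    def object_removed(self, tag_id, location_name):
        loc = self.location(location_name)
        obj = PhysicalObject(tag_id)
        loc.objects.discard(obj)
        for listener in self.listeners:
            listener("removed", obj, loc)

# A sensor driver would call object_detected(); the virtual part of the
# bridge would register a listener and update its display accordingly.
env = EnvironmentModel()
events = []
env.listeners.append(lambda kind, obj, loc: events.append((kind, obj.tag_id, loc.name)))
env.object_detected("tag-42", "bridge-1")
```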
<H3 CLASS="western"><A NAME="sInteractionModelConcept"></A>Interaction
Model: Presentation and Interaction</H3>
<P CLASS="western" STYLE="margin-bottom: 0.11cm">To support
different styles of interaction (req. , ), the interaction
model specifies how such styles can be defined. The
term used here describes a part of the software architecture, and
should not be confused with the “interaction model”
describing the “look and feel” of a user interface at a
conceptual level as defined by [ :inline |
[BeaudouinLafon-PostWIMPModel]]. Instead, it is a generalized view of
the “interaction model” described by [ :inline |
[Suite-CouplingUIs]].</P>
<P CLASS="western" STYLE="margin-bottom: 0.11cm">As shown in figure
4-2.2, the interaction model defines a way to interact with all other
basic models. This is necessary, as all models can define aspects and
functions that can be represented for and accessed by the user. For
example, a data object like a “text” object often has a
directly attached view and controller, enabling direct interaction
with the text; in that case, the interaction and data models communicate
directly, bypassing the user interface and application models.
Alternatively, a &ldquo;visual interaction area&rdquo;, which is part of
the user interface model, provides functionality that has an immediate visual
representation rendered by the interaction model. In other cases, the
interaction model will not access the data model directly. Instead,
it is associated with an appropriate application model as a mediator
to the data model. This way, the interaction style can be adapted
depending on which application model is used to access a data model.</P>
<P CLASS="western" STYLE="margin-bottom: 0.11cm">As an appropriate
interaction style depends on the available interaction devices and
the associated user interface, a suitable interaction model can be
chosen depending on the environment and user-interface model. For
visual interaction, an adapted version of the
model-view-controller concept [[MVC-Cookbook],
[COAST-ooSyncGroupware]] has proven successful. However, the “model”
of the model-view-controller concept is not further structured: it
can refer to any of the data, application, user interface, or
environment models.</P>
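<P CLASS="western" STYLE="margin-bottom: 0.11cm">The two access paths
described above &ndash; the view/controller pair talking to the data model
directly, or reaching it through an application model that adapts the
interaction style &ndash; can be sketched as follows. The names are
invented for illustration and are not taken from Passage:</P>

```python
# Sketch of the adapted model-view-controller idea: the view's "model" slot
# is not further structured and may hold a data model directly, or an
# application model mediating access to it.

class TextData:
    """Data model: plain shared text, observed by views."""
    def __init__(self, text=""):
        self.text = text
        self.views = []
    def set_text(self, text):
        self.text = text
        for view in self.views:
            view.update()

class UppercaseEditor:
    """Application model mediating access to the data model; here it
    adapts the interaction style by enforcing upper-case input."""
    def __init__(self, data):
        self.data = data
    def type_text(self, text):
        self.data.set_text(text.upper())

class TextView:
    """View (with implicit controller) of the interaction model."""
    def __init__(self, data):
        self.data = data
        self.rendered = ""
        data.views.append(self)
        self.update()
    def update(self):
        self.rendered = self.data.text

data = TextData("hello")
view = TextView(data)

# Direct interaction: the interaction and data models communicate directly.
data.set_text("hi")
assert view.rendered == "hi"

# Mediated interaction: the same data model, accessed through an
# application model that adapts the interaction style.
editor = UppercaseEditor(data)
editor.type_text("shared note")
assert view.rendered == "SHARED NOTE"
```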
<P CLASS="western" STYLE="margin-bottom: 0.11cm">Passage defines an
interactive visual representation (for the virtual part of the
bridge) and physical actions as input (placing objects on the
physical part of the bridge). Consequently, its interaction model
uses both a visual interaction model (see section ) and a sensor
model providing the basis for detecting physical objects (see section
).</P>
<H2 CLASS="western"><A NAME="sConceptualSharing"></A>2.3 Second
Dimension: Coupling and Sharing</H2>
<P CLASS="western" STYLE="margin-bottom: 0.11cm">Whenever multiple
devices are involved in a software system, the question arises which
parts of the system should be local to a single device and which should be
shared among several. This has to be clarified for both the application code and
its state. While <I>distributing code</I> among devices is a
technical question unique to every application, <I>sharing state</I>
has conceptual implications, which this section addresses.</P>
<P CLASS="western" STYLE="margin-bottom: 0.11cm">Today, many
applications still run entirely local to a single computer, or access
only data that is distributed over a network. Aiming at synchronous
collaboration, crucial aspects of traditional CSCW systems are <I>access
to shared data</I> and <I>coupling the applications </I>of
collaborating users [[Suite-CouplingUIs]]. Therefore, coupling has to
be applied to both the data and the application model
[[COAST-Model]].</P>
<P CLASS="western" STYLE="margin-bottom: 0.11cm">In the context of
ubiquitous computing environments, this view has to be extended. In
addition to data and application state, information about the physical
environment, e.g., the presence of nearby users or other available
interaction devices, also has to be exchanged between different devices and
applications.</P>
<P CLASS="western" STYLE="margin-bottom: 0.11cm">As discussed above,
in a ubiquitous computing environment elements of the user interface
can be distributed among several machines (req. ) or among different
devices (req. ). Based on the separation of concerns that has been
previously identified, Dewan’s definition of coupling
[[Dewan-FlexibleUICoupling]] can be refined. Coupling can now be
defined as <I>sharing the same interaction, user interface, or
editing (application) state</I> among several users or devices.
Coupling can thus simply be implemented as accessing the same user
interface or application model. This is an important benefit of using
shared user interface and application models.</P>
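<P CLASS="western" STYLE="margin-bottom: 0.11cm">This refined notion of
coupling &ndash; two clients are coupled exactly when they access the same
model instance &ndash; can be sketched as follows (the class names and the
choice of a text selection as editing state are assumptions made for the
example):</P>

```python
# Sketch of coupling as "accessing the same model": clients that reference
# the same application-model object share editing state (tight coupling);
# clients with private application models over the same data model share
# only the data (loose coupling).

class DataModel:
    def __init__(self):
        self.text = ""

class ApplicationModel:
    """Holds editing state (here: a selection) on top of a data model."""
    def __init__(self, data):
        self.data = data
        self.selection = (0, 0)

shared_data = DataModel()

# Tight coupling: both clients use the *same* application model instance.
shared_app = ApplicationModel(shared_data)
client_a, client_b = shared_app, shared_app
client_a.selection = (2, 5)
assert client_b.selection == (2, 5)   # editing state is shared

# Loose coupling: each client has its own application model; only the
# data model is shared.
app_a = ApplicationModel(shared_data)
app_b = ApplicationModel(shared_data)
app_a.selection = (2, 5)
app_a.data.text = "hello"
assert app_b.selection == (0, 0)      # editing state stays private
assert app_b.data.text == "hello"     # data changes are visible to both
```

<P CLASS="western" STYLE="margin-bottom: 0.11cm">The degree of coupling
then follows from which models are instantiated per client and which are
shared.</P>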
<P CLASS="western" STYLE="margin-bottom: 0.11cm">Depending on how
much state is shared, the <I>degree of coupling</I> can be
controlled. If all involved user interface and editing state is
shared, a tightly coupled collaboration mode is realized; if only the
same data model is shared, users work loosely coupled (req. ). This
is related to the coupling model described in [[Suite-CouplingUIs]].</P>
<P CLASS="western" STYLE="margin-bottom: 0.11cm">Even if some models
are not coupled, one can still profit from sharing environment, user
interface, and application models. As the information encapsulated in
the models is accessible to all clients, it is possible to provide
<I>awareness information</I> in the user interface. CSCW applications
typically provide workspace or activity awareness
[[GroupKit-AwarenessWidgets], [Interlocus-ActivityAwareness]]. This
can easily be realized if the application model including all editing
state is shared [[COAST-Model]]. While tightly coupling the user