INTELLIGENT MOBILE MULTI-ROBOTIC SYSTEMS: SOME
CHALLENGES AND POSSIBLE SOLUTIONS
Flávio S. Corrêa da Silva, Renata Wassermann, Ana Cristina V. Melo,
Leliane N. Barros, Marcelo Finger
Dept. of Computer Science, University of São Paulo
Rua do Matão, 1010, São Paulo, BRAZIL
Keywords:
Distributed multi-agent robotic systems, software for robotic agents, robotic surveillance and security.
Abstract:
Intelligent mobile multi-robotic systems (IMMRSs) are coordinated systems of autonomous mobile robots
endowed with reasoning capabilities. This sort of system requires the
integrated application of a variety of state-of-the-art techniques developed
within the realm of Artificial Intelligence, and in turn instigates the further
development of different specialisations of Artificial Intelligence. In the present article we examine some
of these techniques and specialisations, discuss some specific challenges proposed to the field of Artificial
Intelligence by IMMRSs, and suggest possible solutions to these challenges. In order to make our presentation
more concrete, we employ throughout the article a specific example of IMMRS application, namely security
surveillance of an empty building by a team of robots.
1 INTRODUCTION
Intelligent mobile multi-robotic systems (IMMRSs)
are coordinated systems of autonomous mobile robots
endowed with reasoning capabilities. These systems
require the integrated application of a variety of state-
of-the-art techniques developed within the realm of
Artificial Intelligence (AI), and in turn instigate the
further development of different specialisations of AI.
In the present article we examine some of these
techniques and specialisations, discuss some specific
challenges proposed to AI by IMMRSs, and sug-
gest possible solutions to these challenges. In order
to make our presentation more concrete, we employ
throughout the article a specific example of IMMRS
application, namely security surveillance of an empty
building by a team of robots (Gerkey et al., 2004):
Given an environment, modelled as a connected polygonal
free space, a fixed number of searchers – autonomous mo-
bile robots equipped with cameras, each camera having a
fixed angular aperture – and an unknown number of evaders
– autonomous entities which can move arbitrarily fast – de-
termine trajectories for each of the searchers so that the de-
tection of all evaders is guaranteed.
This problem has high computational complex-
ity with respect to the complexity of the environ-
ment (e.g. characterised by the number of edges of
the polygon that comprises it) and the number of
searchers. Moreover, determining the optimal (i.e.
minimum) quantity of searchers given an environment
is NP-hard (Gerkey et al., 2004).
This problem, however, does not take into account
a few parameters that can be important to improve its
accuracy for practical applications. In the following
sections we analyse this problem under the light of
different specialisations of AI, which provide us with
conceptual tools to refine its description. The present
work can be thus regarded as a refinement of the work
presented in (Gerkey et al., 2004), aiming at taking it
from foundational research that provides some math-
ematically well founded guidelines for IMMRS to a
de facto applied work that can effectively be used to
build IMMRS.
In section 2 we add to the model the inherent un-
certainties of the estimates of where a searcher and an
evader are at a given moment. In section 3 we con-
sider the fact that sometimes the sensed environment
and its corresponding model may disagree, requir-
ing a revision of the model of the environment. The
refinements of multi-robotic models that result from
taking into account the features discussed in these two
sections can lead to high consumption of computa-
tional resources. In section 4 we focus on the effi-
ciency of inference systems, to counterbalance this
consumption of resources. In section 5 we consider
the possibility of the searchers communicating with
S. Corrêa da Silva F., Wassermann R., Cristina V. Melo A., N. Barros L. and Finger M. (2005).
INTELLIGENT MOBILE MULTI-ROBOTIC SYSTEMS: SOME CHALLENGES AND POSSIBLE SOLUTIONS.
In Proceedings of the Second International Conference on Informatics in Control, Automation and Robotics - Robotics and Automation, pages 479-485
DOI: 10.5220/0001190904790485
Copyright © SciTePress
each other, to act in a coordinated fashion, exchang-
ing not only data but also functionalities. In section
6 we discuss research on high level languages to rep-
resent and reason about robotic actions, plans and ca-
pabilities. Finally, in section 7 we present some final
discussion and proposed future work.
2 MANAGEMENT OF
UNCERTAIN LOCATION
ESTIMATES
In (Gerkey et al., 2004) the information about the po-
sitions of the searchers and the sensed positions of
the evaders is assumed to be perfectly certain, thus
not taking into account the imprecision of sensors.
Among the many fields within AI to which uncertain
reasoning is relevant, IMMRSs are perhaps the most
prominent, due to the impossibility of idealising the
environment which the designed reasoning agents
(autonomous robots in this case) shall inhabit (Gasos
and Saffiotti, 1999; Saffiotti, 1999).
Efficient probabilistic reasoning has challenged the
theorists for many years (Dix et al., 2000; Ng and
Subrahmanian, 1992). The usual approach to control
the computational complexity of probabilistic reason-
ing has been to look for special instances of reasoning
problems which present useful probabilistic proper-
ties (Campos and Cozman, 2004; Ide and Cozman,
2004; Rocha and Cozman, 2003).
Let us consider the surveillance problem with a sin-
gle searcher. Differently from the classical formula-
tion of this problem, let us also consider that the ac-
curacy of a sensor varies depending on the positions
of the searcher and the target point in the environment
(e.g. the reliability of the readings of the sensors can
be inversely proportional to the distance separating
the searcher and a point in the environment). Hence,
assuming that positions are characterised by plane co-
ordinates (x, y), if the searcher is at point (x_r, y_r) and
sends a message stating that there is an evader at point
(x, y), we should from this message infer a collection
of probabilities related to points surrounding (x, y),
such that (x_i, y_i) : μ_i should be read as "there is a
probability μ_i that point (x_i, y_i) is occupied by an
evader".
We must rely on the ability of the searcher to locate
itself in the environment, and the message sent by the
searcher in this setting must contain two pairs of co-
ordinates, one for the actual position of the searcher,
(x_r, y_r), and another for the position of the point pos-
sibly occupied by an evader, (x, y). Both pairs of coor-
dinates can be inaccurate, hence if we get a message
from the searcher stating that it is at point (x_r, y_r),
we should from this infer a collection of probabil-
ities related to points surrounding (x_r, y_r), such that
(x_ri, y_ri) : μ_ri should be read as "there is a probabil-
ity μ_ri that the searcher is at point (x_ri, y_ri)".
The composition of these two probability estimates
can provide us with estimates for the probabilities
that there effectively is an evader at certain points
within the building. Assuming that the plane on
which the searcher navigates is organised as a homo-
geneous square grid, the more accurate the sensors
are, the smaller are the areas with significant probabil-
ities given a position (x_r, y_r) and the areas with sig-
nificant probabilities given a suspicious point (x, y).
Thus, the accuracy of the sensors and the coarseness
of the grid determine the amount of data needed to
estimate the probabilities of having evaders at different
points: accurate sensors and coarser grids make for
smaller amounts of data. Hence, given the accuracy of
the sensors of a searcher, one can control the size of
the lists by making the grid squares larger or smaller.
This work is detailed in (Silva, 2005). An applica-
tion to robot navigation for surveillance is also out-
lined in that reference.
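As a concrete sketch of this composition, the following toy model places both probability distributions on a square grid and sums over the possible searcher positions. The Chebyshev neighbourhood and the distance-based weight function are illustrative assumptions of ours, not the error model of (Silva, 2005):

```python
# Sketch: composing an uncertain searcher-position estimate with an
# uncertain evader reading on a square grid. The spread and the weight
# function are illustrative assumptions, not taken from (Silva, 2005).

def neighbourhood(cell, spread):
    """Cells within Chebyshev distance `spread` of `cell`, with weights
    that fall off with distance (a crude stand-in for sensor accuracy),
    normalised to sum to 1."""
    x, y = cell
    weights = {}
    for dx in range(-spread, spread + 1):
        for dy in range(-spread, spread + 1):
            weights[(x + dx, y + dy)] = 1.0 / (1 + abs(dx) + abs(dy))
    total = sum(weights.values())
    return {c: w / total for c, w in weights.items()}

def evader_distribution(reported_searcher, reported_evader, spread=1):
    """P(evader at cell) = sum over candidate searcher cells of
    P(searcher there) * P(evader cell | same offset seen from there)."""
    searcher_dist = neighbourhood(reported_searcher, spread)
    offset = (reported_evader[0] - reported_searcher[0],
              reported_evader[1] - reported_searcher[1])
    evader = {}
    for (sx, sy), ps in searcher_dist.items():
        # the reported offset, re-applied from each candidate searcher cell
        centre = (sx + offset[0], sy + offset[1])
        for cell, pe in neighbourhood(centre, spread).items():
            evader[cell] = evader.get(cell, 0.0) + ps * pe
    return evader

# searcher reports itself at (0, 0) and an evader at (3, 2)
dist = evader_distribution((0, 0), (3, 2))
```

A coarser grid corresponds to fewer, larger cells, which is how the list sizes discussed above are controlled.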
3 EFFICIENT REVISION OF
BELIEFS ABOUT THE
ENVIRONMENT
In (Gerkey et al., 2004) it is assumed that the informa-
tion about the environment is perfectly reliable. Sup-
pose, however, that there are two searchers looking
after an environment. Each searcher receives a map
of the environment, and then the searchers build an
internal model of the environment. As they go around
the physical space, they may observe facts that con-
tradict the information on the map. The robots must
be able to deal with this new information. There are
two main issues involved: (1) is the observation cor-
rect, so that it should be incorporated? (2) If a robot
decides to accept the information, how does it accom-
modate it?
Belief revision (Gärdenfors, 1988; Hansson, 1999)
is the sub-area of AI that deals with these problems.
We are still far from having an acceptable framework
to deal with multiple agents, dynamic worlds and real-
time reasoning. In the present project, we have been
working mainly on the last issue, i.e., how to employ
logical models of belief revision to realistic problems
in which computation time matters.
The main idea is to cope with the high complexity
of logical reasoning by looking for approximate an-
swers. We have extended Cadoli and Schaerf's frame-
work for approximate reasoning (Sch95) in (Finger
and Wassermann, 2004; Finger and Wassermann,
2005) as a possible solution. In previous work
(Was99; Was01; Chopra et al., 2001), we had al-
ready proposed the use of approximate reasoning in
the area of belief revision, but only for the clausal
fragment of propositional logic. Our extension deals
with full propositional logic and has a tableaux-based
proof method that was implemented and tested. We
presented families of anytime reasoners – reasoners
that can be stopped at any time, giving an approximate
answer to a query. If the agent is given more time, the
quality of the approximation improves.
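The anytime flavour can be illustrated with a toy sound-but-incomplete entailment checker based on bounded unit propagation. This is only an illustration of the anytime idea, not the tableaux method of (Finger and Wassermann, 2004); the clause encoding is our own:

```python
# Toy anytime entailment checker: at every round the answer is sound
# but possibly incomplete, and more rounds can only improve it.
# Clauses are frozensets of literals; a literal is 'p' or '-p'.

def negate(lit):
    return lit[1:] if lit.startswith('-') else '-' + lit

def anytime_entails(clauses, query, max_rounds=10):
    """Yield (round, answer): True means 'entailed so far',
    None means 'unknown yet'. Contradictory bases are not handled
    in this sketch."""
    units = {l for c in clauses if len(c) == 1 for l in c}
    rest = [c for c in clauses if len(c) > 1]
    for rnd in range(max_rounds):
        if query in units:
            yield rnd, True
            return
        yield rnd, None
        new_rest = []
        for c in rest:
            # drop literals falsified by the current unit set
            reduced = {l for l in c if negate(l) not in units}
            if len(reduced) == 1:
                units |= reduced        # a new unit is derived
            else:
                new_rest.append(frozenset(reduced))
        rest = new_rest

# p, p -> q, q -> r; the query r becomes provable after one round
answers = list(anytime_entails(
    [frozenset({'p'}), frozenset({'-p', 'q'}), frozenset({'-q', 'r'})], 'r'))
```

Stopping the generator early yields the best answer derived so far, which is the defining property of an anytime reasoner.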
In (Was99), there was another proposal for approx-
imate reasoning, based on the idea of relevance. The
original version was for propositional logic, but has
been extended to first-order logic (Ria04a; Ria04b).
Belief revision involving multiple agents is also a
topic which started to receive some attention recently
(Roo03). The main problem here is that when each
agent holds beliefs about other agents' beliefs, each
new piece of information received by some agent gives
rise to a cascade of updates in every agent's beliefs.
Recent work by Cantwell (Cantwell, 2005) tries to
avoid this problem by defining belief states as prim-
itives. While this works well on the formal side, it
is still not clear how this can be applied to real world
problems, chiefly due to computational complexity is-
sues.
4 UNCERTAIN REASONING VIA
APPROXIMATE REASONING
Attributing probabilities to the conclusions of a logi-
cal reasoner based on the a priori probabilities of the
premises has long been an active area of
research (Nilsson, 1986).
The problem with this approach is its intractabil-
ity, whose sources are both the intractability of logi-
cal inferences and the multiplicity of states that have
to be considered to compute the interval of probabili-
ties of conclusions. The latter problem may be linked
to the fact that the attribution of probabilities to for-
mulae is not truth functional, in the sense that one
cannot always infer the probability of a formula sim-
ply by knowing the probabilities of its components. So
one has to consider all the exponentially many pos-
sible valuations of a formula to compute probability
intervals, even if the probabilities of all atomic facts
are known.
Our goal is to solve some of these problems
by using approximations of classical inference, as
introduced by Schaerf and Cadoli (Sch95) and
Dalal (Dalal, 1996). The idea here is to work in some
subclassical logic that proves fewer theorems than clas-
sical logic, but in which inference is tractable. This
approach was initially restricted to clausal form logic,
without an established proof theory. These approx-
imations have been extended to full classical logic
with a well-defined proof theory (Massacci, 1997;
Finger and Wassermann, 2002; Finger and Wasser-
mann, 2005) and recently, a more refined control on
the complexity of such approximations was estab-
lished (Finger, 2004).
In this work, we expect to apply this latter approach
to deal with some aspects of probabilistic logic. We
assume that we are dealing with a propositional lan-
guage based on a finite number of atoms p_1, . . . , p_n
and the usual connectives ¬, ∧, ∨ and →.
The attribution of probabilities to a formula can-
not be inferred from the probabilities of its subfor-
mulae. All we have is that, if P(A) is the attribution
of probabilities to a propositional formula A, such
that 0 ≤ P(A) ≤ 1, then the following must hold:
(1) if ⊢ A then P(A) = 1; (2) if ⊢ ¬(A ∧ B) then
P(A ∨ B) = P(A) + P(B). From this basic fact, it
is possible to infer the following properties, where
the symbol ⊢ represents classical logical consequence:
(1) P(¬A) = 1 − P(A); (2) if A ⊢ B then
P(A) ≤ P(B); (3) if ⊢ A ↔ B then P(A) = P(B);
(4) P(A ∨ B) = P(A) + P(B) − P(A ∧ B).
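These classical properties can be checked by brute force over the valuations of a small language, taking P(A) as the total weight of the valuations satisfying A; the weights below are arbitrary illustrative numbers:

```python
# Brute-force check of the classical probability properties over the
# four valuations of two atoms a, b. The distribution is arbitrary.

from itertools import product

# assign an arbitrary probability to each classical valuation (a, b)
weights = dict(zip(product([True, False], repeat=2), [0.1, 0.2, 0.3, 0.4]))

def P(f):
    """Probability of a formula = total weight of satisfying valuations."""
    return sum(w for (a, b), w in weights.items() if f(a, b))

A = lambda a, b: a
B = lambda a, b: b
```

Under any such distribution, P(¬A) = 1 − P(A), entailment is monotone in probability, and the inclusion-exclusion identity (4) holds.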
The idea is now to consider ⊢ not as classical infer-
ence, but as some tractable approximate inference, and
assume the same definition for the attribution of prob-
abilities. We want to investigate which of the proper-
ties above are preserved.
We present the tractable approximation here only
in semantic terms; a full proof theory is discussed
in (Finger, 2004). This semantics is called Limited
Bivaluation, LB, and is parameterised by a set Σ of
formulae.
The semantics of LB(Σ) is based on a three-level
lattice, L = (L, ⊓, ⊔, 0, 1), where L is a countable set
of elements L = {0, 1, ε_0, ε_1, ε_2, . . .} such that 0 ⊑ ε_i ⊑ 1
for every i < ω and ε_i ⋢ ε_j for i ≠ j. The ε_i's are
called neutral truth values. This lattice is enhanced
with a converse operation, ∼, defined as: ∼0 = 1,
∼1 = 0 and ∼ε_i = ε_i for all i < ω.
A limited valuation is a function v_Σ : P → L that
maps formulae to elements of the lattice, subject
to a set of restrictions with regard to whether
a formula is or is not in the parameter set Σ. Ini-
tially, the limited valuation v_Σ maps atoms to ele-
ments of the lattice, and it is extended to all formu-
lae in the following way: (1) v_Σ(¬A) = ∼v_Σ(A);
(2) v_Σ(A ∧ B) = v_Σ(A) ⊓ v_Σ(B); (3) v_Σ(A ∨ B) =
v_Σ(A) ⊔ v_Σ(B); (4) v_Σ(A → B) = 1 if v_Σ(A) ⊑ v_Σ(B),
and v_Σ(A → B) = ∼v_Σ(A) ⊔ v_Σ(B) otherwise.
A further constraint, called the Limited Bivalence
Restriction, is imposed on v_Σ for formulae in Σ: for-
mulae in Σ are bivalent, that is, if A ∈ Σ then
v_Σ(A) must satisfy the rules above for unlimited
valuations and be such that v_Σ(A) = 0 or v_Σ(A) = 1.
When A ∉ Σ, v_Σ(A) is not always compositional,
which means that a neutral value may be assigned to
A independently of the truth values of its components.
This is the case so that the bivalence of A ∈ Σ can al-
ways be satisfied without forcing all of A's subformulae
to be bivalent.
If A ∈ Σ it is always possible to have v_Σ(A) ∈
{0, 1} by making, for every atom p in A, v_Σ(p) ∈
{0, 1}. However, this is not the only possibility. For
example, if B, C ∉ Σ then we can make v_Σ(B) = ε_i ≠
ε_j = v_Σ(C), so that v_Σ(B ∧ C) = 0; similarly, we
obtain v_Σ(B ∨ C) = 1 and v_Σ(B → C) = 1.
A valuation v_Σ satisfies A if v_Σ(A) = 1, and A is
then called satisfiable; a set of formulae Γ is satisfied
by v_Σ if all its formulae are satisfied by v_Σ. A valuation
v_Σ contradicts A if v_Σ(A) = 0; if A is neither satis-
fied nor contradicted by v_Σ, we say that v_Σ is neutral
with respect to A. A valuation is classical if it assigns
only 0 or 1 to all proposition symbols, and hence to
all formulae.
The notion of a parameterised LB-entailment,
|=_Σ^LB, is obtained by defining, for a set of formulae Γ
and a formula A, Γ |=_Σ^LB A if no valuation v_Σ that
makes v_Σ(Γ) = 1 also makes v_Σ(A) = 0. This conse-
quence relation is not classical, for if Γ |=_Σ^LB A and
v_Σ(Γ) = 1 it is possible that A is either neutral or
satisfied by v_Σ.
We now define the tractable entailment |=_k^LB, para-
meterised by an integer k. For that, let S be a set
of sets of atoms and, for every Π ∈ S, let Π^+ be
the closure of Π under formula formation. We de-
fine Γ |=_S^LB A iff there exists a set Π ∈ S such that
Γ |=_{Π^+}^LB A. We define S_k = {Π ⊆ P : |Π| = k}; that is,
S_k is the set of all sets of k atoms. Note that if we re-
strict our attention to n atoms, there are |S_k| = C(n, k) =
O(n^k) sets of k atoms.
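The counting argument is easy to check directly (the atom names are hypothetical):

```python
# C(n, k) candidate parameter sets of k atoms out of n, which is O(n^k):
# this polynomial count is what makes iterating over S_k feasible.

from itertools import combinations
from math import comb

atoms = [f'p{i}' for i in range(1, 8)]     # n = 7 illustrative atoms
k = 2
S_k = list(combinations(atoms, k))         # all size-k sets of atoms
```

For fixed k, enumerating S_k and testing each Π is polynomial in n, in contrast with the 2^n classical valuations.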
We write |=_k^LB to mean |=_{S_k}^LB. We have focused on
the use of a quadratic decision procedure, |=_2^LB. We
then apply the definition of probability attribution us-
ing such sub-classical entailment, as follows: (i) if
|=_2^LB A then P(A) = 1; (ii) if |=_2^LB ¬(A ∧ B) then
P(A ∨ B) = P(A) + P(B). From that, we can
prove that the following classical properties are pre-
served: (1) P(¬A) = 1 − P(A); (2) if A |=_2^LB B
then P(A) ≤ P(B); (3) if |=_2^LB A ↔ B then
P(A) = P(B).
In fact, the results above hold for any integer k, and
not only for k = 2. However, the following property
fails: P(A ∨ B) = P(A) + P(B) − P(A ∧ B). This is
due to the fact that a classical theorem does not hold,
in general, for any k: ⊭_k^LB (A ∨ B) ↔ (A ∧ ¬B) ∨
(A ∧ B) ∨ (¬A ∧ B). In fact, to obtain an expression
for P(A ∨ B) we have to consider the cases where A
and B can have neutral values ε_i.
That is, although it may be less complex to com-
pute an inference in |=_k^LB than a classical inference,
the computation of probabilities has to take into con-
sideration many more terms.
Future work includes the investigation of the com-
plexity of probabilistic logic based on |=_2^LB and of
the effects of the loss of expressivity of this inference
on the computation of probabilities.
5 FORMAL SPECIFICATION
AND VERIFICATION OF
IMMRSS
As usual, it is assumed in (Gerkey et al., 2004) that
the embedded software in each robot is fixed within
that robot. Indeed, it is usually accepted that each ro-
bot is the physical embodiment of an agent in a mul-
tiagent system. To our understanding, this constraint
in the design of IMMRSs is unnecessary and restric-
tive. If robots can exchange messages to coordinate
their actions, there is no reason why they cannot also
exchange lines of code that implement actions them-
selves. This amounts to a decoupling of the notions
of robot and agent: a single robot, in this framework,
can accommodate more than one agent simultaneously,
and agents can migrate between robots.
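A minimal sketch of this decoupling, with made-up names and no real mobility infrastructure, might look like:

```python
# Sketch: a robot is merely a container of agents, and an agent (its
# behaviour code together with its state) can be shipped to another
# robot. All names and the migration "protocol" are illustrative.

class Agent:
    def __init__(self, name, behaviour):
        self.name = name
        self.behaviour = behaviour      # a callable: the "lines of code"

    def act(self, robot):
        return self.behaviour(robot)

class Robot:
    def __init__(self, rid):
        self.rid = rid
        self.agents = {}

    def host(self, agent):
        self.agents[agent.name] = agent

    def migrate(self, agent_name, target):
        """Ship an agent, code and all, to another robot."""
        target.host(self.agents.pop(agent_name))

r1, r2 = Robot('r1'), Robot('r2')
r1.host(Agent('patroller', lambda robot: f'patrolling from {robot.rid}'))
r1.migrate('patroller', r2)             # the agent now runs on r2
report = r2.agents['patroller'].act(r2)
```

The same agent now acts on behalf of a different physical robot, which is exactly the flexibility (and the verification burden) discussed below.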
The decoupling of the notions of robot and agent
greatly extends the flexibility in the specification, de-
sign and implementation of IMMRSs. It also makes
these activities far more complex. In order to keep
such systems under control, it can be important to em-
ploy formal techniques for the specification and veri-
fication of mobile agents for robotic systems.
The π-calculus (Milner, 1999) is a theory for
mobile agents based on an algebra for concurrent
processes, the CCS (Milner, 1989). In a very simpli-
fied way, mobile agents can be regarded as concurrent
processes endowed with the capability of dynamic re-
configuration. The π-calculus has added to the CCS,
among other things, features of mobility and dynamic
reconfiguration.
The development of tools for the automatic verification
of mobile agents, based on novel verification techniques
or on combinations of existing techniques, is central to
their formal specification and verification. Our work has focused on (1)
techniques to get rid of useless code in the specifi-
cation of mobile agents and their formal specification;
(2) novel techniques for the formal verification of mo-
bile agents; (3) combination of formal verifiers of mo-
bile agents; (4) ontologies for mobile agents and mo-
bility features, to support the combination of systems
for specification and verification of mobile agents; (5)
ontologies about the capabilities of formal verifiers;
(6) formal models of autonomous and mobile agents;
(7) automatic code generation for the communication
between mobile agents; and (8) tools for the specifi-
cation and integration of formal verifiers for mobile
agents.
The behaviour of mobile agents must be tested,
based on explicitly defined criteria (Wey88; McGre-
gor and Korson, 1994; Chen and Kao, 1999; Kung
et al., 1995) that determine what should be tested.
Our work on this field has focused on (1) the study
of what sets of properties can be verified in programs,
and how the verification of these properties simplifies
the tests requirements; (2) tools to support the com-
bination of formal verification and tests, e.g. the au-
tomation of object control graphs; (3) the extension
of model based formal methods to the specification of
exception handling; (4) the study of how to use for-
mal verifiers to improve the testing of software com-
ponents with exception handling.
Some preliminary results in these topics were pub-
lished in (Melo, 2005; Melo, 2004; Nunes and Melo,
2004; Andrade et al., 2004; Mel04b; Melo, 2003).
6 HIGH LEVEL REASONING
ABOUT ROBOTIC ACTIONS
The problem solved in (Gerkey et al., 2004) is how
to determine the plans (i.e. trajectories in the environ-
ment) for a collection of searchers, based on primitive
actions such as move forward and turn 90 degrees to
the left. This formulation of the problem scales badly
as the number of searchers increases. Another possi-
ble planning strategy uses a higher-level set of actions,
possibly involving teams of robots, specified in terms
of the above primitive actions. This is an AI planning
approach called Hierarchical Task Network (HTN)
planning, which has been proved to be more powerful
than search in the state/action space using only
primitive actions (Kutluhan et al., 1995).
The decomposition of a goal task adds other goal
tasks to be decomposed, much as a regressive
partial-order planner adds actions to satisfy a goal,
making the preconditions of these actions become
new goals (subgoaling) to plan for. In fact, it has been proved (Kut-
luhan et al., 1995) that planning with goal tasks al-
lows the HTN planners to inherit the efficiency of the
partial-order planners. HTN planners have been used
in many applications, such as a system for in-
tegrated product design and manufacturing planning,
and computer games like the winner of the 1997 world
championship of computer bridge.
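The decomposition idea can be sketched in a few lines; the task names and methods below are invented for illustration and are unrelated to the cited systems:

```python
# Toy HTN decomposition: compound tasks expand into subtasks until
# only primitive actions remain. Task names and methods are made up.

PRIMITIVE = {'move-forward', 'turn-left'}

METHODS = {
    'sweep-corridor': ['advance', 'advance'],   # compound -> subtasks
    'advance': ['turn-left', 'move-forward'],
}

def decompose(task):
    """Expand a task into a sequence of primitive actions."""
    if task in PRIMITIVE:
        return [task]
    plan = []
    for sub in METHODS[task]:
        plan.extend(decompose(sub))
    return plan

plan = decompose('sweep-corridor')
```

A real HTN planner also tracks preconditions and orderings while decomposing; this sketch keeps only the task-network structure.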
Golog (Levesque et al., 1997) is a logical language,
implemented as a Prolog meta-interpreter, that allows
the definition of procedures to describe the behavior
of an agent, using the Situation Calculus (McCarthy,
1963) to represent knowledge, actions and states of
the world. A programmer can specify a robot's high-
level control program as a set of high-level proce-
dures, which are decomposed by Golog into low-level
procedures (or primitive actions) at execution
time, very much as in HTN planning. Golog is a high-
level agent programming language, in which standard
programming constructs (e.g. sequence, choice and
iteration) are used to write the agent control program.
It can effectively represent and reason about the ac-
tions performed by agents in dynamic environments.
The emerging success of Golog has shown that, by us-
ing a logical approach, it is possible to solve complex
robotic tasks efficiently, despite contrary belief. How-
ever, Golog uses a planning strategy based on situa-
tion calculus, a logical formalism in which plans are
represented as a totally ordered sequence of actions
and, therefore, it inherits the well known deficiencies
of this approach (Kutluhan et al., 1995).
Since Golog decomposes procedures in a way very
similar to HTN planning (Gabaldon, 2002), we have
been working on a number of extensions to the Golog
meta-interpreter (Barros and Iamamoto, 2003; Iamamoto,
2005), namely: (1) we have introduced the idea of goal
tasks into the Golog language; (2) based on the for-
malisation of HTN planning (Kutluhan et al., 1995),
we have proposed a special kind of Golog procedures,
called Goal Procedures, that can be decomposed by
the Golog meta-interpreter to solve goal attain-
ment problems; (3) we have presented a way to in-
terleave Goal Procedures to solve action interactions
(conflicting subgoals); (4) we have also compared the
efficiency of using Goal Procedures with an example
of a Situation Calculus planner proposed by Reiter,
the Wspbf planner, encoded in Golog (Rei01), and
showed that our proposal, besides being more com-
patible with Golog's way of planning through decom-
position, can also be more efficient; (5) we have ar-
gued that Goal Procedures can be very useful in on-
line execution for replanning when an action's execu-
tion fails.
We are currently working on the use of Goal Pro-
cedures for replanning in IndiGolog, an extension of
Golog that performs on-line execution of programs.
We have also proposed a new high-level robot
programming language, called ABGOLOG (Per04b;
Per04c; Per04a). This language is based on Golog,
but it uses Event Calculus as the formalism to de-
scribe actions and abductive reasoning to synthesize
plans, which corresponds to partial order planning.
So, based on our previous work on implementation
and analysis of abductive event calculus planning sys-
tems (Per04b), we have shown how it is possible to
modify AbGolog’s implementation to improve its ef-
ficiency, according to specific domain characteristics.
In (Tre05) we have shown an example of a robot
application using a Lego® Mindstorms™ robot. The
implementation was done in Legolog, a package com-
posed of the IndiGolog language and communication
protocols for the Lego® Mindstorms™ robot.
7 CONCLUSION
The design and implementation of IMMRSs is a great
challenge to AI and related areas. Chief among the
reasons that make this sort of system so challenging
is the fact that one cannot idealise the environments
in which the systems shall act; hence they re-
quire refined methods to ensure the coupling between
idealised models (based on which the systems are pro-
grammed) and physical environments.
In the present article we listed some areas of Ar-
tificial Intelligence that we believe are most relevant
to multi-robotic systems, and that at the same time are
more strikingly challenged by those systems. We sug-
gested some solutions to specific challenges, which
are the ones on which we are working at the moment.
REFERENCES
A. G. Andrade, A. C. V. Melo, and M. M. Amorim. Da
especificação à verificação de agentes móveis – um ambiente
gráfico. In Mauricio Solar and David Fernandez Baca,
editors, Anales de 30th Conferencia Latino Americana
de Informática, Arequipa, Peru. 2004.
L. N. de Barros and E. Iamamoto. Planning for tasks in cog-
nitive robotics. Proceedings of VI Simpósio Brasileiro
de Automação Inteligente, Bauru, SP, Brazil. 2003.
C. P. Campos and F. G. Cozman. Inference in credal net-
works using multilinear programming. Second Start-
ing AI Researcher Symposium (STAIRS), 50-61, IOS
Press. 2004.
J. Cantwell. A formal model of multi-agent belief-
interaction. Journal of Logic, Language and Infor-
mation. 2005. to appear.
M.-H. Chen, and H. M. Kao. Testing object-oriented pro-
grams - an integrated approach. In Proceedings of the
Tenth International Symposium on Software Reliabil-
ity Engineering. IEEE Computer Society Press. 1999.
S. Chopra, R. Parikh and R. Wassermann. Approximate
belief revision. Logic Journal of the IGPL, 9(6):755–
768. 2001.
F. S. Correa da Silva. Where am I? Where are you? Tech-
nical Report 2005-03. Department of Computer Sci-
ence, University of Sao Paulo. 2005.
M. Dalal. Anytime families of tractable propositional rea-
soners. In International Symposium of Artificial Intel-
ligence and Mathematics AI/MATH-96, pages 42–45.
1996.
J. Dix, M. Nanni, and V. S. Subrahmanian. Probabilistic
agent programs. ACM Transactions on Computational
Logic, v. 1(2), 207–245. 2000.
M. Finger. Polynomial approximations of full propositional
logic via limited bivalence. In 9th European Confer-
ence on Logics in Artificial Intelligence (JELIA 2004),
LNAI vol. 3229, pages 526–538. 2004.
M. Finger and R. Wassermann. Expressivity and control
in limited reasoning. In Frank van Harmelen, editor,
15th European Conference on Artificial Intelligence
(ECAI02), pages 272–276, Lyon, France. 2002.
M. Finger and R. Wassermann. Approximate and limited
reasoning: Semantics, proof theory, expressivity and
control. Journal of Logic And Computation, 14(2):179
– 204. 2004.
M. Finger and R. Wassermann. The universe of proposi-
tional approximations. Theoretical Computer Science.
2005. to appear.
A. Gabaldon. Programming hierarchical task networks
in the situation calculus. AIPS-2002 Workshop on
On-line Planning and Scheduling. Toulouse, France.
2002.
P. Gärdenfors. Knowledge in Flux - Modeling the Dynamics
of Epistemic States. MIT Press. 1988.
J. Gasos, and A. Saffiotti. Using fuzzy sets to represent
uncertain spatial knowledge in autonomous robots.
Spatial Cognition and Computation, v. 1, 205-226,
Kluwer. 1999.
B. P. Gerkey, S. Thrun, and G. Gordon. Visibility-based
pursuit-evasion with limited field of view. In Proceed-
ings of the National Conference on Artificial Intelli-
gence (AAAI 2004). AAAI Press. 2004.
S. O. Hansson. A Textbook of Belief Dynamics. Kluwer
Academic Press. 1999.
E. Iamamoto. HTN Planning in Golog. MSc dissertation,
Department of Computer Science, Univer-
sity of São Paulo, Brazil. 2005.
J. S. Ide, and F. G. Cozman. IPE and L2U: Approximate al-
gorithms for credal networks. Second Starting AI Re-
searcher Symposium (STAIRS), 118-127, IOS Press.
2004.
D. Kung, N. Suchak, P. Hsia, Y. Toyoshima, and C. Chen.
Object state testing for object-oriented programs.
In Proc. of COMPSAC’95. IEEE Computer Society
Press. 1995.
E. Kutluhan, J. Hendler, and D. Nau. Complexity, decidabil-
ity and undecidability results for domain-independent
planning. Artificial Intelligence 76(1-2):75-88. 1995.
H. J. Levesque, R. Reiter, Y. Lespérance, F. Lin, and R. B.
Scherl. Golog: A logic programming language for dy-
namic domains. 1997.
F. Massacci. A proof theory for tractable approximations of
propositional reasoning. In Maurizio Lenzerini, edi-
tor, AI*IA-97, volume 1321 of LNAI, pages 219–230.
SV. 1997.
J. McCarthy. Situations, actions and causal laws. Techni-
cal report, Stanford University. 1963. Reprinted in Semantic
Information Processing (M. Minsky, ed.), MIT Press,
Cambridge, Mass., 1968, pp. 410-417.
J. D. McGregor, and T. D. Korson. Integrated object-
oriented testing and development processes. Commu-
nications of ACM, 37(9). 1994.
A. C. V. Melo. From active names to pi-calculus rewriting
rules. ENTCS (Electronic Notes in Theoretical Com-
puter Science), to appear. 2005.
A. C. V. Melo. π-calculus rewriting rules based on active
names. In Alexandre Mota and Arnaldo Moura, edi-
tors, SBMF2004: 7th Brazilian Symposium on Formal
Methods, Recife, PE, Brazil. 2004.
A. C. V. Melo. A study on the potential active names of π-
agents. ENTCS (Electronic Notes in Theoretical Com-
puter Science), 95(C):269–286. 2004.
A. C. V. Melo. A study on the potential active names of π-
agents. In A. L. C. Cavalcanti and Patrícia Machado,
editors, WMF2003: 6th Brazilian Workshop on For-
mal Methods, Campina Grande, PB, Brazil. 2003.
R. Milner. Communication and Concurrency. Prentice–
Hall, first edition. 1989.
R. Milner. Communicating and Mobile Systems: the π-
Calculus. Cambridge University Press. 1999.
R. Ng, and V. S. Subrahmanian. Probabilistic logic pro-
gramming. Information and Computation, v. 101(2),
150–201. 1992.
N. Nilsson. Probabilistic logic. Artificial Intelligence,
28(1):71-87. 1986.
P. R. Nunes and A. C. V. Melo. Ocongra - uma ferramenta
para geração de grafos de controle de fluxo de obje-
tos. In Murilo Camargo and Jaelson Castro, editors,
Anais do Simpósio Brasileiro de Engenharia de Soft-
ware, Brasília, Brasil. 2004.
S. L. Pereira and L. N. Barros. High-Level Robot Pro-
grams Based on Abductive Event Calculus. Inter-
national Workshop on Cognitive Robotics, CogRob-
2002. 2002.
S. L. Pereira and L. N. Barros. Formalizing planning algo-
rithms: a logical framework for the research on ex-
tending the classical planning approach. International
Conference on Planning Systems Workshop: Con-
necting Planning Theory with Practice. 2004.
S. L. Pereira and L. N. Barros. Planning with Abduction:
a logical framework to explore extensions to classical
planning. Simpósio Brasileiro de Inteligência Artificial,
LNAI. 2004.
S. L. Pereira and L. N. Barros. High-level Robot Pro-
gramming: an abductive approach using event calcu-
lus. Simpósio Brasileiro de Inteligência Artificial, LNAI.
2004.
R. Reiter. Knowledge in Action: Logical Foundations for
Specifying and Implementing Dynamical Systems.
MIT Press. 2001.
J. Riani. Towards an efficient inference proce-
dure through syntax based relevance. Mas-
ter's thesis, Department of Computer Science,
University of São Paulo. 2004. Available at
http://www.ime.usp.br/˜joselyto/mestrado.
J. Riani and R. Wassermann. Using relevance to speed up
inference – some empirical results. In Proceedings of
the Brazilian Artificial Intelligence Symposium, Lec-
ture Notes on Artificial Intelligence. Springer-Verlag.
2004.
J. C. F. Rocha, and F. G. Cozman. Inference in credal
networks with branch-and-bound algorithms. Third
International Symposium on Imprecise Probabilities
and Their Applications, 482–495, Carleton Scientific.
2003.
J. W. Roorda, W. Van der Hoek, and J. J. Meyer. Iterated
belief change in multi agent systems. Logic Journal
of the IGPL, 11(2):223–246. 2003.
A. Saffiotti. Handling uncertainty in control of autonomous
robots. in Wooldridge, M., Veloso, M. (eds.) Artificial
Intelligence Today, 381–408, Springer-Verlag. 1999.
M. Schaerf and M. Cadoli. Tractable reasoning via approx-
imation. Artificial Intelligence, 74(2):249–310. 1995.
F. W. Trevizan and L. N. de Barros. Planejamento e Exe-
cução em Golog para Robôs Lego Mindstorm. Encontro
Nacional de Inteligência Artificial. 2005.
R. Wassermann. Resource-Bounded Belief Revision. PhD
thesis, Institute for Logic, Language and Computation
— University of Amsterdam. 1999.
R. Wassermann. On structured belief bases. In Hans Rott
and Mary-Anne Williams, editors, Frontiers in Belief
Revision. Kluwer. 2001.
E. J. Weyuker. The evaluation of program-based software
test data adequacy. Communications of ACM, 31(6).
1988.