Trusted Reasoning Services for Semantic Web Agents
Kalliopi Kravari, Efstratios Kontopoulos and Nick Bassiliades
Dept. of Informatics, Aristotle University of Thessaloniki, Greece, GR-54124
E-mail: {kkravari, skontopo, nbassili}@csd.auth.gr
Keywords: semantic web, intelligent agents, multi-agent system, reasoning
Received: January 16, 2010
The Semantic Web aims at enriching information with well-defined semantics, making it possible both for people and machines to understand Web content. Intelligent agents are the most prominent approach towards realizing this vision. Nevertheless, agents do not necessarily share a common rule or logic formalism, neither would it be realistic to attempt imposing specific logic formalisms in a rapidly changing world like the Web. Thus, based on the plethora of proposals and standards for logic- and rule-based reasoning for the Semantic Web, a key factor for the success of Semantic Web agents lies in the interoperability of reasoning tasks. This paper reports on the implementation of trusted, third party reasoning services wrapped as agents in a multi-agent system framework. This way, agents can exchange their arguments without the need to conform to a common rule or logic paradigm; via an external reasoning service, the receiving agent can grasp the semantics of the received rule set. Finally, a use case scenario is presented that illustrates the viability of the proposed approach.
Povzetek: Semantic Web agents need trust assessment of services for quality operation.
1 Introduction
The Semantic Web (SW) is a rapidly evolving extension of the World Wide Web that derives from Sir Tim Berners-Lee's vision of a universal medium for data, information and knowledge exchange [1]. The SW aims at augmenting Web content with well-defined semantics (i.e. meaning), making it possible both for people and machines to comprehend the available information and better satisfy their requests. So far, the fundamental SW technologies (content representation, ontologies) have been established and researchers are currently focusing their efforts on logic and proofs.
Intelligent agents (IAs), software programs that perform tasks more efficiently and with less human intervention, are considered the most prominent means towards realizing the SW vision [2]. The gradual integration of multi-agent systems (MAS) with SW technologies will affect the use of the Web in the imminent future; its next generation will consist of groups of intercommunicating agents traversing it and performing complex actions on behalf of their users.
IAs, on the other hand, are considered to be greatly favored by the interoperability that SW technologies aim to achieve. Thus, IAs will often interact with other agents, belonging to service providers, e-shops, Web enterprises or even other users. However, it is unrealistic to expect that all intercommunicating agents will share a common rule or logic representation formalism; neither can the W3C impose specific logic formalisms in a highly dynamic environment like the Web. Nevertheless, in order for agent interactions to be meaningful, agents should somehow share an understanding of each other's position justification arguments (i.e. logical conclusions based on corresponding rule sets and facts). This heterogeneity in representation and reasoning technologies constitutes a critical drawback in agent interoperation.
A solution to this compatibility issue could emerge via equipping each agent with its own inference engine or reasoning mechanism, which would assist in "grasping" other agents' logics. Nevertheless, every rule engine possesses its own formalism and, consequently, agents would require a common interchange language. Since generating a translation schema from one (rule) language into another (e.g. RIF - Rule Interchange Format [3]) is not always feasible, this approach does not resolve the agent intercommunication issue, but merely moves the problem one step further, from argument interchange to rule translation/transformation.
An alternative, more pragmatic, approach is presented in this work, where reasoning services are wrapped in IAs. Although we have embedded these reasoners in a common framework for interoperating SW agents, called EMERALD¹, they can be added to any other multi-agent system. The motivation behind this approach is to avoid the drawbacks outlined above and, instead, to utilize third-party reasoning services that allow each agent to effectively exchange its arguments with any other agent, without the need for all involved agents to conform to the same kind of rule paradigm or logic. This way, agents remain lightweight and flexible, while the tasks of inferring knowledge from agent rule bases and verifying the results are conveyed to the reasoning services.
Flexibility is a key aim for our research; thus, a variety of popular inference services that conform to various types of logics is offered and the list is constantly expanding. Furthermore, the notion of trust is vital, since agents need a mechanism for establishing trust towards the reasoning services, so that they can trust the generated inference results. Towards this direction, reputation mechanisms (centralized and decentralized) were proposed and integrated in the EMERALD framework.

¹ http://lpis.csd.auth.gr/systems/emerald/emerald.html
The rest of the paper is structured as follows: Section 2 presents a brief overview of the framework, followed by a more thorough description of the reasoning services in Section 3. Section 4 features the implemented trust mechanisms, while Section 5 reports on a brokering use case scenario that illustrates the use of the reasoning services and the reputation methodology. Finally, the paper concludes with an outline of related work, final remarks and directions for future improvements.
2 Framework overview
The EMERALD framework is built on top of JADE² and, as mentioned in the introduction, it involves trusted, third-party reasoning services, deployed as agents that infer knowledge from an agent's rule base and verify the results. The rest of the agents can communicate with these services via ACL message exchange.
Figure 1: Generic Overview.
Figure 1 illustrates a generic overview of the framework: each human user controls a single all-around agent; agents can intercommunicate, but do not have to "grasp" each other's logic. This is why third-party reasoning services are deployed. In our approach, reasoning services are "wrapped" by an agent interface, called the Reasoner (presented later), allowing other agents to contact them via ACL (Agent Communication Language) messages.
The element of trust is also vital, since an agent needs to trust the inference results returned by a Reasoner; this trust is established via the centralized and decentralized reputation mechanisms integrated in EMERALD. Figure 1 displays the former (centralized) mechanism, where a specialized "Trust Manager" agent keeps the reputation scores for the reasoning services, as given by the rest of the IAs.

² JADE (Java Agent Development Environment): http://jade.tilab.com/
Overall, the goal is to apply as many standards as possible, in order to encourage the adoption and further development of the framework. To this end, a number of popular rule engines that comply with various types of (monotonic and non-monotonic) logics are featured in EMERALD (see section 3). Additionally, RDF/S (Resource Description Framework/Schema) and OWL (Web Ontology Language) serve as language formalisms, in practice using the Semantic Web as the infrastructure of the framework.
3 Reasoning services
EMERALD currently implements a number of Reasoner agents that offer reasoning services in two main formalisms: deductive and defeasible reasoning. Table 1 displays the main features of the reasoning engines described in the following sections.
Table 1: Reasoning engine features.

            Type of logic   Implementation      Order of logic   Reasoning
R-DEVICE    deductive       RDF/CLIPS/RuleML    2nd order        fwd chaining
Prova       deductive       Prolog/Java         1st order        bwd chaining
DR-DEVICE   defeasible      RDF/CLIPS/RuleML    2nd order        fwd chaining
SPINdle     defeasible      XML/Java            1st order        fwd chaining
Deductive reasoning is based on classical logic arguments, where conclusions are proved to be valid, when the premises of the argument (i.e. rule conditions) are true. Defeasible reasoning [4], on the other hand, constitutes a non-monotonic rule-based approach for efficient reasoning with incomplete and inconsistent information. When compared to more mainstream non-monotonic reasoning approaches, the main advantages of defeasible reasoning are enhanced representational capabilities and low computational complexity [5]. The following subsection gives a brief insight into the fundamental elements of defeasible logics.
3.1 Defeasible logics
A defeasible theory D (i.e. a knowledge base or a program in defeasible logic) consists of three basic ingredients: a set of facts (F), a set of rules (R) and a superiority relationship (>). Therefore, D can be represented by the triple (F, R, >).
In defeasible logic, there are three distinct types of rules: strict rules, defeasible rules and defeaters. Strict rules are denoted by A → p and are interpreted in the typical sense: whenever the premises are indisputable, so is the conclusion. An example of a strict rule is: "Apartments are houses", which, written formally, would become: r1: apartment(X) → house(X).
Defeasible rules are rules that can be defeated by contrary evidence and are denoted by A ⇒ p. An example of such a rule is "Any apartment is considered to be acceptable", which becomes: r2: apartment(X) ⇒ acceptable(X).
Defeaters, denoted by A ↝ p, are rules that do not actively support conclusions, but can only prevent some of them. In other words, they are used to defeat some defeasible rules by producing evidence to the contrary. An example of a defeater is: r3: pets(X), gardenSize(X,Y), Y>0 ↝ acceptable(X), which reads as: "If pets are allowed in the apartment, but the apartment has a garden, then it might be acceptable". This defeater can defeat, for example, rule r4: pets(X) ⇒ ¬acceptable(X).
Finally, the superiority relationship among the rule set R is an acyclic relation > on R. For example, given the defeasible rules r2 and r4, no conclusive decision can be made about whether the apartment is acceptable or not, because rules r2 and r4 contradict each other. But if a superiority relation > with r4 > r2 is introduced, then r4 overrides r2 and we can indeed conclude that the apartment is considered unacceptable. In this case rule r4 is called superior to r2 and r2 inferior to r4.
Another important element of defeasible reasoning is the notion of conflicting literals. In applications, literals are often considered to be conflicting and at most one of a certain set should be derived. An example of such an application is price negotiation, where an offer should be made by the potential buyer. The offer can be determined by several rules, whose conditions may or may not be mutually exclusive. All rules have offer(X) in their head, since an offer is usually a positive literal. However, only one offer should be made. Therefore, only one of the rules should prevail, based on superiority relations among them. In this case, the conflict set is:
C(offer(x,y)) = {¬offer(x,y)} ∪ {offer(x,z) | z ≠ y}
For example, the following two rules make an offer for a given apartment, based on the buyer's requirements. However, the second one is more specific and its conclusion overrides the conclusion of the first one.
r5: size(X,Y), Y>45, garden(X,Z) ⇒ offer(X, 250+2Z+5(Y-45))
r6: size(X,Y), Y>45, garden(X,Z), central(X) ⇒ offer(X, 300+2Z+5(Y-45))
r6 > r5
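To see how the superiority relation resolves this conflict, consider a hypothetical central apartment a1 with size(a1,50) and garden(a1,10); the figures are ours, for illustration only:

r5 computes offer(a1, 250 + 2·10 + 5·(50-45)) = offer(a1, 295)
r6 computes offer(a1, 300 + 2·10 + 5·(50-45)) = offer(a1, 345)

Since both offers belong to the same conflict set and r6 > r5, only offer(a1, 345) is derived.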
3.2 Deductive reasoners
EMERALD currently deploys two deductive reasoners, based on the logic programming paradigm: R-Reasoner and Prova-Reasoner, which deploy the R-DEVICE and Prova rule engines, respectively.
3.2.1 R-DEVICE
R-DEVICE [6] is a deductive object-oriented knowledge base system for querying and reasoning about RDF metadata. The system is based on an OO RDF data model, which is different from the established triple-based model, in the sense that resources are mapped to objects and properties are encapsulated inside resource objects, as traditional OO attributes. More specifically, R-DEVICE transforms RDF triples into CLIPS (COOL) objects and uses a deductive rule language for querying and reasoning about them, in a forward-chaining Datalog fashion. This transformation leads to fewer joins required for accessing the properties of a single resource, subsequently resulting in better inference/querying performance.
Furthermore, R-DEVICE features a deductive rule language (in OPS5/CLIPS-like format or in a RuleML-like syntax) for reasoning on top of RDF metadata. The language supports a second-order syntax, which is efficiently translated into sets of first-order logic rules using metadata, where variables can range over classes and properties, so that reasoning over the RDF schema can be performed. A sample rule in the CLIPS-like syntax is displayed below:
(deductiverule test-rule
  ?x <- (website (dc:title ?t) (dc:creator "John Smith"))
  =>
  (result (smith-creations ?t))
)
Rule test-rule above seeks the titles of websites (class website) created by "John Smith". Note that namespaces, like DC, can also be used.
The semantics of the rule language of R-DEVICE are similar to Datalog [7], with a semi-naive evaluation proof procedure and an OO syntax in the spirit of F-Logic [8]. The proof procedure of R-DEVICE dictates that when the condition of a rule is satisfied, the conclusion is derived and the corresponding object is materialized (asserted) in the knowledge base. R-DEVICE supports non-monotonic conclusions: when the condition of a rule is falsified (after having been satisfied), the concluded object is retracted. R-DEVICE also supports negation-as-failure.
3.2.2 Prova
Prova [9] is a rule engine for rule-based Java scripting, integrating Java with derivation rules (for reasoning over ontologies) and reaction rules (for specifying reactive behaviors of distributed agents). Prova supports rule interchange and rule-based decision logic, distributed inference services and combines ontologies and inference with dynamic object-oriented programming.
As a declarative language with derivation rules, Prova features a Prolog syntax that allows calls to Java methods, thus, merging a strong Java code base with Prolog features, such as backtracking. For example, the following Prova code fragment features a rule, whose body consists of a number of Java method calls:
hello(Name) :-
  S = java.lang.String("Hello "),
  S.append(Name),
  java.lang.System.out.println(S).
On the other hand, Prova reaction rules are applied in specifying agent behavior, leaving more critical operations (e.g. agent messaging etc.) to the language's Java-based extensions. In this affair, various communication frameworks can be deployed, like JADE, JMS³ or even Java events generated by Swing (G.U.I.) components. Reaction rules in Prova have a blocking rcvMsg predicate in the head and fire upon receipt of a corresponding event. The rcvMsg predicate has the following syntax: rcvMsg(Protocol, To, Performative, [Predicate|Args] | Context). The following code fragment shows a simplified reaction rule for the FIPA queryref performative:
rcvMsg(Protocol, From, queryref, [Pred|Args]|Context) :-
  derive([Pred|Args]),
  sendMsg(Protocol, From, reply, [Pred|Args]|Context).
rcvMsg(Protocol, From, queryref, [Pred|Args]|Context) :-
  sendMsg(Protocol, From, end_of_transmission, [Pred|Args]|Context).
The sendMsg predicate is embedded into the body of derivations or reaction rules and fails only if the parameters are incorrect or if the message could not be sent due to various other reasons, like network connection problems. Both code fragments presented above were adopted from [9].
Prova is derived from Mandarax [10], an older Java-based inference engine, and extends it by providing a proper language syntax, native syntax integration with Java, agent messaging and reaction rules.
3.3 Defeasible reasoners
EMERALD also supports two defeasible reasoners: DR-Reasoner and SPINdle-Reasoner, which deploy DR-DEVICE and SPINdle, respectively.
3.3.1 DR-DEVICE
DR-DEVICE [11] is a defeasible logic reasoner, based on R-DEVICE presented above. DR-DEVICE is capable of reasoning about RDF metadata over multiple Web sources using defeasible logic rules. More specifically, the system accepts as input the address of a defeasible logic rule base. The rule base contains only rules; the facts for the rule program are contained in RDF documents, whose addresses are declared in the rule base. After the inference, conclusions are exported as an RDF document. Furthermore, DR-DEVICE supports all defeasible logic features, like rule types, rule superiorities etc., applies two types of negation (strong, negation-as-failure) and conflicting (mutually exclusive) literals.
Similarly to R-DEVICE, rules can be expressed either in a native CLIPS-like language, or in a (further) extension of the OORuleML syntax, called DR-RuleML, that enhances the rule language with defeasible logic elements. For instance, rule r2 from section 3.1 can be represented in the CLIPS-like syntax as:
(defeasiblerule r2
  (apartment (name ?X))
  =>
  (acceptable (name ?X)))
For completeness, we also include the representation of rule r4 from section 3.1 in the CLIPS-based syntax, in order to demonstrate rule superiority and negation:
(defeasiblerule r4
  (declare (superior r2))
  (apartment (name ?X) (pets "no"))
  =>
  (not (acceptable (name ?X))))
The reasoner agent supporting DR-DEVICE is DR-Reasoner [12].
3.3.2 SPINdle
SPINdle [13] is an open-source, Java-based defeasible logic reasoner that supports reasoning on both standard and modal defeasible logic. It accepts defeasible logic theories, represented via a text-based pre-defined syntax or via a custom XML vocabulary, processes them and exports the results via XML. More specifically, SPINdle supports all the defeasible logic features (facts, strict rules, defeasible rules, defeaters and superiority relationships), modal defeasible logics [14] with modal operator conversions, negation and conflicting (mutually exclusive) literals.
A sample theory that follows the pre-defined syntax of SPINdle is displayed below (adopted from the SPINdle website⁴):
>> sh #Nanook is a Siberian husky.
R1: sh -> d #Huskies are dogs.
R2: sh => -b #Huskies usually do not bark.
R3: d => b #Dogs usually bark.
R2 > R3 #R2 is more specific than R3.
#Defeasibly, Nanook should not bark.
#That is, +d -b
Additionally, as a standalone system, SPINdle also features a visual theory editor for editing standard (i.e. nonmodal) defeasible logic theories.
3.4 Reasoner functionality
The reasoning services, as already mentioned, are wrapped by an agent interface, the Reasoner, allowing other IAs to contact them via ACL messages. The Reasoner can launch an associated reasoning engine, in order to perform inference and provide results. In essence, the Reasoner is a service and not an autonomous agent; the agent interface is provided in order to integrate Reasoner agents into EMERALD or even any other multi-agent system.

³ JMS (Java Message Service): http://java.sun.com/products/jms/
⁴ http://spin.nicta.org.au/spindleOnline/index.html
The procedure is straightforward (Figure 2): each Reasoner constantly stands by for new requests (ACL messages with a "REQUEST" communicative act). As soon as it gets a valid request, it launches the associated reasoning engine that processes the input data (i.e. rule base) and returns the results. Finally, the Reasoner returns the above result through an "INFORM" ACL message.
Figure 2: Reasoners' functionality (an agent sends a REQUEST to a Reasoner, which invokes its reasoning engine and replies with an INFORM).
A sample ACL message, based on the Fipa2000⁵ description, in the CLIPS-like syntax is displayed below:
(ACLMessage
  (communicative-act REQUEST)
  (sender AgentA@xx:1099/JADE)
  (receiver xx-Reasoner@xx:1099/JADE)
  (protocol protocolA)
  (language "English")
  (content C:\\rulebase.ruleml)
)
where AgentA sends to a Reasoner (xx-Reasoner) a RuleML file path (C:\\rulebase.ruleml).
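For illustration, the following minimal JADE sketch shows how a requesting agent could compose and send such a message; the receiver name, protocol and rule base path are the placeholder values from the sample above:

import jade.core.AID;
import jade.core.Agent;
import jade.core.behaviours.OneShotBehaviour;
import jade.lang.acl.ACLMessage;

// Hypothetical requesting agent; the receiver name, protocol and
// rule base path are the placeholder values from the sample message.
public class RequestingAgent extends Agent {
    protected void setup() {
        addBehaviour(new OneShotBehaviour(this) {
            public void action() {
                ACLMessage request = new ACLMessage(ACLMessage.REQUEST);
                request.addReceiver(new AID("xx-Reasoner", AID.ISLOCALNAME));
                request.setProtocol("protocolA");
                request.setLanguage("English");
                // The content is simply the rule base file path, as a string
                request.setContent("C:\\rulebase.ruleml");
                myAgent.send(request);
            }
        });
    }
}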
Figure 3: Serving multiple requests.
An important feature of the procedure is that whenever a Reasoner receives a new valid request, it launches a new instance of the associated reasoning engine. Therefore, multiple requests are served concurrently and independently (see Fig. 3). As a result, new requests are served almost immediately, without burdening the framework's performance, since the only sequential operation of the Reasoner is the transfer of requests and results between the reasoning engines and the requesting agents, which is hardly time-demanding.

⁵ Fipa2000 description for the ACL message parameters: www.fipa.org
Finally, note that Reasoners do not use a particular rule language. They simply transfer file paths (in the form of Java Strings) via ACL messages, either from a requesting agent to a rule engine or from the rule engine back to the requesting agent. Obviously, the content of these files has to be written in the appropriate rule language. For instance, an agent that wants to use either the DR-DEVICE or the R-DEVICE rule engine has to provide valid RuleML files. Similarly, valid Prova or XML files are required by the Prova and SPINdle rule engines, respectively. Hence, it is up to the requesting agent's user to provide the appropriate files, taking into consideration each rule engine's specifications.
Thus, new Reasoners can be easily created and added to the platform, by building a new agent that manages the messages exchanged between the requesting agent and the rule engine, and that launches instances of the rule engine according to the engine's specific requirements, as sketched below.
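The following is a rough sketch of this recipe, assuming a hypothetical engine API (MyRuleEngine and its run() method are assumptions standing in for a concrete rule engine, not part of EMERALD):

import jade.core.Agent;
import jade.core.behaviours.CyclicBehaviour;
import jade.lang.acl.ACLMessage;
import jade.lang.acl.MessageTemplate;

// Sketch of a new Reasoner wrapper agent; MyRuleEngine and its run()
// method are assumptions standing in for a concrete rule engine API.
public class MyReasonerAgent extends Agent {
    protected void setup() {
        addBehaviour(new CyclicBehaviour(this) {
            public void action() {
                // Stand by for REQUEST messages carrying a rule base file path
                ACLMessage request = myAgent.receive(
                        MessageTemplate.MatchPerformative(ACLMessage.REQUEST));
                if (request == null) { block(); return; }
                String ruleBasePath = request.getContent();
                // Launch a fresh engine instance per request, so that multiple
                // requests are served concurrently and independently (cf. Fig. 3)
                new Thread(() -> {
                    String resultPath = MyRuleEngine.run(ruleBasePath);
                    ACLMessage reply = request.createReply();
                    reply.setPerformative(ACLMessage.INFORM);
                    reply.setContent(resultPath);
                    myAgent.send(reply);
                }).start();
            }
        });
    }

    // Stub standing in for launching the actual engine (assumption)
    static class MyRuleEngine {
        static String run(String ruleBasePath) {
            return ruleBasePath.replace(".ruleml", "-results.rdf");
        }
    }
}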
4 Trust mechanisms
Tim Berners-Lee described trust as a fundamental component of his vision for the Semantic Web [1], [15], [16]. Thus, it is not surprising that trust is considered critical for effective interactions among agents in the Semantic Web, where agents have to interact under uncertain and risky situations. However, there is still no single, accepted definition of trust within the research community, although it is generally defined as the expectation of competence and willingness to perform a given task. Broadly speaking, trust has been defined in various ways in literature, depending on the domain of use. Among these definitions, there is one that can be used as a reference point for understanding trust, provided by Dasgupta [17]: "Trust is a belief an agent has that the other party will do what it says it will (being honest and reliable) or reciprocate (being reciprocative for the common good of both), given an opportunity to defect to get higher payoffs."
There are various trust metrics, some involving past experience, some giving relevance to opinions held by an agent's neighbours and others using only a single agent's own previous experience. During the past decade, many different metrics have been proposed, but most have not been widely implemented. Five such metrics are described in [18], among them Sporas [19] seems to be the most used metric, although CR (Certified Reputation) [20] is one of the most recently proposed methodologies.
Our approach adopts two reputation mechanisms, a decentralized and a centralized one. Notice that in both approaches newcomers start with a neutral value. Otherwise, if their initial reputation is set too low, it may be rather difficult for them to prove their trustworthiness through their actions. If, on the other hand, the initial reputation is set too high, there may be a need to limit the possibility for users to "start over" after misbehaving; otherwise, the punishment for having behaved badly becomes void.
4.1 Decentralized reputation mechanism
The decentralized mechanism is a combination of Sporas and CR, where each agent keeps the references given by other agents and calculates the reputation value, according to the formula:

R_{i+1} = R_i + (1/θ) · Φ(R_i) · R_i^other · (W_{i+1} - E(W_{i+1}))    (1)

with Φ(R_i) = 1 - 1 / (1 + e^(-(R_i - D)/σ)) and E(W_{i+1}) = R_i / D

where: i is the number of ratings the user has received thus far, θ is a constant integer greater than 1, W_i represents the rating given by user i, R_i^other is the reputation value of the user giving the rating, D is the range of reputation values (maximum rating minus minimum rating) and σ is the acceleration factor of the damping function Φ (the smaller the value of σ, the steeper the damping function Φ). Note that the value of θ determines how fast the reputation value of the user changes after each rating: the larger the value of θ, the longer the memory of the system.
The user's rating value W_i is based on four coefficients:
• Correctness (Corr_i): refers to the correctness of the returned results.
• Completeness (Comp_i): refers to the completeness of the returned results.
• Response time (Resp_i): refers to the Reasoner's response time.
• Flexibility (Flex_i): refers to the Reasoner's flexibility in input parameters.
The four coefficients are evaluated based on the user's (subjective) assessment for each criterion and their ratings vary from 1 to 10. The final rating value (W_i) is the weighted sum of the coefficients (equation (2) below), where a_i1, a_i2, a_i3 and a_i4 are the respective weights and nCorr_i, nComp_i, nResp_i and nFlex_i are the normalized values for correctness, completeness, response time and flexibility, accordingly:

W_i = a_i1·nCorr_i + a_i2·nComp_i + a_i3·nResp_i + a_i4·nFlex_i    (2)
New users start with a reputation equal to 0 and can advance up to the maximum of 3000. The reputation ratings vary from 0.1 for "terrible" to 1 for "perfect". Thus, as soon as the interaction ends, the Reasoner asks for a rating. The other agent responds with a new message containing both its rating and its personal reputation and the Reasoner applies equation (1) above to update its reputation.
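A minimal sketch of this update cycle is given below; the concrete values chosen for the Sporas constants θ, σ and D are illustrative assumptions, not EMERALD's actual settings:

// Minimal sketch of the decentralized (Sporas-based) update of equations
// (1)-(2); the constant values below are illustrative assumptions.
public class SporasReputation {
    static final double D = 3000.0;     // range of reputation values
    static final double THETA = 10.0;   // memory constant theta, theta > 1
    static final double SIGMA = 300.0;  // acceleration factor sigma

    double reputation = 0.0;            // newcomers start at the neutral value

    // Damping function: Phi(R) = 1 - 1 / (1 + e^(-(R - D) / sigma))
    static double phi(double r) {
        return 1.0 - 1.0 / (1.0 + Math.exp(-(r - D) / SIGMA));
    }

    // Equation (2): weighted sum of the normalized coefficients
    static double rating(double[] a, double nCorr, double nComp,
                         double nResp, double nFlex) {
        return a[0]*nCorr + a[1]*nComp + a[2]*nResp + a[3]*nFlex;
    }

    // Equation (1): update own reputation after a rating w given by a rater
    void update(double w, double raterReputation) {
        double expected = reputation / D;  // E(W_{i+1}) = R_i / D
        reputation += (1.0 / THETA) * phi(reputation)
                * raterReputation * (w - expected);
    }
}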
4.2 Centralized reputation mechanism
In the centralized approach, a third-party agent keeps the references given from agents interacting with Reasoners
or any other agent in the MAS environment. Each reference is in the form of:
Ref_i = (a, b, cr, cm, flx, rs), where a is the truster agent, b is the trustee agent and cr (Correctness), cm (Completeness), flx (Flexibility) and rs (Response time) are the evaluation criteria.
Ratings (r) vary from -1 (terrible) to 1 (perfect), while newcomers start with a reputation equal to 0 (neutral). The final reputation value (Rb) is based on the weighted sum of the relevant references stored in the third-party agent and is calculated according to the formula:
R_b = w_1·cr + w_2·cm + w_3·flx + w_4·rs, where w_1 + w_2 + w_3 + w_4 = 1. Two options are supported for R_b: a default one, where the weights are equivalent, namely w_k = 0.25 for each k ∈ [1,4], and a user-defined one, where the weights vary from 0 to 1 depending on user priorities.
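A minimal sketch of the centralized computation follows; the Reference class, the equal default weights and the averaging over stored references are illustrative assumptions:

import java.util.List;

// Minimal sketch of the centralized mechanism; the Reference class and
// the averaging over stored references are illustrative assumptions.
class Reference {
    String truster, trustee;
    double cr, cm, flx, rs;   // evaluation criteria, each rated in [-1, 1]
    Reference(String a, String b, double cr, double cm, double flx, double rs) {
        this.truster = a; this.trustee = b;
        this.cr = cr; this.cm = cm; this.flx = flx; this.rs = rs;
    }
}

class TrustManager {
    // Weighted score of one reference: w1 + w2 + w3 + w4 = 1
    static double score(Reference ref, double w1, double w2, double w3, double w4) {
        return w1 * ref.cr + w2 * ref.cm + w3 * ref.flx + w4 * ref.rs;
    }

    // R_b: aggregate the weighted scores of all references stored for
    // trustee b (default equal weights); newcomers get the neutral value 0
    static double reputation(List<Reference> refs, String b) {
        double sum = 0; int n = 0;
        for (Reference r : refs) {
            if (r.trustee.equals(b)) { sum += score(r, 0.25, 0.25, 0.25, 0.25); n++; }
        }
        return n == 0 ? 0.0 : sum / n;
    }
}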
4.3 Comparison
The simple evaluation formula of the centralized approach, compared to the decentralized one, leads to time gain, as it needs less calculation time. Moreover, it provides more reliable results (R_b), since, being centralized, it overcomes the difficulty of locating references in a distributed mechanism.

In addition, in the decentralized approach an agent can interact with only one agent at a time and, thus, requires more interactions in order to discover the most reliable agent, leading to further time loss.

Agents can use either of the above mechanisms, or even both complementarily: they can use the centralized mechanism in order to find the most trusted service provider, and/or the decentralized approach for the rest of the agents.
5 Use case: a brokering scenario
Defeasible reasoning (see section 3) is useful in various applications, like brokering [21], bargaining and agent negotiations [22]. These domains are also extensively influenced by agent-based technology [23]. Towards this direction, a defeasible reasoning-based brokering scenario is adopted from [24]. In order to demonstrate the functionality of the presented technologies, part of the above scenario is extended with deductive reasoning. Four independent parties are involved, represented by intercommunicating intelligent agents.
• The customer (called Carlo) is a potential renter that wishes to rent an apartment based on his requirements (e.g. location, floor) and preferences.
• The broker possesses a number of available apartments stored in a database. His role is to match Carlo's requirements with the features of the available apartments and eventually propose suitable flats to the potential renter.
• Two Reasoners (independent third-party services), DR-Reasoner and R-Reasoner, both with a high reputation rating, that can conduct inference on defeasible and deductive logic rule bases, respectively, and produce the results as an RDF file.
5.1 Scenario overview
The scenario is carried out in eight distinct steps, as shown in Fig. 4. Carlo's agent retrieves the corresponding apartment schema (Appendix A), published in the broker's website, formulates his requirements accordingly and submits them to the broker, in order to get back all the available apartments with the proper specifications (Fig. 4, step 1). These requirements are expressed in defeasible logic, in the DR-DEVICE RuleML-like syntax (Fig. 5 and Fig. 6). For the interested reader, Appendix B features a full description of the customer's requirements in d-POSL (see Appendix E), a POSL [25]-like dialect for representing defeasible logic rule sets in a more compact way.
The broker, on the other hand, has a list of all available apartments, along with their specifications (stored as an RDF database - see Figure 7 for an excerpt), but does not reveal it to Carlo, because it's one of his most valuable assets. However, since the broker cannot process Carlo's requirements using defeasible logic, he requests a trusted third-party reasoning service. The DR-Reasoner, as mentioned, is an agent-based service that uses DR-DEVICE, in order to infer conclusions from a defeasible logic program and a set of facts in an RDF document. Hence, the broker sends the customer's requirements, along with the URI of the RDF document containing the list of available apartments, and stands by for the list of proper apartments (step 2).
The DR-Reasoner conducts the inference and returns the list of acceptable apartments to the broker (step 3). The broker additionally possesses a couple of "special" rules of his own for promoting specific apartments; a d-POSL version of these rules is shown in Appendix C: one rule proposes the biggest apartment in the city centre, while the other one suggests the apartment with the largest garden in the suburbs. These rules are formulated using deductive logic, so the broker sends them, along with the results of the previous inference step, to the R-Reasoner that launches R-DEVICE (step 4). Finally, the broker gets the appropriate list with proposed apartments that fulfil his "special" rules (step 5).
Figure 6: Rule base fragment - rule r2.
Eventually, Carlo receives the appropriate list (step 6) and has to decide which apartment he prefers. However, his agent does not want to send Carlo's preferences to the broker, because he is afraid that the broker might take advantage of that and will not present him with his most preferred choices. Thus, Carlo's agent sends the list of acceptable apartments (an RDF document) and his preferences (once again as a defeasible logic rule base) to the DR-Reasoner (step 7). The latter calls DR-DEVICE and gets the single most appropriate apartment. It replies to Carlo, proposing the best transaction (step 8). The procedure ends and Carlo can safely make the best choice based on his requirements and personal preferences. See Appendix D for a d-POSL version of Carlo's specific preferences. Notice that Carlo takes into consideration not only his preferences and requirements, but also the broker's proposals, as long as they are compatible with his own requirements.
Figure 7: RDF document excerpt for available apartments.
As for the reputation rating, after each interaction with the Reasoners, both the Broker and the Customer are asked for their ratings. For instance, after the successful end of step 3, the Broker not only proceeds to step 4, but also sends its rating to the Reasoner and/or the third-party agent. As a result, the latter updates the reputation value.
The transition sequence for the customer concludes as follows: his agent sends a REQUEST message to the DR-Reasoner (S3→S4), receives an INFORM message from him (S4→S5) and successfully terminates the process (S5→E).
On the other hand, the transition sequence for the broker is: S0→S1→S2→S3→S4→S5→S6→E. Initially, the agent is waiting for new requests; as soon as one is received (S0→S1), he sends an enriched REQUEST message to the DR-Reasoner (S1→S2) and waits for results. Finally, he gets the INFORM message from the DR-Reasoner (S2→S3) and sends a new enriched REQUEST message to the R-Reasoner (S3→S4). Eventually, the broker receives the appropriate INFORM message from the R-Reasoner (S4→S5) and forwards it to the customer (S5→S6), terminating the trade (S6→E).
Figure 9: Agent brokering communication protocol.
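The broker's sequence maps naturally onto a JADE finite-state-machine behaviour; the sketch below only registers the states and linear transitions of the sequence above, with hypothetical stubs standing in for the actual message handling:

import jade.core.Agent;
import jade.core.behaviours.FSMBehaviour;
import jade.core.behaviours.OneShotBehaviour;

// Sketch of the broker's transition sequence S0 -> ... -> S6 -> E as a
// JADE FSMBehaviour; the stub states stand in for the message handling.
public class BrokerProtocol extends FSMBehaviour {
    public BrokerProtocol(Agent a) {
        super(a);
        String[] states = {"S0", "S1", "S2", "S3", "S4", "S5", "S6", "E"};
        registerFirstState(stub("S0"), "S0");
        for (int i = 1; i < states.length - 1; i++)
            registerState(stub(states[i]), states[i]);
        registerLastState(stub("E"), "E");
        // Linear transitions, mirroring S0 -> S1 -> ... -> S6 -> E
        for (int i = 0; i < states.length - 1; i++)
            registerDefaultTransition(states[i], states[i + 1]);
    }

    // Hypothetical placeholder for the behaviour executed in each state
    private OneShotBehaviour stub(String name) {
        return new OneShotBehaviour(myAgent) {
            public void action() {
                System.out.println("Broker in state " + name);
            }
        };
    }
}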