This describes the main lines of our design objective to fuzzify a logic
programming language using the OSF
lattice calculus as a set of Java packages hlt.osf.fuz.* extending
the hlt.osf.* packages in the same way as the
hlt.fot.fuz.* packages extend the hlt.fot.* packages
(see Useful links below).
Prolog accepts queries over first-order terms (FOTs) given a program
made of a finite ordered sequence of Horn clauses defining rules and
facts. If a query terminates, it can either succeed or fail. If a
query is successful, it prints a (possibly empty) variable
substitution if any occurs in the query, and the standard positive
keyword "yes". If the query fails, it comes back with the
standard negative keyword "no".
This categorical answer dichotomy is made more flexible in a fuzzy
Prolog where, if a query terminates, one of three
possible answers may occur:
"yes"; or,
"maybe[α]"; i.e., "yes" at a
maximal approximation degree α (0.0
< α < 1.0); or,
"no".
For example, a crisp positive answer such as in classical Prolog
would still be written "yes", but a fuzzier positive
answer at maximal approximation degree 0.7, would be written
"maybe[0.7]". Of course, the only possible answer left
(in a closed world such as Prolog's) would be a crisp
"no". This is formally equivalent to (although
intuitively less ambiguous than) "maybe[0.0]" —
just as "yes" is clearer than "maybe[1.0]". In
the "yes" or positive "maybe" cases, a variable substitution may be
required to accommodate (even fuzzy) success.
Prolog proceeds depth-first using chronological backtracking, trying
each fact/rule in the order they appear. Whenever unification succeeds
between the current goal in the query and a fact or a rule head,
execution proceeds with the next goal on top of the query stack (if any
are pushed on it from the body of a rule). If it fails, it resets the
state (variable bindings), and tries the next untried fact or rule head
in the program sequence in the order they are specified. Since
crisp unification can either succeed or fail, this is a well defined
strategy.
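This strategy can be sketched in Java on a deliberately simplified, propositional view of a program: only clause selection in program order and backtracking on failure are modeled, while unification and variable bindings are elided, and all names are hypothetical.

```java
import java.util.*;

// Minimal propositional sketch of Prolog's depth-first, chronological
// backtracking: clauses are tried in program order; on failure, the
// next untried clause is attempted. Unification/bindings are elided.
public class DepthFirst {
    // program: predicate name -> ordered list of clause bodies
    // (a fact has an empty body)
    static Map<String, List<List<String>>> program = new LinkedHashMap<>();

    static boolean solve(Deque<String> goals) {
        if (goals.isEmpty()) return true;             // all goals proved: "yes"
        String goal = goals.pop();                    // current goal on top of the query stack
        for (List<String> body : program.getOrDefault(goal, List.of())) { // chronological order
            Deque<String> next = new ArrayDeque<>(goals);
            for (int i = body.size() - 1; i >= 0; i--) next.push(body.get(i)); // push rule body
            if (solve(next)) return true;             // succeed with the first proof found
        }
        goals.push(goal);                             // reset the state on failure (backtrack)
        return false;                                 // "no"
    }

    public static void main(String[] args) {
        program.put("q", List.of(List.of()));                  // fact: q.
        program.put("p", List.of(List.of("r"), List.of("q"))); // p :- r.  p :- q.
        // The first clause for p fails (r is unprovable), so backtracking
        // reaches the second clause, which succeeds via the fact q.
        System.out.println(solve(new ArrayDeque<>(List.of("p")))); // true
    }
}
```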
However, in fuzzy LP, unification of a query goal with a fact or rule
head can be one of: yes, maybe[α], or
no. Since it is preferable to aim for answers with the
highest approximation degree, the above search strategy does not guarantee
reaching the best answer first (i.e., with highest possible
degree).
In order to give preference to the highest approximation degree,
depth-first search should be replaced with best-first
search. This corresponds to trying each fact/rule in the order of
preference for the highest approximation degree obtained from unifying
the current query goal with yet untried facts and rule heads. In case
of equal-degree candidates, the one corresponding to the earliest in
the program sequence is chosen. In other words, whenever unification
succeeds between the current goal in the query and a yet-untried fact
or a rule head in the remaining sequence, execution proceeds with the
next goal this would push on the query stack (from the body of a rule
if the successful candidate is a rule head).
If it fails, it tries the next untried fact or rule head, not in
the order they appear in the program sequence, but in decreasing
order of approximation degree, resetting the corresponding state
(variable bindings) to the fuzzy unification state that had been
saved with that next highest-degree candidate.
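The candidate-selection step of this degree-greedy strategy can be sketched as follows, assuming a hypothetical Candidate record that pairs a program position with the approximation degree obtained from fuzzily unifying the current goal with that fact or rule head (these names are illustrative, not the hlt.osf.fuz API):

```java
import java.util.*;

// Degree-greedy candidate selection: among the yet-untried facts/rule
// heads that fuzzily unify with the current goal, pick the one with the
// highest approximation degree, breaking ties by earliest program order.
public class BestFirstChoice {
    record Candidate(int programIndex, double degree) {}

    static Optional<Candidate> pickNext(List<Candidate> untried) {
        return untried.stream()
            .filter(c -> c.degree() > 0.0)  // degree 0 means fuzzy unification failed
            .max(Comparator.comparingDouble(Candidate::degree)  // highest degree first
                 .thenComparing(Candidate::programIndex,
                                Comparator.reverseOrder()));    // earliest wins ties
    }

    public static void main(String[] args) {
        List<Candidate> untried = List.of(
            new Candidate(0, 0.7), new Candidate(1, 1.0),
            new Candidate(2, 1.0), new Candidate(3, 0.0));
        // Two candidates tie at degree 1.0; the earlier one (index 1) is chosen.
        System.out.println(pickNext(untried).get()); // Candidate[programIndex=1, degree=1.0]
    }
}
```

When every degree is 1.0 (no similarity besides equality), this ordering degenerates to plain program order, matching the remark below that the strategy then coincides with Prolog's standard search.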
Note that this fuzzy-best strategy becomes Prolog's standard
leftmost/depth-first search when no other similarity besides
equality exists on the term signature. Note also that this does not
guarantee that existing approximate solutions will be enumerated in
decreasing order of approximation degree, since the above is
simply a local greedy search (a degree-greedy fuzzy
search). Guaranteeing optimal search is not possible. Indeed,
Horn Logic (fuzzy or not) is undecidable (see for example
this Turing
Machine in Pure Prolog).
Start with an OSF-Prolog interpreter in Java
Start small along the lines of LOGIN. Like
LISP (where every construct is an S-Expression),
and Prolog (where every construct is a first-order
term), every construct in HOOT is represented as an OSF term, to be
normalized if necessary into a
ψ-term
(see also this ψ-term
Java implementation). However, at parse time, a HOOT program
is built from uninterpreted syntactic constructs of its parsed
components as it is being parsed. The uniform representation that
is a semantically interpretable OSF term is only built after
parsing is fully complete without errors. Thus, parsing HOOT only
accumulates all the necessary information from the basic
syntax. The rules of this basic syntax can be expressed as
the yacc-like
grammar whose rules we detail below. In this grammar, non-terminal
symbols are capitalized
CamelCase
symbols
(e.g., OsfTerm, UnTaggedOsfTerm).
Terminals are single-quoted strings
(e.g., '.', ':-' ), or uppercase
token names
(e.g., TAG, BUILT-IN_VALUE).
A HOOT program consists of a sequence of Horn clauses.
An OSF term is written using the following grammar
(where [...] stands for optional, and
uppercase and single-quoted symbols are terminal tokens
recognized by the OSF tokenizer):
a program is a function associating to a sort symbol denoting a
predicate, together with an optional arity (a possibly empty set of
features and/or positions), a linear list of clauses:
function program
: sort-symbol[ / feature-arity ]
→ list<clause>
a clause consists of arguments (an array of
ψ-terms) and a body as a list of literals:
class clause (array<ψ-term> arguments,
list<literal> body)
a literal is an atomic predicate/arity along with ψ-term arguments:
class literal (sort-symbol predicate,
setOf<feature-symbol> arity,
array<ψ-term> arguments)
etc., ...
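Assuming these parse-time structures are rendered as plain Java records (all names below, including the PsiTerm stand-in for a real ψ-term class, are illustrative and not the hlt.osf API), the sketch might look like:

```java
import java.util.*;

// One possible Java rendering of the parse-time structures above:
// a program maps a predicate's sort symbol and feature arity to the
// ordered list of its clauses; a clause has ψ-term arguments and a
// body of literals. PsiTerm is a placeholder for a real ψ-term class.
public class HootStructures {
    record PsiTerm(String sort) {}                      // stand-in for a real ψ-term
    record Literal(String predicate, Set<String> arity, List<PsiTerm> arguments) {}
    record Clause(List<PsiTerm> arguments, List<Literal> body) {}
    record PredicateKey(String sortSymbol, Set<String> featureArity) {}

    public static void main(String[] args) {
        Map<PredicateKey, List<Clause>> program = new LinkedHashMap<>();
        // fact: a clause with one ψ-term argument and no body literals
        Clause fact = new Clause(List.of(new PsiTerm("person")), List.of());
        program.put(new PredicateKey("likes", Set.of("1")), List.of(fact));
        System.out.println(program.size()); // 1
    }
}
```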
What does it mean to "fuzzify" the OSF calculus?
It means the same as to fuzzify
the FOT calculus, but extended to the
OSF calculus.
Basically the Order-Sorted Feature (OSF) calculus consists of a set
of data structures called OSF terms, together with an approximation
ordering on these data structures forming a lattice for this
ordering with two operations on these data structures: unification
and generalization.
A canonical OSF term (or ψ-term) is a directed graph data
structure whose nodes are labeled with partially ordered sort
symbols and whose arcs are labeled with functional feature
symbols. It corresponds to an OSF graph in which no two distinct
arcs out of any node carry the same feature (i.e.,
features are functional arcs) and no node is sorted with the
empty sort ⊥; such a graph is said to be consistently
labeled. In other words, a ψ-term is a consistently
labeled rooted OSF graph. The subsort partial order on the sorts
they use extends to an approximation ordering between
ψ-terms. This partial order (called subsumption) is the
natural extension to rooted OSF graphs of the well-known
instantiation ordering that exists between First-Order Terms (FOTs)
used in Logic Programming (e.g., Prolog). Like a FOT, a
ψ-term denotes the set of all its instances (i.e., all
the ψ-terms it subsumes). In particular, like FOTs, ψ-terms
form a lattice for this subsumption ordering, where the two lattice
operations are OSF unification, which corresponds to the
greatest set of common instances of two terms (i.e., the
intersection of their denotations), and OSF generalization,
which is the least graph having both as instances, and which is a
superset of the union of the sets of their instances.
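A minimal sketch of such a rooted OSF graph in Java, assuming a plain node class (not the hlt.osf implementation): each node carries a sort label and a feature map, where using a map per node is precisely what makes features functional; cyclic graphs are permitted.

```java
import java.util.*;

// A ψ-term node as a rooted directed graph: a sort label plus a map
// from feature symbols to subterms. The map guarantees at most one
// arc per feature out of a node (features are functional arcs).
public class Psi {
    String sort;
    Map<String, Psi> features = new LinkedHashMap<>();

    Psi(String sort) { this.sort = sort; }

    Psi at(String feature, Psi sub) { features.put(feature, sub); return this; }

    public static void main(String[] args) {
        // person(name -> string, spouse -> person(spouse -> <back to root>))
        Psi p = new Psi("person").at("name", new Psi("string"));
        Psi q = new Psi("person").at("spouse", p);
        p.at("spouse", q);                        // cyclic graphs are allowed
        System.out.println(p.features.get("spouse").features.get("spouse") == p); // true
    }
}
```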
N.B.: It may appear odd that, while unification of terms
yields a term that denotes exactly the intersection of their
denotations, generalization of two terms, on the other hand,
yields a term that denotes in general a
superset of the union of their denotations. This is
explained by the fact that denotations as sets of instances
are not the set-theoretic duals of denotations as sets of
containers. In particular, while there can never be a term that
is an instance of two terms but not of their unification, there
may be a term that is an instance of their generalization but is
not related to either term. This is in particular always the
case for any two distinct sorts that are immediate subsorts of
the most general sort ⊤, since their least generalizer is
precisely ⊤, which subsumes not only the two sorts but also
any other unrelated sort.
Rather than normalizing a conjunctive set of equations as done in FOT
unification, OSF unification normalizes an OSF clause consisting of a
conjunctive set of atomic OSF constraints obtained by "dissolving"
rooted OSF terms into OSF clauses and equating the two terms' roots.
Thus, the fuzzy unification of two rooted OSF terms ψ1
and ψ2 consists in normalizing the fuzzy OSF
clause:
(root(ψ1)
= root(ψ2)
& φ(ψ1) &
φ(ψ2))α
at a fuzzy degree α ∈ ]0,1] to an irreducible
rooted OSF clause φ'β and fuzzy
degree β ∈ [0,α], and when φ'
≠ false and β ≠ 0, derive a tag map from
the nodes of the original OSF terms being unified to those of the
OSF term ψ having the normal form φ' as clausal form
("dissolved" forms). That is, ψ
= φ-1(φ'). If ψ is the
inconsistent term ⊥
= φ-1(false) or if β
= 0, then ψ1 and ψ2 are not
fuzzy-unifiable, in which case such a tag map does not exist.
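The sort-constraint step of this normalization can be illustrated with a toy sketch, assuming min as the combination of degrees and a hard-coded similarity table (both are assumptions for illustration, not the hlt.osf.fuz design):

```java
import java.util.*;

// Sort step of fuzzy OSF normalization, as a sketch: two sort
// constraints on the same node combine, and if the sorts are only
// similar rather than ordered, the running degree α is weakened by
// the similarity degree (min combination assumed). A resulting
// degree of 0 encodes failure (the inconsistent term ⊥).
public class FuzzySortMeet {
    // toy similarity: the degree at which two sorts are interchangeable
    static double similarity(String s, String t) {
        if (s.equals(t)) return 1.0;
        if (Set.of(s, t).equals(Set.of("cat", "dog"))) return 0.6; // declared similar
        return 0.0;                                                // unrelated sorts
    }

    // returns the new degree β ∈ [0, α]; β = 0 means not fuzzy-unifiable
    static double meetDegree(String s, String t, double alpha) {
        return Math.min(alpha, similarity(s, t));
    }

    public static void main(String[] args) {
        System.out.println(meetDegree("cat", "cat", 1.0)); // 1.0 (crisp success)
        System.out.println(meetDegree("cat", "dog", 1.0)); // 0.6 (fuzzy success)
        System.out.println(meetDegree("cat", "int", 0.9)); // 0.0 (not fuzzy-unifiable)
    }
}
```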
Fuzzifying generalization of two rooted OSF terms ψ1
and ψ2 consists in deriving a new OSF term together
with a pair of tag maps realizing the two OSF endomorphisms from the
terms being generalized to the new term generalizing them.
There are two approximation orderings on fuzzy ψ-terms that must
be conjugated, and whose declarations must be checked for
consistency and closed when they are not (by taking the transitive
closure of the sort ordering and of the sort
similarity). More explicitly, sort-approximation consistency means
the following.
To each approximation degree in the set of the transitive
closure's fuzzy values there corresponds a partition of
similar sorts;
As the fuzzy values decrease, the corresponding similarity
partitions get coarser;
For each element in any one of these partitions of similar
sorts there corresponds a lattice partial ordering on the
congruence classes;
As the fuzzy values decrease, the ordering on the similar classes
corresponding to a partition's element gets coarser.
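The coarsening of these similarity partitions can be illustrated with a toy sketch (hypothetical sorts and degrees; a tiny union-find closes each degree cut into equivalence classes), showing that lowering the degree can only merge classes:

```java
import java.util.*;

// α-cuts of a fuzzy sort similarity: at degree α, the pairs whose
// declared similarity is ≥ α are closed into equivalence classes by
// a small union-find; as α decreases, the partition gets coarser.
public class SimilarityCut {
    static Map<String, String> parent = new HashMap<>();
    static String find(String x) {
        parent.putIfAbsent(x, x);
        if (!parent.get(x).equals(x)) parent.put(x, find(parent.get(x)));
        return parent.get(x);
    }
    static void union(String a, String b) { parent.put(find(a), find(b)); }

    // fuzzy similarity declarations: a pair of sorts and its degree
    record Sim(String a, String b, double degree) {}

    // number of similarity classes in the partition at cut level alpha
    static int blocks(List<Sim> sims, List<String> sorts, double alpha) {
        parent = new HashMap<>();
        sorts.forEach(SimilarityCut::find);
        for (Sim s : sims) if (s.degree() >= alpha) union(s.a(), s.b());
        Set<String> roots = new HashSet<>();
        for (String s : sorts) roots.add(find(s));
        return roots.size();
    }

    public static void main(String[] args) {
        List<String> sorts = List.of("cat", "dog", "pet", "int");
        List<Sim> sims = List.of(new Sim("cat", "dog", 0.8), new Sim("dog", "pet", 0.5));
        System.out.println(blocks(sims, sorts, 0.9)); // 4: all sorts kept apart
        System.out.println(blocks(sims, sorts, 0.7)); // 3: {cat, dog} merged
        System.out.println(blocks(sims, sorts, 0.4)); // 2: {cat, dog, pet} merged
    }
}
```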
Useful links:
The Javadoc API for the hlt.osf.* packages is accessible
at: