Char :: {Char}
The set of all characters. See Char for more details on the range of characters supported.
Events :: {Event}
The set of all events that are currently defined. For example, if a CSPM file included the following definitions:
channel a : {0, 1}
channel b
then Events would evaluate to {a.0, a.1, b}.
Int :: {Int}
The set of all integers. See Int for more details on the range of integers supported.
card :: (Eq a) => ({a}) -> Int
Returns the cardinality, or size, of the given set. For example, card({}) = 0 and card({0}) = 1.
diff :: (Set a) => ({a}, {a}) -> {a}
Returns the relative complement of two sets (i.e. \(X \setminus Y\) is written as diff(X,Y)). For example, diff({1}, {1}) = {} and diff({1,2}, {1}) = {2}.
Inter :: (Set a) => ({{a}}) -> {a}
Given a set of sets, returns the intersection of all of the sets. For example, Inter({{1}, {1,2}, {1,2,3}}) = {1}.
member :: (Eq a) => (a, {a}) -> Bool
Returns true if the given element is a member of the given set.
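For example, member(1, {1, 2}) = true and member(3, {1, 2}) = false.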
seq :: (Eq a) => ({a}) -> <a>
Returns a sequence consisting of all the elements in the given set. The ordering is undefined, although equal sets will return their elements in the same order.
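For example, seq({1, 2, 3}) might evaluate to <1, 2, 3>, although the precise ordering is not specified.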
New in version 2.91.
concat :: (<<a>>) -> <a>
Given a sequence of sequences, concatenates all the sequences into one. For example, concat(<<1>, <2>, <3>>) = <1, 2, 3>.
elem :: (Eq a) => (a, <a>) -> Bool
Returns true if the first argument occurs anywhere in the given list.
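For example, elem(2, <1, 2, 3>) = true and elem(4, <1, 2, 3>) = false.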
head :: (<a>) -> a
Returns the first element of the given list, throwing an error if the list is empty.
tail :: (<a>) -> <a>
Returns the tail of the list starting at the second element, throwing an error if the list is empty. Thus, xs = <head(xs)>^tail(xs).
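For example, head(<1, 2, 3>) = 1 and tail(<1, 2, 3>) = <2, 3>.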
mapDelete :: (Set k) => (Map k v, k) -> Map k v
Removes the specified key from the map if it is present. O(log n).
New in version 3.2.
mapFromList :: (Set k) => (<(k, v)>) -> Map k v
Constructs a map given a sequence of associative pairs. If the same key appears more than once, then the last value in the list will be used (i.e. later entries will overwrite earlier entries). O(n log n).
New in version 3.0.
mapLookup :: (Set k) => (Map k v, k) -> v
Returns the value associated with the specified key in the map. If the key is not in the domain of the map then an error is thrown. O(log n).
New in version 3.0.
mapMember :: (Set k) => (Map k v, k) -> Bool
Returns true if the key is in the map. O(log n).
New in version 3.0.
mapToList :: (Set k) => (Map k v) -> <(k, v)>
Converts a map to a sequence of associative pairs. O(n).
New in version 3.0.
mapUpdate :: (Set k) => (Map k v, k, v) -> Map k v
Inserts the specified key-value pair into the map, overwriting the current value if the specified key is already in the map. O(log n).
New in version 3.0.
mapUpdateMultiple :: (Set k) => (Map k v, <(k, v)>) -> Map k v
Inserts each key-value pair into the map (as per mapUpdate). If the same key appears more than once, then the last value in the list will be used (i.e. later entries will overwrite earlier entries). O(m log n), where m is the length of the list to be inserted.
New in version 3.0.
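As an illustrative sketch of how the map functions above combine (the keys and values here are arbitrary):
m = mapFromList(<(1, false), (2, false), (1, true)>)
-- Later entries overwrite earlier ones, so mapLookup(m, 1) = true
-- mapMember(m, 3) = false, and mapLookup(m, 3) would throw an error
m' = mapUpdate(m, 2, true)
-- mapLookup(m', 2) = true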
CHAOS :: ({Event}) -> Proc
CHAOS(A) offers the non-deterministic choice over all events in A, but may also deadlock at any point.
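Although CHAOS is built in, one way to think of it (a sketch, assuming A is a non-empty set of events) is via the recursive definition:
CHAOS(A) = STOP |~| (|~| x : A @ x -> CHAOS(A))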
DIV :: Proc
DIV immediately diverges. It is equivalent to DIV = STOP [> DIV, or DIV = |~| _ : {0} @ DIV.
loop :: (Proc) -> Proc
loop(P) computes a process that repeatedly runs the given process using Sequential Composition. In particular, loop(P) is equal to X, where X = P ; X.
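For example, loop(a -> SKIP) is equivalent to the process AS where AS = a -> AS, since (a -> SKIP) ; AS = a -> AS.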
Note
This function was required in prior versions of FDR to enable a more optimised representation for certain processes. The new compiler included in 3.0 automatically recognises and optimises definitions of the form X = P ; X, removing the need for this function to be used.
STOP :: Proc
The process that offers no events and therefore represents a deadlocked process. It is equivalent to [] _ : {} @ error("This is not called.").
WAIT :: (Int) -> Proc
WAIT(n) performs precisely n tock events and then behaves like SKIP. Note that this function is only in scope within timed sections.
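A minimal sketch of usage, assuming the usual timed-section setup (a user-declared tock channel and a timing function OneStep that assigns one time unit to every event):
channel a, tock
OneStep(_) = 1
Timed(OneStep) {
  -- WAIT(2) performs two tocks and then behaves like SKIP
  P = WAIT(2) ; a -> SKIP
}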
FDR has a number of compression functions that can be applied to processes to either attempt to reduce the number of states that a process has (and thus make refinement checking faster), or to change the semantics in useful ways (cf. chase or prioritise).
Generally, compressions that are designed to reduce the number of states a process has are best applied to the arguments of parallel compositions, or other similar operators. There is no point applying one of the semantics-preserving compressions to the top-level of a process since the compression will need to visit every state of the inner process in order to calculate the compressed machine. Clearly this is precisely what the original refinement check would have done. For example:
-- Potentially a useful compression
assert normal(Q) ||| normal(R) :[deadlock free]
-- Not a useful compression
assert normal(Q ||| R) :[deadlock free]
Further, note that by default FDR applies compressions to the arguments of parallel compositions, meaning there is no need to apply compression to such processes. By default, FDR uses sbisim to compress the leaf machines, but the compression used can be configured using compiler.leaf_compression.
The best method of assessing the effectiveness of compressions is to use the machine structure viewer, which can display details regarding the number of states and transitions that are saved by compression. In general, when attempting to reduce the state space of a process, the best compression functions to try are dbisim, diamond (followed by sbisim), normal, and wbisim. It is often hard to predict which compression function will perform well on a given process: in general, it is best to perform several experiments to determine which results in the best reduction in the state space size. In general, diamond followed by sbisim is the fastest of the compressions to perform. On most systems, wbisim only gives a small extra reduction in the number of states versus dbisim, which is a very effective compression. When normal works efficiently it generally achieves very good compression ratios, but unlike the other compressions, it can actually cause the number of states to increase.
The compression functions are all either transparent or external functions, and thus need to be explicitly imported. See Transparent and External Functions for more information. Compressions that are semantics preserving are transparent, whilst compressions that alter the semantics and are thus unsafe are marked as external.
The algorithms that are used to compute the compressions detailed below are described in Understanding Concurrent Systems.
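For example, a script that wants to use some of the compressions described below might declare (the particular selection here is illustrative):
transparent normal, sbisim, diamond
external chase, prioritise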
chase :: (Proc) -> Proc
Not semantics preserving - this should only be used when it has been proven that its application is safe, or if the behaviour is desired.
chase(P) is best defined by considering how to chase an individual state s of P. If s can perform a tau to some state s', then chase(s) = chase(s'). If there are multiple tau transitions, then the tau transition that is picked is not defined: it is picked according to internal implementation details. If s cannot perform a tau, then chase(s) = s.
For example, if P = a -> STOP |~| b -> STOP then chase(P) could either equal a -> STOP or b -> STOP. Thus, chase is a form of manual partial order reduction on invisible events.
chase is primarily useful when the result of performing one tau is known not to cause other taus to become unavailable. For example, consider chase((a -> STOP ||| b -> STOP) \ {a, b}): applying chase to this system does not change the semantics, since whenever a hidden a or b occurs the other action is still allowed in the resulting state.
This compression is lazy, in that it is computed on-the-fly as the process is explored. It can therefore be sensibly applied at the top-level of a system.
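For example, the safe application described above could be written as follows (a sketch; the channel declarations and assertions are illustrative):
channel a, b
external chase
P = (a -> STOP ||| b -> STOP) \ {a, b}
-- chase is safe here: whenever a hidden a or b occurs,
-- the other hidden event remains available, so no behaviour is lost
assert chase(P) [FD= P
assert P [FD= chase(P)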
Warning
Applying chase to divergent processes is unsafe and may cause FDR to crash (it will certainly cause checking not to terminate if a divergent state is reached by chase).
Warning
This function will not perform well once the bounds of RAM have been exceeded. This will be addressed in a future release of FDR3.
chase_nocache :: (Proc) -> Proc
Not semantics preserving - this should only be used when it has been proven that its application is safe.
This behaves exactly as per chase, but optimises the internal representation to cache less information and thus consume less memory (and will give a small speed up). This should only be used if it is relatively unlikely that a state of the chased machine will be visited twice.
deter :: (Proc) -> Proc
Not semantics preserving - this should only be used when it has been proven that its application is safe, or if the behaviour is desired.
This is only valid in the failures-divergences model, and returns a deterministic version of the process using the algorithm specified in Understanding Concurrent Systems. This is used internally by FDR to implement determinism checking in the failures-divergences model, and is unlikely to be of more general use.
diamond :: (Proc) -> Proc
diamond returns a new process where, essentially, given a state s of the original process, as many as possible of the transitions of states that are reachable via a series of taus from s are added to s itself. The idea behind this is that it will potentially allow states that are reachable via chains of taus to be elided.
It is a useful compression to apply since it can never increase the size of a compressed process (unlike normal), results in an LTS that contains no taus (which can be a useful property), and is often quick to compute. Further, the output of diamond is guaranteed to contain no taus.
diamond can only be used in the traces, failures and failures-divergences models. It applies tau_loop_factor and explicate as preprocessing steps.
The output of this often benefits from being strongly bisimulated. In particular, it may be useful to define the function sbdia(P) = sbisim(diamond(P)) to use as a compression function.
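For example, such a combined compression could be declared as follows:
transparent sbisim, diamond
sbdia(P) = sbisim(diamond(P))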
dbisim :: (Proc) -> Proc
dbisim(P) computes the maximal delay bisimulation of P and then returns an LTS that has been reduced using this. In particular, it identifies any states of P that can perform the same visible events after performing zero or more taus, and such that performing any such event leads to states that are also delay bisimilar. This differs from wbisim in that dbisim requires the states reached immediately after performing the visible event to be bisimilar, whereas wbisim also allows zero or more taus to occur after the visible event.
dbisim is slower than sbisim to compute, but can achieve substantially more compression than sbisim on some processes. It is faster than wbisim to compute, but wbisim can achieve a small amount of extra compression on some processes.
New in version 3.0.
explicate :: (Proc) -> Proc
If the provided machine is a high-level machine, converts it to a low-level machine. Whilst the transitions of this machine will be accessible more quickly, the resulting machine will usually use up far more memory and the time taken to do the explication will exceed the time taken to simply access the original machine's transitions. Therefore, this is only of use when FDR's compilation algorithm incorrectly selects the level to compile a machine at. It is also used as a preprocessing stage for a number of the other compressions.
lazyenumerate :: (Proc) -> Proc
This behaves like explicate, but lazily computes the low-level transitions as they are used, rather than up-front. It is not normally necessary to use this compression function, but it is used internally by a number of compression functions.
failure_watchdog :: (Proc, {Event}, Event) -> Proc
This function returns the failures watchdog of the given process, as specified in Watchdog Transformations for Property-Oriented Model-Checking. In particular, failure_watchdog(P, evs, bark) alters P and returns a process P' such that any state s of P that offers the events inits instead offers \(evs \cup inits\) and, if any event in \(evs \setminus inits\) is performed, transitions to a process equivalent to bark -> STOP. Further, the watchdog will also monitor the failures of the process it is put in parallel with, and will deadlock if the process has a disallowed failure.
This transformation can be used to turn a refinement check of the form:
assert Spec [F= Impl
into a check:
assert failure_watchdog(Spec, A, bark) [| A |] Impl :[deadlock free [F]]
assuming that A is the alphabet of Impl. The usefulness of this is as per trace_watchdog. However, note that unlike trace_watchdog, failure_watchdog can cause the size of the specification to dramatically increase and therefore the resulting check can be slower. In particular, specifications that require lots of events to be simultaneously offered will be less efficient (conversely, specifications that are very nondeterministic will be more efficient).
New in version 3.1.
normal :: (Proc) -> Proc
normal(P) returns a normalised version of P in which there are no taus and such that each state contains at most one transition labelled with a particular event. For example, the process:
P = a -> P |~| a -> STOP
is normalised to the process P' where:
P' = a -> Q'
Q' = a -> Q'
where Q' is labelled with the refusals {} and {a} (i.e. either Q' refuses nothing, or it refuses a).
The output of normal automatically has sbisim applied to it.
prioritise :: (Proc, <{Event}>) -> Proc
¶Not semantics preserving - this should only be used when it has been proven that its application is safe, or if the behaviour is desired.
prioritise(P, <S1, ..., SN>) takes a process and a non-empty sequence of disjoint sets of events (note that the latter condition is not checked). It returns a machine whose operational semantics have been altered to only allow actions from Si providing that, for all j < i, no event from Sj is offered. Note that tau and tick are implicitly added to S1. Any event that is not in any of the Si is unaffected by this transformation. For example, in the following each Pi is equivalent to Qi:
P1 = prioritise(a -> STOP [] b -> STOP, <{a}, {b}>)
Q1 = a -> STOP
P2 = prioritise(a -> STOP |~| b -> STOP, <{a}, {b}>)
Q2 = a -> STOP |~| b -> STOP
P3 = prioritise(a -> STOP [> b -> STOP, <{}, {a}>)
Q3 = b -> STOP
Note that this compression is lazy and can therefore be applied to the outer level of a system (indeed, this is the most common usage). If there is only one set of events specified then the above function is the identity function.
This function is most commonly used when modelling timed systems using tock CSP, since prioritise(P, <{}, {tock}>) ensures that tock cannot occur when tau or tick are available.
Note that prioritise(P, ...) cannot be applied to processes P that contain certain compressions that are not prioritise safe. The only compressions that are prioritise safe are dbisim, sbisim and wbisim. FDR will detect unsafe applications of compressions and report these as errors when necessary.
New in version 2.94.
Changed in version 3.0: In FDR2, this function took either a variable number of arguments or a sequence of sets. In FDR3 this function has been changed to only allow a sequence of sets to be passed.
Warning
As with chase, this function will not perform well once the bounds of RAM have been exceeded. This will be addressed in a future release of FDR3.
prioritise_nocache :: (Proc, <{Event}>) -> Proc
¶Not semantics preserving - this should only be used when it has been proven that its application is safe, or if the behaviour is desired.
This behaves exactly as per prioritise, but optimises the internal representation to cache less information and thus consume less memory (and will give a small speed up). This should only be used if it is relatively unlikely that a state of the prioritised machine will be visited twice.
New in version 2.94.
Changed in version 3.0: In FDR2, this function took either a variable number of arguments or a sequence of sets. In FDR3 this function has been changed to only allow a sequence of sets to be passed.
prioritisepo :: (Proc, {Event}, {(Event, Event)}, {Event}) -> Proc
¶Not semantics preserving - this should only be used when it has been proven that its application is safe, or if the behaviour is desired.
prioritisepo(P, E, O, M) takes a process and a specification of a partial order (the format of which is defined below). It returns a machine whose operational semantics have been altered to only allow an event e providing no event above e in the specified partial order is offered. This is essentially a more advanced version of prioritise that permits additional control.
The partial order is specified by three sets: E, O, and M. E is the set of all events that should be prioritised: events not in E will be unaffected by prioritisepo. O is a set of pairs such that if (u, l) is present in O, then l is defined as being below u in the ordering (i.e. u > l, and thus u would be prioritised over l). The actual order used is the transitive closure of the order O. Any event in E is assumed to be below tau and tick, but the set M can be used to specify which events should be considered equal to tau and tick in the ordering. For example, in the following each Pi is equivalent to Qi:
P1 = prioritisepo(a -> STOP [] b -> STOP, {a,b}, {(a, b)}, {})
Q1 = a -> STOP
P2 = prioritisepo(a -> STOP |~| b -> STOP, {a, b}, {(a, b)}, {})
Q2 = a -> STOP |~| b -> STOP
P3 = prioritisepo(a -> STOP [> b -> STOP, {a}, {}, {})
Q3 = b -> STOP
P4 = prioritisepo(a -> STOP [> b -> STOP, {a}, {}, {a})
Q4 = a -> STOP [> b -> STOP
As per prioritise, this compression is lazy and can therefore be applied to the outer level of a system (indeed, this is the most common usage). If there is only one set of events specified then the above function is the identity function.
As per prioritise, prioritisepo(P, ...) cannot be applied to processes P that contain certain compressions that are not prioritise safe. The only compressions that are prioritise safe are dbisim, sbisim and wbisim. FDR will detect unsafe applications of compressions and report these as errors when necessary.
New in version 3.2.
Warning
As with chase, this function will not perform well once the bounds of RAM have been exceeded. This will be addressed in a future release of FDR3.
sbisim :: (Proc) -> Proc
sbisim(P) computes the maximal strong bisimulation of P and then returns an LTS that has been reduced using this. In particular, it identifies states that have identical behaviour: any states of P that can perform the same events, and such that performing them leads to states that are also strongly bisimilar, are identified with each other.
Generally, sbisim is able to compress a process very quickly, but will often not reduce the number of states that much compared to other compressions. It is automatically applied to the result of normal, is often useful to apply to the result of diamond, and is the default compression used by FDR on leaf machines (see compiler.leaf_compression).
tau_loop_factor :: (Proc) -> Proc
tau_loop_factor(P) identifies any states that are on a tau loop (i.e. it identifies any two states s and s' such that s can reach s' via a sequence of taus and s' can reach s via a sequence of taus).
Generally this compression is not that useful on its own, but it is used as a preprocessing step by a number of other compressions.
trace_watchdog :: (Proc, {Event}, Event) -> Proc
This function returns the traces watchdog of the given process, as specified in Watchdog Transformations for Property-Oriented Model-Checking. In particular, trace_watchdog(P, evs, bark) alters P and returns a process P' such that any state s of P that offers the events inits instead offers \(evs \cup inits\) and, if any event in \(evs \setminus inits\) is performed, transitions to a process equivalent to bark -> STOP. For example, trace_watchdog(a -> STOP, {a, b}, bark) is equivalent to the process:
a -> (a -> bark -> STOP [] b -> bark -> STOP)
[] b -> bark -> STOP
This transformation can be used to turn a refinement check of the form:
assert Spec [T= Impl
into a check:
assert STOP [T= (trace_watchdog(Spec, A, bark) [| A |] Impl) \ A
assuming that A is the alphabet of Impl. There are two reasons why this may be useful. Firstly, it allows various hierarchical compression techniques to be applied to the combination of the specification and the implementation. Further, if FDR finds a counterexample to the transformed assertion then, providing Spec is deterministic, it will be possible to divide this into behaviours of both the specification and the implementation, thus allowing a limited form of specification debugging to occur.
Note
Counterexamples can only be divided through trace_watchdog when the check is being done in the traces model. Counterexamples for stronger models cannot be divided through trace_watchdog since there is no guarantee that the behaviour is indeed a behaviour of the old machine.
New in version 3.1.
timed_priority :: (Proc) -> Proc
¶Not semantics preserving - this should only be used when it has been proven that its application is safe, or when the behaviour is desired.
timed_priority(P) is equivalent to prioritise(P, <{}, {tock}>) and thus gives priority to tau and tick over tock. Note that this function is only in scope within timed sections and, in contrast to the other compression functions, does not need to be imported using transparent or external.
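A minimal sketch of usage, assuming a user-declared tock channel and a OneStep timing function; since timed_priority is only in scope inside the timed section, it is applied there and the result given a name:
channel a, tock
OneStep(_) = 1
Timed(OneStep) {
  P = a -> P
  TP = timed_priority(P)
}
assert TP :[deadlock free]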
New in version 2.94.
Changed in version 3.0: In FDR2, this function was available anywhere in a file that used the timed section syntax. In FDR3, this is only available within a timed section itself.
wbisim :: (Proc) -> Proc
wbisim(P) computes the maximal weak bisimulation of P and then returns an LTS that has been reduced using this. In particular, it identifies any states of P that can perform the same visible events after performing zero or more taus, and such that performing any such event leads to states, again after possibly performing some more taus, that are also weakly bisimilar. This differs from dbisim in that wbisim does not require the states immediately reached after performing the visible event to be bisimilar, but instead allows zero or more taus to occur.
wbisim is slower than both dbisim and sbisim, but is able to achieve a small amount of extra compression compared to dbisim (which in turn can achieve more compression than sbisim). In the worst case, it will take twice as long as dbisim to compute.
New in version 2.94.
mtransclose :: (Eq a) => ({(a, a)}, {a}) -> {(a, a)}
Given a relation R, expressed as a set of pairs, computes the symmetric transitive closure of the relation. Then, for each element of the second set S it computes a representative member of its equivalence class (under the symmetric transitive closure). It returns a set of pairs consisting of each element from S along with its representative (nb. the representative is the first element of the pair), provided the element is not equal to its representative; elements equal to their representative are omitted.
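A small illustrative example (which element of each equivalence class is chosen as the representative is not specified, so the exact pairs returned may differ):
-- The symmetric transitive closure of {(1, 2), (2, 3)} relates 1, 2 and 3.
-- If 1 were chosen as the representative of that class, then
--   mtransclose({(1, 2), (2, 3)}, {1, 2, 3, 4}) = {(1, 2), (1, 3)}
-- since 1 and 4 equal their own representatives and so are omitted.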
relational_image :: (Eq a, Set b) => ({(a, b)}) -> (a) -> {b}
Given a relation, expressed as a set of pairs, returns a function that takes an element of the domain of the relation and returns the set of all elements of the image that it is related to. For example, relational_image({(1,2), (1,3)})(1) = {2,3}.
This function benefits from being partially applied to its first argument.
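For example, the relation can be bound once and the resulting function reused (an illustrative sketch):
related = relational_image({(1, 2), (1, 3), (2, 4)})
-- related(1) = {2, 3} and related(2) = {4}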
relational_inverse_image :: (Set a, Eq b) => ({(a, b)}) -> (b) -> {a}
This function is the opposite of relational_image. In particular, given a relation, expressed as a set of pairs, it returns a function that takes an element of the image of the relation and returns the set of all elements of the domain that it is related to. For example, relational_inverse_image({(2,1), (3,1)})(1) = {2,3}.