Abstract. This is the Dafny reference manual which describes the Dafny programming language and how to use the Dafny verification system. Parts of this manual are more tutorial in nature in order to help the user understand how to do proofs with Dafny.
Dafny [18] is a programming language with built-in specification constructs. The Dafny static program verifier can be used to verify the functional correctness of programs.
The Dafny programming language is designed to support the static verification of programs. It is imperative and sequential, and supports generic classes, methods and functions, dynamic allocation, inductive and coinductive datatypes, and specification constructs. The specifications include pre- and postconditions, frame specifications (read and write sets), and termination metrics. To further support specifications, the language also offers updatable ghost variables, recursive functions, and types like sets and sequences. Specifications and ghost constructs are used only during verification; the compiler omits them from the executable code.
The Dafny verifier is run as part of the compiler. As such, a programmer interacts with it in much the same way as with the static type checker: when the tool produces errors, the programmer responds by changing the program's type declarations, specifications, and statements.
The easiest way to try out Dafny is in your web browser at rise4fun [15]. Once you get a bit more serious, you may prefer to download it and run it on your own machine. Although Dafny can be run from the command line (on Windows or other platforms), the preferred way to run it is in Microsoft Visual Studio 2012 (or newer) or using Emacs, where the Dafny verifier runs in the background while the programmer is editing the program.
The Dafny verifier is powered by Boogie [0, 16, 23] and Z3 [4].
From verified programs, the Dafny compiler produces code (.dll or .exe) for the .NET platform via intermediate C# files. However, the facilities for interfacing with other .NET code are minimal.
This is the reference manual for the Dafny verification system. It is based on the following references: [5, 13–15, 18, 20, 22].
The main part of the reference manual is in top-down order, except for an initial section that deals with the lowest-level constructs.
To give a flavor of Dafny, here is the solution to a competition problem.
// VSComp 2010, problem 3, find a 0 in a linked list and return how many
// nodes were skipped until the first 0 (or end-of-list) was found.
// Rustan Leino, 18 August 2010.
//
// The difficulty in this problem lies in specifying what the return
// value 'r' denotes and in proving that the program terminates. Both of
// these are addressed by declaring a ghost field 'List' in each
// linked-list node, abstractly representing the linked-list elements
// from the node to the end of the linked list. The specification can
// now talk about that sequence of elements and can use 'r' as an index
// into the sequence, and termination can be proved from the fact that
// all sequences in Dafny are finite.
//
// We only want to deal with linked lists whose 'List' field is properly
// filled in (which can only happen in an acyclic list, for example). To
// that end, the standard idiom in Dafny is to declare a predicate
// 'Valid()' that is true of an object when the data structure
// representing the object's abstract value is properly formed. The
// definition of 'Valid()' is what one intuitively would think of as the
// ''object invariant'', and it is mentioned explicitly in method pre-
// and postconditions. As part of this standard idiom, one also declares
// a ghost variable 'Repr' that is maintained as the set of objects that
// make up the representation of the aggregate object; in this case, the
// Node itself and all its successors.
class Node {
ghost var List: seq<int>
ghost var Repr: set<Node>
var head: int
var next: Node
predicate Valid()
reads this, Repr
{
this in Repr &&
1 <= |List| && List[0] == head &&
(next == null ==> |List| == 1) &&
(next != null ==>
next in Repr && next.Repr <= Repr && this !in next.Repr &&
next.Valid() && next.List == List[1..])
}
static method Cons(x: int, tail: Node) returns (n: Node)
requires tail == null || tail.Valid()
ensures n != null && n.Valid()
ensures if tail == null then n.List == [x]
else n.List == [x] + tail.List
{
n := new Node;
n.head, n.next := x, tail;
if (tail == null) {
n.List := [x];
n.Repr := {n};
} else {
n.List := [x] + tail.List;
n.Repr := {n} + tail.Repr;
}
}
}
method Search(ll: Node) returns (r: int)
requires ll == null || ll.Valid()
ensures ll == null ==> r == 0
ensures ll != null ==>
0 <= r && r <= |ll.List| &&
(r < |ll.List| ==> ll.List[r] == 0 && 0 !in ll.List[..r]) &&
(r == |ll.List| ==> 0 !in ll.List)
{
if (ll == null) {
r := 0;
} else {
var jj,i := ll,0;
while (jj != null && jj.head != 0)
invariant jj != null ==> jj.Valid() && i + |jj.List| == |ll.List| &&
ll.List[i..] == jj.List
invariant jj == null ==> i == |ll.List|
invariant 0 !in ll.List[..i]
decreases |ll.List| - i
{
jj := jj.next;
i := i + 1;
}
r := i;
}
}
method Main()
{
var list: Node := null;
list := list.Cons(0, list);
list := list.Cons(5, list);
list := list.Cons(0, list);
list := list.Cons(8, list);
var r := Search(list);
print "Search returns ", r, "\n";
assert r == 1;
}
Dafny uses the Coco/R lexer and parser generator for its lexer and parser (http://www.ssw.uni-linz.ac.at/Research/Projects/Coco) [27]. The Dafny input file to Coco/R is the Dafny.atg file in the source tree.
A Coco/R input file consists of code written in the target language
(e.g. C#) intermixed with these special sections:
The grammar presented in this document was derived from the Dafny.atg file but has been simplified by removing details that, though needed by the parser, are not needed to understand the grammar. In particular, the following transformations have been performed.

Some grammar nonterminals take boolean parameters, written, for example, entity1(allowsX). When the production for such a nonterminal references another parameterized nonterminal, the parameter may be passed along unchanged, as in entity2(allowsX), or given an explicit value, as in entity2(allowsX: true).
In some cases the value to be passed depends on the grammatical context. In such cases we give a description of the conditions under which the parameter is true, enclosed in parenthesis. For example:
FunctionSignatureOrEllipsis_(allowGhostKeyword: ("method" present))

Here the allowGhostKeyword parameter is true if the "method" keyword was given in the associated FunctionDecl.
The names of character sets and tokens start with a lower case letter but the names of grammar nonterminals start with an uppercase letter.
The grammar uses Extended BNF notation. See the Coco/R reference manual for details, but in summary:

=     separates the sides of a production, e.g. A = a b c
|     separates alternatives, e.g. a b | c | d e means a b or c or d e
( )   groups alternatives, e.g. (a | b) c means a c or b c
[ ]   option, e.g. [a] b means a b or b
{ }   iteration (0 or more times), e.g. {a} b means b or a b or a a b or …

The | symbol may also appear inside [ ] and { }. So [a | b] is short for [(a | b)] and {a | b} is short for {(a | b)}.

The start nonterminal of the grammar is Dafny.
In addition to the Coco/R rules, for the sake of readability we have adopted this additional convention: we allow "-" to be used. a - b means it matches if it matches a but not b.
For the convenience of the reader, any references to character sets, tokens, or grammar nonterminals in this document are hyperlinks that will link to the definition of the entity.
This section defines character classes used later in the token definitions. In this section backslash is used to start an escape sequence, so for example ‘\n’ denotes the single linefeed character.
letter = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
At present, a letter is an ASCII upper or lowercase letter. Other Unicode letters are not supported.
digit = "0123456789"
A digit is just one of the base-10 digits.

posDigit = "123456789"

A posDigit is a digit, excluding 0.

hexdigit = "0123456789ABCDEFabcdef"

A hexdigit character is a digit or one of the letters from 'A' to 'F' in either case.
special = "'_?"

The special characters are the characters that, in addition to alphanumeric characters, are allowed to appear in a Dafny identifier. These include "'" because mathematicians like to put primes on identifiers and some ML programmers like to start names of type parameters with a "'"; the others are "_", a common word separator in identifiers, and "?", conventionally used at the end of predicate names.
cr = '\r'
A carriage return character.
lf = '\n'
A line feed character.
tab = '\t'
A tab character.
space = ' '
A space character.
nondigitIdChar = letter + special
The characters that can be used in an identifier minus the digits.
idchar = nondigitIdChar + digit
The characters that can be used in an identifier.
nonidchar = ANY - idchar

Any character except those that can be used in an identifier.

charChar = ANY - '\'' - '\\' - cr - lf

Characters that can appear in a character constant.

stringChar = ANY - '"' - '\\' - cr - lf

Characters that can appear in a string constant.

verbatimStringChar = ANY - '"'

Characters that can appear in a verbatim string.
Comments come in two forms: a comment starting with // extends to the end of the line, and a comment enclosed between /* and */ may span multiple lines (and such comments may be nested).
As with most languages, Dafny syntax is defined in two levels. First the stream of input characters is broken up into tokens. Then these tokens are parsed using the Dafny grammar. The Dafny tokens are defined in this section.
The following reserved words appear in the Dafny grammar and may not be used as identifiers of user-defined entities:
reservedword =
"abstract" | "array" | "as" | "assert" | "assume" | "bool" | "break" |
"calc" | "case" | "char" | "class" | "codatatype" | "colemma" |
"constructor" | "copredicate" | "datatype" | "decreases" |
"default" | "else" | "ensures" | "exists" | "extends" | "false" |
"forall" | "free" | "fresh" | "function" | "ghost" | "if" | "imap" | "import" |
"in" | "include" | "inductive" | "int" | "invariant" | "iset" | "iterator" | "label" |
"lemma" | "map" | "match" | "method" | "modifies" | "modify" |
"module" | "multiset" | "nat" | "new" | "newtype" | "null" | "object" |
"old" | "opened" | "predicate" | "print" | "protected" |
"reads" | "real" | "refines" | "requires" | "return" | "returns" | "seq" |
"set" | "static" | "string" | "then" | "this" | "trait" | "true" | "type" |
"var" | "where" | "while" | "yield" | "yields" | arrayToken
arrayToken = "array" [ posDigit { digit }]
An arrayToken is a reserved word that denotes an array type of a given rank. array is an array type of rank 1 (also known as a vector); array2 is the type of two-dimensional arrays, etc.

TODO: Is "_" a reserved word?
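For example, here is how the array type tokens might be used (a minimal sketch; the method name is illustrative):

```dafny
method ArrayRanks() {
  var v := new int[10];    // v has type array<int>, rank 1 (a vector)
  var m := new int[3, 4];  // m has type array2<int>, rank 2
  v[0] := 42;
  m[1, 2] := v[0];
}
```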
ident = nondigitIdChar { idchar } - arrayToken - charToken - reservedword

In general, Dafny identifiers are sequences of idchar characters where the first character is a nondigitIdChar. However, tokens that fit this pattern are not identifiers if they look like an array type token, a character literal, or a reserved word.
digits = digit {['_'] digit}

A sequence of decimal digits, possibly interspersed with underscores for readability. Example: 1_234_567.

hexdigits = "0x" hexdigit {['_'] hexdigit}

A hexadecimal constant, possibly interspersed with underscores for readability. Example: 0xffff_ffff.

decimaldigits = digit {['_'] digit} '.' digit {['_'] digit}

A decimal fraction constant, possibly interspersed with underscores for readability. Example: 123_456.789_123.
In this section the “\” characters are literal.
escapedChar =
( "\'" | "\"" | "\\" | "\0" | "\n" | "\r" | "\t"
| "\u" hexdigit hexdigit hexdigit hexdigit
)
In Dafny character or string literals, escaped characters may be used to specify the presence of the delimiting quote, a backslash, a null character, a new line, a carriage return, a tab, or the Unicode character with a given four-digit hexadecimal representation.
charToken = "'" ( charChar | escapedChar ) "'"

A character constant is enclosed by "'" and includes either a character from the charChar set or an escaped character. Note that although Unicode letters are not allowed in Dafny identifiers, Dafny does support Unicode in its character and string constants and in its data. A character constant has type char.
stringToken =
'"' { stringChar | escapedChar } '"'
| '@' '"' { verbatimStringChar | '"' '"' } '"'

A string constant is either a normal string constant or a verbatim string constant. A normal string constant is enclosed by '"' and can contain characters from the stringChar set and escapes. A verbatim string constant is enclosed between '@"' and '"' and can consist of any characters (including newline characters), except that two successive double quotes provide a way to escape one quote character inside the string.
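The two forms can be contrasted as follows (a minimal sketch; the string contents are illustrative):

```dafny
method StringLiterals() {
  var s1 := "line one\nline two";   // normal string: \n is an escape
  var s2 := @"C:\tmp\file.dfy";     // verbatim string: backslashes are literal
  var s3 := @"she said ""hello""";  // doubled quotes stand for one quote character
}
```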
Ident = ident

The Ident nonterminal is just an ident token and represents an ordinary identifier.
DotSuffix =
( ident | digits | "requires" | "reads" )

When using the dot notation to denote a component of a compound entity, the token following the ".", in addition to being an identifier, can also be a natural number or one of the keywords requires or reads. m.requires denotes the precondition of method m; m.reads denotes the set of things that method m may read.
NoUSIdent = ident - "_" { idchar }

A NoUSIdent is an identifier, except that identifiers with a leading underscore are not allowed. The names of user-defined entities are required to be NoUSIdents. We introduce more mnemonic names for these below (e.g. ClassName).
WildIdent = NoUSIdent | "_"

An identifier, disallowing leading underscores, except for the "wildcard" identifier "_". When "_" appears, it is replaced by a unique generated identifier distinct from user identifiers.
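For instance, the wildcard is commonly used in a pattern match to ignore a component (an illustrative sketch; the datatype and function are hypothetical):

```dafny
datatype List = Nil | Cons(int, List)

function Length(l: List): nat {
  match l
  case Nil => 0
  case Cons(_, tail) => 1 + Length(tail)  // "_" ignores the head component
}
```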
In the productions for the declaration of user-defined entities, the name of the user-defined entity is required to be an identifier that does not start with an underscore, i.e., a NoUSIdent. To make the productions more mnemonic, we introduce the following synonyms for NoUSIdent.
ModuleName = NoUSIdent
ClassName = NoUSIdent
TraitName = NoUSIdent
DatatypeName = NoUSIdent
DatatypeMemberName = NoUSIdent
NewtypeName = NoUSIdent
NumericTypeName = NoUSIdent
SynonymTypeName = NoUSIdent
IteratorName = NoUSIdent
TypeVariableName = NoUSIdent
MethodName = NoUSIdent
FunctionName = NoUSIdent
PredicateName = NoUSIdent
CopredicateName = NoUSIdent
LabelName = NoUSIdent
AttributeName = NoUSIdent
FieldIdent = NoUSIdent

A FieldIdent is one of the ways to identify a field. The other is using digits.
A qualified name starts with the name of a top-level entity and is then followed by zero or more DotSuffixes which denote a component. Examples:

Module.MyType1
MyTuple.1
MyMethod.requires

The grammar does not actually have a production for qualified names, except in the special case of a qualified name that is known to be a module name, i.e. a QualifiedModuleName.
In this section, we describe some nonterminals that combine an identifier and a type.
IdentType = WildIdent ":" Type

In Dafny, a variable or field is typically declared by giving its name followed by a colon and its type. An IdentType is such a construct.
GIdentType(allowGhostKeyword) = [ "ghost" ] IdentType

A GIdentType is a typed entity declaration optionally preceded by "ghost". The ghost qualifier means the entity is only used during verification and not in the generated code. Ghost variables are useful for abstractly representing internal state in specifications. If allowGhostKeyword is false, then "ghost" is not allowed.
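For example, a method may declare a ghost formal parameter alongside ordinary ones (a minimal sketch; the names are illustrative):

```dafny
method Double(x: int, ghost bound: int) returns (y: int)
  requires x <= bound
  ensures y <= 2 * bound
{
  y := x + x;  // 'bound' is used only for verification and is erased from compiled code
}
```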
LocalIdentTypeOptional = WildIdent [ ":" Type ]

A LocalIdentTypeOptional is used when declaring local variables. If a value is specified for the variable, the type may be omitted because it can be inferred from the initial value. The initial value may also be omitted.
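The three possibilities look like this (an illustrative sketch):

```dafny
method Locals() {
  var a: int := 0;  // type and initial value both given
  var b := 0;       // type inferred from the initial value
  var c: int;       // initial value omitted
}
```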
IdentTypeOptional = WildIdent [ ":" Type ]

An IdentTypeOptional is typically used in a context where the type of the identifier may be inferred from the context. Examples are in pattern matching and quantifiers.
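For example, in the following quantifier the type of the bound variable is inferred (a minimal sketch; the predicate name is illustrative):

```dafny
predicate AllNonNegative(s: seq<int>) {
  // the type of i (int) is inferred from its use as a sequence index
  forall i :: 0 <= i < |s| ==> s[i] >= 0
}
```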
TypeIdentOptional = [ "ghost" ] [ ( NoUSIdent | digits ) ":" ] Type

TypeIdentOptionals are used in FormalsOptionalIds. This represents situations where a type is given but there may not be an identifier.
FormalsOptionalIds = "(" [ TypeIdentOptional { "," TypeIdentOptional } ] ")"

A FormalsOptionalIds is a formal parameter list in which the types are required but the names of the parameters are optional. This is used in algebraic datatype definitions.
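For instance, a datatype definition may name its constructor parameters or give only their types (an illustrative sketch):

```dafny
datatype Color = RGB(int, int, int)     // parameter names omitted
datatype Point = Point(x: int, y: int)  // parameter names given
```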
Nat = ( digits | hexdigits )

A Nat represents a natural number expressed in either decimal or hexadecimal.

Dec = ( decimaldigits )

A Dec represents a decimal fraction literal.
Dafny = { IncludeDirective_ } { TopDecl } EOF

At the top level, a Dafny program (stored as files with extension .dfy) is a set of declarations. The declarations introduce (module-level) methods and functions, as well as types (classes, traits, inductive and coinductive datatypes, newtypes, type synonyms, opaque types, and iterators) and modules, where the order of introduction is irrelevant. A class also contains a set of declarations, introducing fields, methods, and functions.
When asked to compile a program, Dafny looks for the existence of a Main() method. If a legal Main() method is found, the compiler will emit a .EXE; otherwise, it will emit a .DLL.

(If there is more than one Main(), Dafny will try to emit a .EXE, but this may cause the C# compiler to complain. One could imagine improving this functionality so that Dafny will produce a polite error message in this case.)
In order to be a legal Main() method, the following must be true:
Note, however, that the following are allowed:

- ensures clauses
- decreases clauses, including a decreases *. (If Main() has a decreases *, then its execution may go on forever, but in the absence of a decreases * on Main(), Dafny will have verified that the entire execution will eventually terminate.)
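A Main() that uses this permission might look like the following sketch, which loops forever:

```dafny
method Main()
  decreases *
{
  while true
    decreases *  // exempts this loop from the termination check
  {
  }
}
```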
An invocation of Dafny may specify a number of source files. Each Dafny file follows the grammar of the Dafny nonterminal. It consists of a sequence of optional include directives, followed by top-level declarations, followed by the end of the file.
IncludeDirective_ = "include" stringToken

Include directives have the form "include" stringToken where the string token is either a normal string token or a verbatim string token. The stringToken is interpreted as the name of a file that will be included in the Dafny source. These included files also obey the Dafny grammar. Dafny parses and processes the transitive closure of the original source files and all the included files, but will not invoke the verifier on the included files unless they have been listed explicitly on the command line.
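A minimal sketch of an include directive (the file name is hypothetical):

```dafny
include "helpers.dfy"  // parsed and processed, but verified only if listed on the command line

method Main() {
  print "hello\n";
}
```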
TopDecl = { { DeclModifier }
( SubModuleDecl
| ClassDecl
| DatatypeDecl
| NewtypeDecl
| SynonymTypeDecl
| IteratorDecl
| TraitDecl
| ClassMemberDecl(moduleLevelDecl: true)
) }
Top-level declarations may appear either at the top level of a Dafny file or within a SubModuleDecl. A top-level declaration is one of the following types of declarations, which are described later.

The ClassDecl, DatatypeDecl, NewtypeDecl, SynonymTypeDecl, IteratorDecl, and TraitDecl declarations are type declarations and are described in Section 5. Ordinarily ClassMemberDecls appear in class declarations, but they can also appear at the top level. In that case they are included as part of an implicit top-level class and are implicitly static (but cannot be declared as static). In addition, a ClassMemberDecl that appears at the top level cannot be a FieldDecl.
DeclModifier =
( "abstract" | "ghost" | "static" | "protected"
| "extern" [ stringToken ]
)
Top level declarations may be preceded by zero or more declaration modifiers. Not all of these are allowed in all contexts.
The abstract modifier may only be used for module declarations. An abstract module can leave some entities underspecified. Abstract modules are not compiled to C#.
The ghost modifier is used to mark entities as being used for specification only, not for compilation to code.
The static modifier is used for class members that are associated with the class as a whole rather than with an instance of the class.
The protected modifier is used to control the visibility of the body of functions.
The extern modifier is used to alter the CompileName of entities. The CompileName is the name for the entity when translating to Boogie or C#.
The following table shows the modifiers that are available for each kind of declaration. In the table, "already-ghost" denotes that the item is not allowed to have the ghost modifier because it is already implicitly ghost.

Declaration              | allowed modifiers
-------------------------|---------------------------------------
module                   | abstract
class                    | extern
trait                    | -
datatype or codatatype   | -
field                    | ghost
newtype                  | -
synonym types            | -
iterators                | -
method                   | ghost static extern
lemma, colemma, comethod | already-ghost static protected
inductive lemma          | already-ghost static
constructor              | -
function (non-method)    | already-ghost static protected
function method          | already-ghost static protected extern
predicate (non-method)   | already-ghost static protected
predicate method         | already-ghost static protected extern
inductive predicate      | already-ghost static protected
copredicate              | already-ghost static protected
SubModuleDecl = ( ModuleDefinition_ | ModuleImport_ )
Structuring a program by breaking it into parts is an important part of creating large programs. In Dafny, this is accomplished via modules. Modules provide a way to group related types, classes, methods, functions, and other modules, as well as to control the scope of declarations. Modules may import each other for code reuse, and it is possible to abstract over modules to separate an implementation from an interface.
ModuleDefinition_ = "module" { Attribute } ModuleName
[ [ "exclusively" ] "refines" QualifiedModuleName ]
"{" { TopDecl } "}"
QualifiedModuleName = Ident { "." Ident }
A qualified name that is known to refer to a module.
A new module is declared with the module
keyword, followed by the name
of the new module, and a pair of curly braces ({}) enclosing the body
of the module:
module Mod {
...
}
A module body can consist of anything that you could put at the top level. This includes classes, datatypes, types, methods, functions, etc.
module Mod {
class C {
var f: int
method m()
}
datatype Option = A(int) | B(int)
type T
method m()
function f(): int
}
You can also put a module inside another, in a nested fashion:
module Mod {
module Helpers {
class C {
method doIt()
var f: int
}
}
}
Then you can refer to the members of the Helpers module within the Mod module by prefixing them with "Helpers.". For example:
module Mod {
module Helpers { ... }
method m() {
var x := new Helpers.C;
x.doIt();
x.f := 4;
}
}
Methods and functions defined at the module level are available like classes, with just the module name prefixing them. They are also available in the methods and functions of the classes in the same module.
module Mod {
module Helpers {
function method addOne(n: nat): nat {
n + 1
}
}
method m() {
var x := 5;
x := Helpers.addOne(x); // x is now 6
}
}
ModuleImport_ = "import" [ "opened" ] ModuleName
[ "=" QualifiedModuleName
| "as" QualifiedModuleName [ "default" QualifiedModuleName ]
]
[ ";" ]
Declaring new submodules is useful, but sometimes you want to refer to things from an existing module, such as a library. In this case, you can import one module into another. This is done via the import keyword, and there are a few different forms, each of which has a different meaning. The simplest kind is the concrete import, which has the form import A = B. This declaration creates a reference to the module B (which must already exist) and binds it to the new name A. Note that this new name, i.e. A, is only bound in the module containing the import declaration; it does not create a global alias. For example, if Helpers was defined outside of Mod, then we could import it:
module Helpers {
...
}
module Mod {
import A = Helpers
method m() {
assert A.addOne(5) == 6;
}
}
Note that inside m(), we have to use A instead of Helpers, as we bound it to a different name. The name Helpers is not available inside m(), as only names that have been bound inside Mod are available. In order to use the members from another module, it either has to be declared there with module or imported with import.
We don't have to give Helpers a new name, though, if we don't want to. We can write import Helpers = Helpers if we want to, and Dafny even provides the shorthand import Helpers for this behavior. You can't bind two modules with the same name at the same time, so sometimes you have to use the = version to ensure the names do not clash.
The QualifiedModuleName in the ModuleImport_ starts with a sibling module of the importing module, or with a submodule of the importing module. There is no way to refer to the parent module, only sibling modules (and their submodules).
Sometimes, prefixing the members of the module you imported with its name is tedious and ugly, even if you select a short name when importing it. In this case, you can import the module as opened, which causes all of its members to be available without adding the module name. The opened keyword, if present, must immediately follow import. For example, we could write the previous example as:
module Mod {
import opened Helpers
method m() {
assert addOne(5) == 6;
}
}
When opening modules, the newly bound members have low priority, so they will be hidden by local definitions. This means that if you define a local function called addOne, the function from Helpers will no longer be available under that name. When modules are opened, the original name binding is still present, however, so you can always use the previously bound name to get to anything that is hidden.
module Mod {
import opened Helpers
function addOne(n: nat): nat {
n - 1
}
method m() {
assert addOne(5) == 6; // this is now false,
// as this is the function just defined
assert Helpers.addOne(5) == 6; // this is still true
}
}
If you open two modules that both declare members with the same name, then neither member can be referred to without a module prefix, as it would be ambiguous which one was meant. Just opening the two modules is not an error, however, as long as you don't attempt to use members with common names. The opened keyword can be used with any kind of import declaration, including the module abstraction form.
Sometimes, using a specific implementation is unnecessary; instead, all that is needed is a module that implements some interface. In that case, you can use an abstract module import. In Dafny, this is written import A as B. This means bind the name A as before, but instead of getting the exact module B, you get any module that adheres to B. Typically, the module B may have abstract type definitions, classes with bodyless methods, or otherwise be unsuitable to use directly. Because of the way refinement is defined, any refinement of B can be used safely. For example, if we start with:
module Interface {
function method addSome(n: nat): nat
ensures addSome(n) > n
}
module Mod {
import A as Interface
method m() {
assert 6 <= A.addSome(5);
}
}
then we can be more precise if we know that addSome actually adds exactly one. The following module has this behavior. Further, the postcondition is stronger, so this is actually a refinement of the Interface module.
module Implementation {
function method addSome(n: nat): nat
ensures addSome(n) == n + 1
{
n + 1
}
}
We can then substitute Implementation for A in a new module, by declaring a refinement of Mod which defines A to be Implementation.
module Mod2 refines Mod {
import A = Implementation
...
}
You can also give an implementation directly, without introducing a refinement, by giving a default to the abstract import:
module Interface {
function method addSome(n: nat): nat
ensures addSome(n) > n
}
module Mod {
import A as Interface default Implementation
method m() {
assert 6 <= A.addSome(5);
}
}
module Implementation {
function method addSome(n: nat): nat
ensures addSome(n) == n + 1
{
n + 1
}
}
module Mod2 refines Mod {
import A as Interface default Implementation
...
}
Regardless of whether there is a default, the only thing known about A in this example is that it has a function addSome that returns a strictly bigger result, so even with the default we still can't prove that A.addSome(5) == 6, only that 6 <= A.addSome(5).
When you refine an abstract import into a concrete one, or give a default, Dafny checks that the concrete module is a refinement of the abstract one. This means that the methods must have compatible signatures, all the classes and datatypes with their constructors and fields in the abstract module must be present in the concrete one, the specifications must be compatible, etc.
Dafny isn't particular about the order in which modules appear, but they must follow some rules to be well formed. As a rule of thumb, there should be a way to order the modules in a program such that each one refers only to things defined before it in the source text. That doesn't mean the modules have to be given in that order. Dafny will figure out the order for you, assuming you haven't made any circular references. For example, this is pretty clearly meaningless:
import A = B
import B = A
You can have import statements at the top level, and you can import modules defined at the same level:
import A = B
method m() {
A.whatever();
}
module B { ... }
In this case, everything is well defined because we can put B first, followed by the A import, and then finally m(). If there is no such ordering, then Dafny will give an error, complaining about a cyclic dependency.
Note that when rearranging modules and imports, they have to be kept in the same containing module, which disallows some pathological module structures. Also, the imports and submodules are always considered to come first, even at the top level. This means that the following is not well formed:
method doIt() { }
module M {
method m() {
doIt();
}
}
because the module M must come before any other kind of member, such as methods. To define global functions like this, you can put them in a module (called Globals, say) and open it into any module that needs its functionality. Finally, if you import via a path, such as import A = B.C, then this creates a dependency of A on B, as we need to know what B is (is it abstract or concrete, or a refinement?).
When Dafny sees something like A<T>.B<U>.C<V>
, how does it know what each part
refers to? The process Dafny uses to determine what identifier
sequences like this refer to is name resolution. Though the rules may
seem complex, usually they do what you would expect. Dafny first looks
up the initial identifier. Depending on what the first identifier
refers to, the rest of the identifier is looked up in the appropriate
context.
In terms of the grammar, sequences like the above are represented as a NameSegment followed by zero or more Suffixes. A Suffix is more general, and the form shown above is for when the Suffix is an AugmentedDotSuffix_.
The resolution is different depending on whether it is in an expression context or a type context.
The leading NameSegment is resolved using the first of the following rules that succeeds.

Local variables, parameters and bound variables. These are things like x, y, and i in var x;, ... returns (y: int), and forall i :: .... The declaration chosen is the match from the innermost matching scope.
If in a class, try to match a member of the class. If the member that is found is not static, an implicit this is inserted. This works for fields, functions, and methods of the current class (if in a static context, then only static methods and functions are allowed). You can refer to fields of the current class either as this.f or f, assuming of course that f hasn't been hidden by one of the above. You can always prefix this if needed; this cannot be hidden. (Note, a field whose name is a string of digits must always have some prefix.)
If there is no Suffix, then look for a datatype constructor, if unambiguous. Any datatype that doesn't need qualification (so the datatype name itself doesn't need a prefix) and also has a uniquely named constructor can be referred to just by its name. So if datatype List = Cons(List) | Nil is the only datatype that declares Cons and Nil constructors, then you can write Cons(Cons(Nil)). If the constructor name is not unique, then you need to prefix it with the name of the datatype (for example List.Cons(List.Nil)). This is done per constructor, not per datatype.
4. Look for a member of the enclosing module.
5. Module-level (static) functions and methods.
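To illustrate the datatype-constructor rule, here is a small sketch (hypothetical datatypes, not taken from elsewhere in this manual) showing when constructor names need qualification:

```dafny
datatype List = Cons(car: int, cdr: List) | Nil
datatype Tree = Node(left: Tree, right: Tree) | Nil  // a second Nil

method M() {
  // Cons is declared by only one datatype, so it needs no qualification;
  // Nil is declared by both datatypes, so it must be prefixed:
  var xs := Cons(7, Cons(8, List.Nil));
  var t := Node(Tree.Nil, Tree.Nil);
}
```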
TODO: Not sure about the following paragraph.

Opened modules are treated at each level, after the declarations in the current module. Opened modules only affect steps 2, 3 and 5. If an ambiguous name is found, an error is generated, rather than continuing down the list.

After the first identifier, the rules are basically the same, except in the new context. For example, if the first identifier is a module, then the next identifier looks into that module. Opened modules only apply within the module they are opened into. When looking up into another module, only things explicitly declared in that module are considered.
To resolve expression `E.id`: first resolve expression `E` and any type arguments.

- If `E` resolved to a module `M`: if `E.id<T>` is not followed by any further suffixes, look for an unambiguous datatype constructor.
- If `E` denotes a type:
- If `E` denotes an expression:
In a type context the priority of `NameSegment` resolution is:

1. Type parameters.
2. Member of the enclosing module (a type name or the name of a module).
To resolve expression `E.id`:

- If `E` resolved to a module `M`:
- If `E` denotes a type: if `allowDanglingDotName`, return the type of `E` and the given `E.id`, letting the caller try to make sense of the final dot-name.

TODO: I don't understand this sentence. What is `allowDanglingDotName`?
Specifications describe logical properties of Dafny methods, functions, lambdas, iterators and loops. They specify preconditions, postconditions, invariants, what memory locations may be read or modified, and termination information by means of specification clauses. For each kind of specification zero or more specification clauses (of the type accepted for that type of specification) may be given, in any order.
We document specifications at these levels:

- `RequiresClause_`
- `MethodSpec`
- `MethodDecl`

This section documents the first two of these in a bottom-up manner. We first document the clauses and then the specifications that use them.
```
RequiresClause_ =
  "requires" Expression(allowLemma: false, allowLambda: false)
```
The requires clauses specify preconditions for methods, functions, lambda expressions and iterators. Dafny checks that the preconditions are met at all call sites. The callee may then assume the preconditions hold on entry.
If no requires clause is specified it is taken to be `true`.
If more than one requires clause is given, then the precondition is the conjunction of all of the expressions from all of the requires clauses.
```
EnsuresClause_ =
  "ensures" { Attribute } Expression(allowLemma: false, allowLambda: false)
ForAllEnsuresClause_ =
  "ensures" Expression(allowLemma: false, allowLambda: true)
FunctionEnsuresClause_ =
  "ensures" Expression(allowLemma: false, allowLambda: false)
```
An ensures clause specifies the postcondition for a method, function or iterator.
If no ensures clause is specified it is taken to be `true`.
If more than one ensures clause is given, then the postcondition is the conjunction of all of the expressions from all of the ensures clauses.
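As a sketch of how multiple clauses conjoin, consider this hypothetical method; its three ensures clauses together act as the single postcondition `m >= a && m >= b && (m == a || m == b)`:

```dafny
method Max(a: int, b: int) returns (m: int)
  ensures m >= a
  ensures m >= b
  ensures m == a || m == b
{
  if a < b { m := b; } else { m := a; }
}
```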
TODO: In the present sources `FunctionEnsuresClause_` differs from `EnsuresClause_` only in that it is not allowed to specify `Attribute`s. This seems like a bug and will likely be fixed in a future version.
```
DecreasesClause_(allowWildcard, allowLambda) =
  "decreases" { Attribute } DecreasesList(allowWildcard, allowLambda)
FunctionDecreasesClause_(allowWildcard, allowLambda) =
  "decreases" DecreasesList(allowWildcard, allowLambda)
DecreasesList(allowWildcard, allowLambda) =
  PossiblyWildExpression(allowLambda)
  { "," PossiblyWildExpression(allowLambda) }
```
If `allowWildcard` is false but one of the `PossiblyWildExpression`s is a wildcard, an error is reported.

TODO: A `FunctionDecreasesClause_` is not allowed to specify `Attribute`s. This will be fixed in a future version.
Decreases clauses are used to prove termination in the presence of recursion. If more than one decreases clause is given, it is as if a single decreases clause had been given with the collected list of arguments. That is,

```dafny
decreases A, B
decreases C, D
```

is equivalent to

```dafny
decreases A, B, C, D
```

If any of the expressions in the decreases clause are wild (i.e. `*`) then proof of termination will be skipped.
Termination metrics in Dafny, which are declared by decreases clauses, are lexicographic tuples of expressions. At each recursive (or mutually recursive) call to a function or method, Dafny checks that the effective decreases clause of the callee is strictly smaller than the effective decreases clause of the caller.
What does “strictly smaller” mean? Dafny provides a built-in well-founded order for every type and, in some cases, between types. For example, the Boolean `false` is strictly smaller than `true`, the integer 78 is strictly smaller than 102, the set `{2,5}` is strictly smaller than the set `{2,3,5}`, and for `s` of type `seq<Color>` where `Color` is some inductive datatype, the color `s[0]` is strictly less than `s` (provided `s` is nonempty).
What does “effective decreases clause” mean? Dafny always appends a “top” element to the lexicographic tuple given by the user. This top element cannot be syntactically denoted in a Dafny program and it never occurs as a runtime value either. Rather, it is a fictitious value, which here we will denote \top, such that each value that can ever occur in a Dafny program is strictly less than \top. Dafny sometimes also prepends expressions to the lexicographic tuple given by the user. The effective decreases clause is any such prefix, followed by the user-provided decreases clause, followed by \top. We said “user-provided decreases clause”, but if the user completely omits a “decreases” clause, then Dafny will usually make a guess at one, in which case the effective decreases clause is any prefix followed by the guess followed by \top. (If you're using the Dafny IDE in Visual Studio, you can hover the mouse over the name of a recursive function or method, or the “while” keyword for a loop, to see the “decreases” clause that Dafny guessed, if any.)
Here is a simple but interesting example: the Fibonacci function.

```dafny
function Fib(n: nat) : nat
{
  if n < 2 then n else Fib(n-2) + Fib(n-1)
}
```

In this example, if you hover your mouse over the function name you will see that Dafny has supplied a `decreases n` clause.
Let's take a look at the kind of example where a mysterious-looking decreases clause like “Rank, 0” is useful. Consider two mutually recursive methods, `A` and `B`:

```dafny
method A(x: nat)
{
  B(x);
}

method B(x: nat)
{
  if x != 0 { A(x-1); }
}
```
To prove termination of `A` and `B`, Dafny needs to have effective decreases clauses for `A` and `B` such that:

- the measure for the callee `B(x)` is strictly smaller than the measure for the caller `A(x)`, and
- the measure for the callee `A(x-1)` is strictly smaller than the measure for the caller `B(x)`.

Satisfying the second of these conditions is easy, but what about the first? Note, for example, that declaring both `A` and `B` with “decreases x” does not work, because that won't prove a strict decrease for the call from `A(x)` to `B(x)`.
Here's one possibility (for brevity, we will omit the method bodies):

```dafny
method A(x: nat)
  decreases x, 1

method B(x: nat)
  decreases x, 0
```

For the call from `A(x)` to `B(x)`, the lexicographic tuple `x, 0` is strictly smaller than `x, 1`, and for the call from `B(x)` to `A(x-1)`, the lexicographic tuple `x-1, 1` is strictly smaller than `x, 0`.
Two things to note: First, the choice of “0” and “1” as the second components of these lexicographic tuples is rather arbitrary. It could just as well have been “false” and “true”, respectively, or the sets `{2,5}` and `{2,3,5}`. Second, the keyword decreases often gives rise to an intuitive English reading of the declaration. For example, you might say that the recursive calls in the definition of the familiar Fibonacci function `Fib(n)` “decrease n”. But when the lexicographic tuple contains constants, the English reading of the declaration becomes mysterious and may give rise to questions like “how can you decrease the constant 0?”. The keyword is just that—a keyword. It says “here comes a list of expressions that make up the lexicographic tuple we want to use for the termination measure”. What is important is that one effective decreases clause is compared against another one, and it certainly makes sense to compare something to a constant (and to compare one constant to another).
We can simplify things a little bit by remembering that Dafny appends \top to the user-supplied decreases clause. For the A-and-B example, this lets us drop the constant from the decreases clause of `A`:
```dafny
method A(x: nat)
  decreases x

method B(x: nat)
  decreases x, 0
```
The effective decreases clause of `A` is “x, \top” and the effective decreases clause of `B` is “x, 0, \top”. These tuples still satisfy the two conditions `(x, 0, \top) < (x, \top)` and `(x-1, \top) < (x, 0, \top)`. And as before, the constant “0” is arbitrary; anything less than \top (which is any Dafny expression) would work.
Let's take a look at one more example that better illustrates the utility of \top. Consider again two mutually recursive methods, call them `Outer` and `Inner`, representing the recursive counterparts of what iteratively might be two nested loops:

```dafny
method Outer(x: nat)
{
  // set y to an arbitrary non-negative integer
  var y :| 0 <= y;
  Inner(x, y);
}

method Inner(x: nat, y: nat)
{
  if y != 0 {
    Inner(x, y-1);
  } else if x != 0 {
    Outer(x-1);
  }
}
```
The body of `Outer` uses an assign-such-that statement to represent some computation that takes place before `Inner` is called. It sets `y` to some arbitrary non-negative value. In a more concrete example, `Inner` would do some work for each `y` and then continue as `Outer` on the next smaller `x`.
Using a decreases clause `x, y` for `Inner` seems natural, but if we don't have any bound on the size of the `y` computed by `Outer`, there is no expression we can write in the decreases clause of `Outer` that is sure to lead to a strictly smaller value for `y` when `Inner` is called. \top to the rescue. If we arrange for the effective decreases clause of `Outer` to be “x, \top” and the effective decreases clause for `Inner` to be “x, y, \top”, then we can show the strict decrease as required. Since \top is implicitly appended, the two decreases clauses declared in the program text can be:
```dafny
method Outer(x: nat)
  decreases x

method Inner(x: nat, y: nat)
  decreases x, y
```
Moreover, remember that if a function or method has no user-declared decreases clause, Dafny will make a guess. The guess is (usually) the list of arguments of the function/method, in the order given. These are exactly the decreases clauses needed here. Thus, Dafny successfully verifies the program without any explicit decreases clauses:
```dafny
method Outer(x: nat)
{
  var y :| 0 <= y;
  Inner(x, y);
}

method Inner(x: nat, y: nat)
{
  if y != 0 {
    Inner(x, y-1);
  } else if x != 0 {
    Outer(x-1);
  }
}
```
The ingredients are simple, but the end result may seem like magic. For many users, however, there may be no magic at all: the end result may be so natural that the user never even has to think about the fact that there was a need to prove termination in the first place.
```
FrameExpression(allowLemma, allowLambda) =
  ( Expression(allowLemma, allowLambda) [ FrameField ]
  | FrameField )

FrameField = "`" Ident

PossiblyWildFrameExpression(allowLemma) =
  ( "*" | FrameExpression(allowLemma, allowLambda: false) )
```
Frame expressions are used to denote the set of memory locations that a Dafny program element may read or write. A frame expression is a set expression. The form `{}` (that is, the empty set) says that no memory locations may be modified, which is also the default if no modifies clause is given explicitly.
Note that framing only applies to the heap, or memory accessed through references. Local variables are not stored on the heap, so they cannot be mentioned (well, they are not in scope in the declaration) in reads annotations. Note also that types like sets, sequences, and multisets are value types, and are treated like integers or local variables. Arrays and objects are reference types, and they are stored on the heap (though as always there is a subtle distinction between the reference itself and the value it points to.)
The `FrameField` construct is used to specify a field of a class object. The identifier following the backquote is the name of the field being referenced.

If the `FrameField` is preceded by an expression, the expression must be a reference to an object having that field. If the `FrameField` is not preceded by an expression, then the frame expression is referring to that field of the current object. This form is only used within a method of a class.

The use of `FrameField` is discouraged as in practice it has not been shown either to be more concise or to perform better. Also, there is (unfortunately) no form of it for array elements—one could imagine

```dafny
modifies a`[j]
```

Also, `FrameField` is not taken into consideration for lambda expressions.
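A minimal sketch of the backquote form (with a hypothetical class and field names): the modifies clause below grants write access only to field `f` of `c`, so assigning to `c.g` would be a verification error:

```dafny
class C {
  var f: int
  var g: int
}

method ZeroF(c: C)
  requires c != null
  modifies c`f
{
  c.f := 0;     // allowed: f is in the frame
  // c.g := 0;  // would be rejected: g is not in the frame
}
```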
```
FunctionReadsClause_ =
  "reads"
  PossiblyWildFrameExpression(allowLemma: false)
  { "," PossiblyWildFrameExpression(allowLemma: false) }

LambdaReadsClause_ =
  "reads" PossiblyWildFrameExpression(allowLemma: true)

IteratorReadsClause_ =
  "reads" { Attribute }
  FrameExpression(allowLemma: false, allowLambda: false)
  { "," FrameExpression(allowLemma: false, allowLambda: false) }

PossiblyWildExpression(allowLambda) =
  ( "*" | Expression(allowLemma: false, allowLambda) )
```
Functions are not allowed to have side effects, but they may be restricted in what they can read. The reading frame of a function (or predicate) is the set of all memory locations that the function is allowed to read. The reason we might limit what a function can read is so that when we write to memory, we can be sure that functions that did not read that part of memory still return the same value they did before. For example, we might have two arrays, one of which we know is sorted. If we did not put a reads annotation on the sorted predicate, then when we modify the unsorted array, we could not determine whether the other array remained sorted. While we might be able to state invariants to preserve it in this case, it gets even more complex when manipulating data structures. Here, framing is essential to making the verification process feasible.
It is not just the body of a function that is subject to reads checks, but also its precondition and the reads clause itself.
A reads clause can list a wildcard (“*”), which allows the enclosing function to read anything. In many cases, and in particular in all cases where the function is defined recursively, this makes it next to impossible to make any use of the function. Nevertheless, as an experimental feature, the language allows it (and it is sound). Note that a “*” makes the rest of the frame expression irrelevant.
A reads clause specifies the set of memory locations that a function, lambda, or iterator may read. If more than one reads clause is given in a specification, the effective read set is the union of the sets specified. If there are no reads clauses, the effective read set is empty. If `*` is given in a reads clause it means any memory may be read.
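A small sketch (with a hypothetical class `Cell`) of how a reads clause delimits a function's reading frame:

```dafny
class Cell { var data: int }

function Sum(a: Cell, b: Cell): int
  requires a != null && b != null
  reads a, b
{
  a.data + b.data
}
```

Because `Sum` reads only `a` and `b`, updating some third `Cell` cannot change the value of `Sum(a, b)`, and the verifier can rely on that.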
TODO: It would be nice if the different forms of read clauses could be combined. In a future version the single form of read clause will allow a list and attributes.
```
ModifiesClause_ =
  "modifies" { Attribute }
  FrameExpression(allowLemma: false, allowLambda: false)
  { "," FrameExpression(allowLemma: false, allowLambda: false) }
```
Frames also affect methods. As you might have guessed, methods are not required to list the things they read. Methods are allowed to read whatever memory they like, but they are required to list which parts of memory they modify, with a modifies annotation. Modifies clauses are almost identical to their reads cousins, except that they say what can be changed, rather than what the value of the function depends on. In combination with reads, modification restrictions allow Dafny to prove properties of code that would otherwise be very difficult or impossible. Reads and modifies clauses are among the tools that allow Dafny to work on one method at a time, because they restrict what would otherwise be arbitrary modifications of memory to something that Dafny can reason about.
Note that fields of newly allocated objects can always be modified.
It is also possible to frame what can be modified by a block statement by means of the block form of the modify statement (Section 21.16).
A modifies clause specifies the set of memory locations that a method, iterator or loop body may modify. If more than one modifies clause is given in a specification, the effective modifies set is the union of the sets specified. If no modifies clause is given the effective modifies set is empty. A loop can also have a modifies clause. If none is given, the loop gets to modify anything the enclosing context is allowed to modify.
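As a sketch, the hypothetical method below may update only the object it is called on; the ensures clause uses `old(...)` to relate the new state to the state on entry:

```dafny
class Counter {
  var count: int

  method Inc()
    modifies this
    ensures count == old(count) + 1
  {
    count := count + 1;
  }
}
```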
```
InvariantClause_ =
  "invariant" { Attribute }
  Expression(allowLemma: false, allowLambda: true)
```
An invariant clause is used to specify an invariant for a loop. If more than one invariant clause is given for a loop the effective invariant is the conjunction of the conditions specified.
The invariant must hold on entry to the loop. And, assuming it holds at the start of any iteration, Dafny must be able to prove that it holds again at the end of that iteration.
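For example, here is a hypothetical loop whose invariants hold on entry and are preserved by each iteration; Dafny proves the postcondition from the invariants together with the negated loop guard:

```dafny
method SumUpTo(n: nat) returns (s: nat)
  ensures s == n * (n + 1) / 2
{
  s := 0;
  var i := 0;
  while i < n
    invariant 0 <= i <= n
    invariant s == i * (i + 1) / 2
  {
    i := i + 1;
    s := s + i;
  }
}
```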
```
MethodSpec =
  { ModifiesClause_
  | RequiresClause_
  | EnsuresClause_
  | DecreasesClause_(allowWildcard: true, allowLambda: false)
  }
```
A method specification is zero or more modifies, requires, ensures or decreases clauses, in any order. A method does not have reads clauses because methods are allowed to read any memory.
```
FunctionSpec =
  { RequiresClause_
  | FunctionReadsClause_
  | FunctionEnsuresClause_
  | FunctionDecreasesClause_(allowWildcard: false, allowLambda: false)
  }
```
A function specification is zero or more reads, requires, ensures or decreases clauses, in any order. A function specification does not have modifies clauses because functions are not allowed to modify any memory.
```
LambdaSpec_ =
  { LambdaReadsClause_
  | RequiresClause_
  }
```
A lambda specification is zero or more reads or requires clauses. Lambda specifications do not have ensures clauses because the body is never opaque. Lambda specifications do not have decreases clauses because they do not have names and thus cannot be recursive. A lambda specification does not have modifies clauses because lambdas are not allowed to modify any memory.
```
IteratorSpec =
  { IteratorReadsClause_
  | ModifiesClause_
  | [ "yield" ] RequiresClause_
  | [ "yield" ] EnsuresClause_
  | DecreasesClause_(allowWildcard: false, allowLambda: false)
  }
```
An iterator specification applies both to the iterator's constructor method and to its `MoveNext` method. The reads and modifies clauses apply to both of them. For the requires and ensures clauses, if `yield` is not present they apply to the constructor, but if `yield` is present they apply to the `MoveNext` method.

TODO: What is the meaning of a decreases clause on an iterator? Does it apply to `MoveNext`? Make sure our description of iterators explains these.

TODO: What is the relationship between the postcondition and the `Valid()` predicate?
```
LoopSpec =
  { InvariantClause_
  | DecreasesClause_(allowWildcard: true, allowLambda: true)
  | ModifiesClause_
  }
```
A loop specification provides the information Dafny needs to prove properties of a loop. The `InvariantClause_` is effectively a precondition, and it, along with the negation of the loop test condition, provides the postcondition. The `DecreasesClause_` is used to prove termination.
```
Type = DomainType [ "->" Type ]
```
A Dafny type is a domain type (i.e. a type that can be the domain of a function type) optionally followed by an arrow and a range type.
```
DomainType =
  ( BoolType_ | CharType_ | NatType_ | IntType_ | RealType_ | ObjectType_
  | FiniteSetType_ | InfiniteSetType_ | MultisetType_
  | SequenceType_ | StringType_
  | FiniteMapType_ | InfiniteMapType_ | ArrayType_
  | TupleType_ | NamedType_ )
```
The domain types comprise the built-in scalar types, the built-in collection types, tuple types (including, as a special case, a parenthesized type) and reference types.
Dafny types may be categorized as either value types or reference types.
The value types are those whose values do not lie in the program heap. These are:

- `bool`, `char`, `nat`, `int`, `real`
- `set`, `multiset`, `seq`, `string`, `map`, `imap`

Data items having value types are passed by value. Since they are not considered to occupy memory, framing expressions do not reference them.
Dafny offers a host of reference types. These represent references to objects allocated dynamically in the program heap. To access the members of an object, a reference to (that is, a pointer to or object identity of) the object is dereferenced.
The reference types are class types, traits and array types.
The special value `null` is part of every reference type.^{0}
```
NamedType_ = NameSegmentForTypeName { "." NameSegmentForTypeName }
```
A `NamedType_` is used to specify a user-defined type by name (possibly module-qualified). Named types are introduced by class, trait, inductive, coinductive, synonym and opaque type declarations. They are also used to refer to type variables.
```
NameSegmentForTypeName = Ident [ GenericInstantiation ]
```
A `NameSegmentForTypeName` is a type name optionally followed by a `GenericInstantiation`, which supplies type parameters to a generic type, if needed. It is a special case of a `NameSegment` (see Section 22.35) that does not allow a `HashCall`.
The following sections describe each of these kinds of types in more detail.
Dafny offers these basic types: `bool` for booleans, `char` for characters, `int` and `nat` for integers, and `real` for reals.
```
BoolType_ = "bool"
```
There are two boolean values and each has a corresponding literal in the language: `false` and `true`.

In addition to equality (`==`) and disequality (`!=`), which are defined on all types, type `bool` supports the following operations:
| operator | description |
|----------|-------------|
| `<==>`   | equivalence (if and only if) |
| `==>`    | implication (implies) |
| `<==`    | reverse implication (follows from) |
| `&&`     | conjunction (and) |
| `\|\|`   | disjunction (or) |
| `!`      | negation (not) |
Negation is unary; the others are binary. The table shows the operators in groups of increasing binding power, with equality binding stronger than conjunction and disjunction, and weaker than negation. Within each group, different operators do not associate, so parentheses need to be used. For example,
```dafny
A && B || C // error
```

would be ambiguous and instead has to be written as either

```dafny
(A && B) || C
```

or

```dafny
A && (B || C)
```
depending on the intended meaning.
The expressions `A <==> B` and `A == B` give the same value, but note that `<==>` is associative whereas `==` is chaining. So,

```dafny
A <==> B <==> C
```

is the same as

```dafny
A <==> (B <==> C)
```

and

```dafny
(A <==> B) <==> C
```

whereas

```dafny
A == B == C
```

is simply a shorthand for

```dafny
A == B && B == C
```
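The difference can be sketched with assertions in a hypothetical method; all three assertions verify, restating the associativity and chaining claims above:

```dafny
method BoolOps(A: bool, B: bool, C: bool) {
  assert (A <==> B <==> C) <==> (A <==> (B <==> C));  // <==> associates
  assert (A <==> B <==> C) <==> ((A <==> B) <==> C);  // ... either way
  assert (A == B == C) <==> (A == B && B == C);       // == chains
}
```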
Conjunction is associative and so is disjunction. These operators are short-circuiting (from left to right), meaning that their second argument is evaluated only if the evaluation of the first operand does not determine the value of the expression. Logically speaking, the expression `A && B` is defined when `A` is defined and either `A` evaluates to `false` or `B` is defined. When `A && B` is defined, its meaning is the same as the ordinary, symmetric mathematical conjunction ∧. The same holds for `||` and ∨.
Implication is right associative and is short-circuiting from left to right. Reverse implication `B <== A` is exactly the same as `A ==> B`, but gives the ability to write the operands in the opposite order. Consequently, reverse implication is left associative and is short-circuiting from right to left. To illustrate the associativity rules, each of the following four lines expresses the same property, for any `A`, `B`, and `C` of type `bool`:

```dafny
A ==> B ==> C
A ==> (B ==> C) // parentheses redundant, since ==> is right associative
C <== B <== A
(C <== B) <== A // parentheses redundant, since <== is left associative
```
To illustrate the short-circuiting rules, note that the expression `a.Length` is defined for an array `a` only if `a` is not `null` (see Section 5.1), which means the following two expressions are well-formed:

```dafny
a != null ==> 0 <= a.Length
0 <= a.Length <== a != null
```

The contrapositives of these two expressions would be:

```dafny
a.Length < 0 ==> a == null // not well-formed
a == null <== a.Length < 0 // not well-formed
```

but these expressions are not well-formed, since well-formedness requires the left (and right, respectively) operand, `a.Length < 0`, to be well-formed by itself.
Implication `A ==> B` is equivalent to the disjunction `!A || B`, but is sometimes (especially in specifications) clearer to read. Since `||` is short-circuiting from left to right, note that

```dafny
a == null || 0 <= a.Length
```

is well-formed, whereas

```dafny
0 <= a.Length || a == null // not well-formed
```

is not.
In addition, booleans support logical quantifiers (forall and exists), described in section 22.30.
```
IntType_ = "int"
RealType_ = "real"
```
Dafny supports numeric types of two kinds: integer-based, which includes the basic type `int` of all integers, and real-based, which includes the basic type `real` of all real numbers. User-defined numeric types based on `int` and `real`, called newtypes, are described in Section 19. Also, the subset type `nat`, representing the non-negative subrange of `int`, is described in Section 20.
The language includes a literal for each non-negative integer, like `0`, `13`, and `1985`. Integers can also be written in hexadecimal using the prefix `0x`, as in `0x0`, `0xD`, and `0x7c1` (always with a lower case `x`, but the hexadecimal digits themselves are case insensitive). Leading zeros are allowed. To form negative integers, use the unary minus operator.
There are also literals for some of the non-negative reals. These are written as a decimal point with a nonempty sequence of decimal digits on both sides. For example, `1.0`, `1609.344`, and `0.5772156649`.
For integers (in both decimal and hexadecimal form) and reals, any two digits in a literal may be separated by an underscore in order to improve human readability of the literals. For example:
```dafny
1_000_000       // easier to read than 1000000
0_12_345_6789   // strange but legal formatting of 123456789
0x8000_0000     // same as 0x80000000; hex digits are often placed in groups of 4
0.000_000_000_1 // same as 0.0000000001; 1 Ångström
```
In addition to equality and disequality, numeric types support the following relational operations:
| operator | description |
|----------|-------------|
| `<`      | less than |
| `<=`     | at most |
| `>=`     | at least |
| `>`      | greater than |
Like equality and disequality, these operators are chaining, as long as they are chained in the “same direction”. That is,
```dafny
A <= B < C == D <= E
```

is simply a shorthand for

```dafny
A <= B && B < C && C == D && D <= E
```

whereas

```dafny
A < B > C
```

is not allowed.
There are also operators on each numeric type:
| operator | description |
|----------|-------------|
| `+`      | addition (plus) |
| `-`      | subtraction (minus) |
| `*`      | multiplication (times) |
| `/`      | division (divided by) |
| `%`      | modulus (mod) |
| `-`      | negation (unary minus) |
The binary operators are left associative, and they associate with each other in the two groups. The groups are listed in order of increasing binding power, with equality binding weaker than all of these operators and the unary operator binding strongest.
Modulus is supported only for integer-based numeric types. Integer division and modulus are the Euclidean division and modulus. This means that modulus always returns a non-negative value, regardless of the signs of the two operands. More precisely, for any integer `a` and nonzero integer `b`,

```dafny
a == a / b * b + a % b
0 <= a % b < B
```

where `B` denotes the absolute value of `b`.
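As a sketch of what the Euclidean convention means on concrete values (a hypothetical method; the decisive point is that `%` never yields a negative result):

```dafny
method EuclideanExamples() {
  assert 7 / 2 == 3   && 7 % 2 == 1;
  assert -7 / 2 == -4 && -7 % 2 == 1;  // not -3 and -1, as in C or Java
  assert 7 / -2 == -3 && 7 % -2 == 1;
}
```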
Real-based numeric types have a member `Trunc` that returns the floor of the real value, that is, the largest integer not exceeding the real value. For example, the following properties hold, for any `r` and `r'` of type `real`:

```dafny
3.14.Trunc == 3
(-2.5).Trunc == -3
-2.5.Trunc == -2
real(r.Trunc) <= r
r <= r' ==> r.Trunc <= r'.Trunc
```
Note in the third line that member access (like `.Trunc`) binds stronger than unary minus. The fourth line uses the conversion function `real` from `int` to `real`, as described in Section 19.0.
```
CharType_ = "char"
```
Dafny supports a type `char` of characters. Character literals are enclosed in single quotes, as in `'D'`. Their form is described by the `charToken` nonterminal in the grammar. To write a single quote as a character literal, it is necessary to use an escape sequence. Escape sequences can also be used to write other characters. The supported escape sequences are as follows:
| escape sequence | meaning |
|-----------------|---------|
| `\'`     | the character `'` |
| `\"`     | the character `"` |
| `\\`     | the character `\` |
| `\0`     | the null character, same as `\u0000` |
| `\n`     | line feed |
| `\r`     | carriage return |
| `\t`     | horizontal tab |
| `\uxxxx` | universal character whose hexadecimal code is `xxxx` |
The escape sequence for a double quote is redundant, because `'"'` and `'\"'` denote the same character—both forms are provided in order to support the same escape sequences as for string literals (Section 9.2.4). In the form `\uxxxx`, the `u` is always lower case, but the four hexadecimal digits are case insensitive.
Character values are ordered and can be compared using the standard relational operators:
| operator | description |
|----------|-------------|
| `<`      | less than |
| `<=`     | at most |
| `>=`     | at least |
| `>`      | greater than |
Sequences of characters represent strings, as described in Section 9.2.4.
The only other operations on characters are obtaining a character by indexing into a string, and the implicit conversion to string when used as a parameter of a `print` statement.

TODO: Are there any conversions between `char` values and numeric values?
```
GenericParameters = "<" TypeVariableName [ "(" "==" ")" ]
  { "," TypeVariableName [ "(" "==" ")" ] } ">"
```
Many of the types (as well as functions and methods) in Dafny can be parameterized by types. These type parameters are typically declared inside angle brackets and can stand for any type.
It is sometimes necessary to restrict these type parameters so that they can only be instantiated by certain families of types. As such, Dafny distinguishes types that support the equality operation not only in ghost contexts but also in compiled contexts. To indicate that a type parameter is restricted to such equality-supporting types, the name of the type parameter takes the suffix “`(==)`”.^{1} For example,

```dafny
method Compare<T(==)>(a: T, b: T) returns (eq: bool)
{
  if a == b { eq := true; } else { eq := false; }
}
```
is a method whose type parameter is restricted to equality-supporting types. Again, note that all types support equality in ghost contexts; the difference is only for non-ghost (that is, compiled) code. Coinductive datatypes, function types, and inductive datatypes with ghost parameters are examples of types that are not equality supporting.
Dafny has some inference support that makes certain signatures less cluttered (described in a different part of the Dafny language reference). In some cases, this support will infer that a type parameter must be restricted to equality-supporting types, in which case Dafny adds the `(==)` automatically.
TODO: Need to describe type inference somewhere.
```
GenericInstantiation = "<" Type { "," Type } ">"
```
When a generic entity is used, actual types must be specified for each generic parameter. This is done using a `GenericInstantiation`. If the `GenericInstantiation` is omitted, type inference will try to fill these in.
Dafny offers several builtin collection types.
```
FiniteSetType_ = "set" [ GenericInstantiation ]
InfiniteSetType_ = "iset" [ GenericInstantiation ]
```
For any type `T`, each value of type `set<T>` is a finite set of `T` values.

TODO:

Set membership is determined by equality in the type `T`, so `set<T>` can be used in a non-ghost context only if `T` is equality supporting.

For any type `T`, each value of type `iset<T>` is a potentially infinite set of `T` values.
A set can be formed using a set display expression, which is a possibly empty, unordered, duplicate-insensitive list of expressions enclosed in curly braces. To illustrate,
```dafny
{}    {2, 7, 5, 3}    {4+2, 1+5, a*b}
```
are three examples of set displays. There is also a set comprehension expression (with a binder, like in logical quantifications), described in section 22.31.
In addition to equality and disequality, set types support the following relational operations:
operator   description
---------  -----------------
<          proper subset
<=         subset
>=         superset
>          proper superset
Like the arithmetic relational operators, these operators are chaining.
Sets support the following binary operators, listed in order of increasing binding power:

operator   description
---------  -----------------
!!         disjointness
+          set union
-          set difference
*          set intersection
The associativity rules of +, -, and * are like those of the arithmetic operators with the same names. The expression A !! B, whose binding power is the same as equality (but which neither associates nor chains with equality), says that sets A and B have no elements in common, that is, it is equivalent to
A * B == {}
However, the disjointness operator is chaining, so A !! B !! C !! D means:
A * B == {} && (A + B) * C == {} && (A + B + C) * D == {}
In addition, for any set s of type set<T> or iset<T> and any expression e of type T, sets support the following operations:

expression  description
----------  -------------------
|s|         set cardinality
e in s      set membership
e !in s     set non-membership

The expression e !in s is a syntactic shorthand for !(e in s).
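To illustrate these operators, here is a small sketch of verified assertions (the method name is ours; Dafny discharges each assertion automatically):

```dafny
method SetExamples() {
  var a := {1, 2, 3};
  var b := {2, 3, 4};
  assert a + b == {1, 2, 3, 4};      // union
  assert a * b == {2, 3};            // intersection
  assert a - b == {1};               // difference
  assert {2, 3} < a && {2, 3} <= a;  // proper subset and subset
  assert a !! {7, 8};                // disjointness
  assert |a| == 3 && 2 in a && 7 !in a;
}
```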
MultisetType_ = "multiset" [ GenericInstantiation ]
A multiset is similar to a set, but keeps track of the multiplicity of each element, not just its presence or absence. For any type T, each value of type multiset<T> is a map from T values to natural numbers denoting each element's multiplicity. Multisets in Dafny are finite, that is, they contain a finite number of each of a finite set of elements. Stated differently, a multiset maps only a finite number of elements to nonzero (finite) multiplicities.
Like sets, multiset membership is determined by equality in the type T, so multiset<T> can be used in a non-ghost context only if T is equality supporting.
A multiset can be formed using a multiset display expression, which is a possibly empty, unordered list of expressions enclosed in curly braces after the keyword multiset. To illustrate,
multiset{} multiset{0, 1, 1, 2, 3, 5} multiset{4+2, 1+5, a*b}
are three examples of multiset displays. There is no multiset comprehension expression.
In addition to equality and disequality, multiset types support the following relational operations:

operator   description
---------  -------------------------
<          proper multiset subset
<=         multiset subset
>=         multiset superset
>          proper multiset superset
Like the arithmetic relational operators, these operators are chaining.
Multisets support the following binary operators, listed in order of increasing binding power:

operator   description
---------  ----------------------
!!         multiset disjointness
+          multiset union
-          multiset difference
*          multiset intersection
The associativity rules of +, -, and * are like those of the arithmetic operators with the same names. The + operator adds the multiplicities of corresponding elements, the - operator subtracts them (but 0 is the minimum multiplicity), and the * operator takes the minimum of the multiplicities of the operands.
The expression A !! B says that multisets A and B have no elements in common, that is, it is equivalent to
A * B == multiset{}
Like the analogous set operator, !! is chaining.
In addition, for any multiset s of type multiset<T>, expression e of type T, and non-negative integer-based numeric n, multisets support the following operations:

expression  description
----------  -----------------------------------------
|s|         multiset cardinality
e in s      multiset membership
e !in s     multiset non-membership
s[e]        multiplicity of e in s
s[e := n]   multiset update (change of multiplicity)

The expression e in s returns true if and only if s[e] != 0. The expression e !in s is a syntactic shorthand for !(e in s).
The expression s[e := n] denotes a multiset like s, but where the multiplicity of element e is n. Note that the multiset update s[e := 0] results in a multiset like s but without any occurrences of e (whether or not s has occurrences of e in the first place). As another example, note that s - multiset{e} is equivalent to:
if e in s then s[e := s[e] - 1] else s
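The multiset operators can be illustrated with a few verified assertions (a small sketch; the method name is ours):

```dafny
method MultisetExamples() {
  var m := multiset{1, 1, 2};
  assert m[1] == 2 && m[2] == 1 && m[3] == 0;  // multiplicities
  assert m + multiset{1} == multiset{1, 1, 1, 2};
  assert m - multiset{1} == multiset{1, 2};
  assert m * multiset{1, 3} == multiset{1};    // minimum of multiplicities
  assert m[1 := 0] == multiset{2};             // removes all occurrences of 1
  assert |m| == 3 && 1 in m && 3 !in m;
}
```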
SequenceType_ = "seq" [ GenericInstantiation ]
For any type T, a value of type seq<T> denotes a sequence of T elements, that is, a mapping from a finite downward-closed set of natural numbers (called indices) to T values. (Thinking of it as a map, a sequence is therefore something of a dual of a multiset.)
A sequence can be formed using a sequence display expression, which is a possibly empty, ordered list of expressions enclosed in square brackets. To illustrate,
[] [3, 1, 4, 1, 5, 9, 3] [4+2, 1+5, a*b]
are three examples of sequence displays. There is no sequence comprehension expression.
In addition to equality and disequality, sequence types support the following relational operations:

operator   description
---------  --------------
<          proper prefix
<=         prefix

Like the arithmetic relational operators, these operators are chaining. Note the absence of > and >=.
Sequences support the following binary operator:

operator   description
---------  --------------
+          concatenation

Operator + is associative, like the arithmetic operator with the same name.
In addition, for any sequence s of type seq<T>, expression e of type T, integer-based numeric i satisfying 0 <= i < |s|, and integer-based numerics lo and hi satisfying 0 <= lo <= hi <= |s|, sequences support the following operations:

expression   description
-----------  -------------------------------------
|s|          sequence length
s[i]         sequence selection
s[i := e]    sequence update
e in s       sequence membership
e !in s      sequence non-membership
s[lo..hi]    subsequence
s[lo..]      drop
s[..hi]      take
s[slices]    slice
multiset(s)  sequence conversion to a multiset<T>
Expression s[i := e] returns a sequence like s, except that the element at index i is e. The expression e in s says there exists an index i such that s[i] == e. It is allowed in non-ghost contexts only if the element type T is equality supporting. The expression e !in s is a syntactic shorthand for !(e in s).
Expression s[lo..hi] yields a sequence formed by taking the first hi elements and then dropping the first lo elements. The resulting sequence thus has length hi - lo. Note that s[0..|s|] equals s. If the upper bound is omitted, it defaults to |s|, so s[lo..] yields the sequence formed by dropping the first lo elements of s. If the lower bound is omitted, it defaults to 0, so s[..hi] yields the sequence formed by taking the first hi elements of s.
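The sequence operations above can be illustrated with a few verified assertions (a small sketch; the method name is ours):

```dafny
method SequenceExamples() {
  var s := [1, 2, 3, 4, 5];
  assert |s| == 5 && s[0] == 1;         // length and selection
  assert s[1 := 9] == [1, 9, 3, 4, 5];  // update
  assert s[1..4] == [2, 3, 4];          // subsequence
  assert s[2..] == [3, 4, 5];           // drop
  assert s[..2] == [1, 2];              // take
  assert [1, 2] + [3] == [1, 2, 3];     // concatenation
  assert [1, 2] < s && [1, 2] <= s;     // proper prefix and prefix
  assert multiset([1, 1, 2])[1] == 2;   // conversion to a multiset
}
```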
In the sequence slice operation, slices is a nonempty list of length designators separated and optionally terminated by a colon, and there is at least one colon. Each length designator is a non-negative integer-based numeric; the sum of the length designators is no greater than |s|. If there are k colons, the operation produces k + 1 consecutive subsequences from s, each of the length indicated by the corresponding length designator, and returns these as a sequence of sequences. If slices is terminated by a colon, then the length of the last slice extends until the end of s, that is, its length is |s| minus the sum of the given length designators. For example, the following equalities hold, for any sequence s of length at least 10:
var t := [3.14, 2.7, 1.41, 1985.44, 100.0, 37.2][1:0:3];
assert |t| == 3 && t[0] == [3.14] && t[1] == [];
assert t[2] == [2.7, 1.41, 1985.44];
var u := [true, false, false, true][1:1:];
assert |u| == 3 && u[0][0] && !u[1][0] && u[2] == [false, true];
assert s[10:][0] == s[..10];
assert s[10:][1] == s[10..];
The operation multiset(s) yields the multiset of elements of sequence s. It is allowed in non-ghost contexts only if the element type T is equality supporting.
StringType_ = "string"
A special case of a sequence type is seq<char>, for which Dafny provides a synonym: string. Strings are like other sequences, but provide additional syntax for sequence display expressions, namely string literals. There are two forms of the syntax for string literals: the standard form and the verbatim form.
String literals of the standard form are enclosed in double quotes, as in "Dafny". To include a double quote in such a string literal, it is necessary to use an escape sequence. Escape sequences can also be used to include other characters. The supported escape sequences are the same as those for character literals, see Section 6.2. For example, the Dafny expression "say \"yes\"" represents the string 'say "yes"'. The escape sequence for a single quote is redundant, because "'" and "\'" denote the same string; both forms are provided in order to support the same escape sequences as for character literals.
String literals of the verbatim form are bracketed by @" and ", as in @"Dafny". To include a double quote in such a string literal, it is necessary to use the escape sequence "", that is, to write the character twice. In the verbatim form, there are no other escape sequences. Even characters like newline can be written inside the string literal (hence spanning more than one line in the program text). For example, the following three expressions denote the same string:
"C:\\tmp.txt"
@"C:\tmp.txt"
['C', ':', '\\', 't', 'm', 'p', '.', 't', 'x', 't']
Since strings are sequences, the relational operators < and <= are defined on them. Note, however, that these operators still denote proper prefix and prefix, respectively, not some kind of alphabetic comparison as might be desirable, for example, when sorting strings.
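Since string is just seq<char>, all sequence operations apply to strings; a small sketch (the method name is ours):

```dafny
method StringExamples() {
  var s: string := "Dafny";
  assert |s| == 5 && s[0] == 'D';  // a string is a seq<char>
  assert "Daf" < s && "Daf" <= s;  // proper prefix and prefix
  assert s + "!" == "Dafny!";      // concatenation
  assert "C:\\tmp.txt" == @"C:\tmp.txt";
}
```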
FiniteMapType_ = "map" [ GenericInstantiation ]
InfiniteMapType_ = "imap" [ GenericInstantiation ]
For any types T and U, a value of type map<T,U> denotes a (finite) map from T to U. In other words, it is a lookup table indexed by T. The domain of the map is a finite set of T values that have associated U values. Since the keys in the domain are compared using equality in the type T, type map<T,U> can be used in a non-ghost context only if T is equality supporting.
Similarly, for any types T and U, a value of type imap<T,U> denotes a (possibly) infinite map. In most regards, imap<T,U> is like map<T,U>, but a map of type imap<T,U> is allowed to have an infinite domain.
A map can be formed using a map display expression (see MapDisplayExpr), which is a possibly empty, ordered list of maplets, each maplet having the form t := u where t is an expression of type T and u is an expression of type U, enclosed in square brackets after the keyword map. To illustrate,
map[] map[20 := true, 3 := false, 20 := false] map[a+b := c+d]
are three examples of map displays. By using the keyword imap instead of map, the map produced will be of type imap<T,U> instead of map<T,U>. Note that an infinite map (imap) is allowed to have a finite domain, whereas a finite map (map) is not allowed to have an infinite domain. If the same key occurs more than once, only the last occurrence appears in the resulting map. There is also a map comprehension expression, explained in section 22.34.
For any map fm of type map<T,U>, any map m of type map<T,U> or imap<T,U>, any expression t of type T, any expression u of type U, and any d in the domain of m (that is, satisfying d in m), maps support the following operations:
expression  description
----------  ---------------------------
|fm|        map cardinality
m[d]        map selection
m[t := u]   map update
t in m      map domain membership
t !in m     map domain non-membership

|fm| denotes the number of mappings in fm, that is, the cardinality of the domain of fm. Note that the cardinality operator is not supported for infinite maps.
Expression m[d] returns the U value that m associates with d. Expression m[t := u] is a map like m, except that the element at key t is u. The expression t in m says t is in the domain of m, and t !in m is a syntactic shorthand for !(t in m).
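The map operations can be illustrated with a few verified assertions (a small sketch; the method name is ours):

```dafny
method MapExamples() {
  var m := map[1 := "one", 2 := "two"];
  assert |m| == 2 && 1 in m && 3 !in m;  // cardinality and domain membership
  assert m[2] == "two";                  // selection
  assert m[3 := "three"][3] == "three";  // update adds a new key
  assert m[1 := "uno"][1] == "uno";      // update overrides an existing key
}
```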
Here is a small example, where a map cache of type map<int,real> is used to cache computed values of Joule-Thomson coefficients for some fixed gas at a given temperature:
if K in cache {  // check if temperature is in domain of cache
  coeff := cache[K];  // read result in cache
} else {
  coeff := ComputeJouleThomsonCoefficient(K);  // do expensive computation
  cache := cache[K := coeff];  // update the cache
}
SynonymTypeDecl =
( SynonymTypeDefinition_ | OpaqueTypeDefinition_ ) [ ";" ]
It is sometimes useful to know a type by several names or to treat a type abstractly. Synonym and opaque types serve this purpose.
SynonymTypeDefinition_ =
"type" { Attribute } SynonymTypeName [ GenericParameters ] "=" Type
A type synonym declaration:
type Y<T> = G
declares Y<T> to be a synonym for the type G. Here, T is a nonempty list of type parameters (each of which is optionally designated with the suffix “(==)”), which can be used as free type variables in G. If the synonym has no type parameters, the “<T>” is dropped. In all cases, a type synonym is just a synonym. That is, there is never a difference, other than possibly in error messages produced, between Y<T> and G.
For example, the names of the following type synonyms may improve the readability of a program:
type Replacements<T> = map<T,T>
type Vertex = int
As already described in Section 9.2.4, string is a built-in type synonym for seq<char>, as if it had been declared as follows:
type string = seq<char>
OpaqueTypeDefinition_ = "type" { Attribute } SynonymTypeName
[ "(" "==" ")" ] [ GenericParameters ]
A special case of a type synonym is one that is underspecified. Such a type is declared simply by:
type Y<T>
It is known as an opaque type. Its definition can be revealed in a refining module. To indicate that Y designates an equality-supporting type, “(==)” can be written immediately following the name “Y”.
For example, the declarations
type T
function F(t: T): T
can be used to model an uninterpreted function F on some arbitrary type T. As another example,
type Monad<T>
can be used abstractly to represent an arbitrary parameterized monad.
This section is a tutorial on well-founded functions and extreme predicates. We place it here in preparation for Section 12 where function and predicate definitions are described.
Recursive functions are a core part of computer science and mathematics. Roughly speaking, when the definition of such a function spells out a terminating computation from given arguments, we may refer to it as a well-founded function. For example, the common factorial and Fibonacci functions are well-founded functions.
There are also other ways to define functions. An important case regards the definition of a boolean function as an extreme solution (that is, a least or greatest solution) to some equation. For computer scientists with interests in logic or programming languages, these extreme predicates are important because they describe the judgments that can be justified by a given set of inference rules (see, e.g., [3, 24, 28, 31, 36]).
To benefit from machine-assisted reasoning, it is necessary not just to understand extreme predicates but also to have techniques for proving theorems about them. A foundation for this reasoning was developed by Paulin-Mohring [29] and is the basis of the constructive logic supported by Coq [1] as well as other proof assistants [2, 34]. Essentially, the idea is to represent the knowledge that an extreme predicate holds by the proof term by which this knowledge was derived. For a predicate defined as the least solution, such proof terms are values of an inductive datatype (that is, finite proof trees), and for the greatest solution, a coinductive datatype (that is, possibly infinite proof trees). This means that one can use induction and coinduction when reasoning about these proof trees. Therefore, these extreme predicates are known as, respectively, inductive predicates and coinductive predicates (or, copredicates for short). Support for extreme predicates is also available in the proof assistants Isabelle [30] and HOL [6].
Dafny supports both well-founded functions and extreme predicates. This section is a tutorial that describes the difference in general terms, and then describes novel syntactic support in Dafny for defining and proving lemmas with extreme predicates. Although Dafny's verifier has at its core a first-order SMT solver, Dafny's logical encoding makes it possible to reason about fixpoints in an automated way.
The encoding for coinductive predicates in Dafny was described previously [21] and is here described in Section 18.2.
To define a function $f \colon X \to Y$ in terms of itself, one can write an equation like
$$f = \mathcal{F}(f) \qquad (0)$$
where $\mathcal{F}$ is a non-recursive function of type $(X \to Y) \to X \to Y$. Because it takes a function as an argument, $\mathcal{F}$ is referred to as a functor (or functional, but not to be confused by the category-theory notion of a functor). Throughout, I will assume that $\mathcal{F}(f)$ by itself is well defined, for example that it does not divide by zero. I will also assume that $f$ occurs only in fully applied calls in $\mathcal{F}(f)$; eta expansion can be applied to ensure this. If $f$ is a boolean function, that is, if $Y$ is the type of booleans, then I call $f$ a predicate.
For example, the common Fibonacci function over the natural numbers can be defined by the equation
$$\mathit{fib} = \lambda n \bullet\ \mathbf{if}\ n < 2\ \mathbf{then}\ n\ \mathbf{else}\ \mathit{fib}(n-2) + \mathit{fib}(n-1) \qquad (1)$$
With the understanding that the argument $n$ is universally quantified, we can write this equation equivalently as
$$\mathit{fib}(n) = \mathbf{if}\ n < 2\ \mathbf{then}\ n\ \mathbf{else}\ \mathit{fib}(n-2) + \mathit{fib}(n-1) \qquad (2)$$
The fact that the function being defined occurs on both sides of the equation causes concern that we might not be defining the function properly, leading to a logical inconsistency. In general, there could be many solutions to an equation like (0) or there could be none. Let's consider two ways to make sure we're defining the function uniquely.
A standard way to ensure that equation (0) has a unique solution in $f$ is to make sure the recursion is well-founded, which roughly means that the recursion terminates. This is done by introducing any well-founded relation $\prec$ on the domain of $f$ and making sure that the argument to each recursive call goes down in this ordering. More precisely, if we formulate (0) as
$$f(x) = \mathcal{F}'(f) \qquad (3)$$
then we want to check $E \prec x$ for each call $f(E)$ in $\mathcal{F}'(f)$. When a function definition satisfies this decrement condition, then the function is said to be well-founded.
For example, to check the decrement condition for $\mathit{fib}$ in (2), we can pick $\prec$ to be the arithmetic less-than relation on natural numbers and check the following, for any $n$:
$$2 \le n \implies (n-2 \prec n \ \wedge\ n-1 \prec n) \qquad (4)$$
Note that we are entitled to using the antecedent $2 \le n$, because that is the condition under which the else branch in (2) is evaluated.
A well-founded function is often thought of as “terminating” in the sense that the recursive depth in evaluating $f$ on any given argument is finite. That is, there are no infinite descending chains of recursive calls. However, the evaluation of $f$ on a given argument may fail to terminate, because its width may be infinite. For example, let $P$ be some predicate defined on the ordinals and let $f$ be a predicate on the ordinals defined by the following equation:
$$f(x) = \mathbf{if}\ x < \omega\ \mathbf{then}\ P(x)\ \mathbf{else}\ \forall u \bullet\ u < \omega \implies f(u) \qquad (5)$$
With $\prec$ as the usual ordering on ordinals, this equation satisfies the decrement condition, but evaluating $f(\omega)$ would require evaluating $f(n)$ for every natural number $n$. However, what we are concerned about here is to avoid mathematical inconsistencies, and that is indeed a consequence of the decrement condition.
So that we can later see how inductive proofs are done in Dafny, let's prove that for any $n$, $\mathit{fib}(n)$ is even iff $n$ is a multiple of $3$. We split our task into two cases. If $n < 2$, then the property follows directly from the definition of $\mathit{fib}$. Otherwise, note that exactly one of the three numbers $n-2$, $n-1$, and $n$ is a multiple of 3. If $n$ is the multiple of 3, then by invoking the induction hypothesis on $n-2$ and $n-1$, we obtain that $\mathit{fib}(n)$ is the sum of two odd numbers, which is even. If $n-2$ or $n-1$ is a multiple of 3, then by invoking the induction hypothesis on $n-2$ and $n-1$, we obtain that $\mathit{fib}(n)$ is the sum of an even number and an odd number, which is odd. In this proof, we invoked the induction hypothesis on $n-2$ and on $n-1$. This is allowed, because both are smaller than $n$, and hence the invocations go down in the well-founded ordering on natural numbers.
We don't need to exclude the possibility of equation (0) having multiple solutions—instead, we can just be clear about which one of them we want. Let's explore this, after a smidgen of lattice theory.
For any complete lattice $(Y, \le)$ and any set $X$, we can by pointwise extension define a complete lattice $(X \to Y, \dot{\Rightarrow})$, where for any $f, g \colon X \to Y$,
$$f \mathrel{\dot{\Rightarrow}} g \quad\equiv\quad \forall x \bullet\ f(x) \le g(x) \qquad (6)$$
In particular, if $Y$ is the set of booleans ordered by implication ($\mathit{false} \le \mathit{true}$), then the set of predicates over any domain $X$ forms a complete lattice. Tarski's Theorem [35] tells us that any monotonic function over a complete lattice has a least and a greatest fixpoint. In particular, this means that $\mathcal{F}$ has a least fixpoint and a greatest fixpoint, provided $\mathcal{F}$ is monotonic.
Speaking about the set of solutions in $f$ to (0) is the same as speaking about the set of fixpoints of functor $\mathcal{F}$. In particular, the least and greatest solutions to (0) are the same as the least and greatest fixpoints of $\mathcal{F}$. In casual speak, it happens that we say “fixpoint of (0)”, or more grotesquely, “fixpoint of $f$” when we really mean “fixpoint of $\mathcal{F}$”.
In conclusion of our little excursion into lattice theory, we have that, under the proviso of $\mathcal{F}$ being monotonic, the set of solutions in $f$ to (0) is nonempty, and among these solutions, there is in the $\dot{\Rightarrow}$ ordering a least solution (that is, a function that returns $\mathit{false}$ more often than any other) and a greatest solution (that is, a function that returns $\mathit{true}$ more often than any other).
When discussing extreme solutions, I will now restrict my attention to boolean functions (that is, with $Y$ being the type of booleans). Functor $\mathcal{F}$ is monotonic if the calls to $f$ in $\mathcal{F}(f)$ are in positive positions (that is, under an even number of negations). Indeed, from now on, I will restrict my attention to such monotonic functors $\mathcal{F}$.
Let me introduce a running example. Consider the following equation, where $x$ ranges over the integers:
$$g(x) = (x = 0 \ \vee\ g(x-2)) \qquad (7)$$
This equation has four solutions in $g$. With $x$ ranging over the integers, they are:
$$\begin{array}{l}
g(x) \equiv x \in \{0, 2, 4, 6, \ldots\} \\
g(x) \equiv x \in \{\ldots, -4, -2, 0, 2, 4, 6, \ldots\} \\
g(x) \equiv x \in \{0, 2, 4, 6, \ldots\} \cup \{\ldots, -3, -1, 1, 3, \ldots\} \\
g(x) \equiv \mathit{true}
\end{array} \qquad (8)$$
The first of these is the least solution and the last is the greatest solution.
In the literature, the definition of an extreme predicate is often given as a set of inference rules. To designate the least solution, a single line separating the antecedent (on top) from the conclusion (on bottom) is used:
$$\dfrac{}{g(0)} \qquad\qquad \dfrac{g(x-2)}{g(x)} \qquad (9)$$
Through repeated applications of such rules, one can show that the predicate holds for a particular value. For example, the derivation, or proof tree, to the left in Figure 0 shows that $g(6)$ holds. (In this simple example, the derivation is a rather degenerate proof “tree”.) The use of these inference rules gives rise to a least solution, because proof trees are accepted only if they are finite.
When inference rules are to designate the greatest solution, a double line is used in place of the single line:
$$\dfrac{}{g(0)} \qquad\qquad \dfrac{g(x-2)}{g(x)} \qquad (10)$$
In this case, proof trees are allowed to be infinite. For example, the (partial depiction of the) infinite proof tree on the right in Figure 0 shows that $g(1)$ holds.
Note that derivations may not be unique. For example, in the case of the greatest solution for $g$, there are two proof trees that establish $g(0)$: one is the finite proof tree that uses the left-hand rule of (10) once, the other is the infinite proof tree that keeps on using the right-hand rule of (10).
In general, one cannot evaluate whether or not an extreme predicate holds for some input, because doing so may take an infinite number of steps. For example, following the recursive calls in the definition (7) to try to evaluate $g$ at an odd number would never terminate. However, there are useful ways to establish that an extreme predicate holds and there are ways to make use of one once it has been established.
For any $\mathcal{F}$ as in (0), I define two infinite series of well-founded functions, $f^{\downarrow}_k$ and $f^{\uparrow}_k$, where $k$ ranges over the natural numbers:
$$f^{\downarrow}_0(x) = \mathit{false} \qquad\quad f^{\downarrow}_{k+1}(x) = \mathcal{F}(f^{\downarrow}_k)(x) \qquad (11)$$
$$f^{\uparrow}_0(x) = \mathit{true} \qquad\quad f^{\uparrow}_{k+1}(x) = \mathcal{F}(f^{\uparrow}_k)(x) \qquad (12)$$
These functions are called the iterates of $f$, and I will also refer to them as the prefix predicates of $f$ (or the prefix predicate of $f$, if we think of $k$ as being a parameter). Alternatively, we can define $f^{\downarrow}_k$ and $f^{\uparrow}_k$ without mentioning $x$: Let $\mathit{false}$ denote the function that always returns $\mathit{false}$, let $\mathit{true}$ denote the function that always returns $\mathit{true}$, and let a superscript on $\mathcal{F}$ denote exponentiation (for example, $\mathcal{F}^0(f) = f$ and $\mathcal{F}^2(f) = \mathcal{F}(\mathcal{F}(f))$). Then, (11) and (12) can be stated equivalently as $f^{\downarrow}_k = \mathcal{F}^k(\mathit{false})$ and $f^{\uparrow}_k = \mathcal{F}^k(\mathit{true})$.
For any solution $f$ to equation (0), we have, for any $k$ and $\ell$ such that $k \le \ell$:
$$f^{\downarrow}_k \mathrel{\dot{\Rightarrow}} f^{\downarrow}_{\ell} \mathrel{\dot{\Rightarrow}} f \mathrel{\dot{\Rightarrow}} f^{\uparrow}_{\ell} \mathrel{\dot{\Rightarrow}} f^{\uparrow}_k \qquad (13)$$
In other words, every $f^{\downarrow}_k$ is a pre-fixpoint of $f$ and every $f^{\uparrow}_k$ is a post-fixpoint of $f$. Next, I define two functions, $f^{\downarrow}$ and $f^{\uparrow}$, in terms of the prefix predicates:
$$f^{\downarrow}(x) = \exists k \bullet\ f^{\downarrow}_k(x) \qquad (14)$$
$$f^{\uparrow}(x) = \forall k \bullet\ f^{\uparrow}_k(x) \qquad (15)$$
By (13), we also have that $f^{\downarrow}$ is a pre-fixpoint of $\mathcal{F}$ and $f^{\uparrow}$ is a post-fixpoint of $\mathcal{F}$. The marvelous thing is that, if $\mathcal{F}$ is continuous, then $f^{\downarrow}$ and $f^{\uparrow}$ are the least and greatest fixpoints of $\mathcal{F}$. These equations let us do proofs by induction when dealing with extreme predicates. I will explain in Section 11.1.2 how to check for continuity.
Let's consider two examples, both involving the function $g$ in (7). As it turns out, $g$'s defining functor is continuous, and therefore I will write $g^{\downarrow}$ and $g^{\uparrow}$ to denote the least and greatest solutions for $g$ in (7).
The main technique for establishing that $g^{\downarrow}(x)$ holds for some $x$, that is, proving something of the form $g^{\downarrow}(x)$, is to construct a proof tree like the one for $g(6)$ in Figure 0. For a proof in this direction, since we're just applying the defining equation, the fact that we're using a least solution for $g$ never plays a role (as long as we limit ourselves to finite derivations).
The technique for going in the other direction, proving something from an established $g^{\downarrow}$ property, that is, showing something of the form $g^{\downarrow}(x) \implies \ldots$, typically uses induction on the structure of the proof tree. When the antecedent of our proof obligation includes a predicate term $g^{\downarrow}(x)$, it is sound to imagine that we have been given a proof tree for $g^{\downarrow}(x)$. Such a proof tree would be a data structure, to be more precise, a term in an inductive datatype. For this reason, least solutions like $g^{\downarrow}$ have been given the name inductive predicate.
Let's prove $g^{\downarrow}(x) \implies 0 \le x \wedge x \bmod 2 = 0$. We split our task into two cases, corresponding to which of the two proof rules in (9) was the last one applied to establish $g^{\downarrow}(x)$. If it was the left-hand rule, then $x = 0$, which makes it easy to establish the conclusion of our proof goal. If it was the right-hand rule, then we unfold the proof tree one level and obtain $g^{\downarrow}(x-2)$. Since the proof tree for $g^{\downarrow}(x-2)$ is smaller than where we started, we invoke the induction hypothesis and obtain $0 \le x-2 \wedge (x-2) \bmod 2 = 0$, from which it is easy to establish the conclusion of our proof goal.
Here's how we do the proof formally using (14). We massage the general form of our proof goal:

  $g^{\downarrow}(x) \implies 0 \le x \wedge x \bmod 2 = 0$
=  { (14) }
  $(\exists k \bullet\ g^{\downarrow}_k(x)) \implies 0 \le x \wedge x \bmod 2 = 0$
=  { distribute $\implies$ over $\exists$ to the left }
  $\forall k \bullet\ (g^{\downarrow}_k(x) \implies 0 \le x \wedge x \bmod 2 = 0)$

The last line can be proved by induction over $k$. So, in our case, we prove $g^{\downarrow}_k(x) \implies 0 \le x \wedge x \bmod 2 = 0$ for every $k$. If $k = 0$, then $g^{\downarrow}_k(x)$ is $\mathit{false}$, so our goal holds trivially. If $k > 0$, then $g^{\downarrow}_k(x) = (x = 0 \vee g^{\downarrow}_{k-1}(x-2))$. Our goal holds easily for the first disjunct ($x = 0$). For the other disjunct, we apply the induction hypothesis (on the smaller $k-1$ and with $x-2$) and obtain $0 \le x-2 \wedge (x-2) \bmod 2 = 0$, from which our proof goal follows.
We can think of a given predicate $g^{\uparrow}(x)$ as being represented by a proof tree, in this case a term in a coinductive datatype, since the proof may be infinite. For this reason, greatest solutions like $g^{\uparrow}$ have been given the name coinductive predicate, or copredicate for short. The main technique for proving something from a given proof tree, that is, to prove something of the form $g^{\uparrow}(x) \implies \ldots$, is to destruct the proof. Since this is just unfolding the defining equation, the fact that we're using a greatest solution for $g$ never plays a role (as long as we limit ourselves to a finite number of unfoldings).
To go in the other direction, to establish a predicate defined as a greatest solution, like , we may need an infinite number of steps. For this purpose, we can use induction's dual, coinduction. Were it not for one little detail, coinduction is as simple as continuations in programming: the next part of the proof obligation is delegated to the coinduction hypothesis. The little detail is making sure that it is the “next” part we're passing on for the continuation, not the same part. This detail is called productivity and corresponds to the requirement in induction of making sure we're going down a wellfounded relation when applying the induction hypothesis. There are many sources with more information, see for example the classic account by Jacobs and Rutten [8] or a new attempt by Kozen and Silva that aims to emphasize the simplicity, not the mystery, of coinduction [11].
Let's prove that $g^{\uparrow}(x)$ holds for all $x$. The intuitive coinductive proof goes like this: According to the right-hand rule of (10), $g^{\uparrow}(x)$ follows if we establish $g^{\uparrow}(x-2)$, and that's easy to do by invoking the coinduction hypothesis. The “little detail”, productivity, is satisfied in this proof because we applied a rule in (10) before invoking the coinduction hypothesis.
For anyone who may have felt that the intuitive proof felt too easy, here is a formal proof using (15), which relies only on induction. Writing $A$ for the antecedent of our proof goal (here, simply $\mathit{true}$), we massage the general form of our proof goal:

  $A \implies g^{\uparrow}(x)$
=  { (15) }
  $A \implies \forall k \bullet\ g^{\uparrow}_k(x)$
=  { distribute $\implies$ over $\forall$ to the right }
  $\forall k \bullet\ (A \implies g^{\uparrow}_k(x))$

The last line can be proved by induction over $k$. So, in our case, we prove $g^{\uparrow}_k(x)$ for every $k$. If $k = 0$, then $g^{\uparrow}_k(x)$ is $\mathit{true}$, so our goal holds trivially. If $k > 0$, then $g^{\uparrow}_k(x) = (x = 0 \vee g^{\uparrow}_{k-1}(x-2))$. We establish the second disjunct by applying the induction hypothesis (on the smaller $k-1$ and with $x-2$).
Although in this section I consider only well-founded functions and extreme predicates, it is worth mentioning that there are additional ways of making sure that the set of solutions to (0) is nonempty. For example, if all calls to $f$ in $\mathcal{F}(f)$ are tail-recursive calls, then (under the assumption that $Y$ is nonempty) the set of solutions is nonempty. To see this, consider an attempted evaluation of $f(x)$ that fails to determine a definite result value because of an infinite chain of calls that applies $f$ to each value of some subset $X'$ of $X$. Then, apparently, the value of $f$ for any one of the values in $X'$ is not determined by the equation, but picking any particular result values for these makes for a consistent definition. This was pointed out by Manolios and Moore [25]. Functions can be underspecified in this way in the proof assistants ACL2 [10] and HOL [12].
In this section, I explain with examples the support in Dafny for well-founded functions, extreme predicates, and proofs regarding these.
Declarations of wellfounded functions are unsurprising. For example, the Fibonacci function is declared as follows:
function fib(n: nat): nat
{
  if n < 2 then n else fib(n-2) + fib(n-1)
}
Dafny verifies that the body (given as an expression in curly braces) is well defined. This includes decrement checks for recursive (and mutually recursive) calls. Dafny predefines a well-founded relation on each type and extends it to lexicographic tuples of any (fixed) length. For example, the well-founded relation $x \prec y$ for integers is $x < y \wedge 0 \le x$, the one for reals is $x \le y - 1.0 \wedge 0.0 \le x$ (this is the same ordering as for integers, if you read the integer relation as $x \le y - 1 \wedge 0 \le x$), the one for inductive datatypes is structural inclusion, and the one for coinductive datatypes is $\mathit{false}$.
Using a decreases clause, the programmer can specify the term in this predefined order. When a function definition omits a decreases clause, Dafny makes a simple guess. This guess (which can be inspected by hovering over the function name in the Dafny IDE) is very often correct, so users are rarely bothered to provide explicit decreases clauses.
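For example, a function whose termination argument needs a lexicographic measure can state it with an explicit decreases clause; the classic Ackermann function is a standard illustration (a sketch, not taken from this manual's examples):

```dafny
// The pair (m, n) decreases lexicographically on every recursive call.
function Ack(m: nat, n: nat): nat
  decreases m, n
{
  if m == 0 then n + 1
  else if n == 0 then Ack(m - 1, 1)
  else Ack(m - 1, Ack(m, n - 1))
}
```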
If a function returns bool, one can drop the result type : bool and change the keyword function to predicate.
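For example, the following two declarations (names ours) define the same boolean function:

```dafny
function IsEvenFn(x: int): bool
{
  x % 2 == 0
}

predicate IsEven(x: int)
{
  x % 2 == 0
}
```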
Dafny has lemma declarations. These are really just special cases of methods: they can have pre- and postcondition specifications and their body is a code block. Here is the lemma we stated and proved in Section 11.0.0.0:
lemma FibProperty(n: nat)
  ensures fib(n) % 2 == 0 <==> n % 3 == 0
{
  if n < 2 {
  } else {
    FibProperty(n-2); FibProperty(n-1);
  }
}
The postcondition of this lemma (keyword ensures) gives the proof goal. As in any program-correctness logic (e.g., [7]), the postcondition must be established on every control path through the lemma's body. For FibProperty, I give the proof by an if statement, hence introducing a case split. The then branch is empty, because Dafny can prove the postcondition automatically in this case. The else branch performs two recursive calls to the lemma. These are the invocations of the induction hypothesis and they follow the usual program-correctness rules, namely: the precondition must hold at the call site, the call must terminate, and then the caller gets to assume the postcondition upon return. The “proof glue” needed to complete the proof is done automatically by Dafny.
Dafny features an aggregate statement, by means of which it is possible to make (possibly infinitely) many calls at once. For example, the induction hypothesis can be called at once on all values `n'` smaller than `n`:
forall n' | 0 <= n' < n {
FibProperty(n');
}
For our purposes, this corresponds to strong induction. More generally, the `forall` statement has the form

forall k | P(k)
  ensures Q(k)
{ Statements; }

Logically, this statement corresponds to universal introduction: the body proves that `Q(k)` holds for an arbitrary `k` such that `P(k)`, and the conclusion of the `forall` statement is then `forall k :: P(k) ==> Q(k)`. When the body of the `forall` statement is a single call (or `calc` statement), the `ensures` clause is inferred and can be omitted, like in our `FibProperty` example.
Lemma FibProperty
is simple enough that its whole body can be replaced by the one
forall
statement above. In fact, Dafny goes one step further: it automatically
inserts such a forall
statement at the beginning of every lemma [19].
Thus, FibProperty
can be declared and proved simply by:
lemma FibProperty(n: nat)
ensures fib(n) % 2 == 0 <==> n % 3 == 0
{ }
Going in the other direction from universal introduction is existential elimination, also known as Skolemization. Dafny has a statement for this, too: for any variable `x` and boolean expression `Q`, the assign-such-that statement `x :| Q;` says to assign to `x` a value such that `Q` will hold. A proof obligation when using this statement is to show that there exists an `x` such that `Q` holds. For example, if the fact `exists k :: 100 <= fib(k) < 200` is known, then the statement

k :| 100 <= fib(k) < 200;

will assign to `k` some value (chosen arbitrarily) for which `fib(k)` falls in the given range.
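As a small self-contained sketch (the method name is hypothetical), the proof obligation is the existence of a suitable value, which Dafny can discharge here because the constraint bounds the range:

```dafny
method AssignSuchThatDemo() {
  // Dafny must prove: exists x :: 0 <= x < 10 && x % 2 == 0
  var x: int :| 0 <= x < 10 && x % 2 == 0;
  // all we may assume about x afterwards is the constraint itself
  assert 0 <= x && x % 2 == 0;
}
```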
In the previous subsection, I explained that a `predicate` declaration introduces a well-founded predicate. The declarations for introducing extreme predicates are `inductive predicate` and `copredicate`. Here is the definition of the least and greatest solutions of `f(x) = (x == 0 || f(x-2))` from above; let's call them `g` and `G`:

inductive predicate g(x: int) { x == 0 || g(x-2) }
copredicate G(x: int) { x == 0 || G(x-2) }
When Dafny receives either of these definitions, it automatically declares the corresponding prefix predicates. Instead of the subscripted names that I used above, Dafny names the prefix predicates `g#[k]` and `G#[k]`, respectively; that is, the name of the extreme predicate is appended with `#`, and the subscript is given as an argument in square brackets. The definition of the prefix predicate derives from the body of the extreme predicate and follows the form in (11) and (12).
Using a faux syntax for illustrative purposes, here are the prefix
predicates that Dafny defines automatically from the extreme
predicates g
and G
:
predicate g#[_k: nat](x: int) { _k != 0 && (x == 0 || g#[_k-1](x-2)) }
predicate G#[_k: nat](x: int) { _k != 0 ==> (x == 0 || G#[_k-1](x-2)) }
The Dafny verifier is aware of the connection between extreme predicates and their prefix predicates, (14) and (15).
Remember that to be well defined, the defining functor of an extreme predicate must be monotonic, and for (14) and (15) to hold, the functor must be continuous. Dafny enforces the former of these by checking that recursive calls of extreme predicates are in positive positions. The continuity requirement comes down to checking that they are also in continuous positions: that recursive calls to inductive predicates are not inside unbounded universal quantifiers and that recursive calls to copredicates are not inside unbounded existential quantifiers [21, 26].
From what I have presented so far, we can do the formal proofs from Sections 11.0.2.0 and 11.0.2.1. Here is the former:
lemma EvenNat(x: int)
requires g(x)
ensures 0 <= x && x % 2 == 0
{
var k: nat :| g#[k](x);
EvenNatAux(k, x);
}
lemma EvenNatAux(k: nat, x: int)
requires g#[k](x)
ensures 0 <= x && x % 2 == 0
{
if x == 0 { } else { EvenNatAux(k-1, x-2); }
}
Lemma EvenNat
states the property we wish to prove. From its
precondition (keyword requires
) and
(14), we know there is some k
that will make the condition in the
assignsuchthat statement true. Such a value is then assigned to k
and passed to
the auxiliary lemma, which promises to establish the proof goal. Given the condition
`g#[k](x)`, the definition of `g#` lets us conclude `k != 0` as well as the disjunction `x == 0 || g#[k-1](x-2)`. The then branch considers the case of the first disjunct, from which the proof goal follows automatically. The else branch can then assume `g#[k-1](x-2)` and calls the induction hypothesis with those parameters. The proof glue that shows the proof goal for `x` to follow from the proof goal with `x-2` is done automatically.
Because Dafny automatically inserts the statement
forall k', x' | 0 <= k' < k && g#[k'](x') {
EvenNatAux(k', x');
}
at the beginning of the body of EvenNatAux
, the body can be left empty and Dafny
completes the proof automatically.
Here is the Dafny program that gives the proof from Section 11.0.2.1:
lemma Always(x: int)
ensures G(x)
{ forall k: nat { AlwaysAux(k, x); } }
lemma AlwaysAux(k: nat, x: int)
ensures G#[k](x)
{ }
While each of these proofs involves only basic proof rules, the setup feels a bit clumsy, even with the empty body of the auxiliary lemmas. Moreover, the proofs do not reflect the intuitive proofs I described in Sections 11.0.2.0 and 11.0.2.1. These shortcomings are addressed in the next subsection.
The proofs we just saw follow standard forms:
use Skolemization to convert the inductive predicate into a prefix predicate for some k
and then do the proof inductively over k
; respectively,
by induction over k
, prove the prefix predicate for every k
, then use
universal introduction to convert to the coinductive predicate.
With the declarations inductive lemma
and colemma
, Dafny offers to
set up the proofs
in these standard forms. What is gained is not just fewer characters in the program
text, but also a possible intuitive reading of the proofs. (Okay, to be fair, the
reading is intuitive for simpler proofs; complicated proofs may or may not be intuitive.)
Somewhat analogous to the creation of prefix predicates from extreme predicates, Dafny
automatically creates a prefix lemma L#
from each “extreme lemma” L
. The pre
and postconditions of a prefix lemma are copied from those of the extreme lemma,
except for the following replacements:
For an inductive lemma, Dafny looks in the precondition to find calls (in positive, continuous
positions) to inductive predicates P(x)
and replaces these with P#[_k](x)
.
For a
colemma, Dafny looks in the postcondition to find calls (in positive, continuous positions)
to copredicates P
(including equality among coinductive datatypes, which is a builtin
copredicate) and replaces these with P#[_k](x)
.
In each case, these predicates P
are the lemma's focal predicates.
The body of the extreme lemma is moved to the prefix lemma, but with each recursive call `L(x)` replaced by `L#[_k-1](x)` and each occurrence of a call to a focal predicate `P(x)` replaced by `P#[_k-1](x)`. The bodies of the extreme lemmas are then replaced as shown in the previous subsection. By construction, this new body correctly leads to the extreme lemma's postcondition.
Let us see what effect these rewrites have on how one can write proofs. Here are the proofs of our running example:
inductive lemma EvenNat(x: int)
requires g(x)
ensures 0 <= x && x % 2 == 0
{ if x == 0 { } else { EvenNat(x-2); } }
colemma Always(x: int)
ensures G(x)
{ Always(x-2); }
Both of these proofs follow the intuitive proofs given in Sections 11.0.2.0 and 11.0.2.1. Note that in these simple examples, the user is never bothered with prefix predicates or prefix lemmas; the proofs just look like “what you'd expect”.
Since Dafny automatically inserts calls to the induction hypothesis at the beginning of
each lemma, the bodies of the given extreme lemmas EvenNat
and
Always
can be empty and Dafny still completes the proofs.
Folks, it doesn't get any simpler than that!
ClassDecl = "class" { Attribute } ClassName [ GenericParameters ]
["extends" Type {"," Type} ]
"{" { { DeclModifier } ClassMemberDecl(moduleLevelDecl: false) } "}"
ClassMemberDecl(moduleLevelDecl) =
( FieldDecl | FunctionDecl
| MethodDecl(isGhost: ("ghost" was present),
             allowConstructor: !moduleLevelDecl)
)
The `ClassMemberDecl` parameter `moduleLevelDecl` will be true if the member declaration is at the top level or directly within a module declaration. It will be false for `ClassMemberDecl`s that are part of a class or trait declaration. If `moduleLevelDecl` is true, `FieldDecl`s are not allowed.
A class C
is a reference type declared as follows:
class C<T> extends J1, ..., Jn
{
members
}
where the list of type parameters T
is optional and so is
“extends J1, ..., Jn
”, which says that the class extends traits J1
… Jn
.
The members of a class are fields, functions, and
methods. These are accessed or invoked by dereferencing a reference
to a C
instance.
A function or method is invoked on an instance
of C
, unless the function or method is declared static
.
A function or method that is not static
is called an
instance function or method.
An instance function or method takes an implicit receiver
parameter, namely, the instance used to access the member. In the
specification and body of an instance function or method, the receiver
parameter can be referred to explicitly by the keyword this
.
However, in such places, members of this
can also be mentioned
without any qualification. To illustrate, the qualified this.f
and
the unqualified f
refer to the same field of the same object in the
following example:
class C {
var f: int
method Example() returns (b: bool)
{
b := f == this.f;
}
}
so the method body always assigns true
to the outparameter b
.
There is no semantic difference between qualified and
unqualified accesses to the same receiver and member.
A C
instance is created using new
, for example:
c := new C;
Note that new
simply allocates a C
object and returns a reference
to it; the initial values of its fields are arbitrary values of their
respective types. Therefore, it is common to invoke a method, known
as an initialization method, immediately after creation, for
example:
c := new C;
c.InitFromList(xs, 3);
When an initialization method has no outparameters and modifies no
more than this
, then the two statements above can be combined into
one:
c := new C.InitFromList(xs, 3);
Note that a class can contain several initialization methods, that
these methods can be invoked at any time, not just as part of a new
,
and that new
does not require that an initialization method be
invoked at creation.
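To sketch this pattern (the class and method names are made up for illustration), an initialization method and its combined-form invocation look like:

```dafny
class Counter {
  var n: int

  // an ordinary method used, by convention, to initialize the object
  method Init(start: int)
    modifies this
    ensures n == start
  {
    n := start;
  }
}

method Client() {
  // Init has no out-parameters and modifies only `this`,
  // so allocation and initialization can be combined:
  var c := new Counter.Init(10);
  assert c.n == 10;  // follows from Init's postcondition
}
```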
A class can declare special initializing methods called constructor methods. See Section 12.1.
FieldDecl = "var" { Attribute } FIdentType { "," FIdentType }
An FIdentType
is used to declare a field. The field name is either an
identifier (that is not allowed to start with a leading underscore) or
some digits. Digits are used if you want to number your fields, e.g. “0”,
“1”, etc.
FIdentType = ( FieldIdent  digits ) ":" Type
A field x of some type T is declared as:
var x: T
A field declaration declares one or more fields of the enclosing class. Each field is a named part of the state of an object of that class. A field declaration is similar to but distinct from a variable declaration statement. Unlike for local variables and bound variables, the type is required and will not be inferred.
Unlike method and function declarations, a field declaration cannot be given at the top level. Fields can be declared in either a class or a trait. A class that inherits from multiple traits will have all the fields declared in any of its parent traits.
Fields that are declared as ghost
can only be used in specifications,
not in code that will be compiled into executable code.
Fields may not be declared `static`, and `protected` is not allowed for fields.
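A minimal sketch of a ghost field alongside a compiled field (the class name is hypothetical):

```dafny
class Cell {
  var data: int              // compiled state
  ghost var written: bool    // verification-only state; erased by the compiler

  method Set(v: int)
    modifies this
    ensures data == v && written
  {
    data := v;
    written := true;  // a ghost update; allowed inside compiled code
  }
}
```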
MethodDecl(isGhost, allowConstructor) =
MethodKeyword { Attribute } [ MethodName ]
( MethodSignature(isGhost) | SignatureEllipsis_ )
MethodSpec [ BlockStmt ]
The isGhost
parameter is true iff the ghost
keyword
preceded the method declaration.
If the allowConstructor
parameter is false then
the MethodDecl
must not be a constructor
declaration.
MethodKeyword = ( "method" | "lemma" | "colemma"
                | "inductive" "lemma" | "constructor" )
The method keyword is used to specify special kinds of methods as explained below.
MethodSignature(isGhost) =
[ GenericParameters ]
Formals(allowGhost: !isGhost)
[ "returns" Formals(allowGhost: !isGhost) ]
A method signature specifies the method generic parameters,
input parameters and return parameters.
The formal parameters are not allowed to have ghost
specified
if ghost
was already specified for the method.
SignatureEllipsis_ = "…"
A `SignatureEllipsis_` is used when a method or function is being redeclared in a module that refines another module. In that case the signature is copied from the module that is being refined. This works because Dafny does not support method or function overloading, so the name of the class method uniquely identifies it without the signature.
Formals(allowGhostKeyword) =
"(" [ GIdentType(allowGhostKeyword)
{ "," GIdentType(allowGhostKeyword) } ] ")"
The Formals
specifies the names and types of the method input or
output parameters.
See section 4.1 for a description of MethodSpec
.
A method declaration adheres to the MethodDecl
grammar above.
Here is an example of a method declaration.
method {:att1}{:att2} M<T1, T2>(a: A, b: B, c: C) returns (x: X, y: Y, z: Z)
requires Pre
modifies Frame
ensures Post
decreases Rank
{
Body
}
where `:att1` and `:att2` are attributes of the method, `T1` and `T2` are type parameters of the method (if generic), `a, b, c` are the method's in-parameters, `x, y, z` are the method's out-parameters, `Pre` is a boolean expression denoting the method's precondition, `Frame` denotes a set of objects whose fields may be updated by the method, `Post` is a boolean expression denoting the method's postcondition, `Rank` is the method's variant function, and `Body` is a statement that implements the method. `Frame` can be a list
can be a list
of expressions, each of which is a set of objects or a single object, the
latter standing for the singleton set consisting of that one object. The
methodâ€™s frame is the union of these sets, plus the set of objects
allocated by the method body. For example, if c
and d
are parameters
of a class type C
, then
modifies {c, d}
modifies {c} + {d}
modifies c, {d}
modifies c, d
all mean the same thing.
A method can be declared as ghost by preceding the declaration with the keyword `ghost`. By default, a method has an implicit receiver parameter, `this`. This parameter can be removed by preceding the method declaration with the keyword `static`. A static method `M` in a class `C` can be invoked by `C.M(…)`.
In a class, a method can be declared to be a constructor method by
replacing the keyword method
with the keyword constructor
. A constructor
can only be called at the time an object is allocated (see
object-creation examples below), and for a class that contains one or
more constructors, object creation must be done in conjunction with a
call to a constructor.
An ordinary method is declared with the method
keyword.
Section 12.1.0 explains methods that instead use the
constructor
keyword. Section 12.1.1 discusses methods that are
declared with the lemma
keyword. Methods declared with the inductive
lemma
keywords are discussed later in the context of inductive
predicates (see 18.0). Methods declared with the
colemma
keyword are discussed later in the context of coinductive
types, in section 18.2.4.1.
A method without a body is abstract. A method is allowed to be abstract under the following circumstances:

* It has an `{:axiom}` attribute
* It has an `{:imported}` attribute
* It has a `{:decl}` attribute
To write structured object-oriented programs, one often relies on objects being constructed only in certain ways. For this purpose, Dafny
provides constructor (method)s, which are a restricted form of
initialization methods. A constructor is declared with the keyword
constructor
instead of method
. When a class contains a
constructor, every call to new
for that class must be accompanied
with a call to one of the constructors. Moreover, a constructor
cannot be called at other times, only during object creation. Other
than these restrictions, there is no semantic difference between using
ordinary initialization methods and using constructors.
The Dafny design allows the constructors to be named, which promotes
using names like InitFromList
above. Still, many classes have just
one constructor or have a typical constructor. Therefore, Dafny
allows one anonymous constructor, that is, a constructor whose name
is essentially “”. For example:
class Item {
constructor (x: int, y: int)
// ...
}
When invoking this constructor, the “.” is dropped, as in:
m := new Item(45, 29);
Note that an anonymous constructor is just one way to name a constructor; there can be other constructors as well.
Sometimes there are steps of logic required to prove a program correct,
but they are too complex for Dafny to discover and use on its own. When
this happens, we can often give Dafny assistance by providing a lemma.
This is done by declaring a method with the lemma
keyword.
Lemmas are implicitly ghost methods and the ghost
keyword cannot
be applied to them.
For an example, see the FibProperty
lemma in
Section 11.1.1.
See the Dafny Lemmas tutorial for more examples and hints for using lemmas.
FunctionDecl =
( "function" [ "method" ] { Attribute }
FunctionName
FunctionSignatureOrEllipsis_(allowGhostKeyword: ("method" present))
| "predicate" [ "method" ] { Attribute }
PredicateName
PredicateSignatureOrEllipsis_(allowGhostKeyword: ("method" present))
| "inductive" "predicate" { Attribute }
PredicateName
PredicateSignatureOrEllipsis_(allowGhostKeyword: false)
| "copredicate" { Attribute }
CopredicateName
PredicateSignatureOrEllipsis_(allowGhostKeyword: false)
)
FunctionSpec [ FunctionBody ]
FunctionSignatureOrEllipsis_(allowGhostKeyword) =
FunctionSignature_ | SignatureEllipsis_
FunctionSignature_(allowGhostKeyword) =
[ GenericParameters ] Formals(allowGhostKeyword) ":" Type
PredicateSignatureOrEllipsis_(allowGhostKeyword) =
PredicateSignature_(allowGhostKeyword) | SignatureEllipsis_
PredicateSignature_(allowGhostKeyword) =
[ GenericParameters ] Formals(allowGhostKeyword)
FunctionBody = "{" Expression(allowLemma: true, allowLambda: true) "}"
In the above productions, allowGhostKeyword is true if the optional “method” keyword was specified. This allows some of the formal parameters of a function method to be specified as ghost.
See section 4.2 for a description of FunctionSpec
.
A Dafny function is a pure mathematical function. It is allowed to
read memory that was specified in its reads
expression but is not
allowed to have any side effects.
Here is an example function declaration:
function {:att1}{:att2} F<T1, T2>(a: A, b: B, c: C): T
requires Pre
reads Frame
ensures Post
decreases Rank
{
Body
}
where `:att1` and `:att2` are attributes of the function, if any, `T1` and `T2` are type parameters of the function (if generic), `a, b, c` are the function's parameters, `T` is the type of the function's result, `Pre` is a boolean expression denoting the function's precondition, `Frame` denotes a set of objects whose fields the function body may depend on, `Post` is a boolean expression denoting the function's postcondition, `Rank` is the function's variant function, and `Body` is an expression that defines the function return value. The precondition
allows a function to be partial, that is, the precondition says when the
function is defined (and Dafny will verify that every use of the function
meets the precondition). The postcondition is usually not needed, since
the body of the function gives the full definition. However, the
postcondition can be a convenient place to declare properties of the
function that may require an inductive proof to establish. For example:
function Factorial(n: int): int
requires 0 <= n
ensures 1 <= Factorial(n)
{
if n == 0 then 1 else Factorial(n-1) * n
}
says that the result of Factorial is always positive, which Dafny verifies inductively from the function body. To refer to the function's result in the postcondition, use the function itself, as shown in the example.
By default, a function is ghost, and cannot be called from nonghost code. To make it nonghost, replace the keyword function with the two keywords “function method”.
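For instance (a hypothetical helper), the following function can be called from compiled code because it is declared with `function method`:

```dafny
function method Abs(x: int): int
{
  if x < 0 then -x else x
}

method UseAbs() {
  var y := Abs(-3);  // legal: Abs is compilable, not ghost
  assert y == 3;
}
```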
By default, a function has an implicit receiver parameter, this
. This
parameter can be removed by preceding the function declaration with the
keyword static
. A static function F
in a class C
can be invoked
by `C.F(…)`. This can give a convenient way to declare a number of helper functions in a separate class.
As for methods, a SignatureEllipsis_
is used when declaring
a function in a module refinement. For example, if module M0
declares
function F
, a module M1
can be declared to refine M0
and
M1
can then refine F
. The refinement function, M1.F
can have
a SignatureEllipsis_
which means to copy the signature form
M0.F
. A refinement function can furnish a body for a function
(if M0.F
does not provide one). It can also add ensures
clauses. And if F
is a predicate, it can add conjuncts to
a previously given body.
A function is said to be transparent in a location if the
contents of the body of the function is visible at that point.
A function is said to be opaque at a location if it is not
transparent. However the FunctionSpec
of a function
is always available.
A function is usually transparent up to some unrolling level (up to 1, or maybe 2 or 3). If its arguments are all literals it is transparent all the way.
But the transparency of a function is affected by the following:

* the `{:opaque}` attribute (as explained in Section 24.1.12)
The following table summarizes where the function is transparent. The module referenced in the table is the module in which the function is defined.
| Protected? | {:opaque}? | Transparent inside module | Transparent outside module |
|:----------:|:----------:|:-------------------------:|:--------------------------:|
| N          | N          | Y                         | Y                          |
| Y          | N          | Y                         | N                          |
| N          | Y          | N                         | N                          |
When {:opaque}
is specified for function g
, g
is opaque,
however the lemma reveal_g
is available to give the semantics
of g
whether in the defining module or outside.
It currently is not allowed to have both protected
and
{:opaque}
specified for a function.
A function that returns a `bool` result is called a predicate. As an alternative syntax, a predicate can be declared by replacing the `function` keyword with the `predicate` keyword and omitting a declaration of the return type.
See section 11.1.2 for descriptions of inductive predicates and lemmas.
TraitDecl = "trait" { Attribute } TraitName [ GenericParameters ]
"{" { { DeclModifier } ClassMemberDecl(moduleLevelDecl: false) } "}"
A trait is an “abstract superclass”, or, if you prefer, an “interface” or “mixin”. Traits are new to Dafny and are likely to evolve for a while.
The declaration of a trait is much like that of a class:
trait J
{
members
}
where members
can include fields, functions, and methods, but
no constructor methods. The functions and methods are allowed to be
declared static
.
A reference type C
that extends a trait J
is assignable to J
, but
not the other way around. The members of J
are available as members
of C
. A member in J
is not allowed to be redeclared in C
,
except if the member is a non-static function or method without a
function or method without a
body in J
. By doing so, type C
can supply a stronger
specification and a body for the member.
new
is not allowed to be used with traits. Therefore, there is no
object whose allocated type is a trait. But there can of course be
objects of a class C
that implements a trait J
, and a reference to
such a C
object can be used as a value of type J
.
As an example, the following trait represents movable geometric shapes:
trait Shape
{
function method Width(): real
reads this
method Move(dx: real, dy: real)
modifies this
method MoveH(dx: real)
modifies this
{
Move(dx, 0.0);
}
}
Members `Width` and `Move` are abstract (that is, body-less) and can be implemented differently by different classes that extend the trait.
The implementation of method MoveH
is given in the trait and thus
gets used by all classes that extend Shape
. Here are two classes
that each extends Shape
:
class UnitSquare extends Shape
{
var x: real, y: real
function method Width(): real { // note the empty reads clause
1.0
}
method Move(dx: real, dy: real)
modifies this
{
x, y := x + dx, y + dy;
}
}
class LowerRightTriangle extends Shape
{
var xNW: real, yNW: real, xSE: real, ySE: real
function method Width(): real
reads this
{
xSE - xNW
}
method Move(dx: real, dy: real)
modifies this
{
xNW, yNW, xSE, ySE := xNW + dx, yNW + dy, xSE + dx, ySE + dy;
}
}
Note that the classes can declare additional members, that they supply implementations for the abstract members of the trait, that they repeat the member signatures, and that they are responsible for providing their own member specifications that both strengthen the corresponding specification in the trait and are satisfied by the provided body. Finally, here is some code that creates two class instances and uses them together as shapes:
var myShapes: seq<Shape>;
var A := new UnitSquare;
myShapes := [A];
var tri := new LowerRightTriangle;
// myShapes contains two Shape values, of different classes
myShapes := myShapes + [tri];
// move shape 1 to the right by the width of shape 0
myShapes[1].MoveH(myShapes[0].Width());
ArrayType_ = arrayToken [ GenericInstantiation ]
Dafny supports mutable fixed-length array types of any positive dimension. Array types are reference types.
A one-dimensional array of `n` elements of type `T` is created as follows:
a := new T[n];
The initial values of the array elements are arbitrary values of type
T
.
The length of an array is retrieved using the immutable Length
member. For example, the array allocated above satisfies:
a.Length == n
For any integer-based numeric i
in the range 0 <= i < a.Length
,
the array selection expression a[i]
retrieves element i
(that
is, the element preceded by i
elements in the array). The
element stored at i
can be changed to a value t
using the array
update statement:
a[i] := t;
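Putting allocation, selection, and update together (a minimal sketch with a hypothetical method name):

```dafny
method ArrayDemo() {
  var a := new int[3];   // a: array<int>, elements initially arbitrary
  a[0] := 5;             // array update statement
  a[1] := a[0] + 1;      // array selection within an expression
  a[2] := 7;
  assert a.Length == 3 && a[1] == 6;
}
```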
Caveat: The type of the array created by new T[n]
is
array<T>
. A mistake that is simple to make and that can lead to
befuddlement is to write array<T>
instead of T
after new
.
For example, consider the following:
var a := new array<T>;
var b := new array<T>[n];
var c := new array<T>(n); // resolution error
var d := new array(n); // resolution error
The first statement allocates an array of type array<T>
, but of
unknown length. The second allocates an array of type
array<array<T>>
of length n
, that is, an array that holds n
values of type array<T>
. The third statement allocates an
array of type array<T>
and then attempts to invoke an anonymous
constructor on this array, passing argument n
. Since array
has no
constructors, let alone an anonymous constructor, this statement
gives rise to an error. If the typeparameter list is omitted for a
type that expects type parameters, Dafny will attempt to fill these
in, so as long as the array
type parameter can be inferred, it is
okay to leave off the “<T>
” in the fourth statement above. However,
as with the third statement, array
has no anonymous constructor, so
an error message is generated.
One-dimensional arrays support operations that convert a stretch of consecutive elements into a sequence. For any array `a` of type `array<T>` and integer-based numerics `lo` and `hi` satisfying `0 <= lo <= hi <= a.Length`, the following operations each yield a `seq<T>`:
| expression | description                     |
|------------|---------------------------------|
| a[lo..hi]  | subarray conversion to sequence |
| a[lo..]    | drop                            |
| a[..hi]    | take                            |
| a[..]      | array conversion to sequence    |
The expression `a[lo..hi]` takes the first `hi` elements of the array, then drops the first `lo` elements thereof and returns what remains as a sequence. The resulting sequence thus has length `hi - lo`.
The other operations are special instances of the first. If `lo` is omitted, it defaults to `0`, and if `hi` is omitted, it defaults to `a.Length`.
In the last operation, both `lo` and `hi` have been omitted, thus `a[..]` returns the sequence consisting of all the array elements of `a`.
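The four operations can be illustrated with a small sketch (a hypothetical method, with element values chosen for the example):

```dafny
method SliceDemo() {
  var a := new int[5];
  a[0], a[1], a[2], a[3], a[4] := 10, 11, 12, 13, 14;
  assert a[1..3] == [11, 12];   // take the first 3, then drop the first 1
  assert a[3..]  == [13, 14];   // drop the first 3
  assert a[..2]  == [10, 11];   // take the first 2
  assert |a[..]| == a.Length;   // the whole array as a sequence
}
```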
The subarray operations are especially useful in specifications. For
example, the loop invariant of a binary search algorithm that uses
variables lo
and hi
to delimit the subarray where the search key
may be still found can be expressed as follows:
key !in a[..lo] && key !in a[hi..]
Another use is to say that a certain range of array elements have not been changed since the beginning of a method:
a[lo..hi] == old(a[lo..hi])
or since the beginning of a loop:
ghost var prevElements := a[..];
while // ...
invariant a[lo..hi] == prevElements[lo..hi]
{
// ...
}
Note that the type of prevElements
in this example is seq<T>
, if
a
has type array<T>
.
A final example of the subarray operation lies in expressing that an array's elements are a permutation of the array's elements at the beginning of a method, as would be done in most sorting algorithms. Here, the subarray operation is combined with the sequencetomultiset conversion:
multiset(a[..]) == multiset(old(a[..]))
An array of 2 or more dimensions is mostly like a one-dimensional
array, except that new
takes more length arguments (one for each
dimension), and the array selection expression and the array update
statement take more indices. For example:
matrix := new T[m, n];
matrix[i, j], matrix[x, y] := matrix[x, y], matrix[i, j];
create a 2dimensional array whose dimensions have lengths m
and
n
, respectively, and then swaps the elements at i,j
and x,y
.
The type of matrix
is array2<T>
, and similarly for
higherdimensional arrays (array3<T>
, array4<T>
, etc.). Note,
however, that there is no type array0<T>
, and what could have been
array1<T>
is actually named just array<T>
.
The new
operation above requires m
and n
to be nonnegative
integer-based numerics. These lengths can be retrieved using the
immutable fields Length0
and Length1
. For example, the following
holds of the array created above:
matrix.Length0 == m && matrix.Length1 == n
Higherdimensional arrays are similar (Length0
, Length1
,
Length2
, …). The array selection expression and array update
statement require that the indices are in bounds. For example, the
swap statement above is wellformed only if:
0 <= i < matrix.Length0 && 0 <= j < matrix.Length1 &&
0 <= x < matrix.Length0 && 0 <= y < matrix.Length1
In contrast to one-dimensional arrays, there is no operation to convert stretches of elements from a multidimensional array to a sequence.
ObjectType_ = "object"
There is a built-in trait `object` that is like a supertype of all reference types. Every class automatically extends `object` and so does every user-defined trait. The purpose of type `object`
is to enable a uniform treatment of dynamic frames. In particular, it
is useful to keep a ghost field (typically named Repr
for
“representation”) of type set<object>
.
IteratorDecl = "iterator" { Attribute } IteratorName
( [ GenericParameters ]
Formals(allowGhostKeyword: true)
[ "yields" Formals(allowGhostKeyword: true) ]
 "…"
)
IteratorSpec [ BlockStmt ]
See section 4.4 for a description of IteratorSpec
.
An iterator provides a programming abstraction for writing code that iteratively returns elements. These CLU-style iterators are coroutines in the sense that they keep track of their own program counter and control can be transferred into and out of the iterator body.
An iterator is declared as follows:
iterator Iter<T>(inparams) yields (yieldparams)
specification
{
body
}
where T
is a list of type parameters (as usual, if there are no type
parameters, “<T>
” is omitted). This declaration gives rise to a
reference type with the same name, `Iter<T>`. In the signature, in-parameters and yield-parameters are the iterator's analog of a method's in-parameters and out-parameters. The difference is that the out-parameters of a method are returned to a caller just once, whereas the yield-parameters of an iterator are returned each time the iterator body performs a `yield`. The body consists of statements, like in a method body, but with the availability also of `yield` statements.
From the perspective of an iterator client, the iterator
declaration
can be understood as generating a class Iter<T>
with various
members, a simplified version of which is described next.
The Iter<T>
class contains an anonymous constructor whose parameters
are the iterator's in-parameters:
predicate Valid()
constructor (inparams)
modifies this
ensures Valid()
An iterator is created using new
and this anonymous constructor.
For example, an iterator willing to return ten consecutive integers
from start
can be declared as follows:
iterator Gen(start: int) yields (x: int)
{
var i := 0;
while i < 10 {
x := start + i;
yield;
i := i + 1;
}
}
An instance of this iterator is created using:
iter := new Gen(30);
The predicate Valid()
says when the iterator is in a state where one
can attempt to compute more elements. It is a postcondition of the
constructor and occurs in the specification of the MoveNext
member:
method MoveNext() returns (more: bool)
requires Valid()
modifies this
ensures more ==> Valid()
Note that the iterator remains valid as long as MoveNext
returns
true
. Once MoveNext
returns false
, the MoveNext
method can no
longer be called. Note, the client is under no obligation to keep
calling MoveNext
until it returns false
, and the body of the
iterator is allowed to keep returning elements forever.
The in-parameters of the iterator are stored in immutable fields of
the iterator class. To illustrate in terms of the example above, the
iterator class Gen
contains the following field:
var start: int
The yield-parameters also result in members of the iterator class:
var x: int
These fields are set by the MoveNext
method. If MoveNext
returns
true
, the latest yield values are available in these fields and the
client can read them from there.
To aid in writing specifications, the iterator class also contains
ghost members that keep the history of values returned by
MoveNext
. The names of these ghost fields follow the names of the
yield-parameters with an “s
” appended to the name (to suggest
plural). Name checking rules make sure these names do not give rise
to ambiguities. The iterator class for Gen
above thus contains:
ghost var xs: seq<int>
These history fields are changed automatically by MoveNext
, but are
not assignable by user code.
Finally, the iterator class contains some special fields for use in specifications. In particular, the iterator specification gets recorded in the following immutable fields:
ghost var _reads: set<object>
ghost var _modifies: set<object>
ghost var _decreases0: T0
ghost var _decreases1: T1
// ...
where there is a _decreasesi: Ti
field for each
component of the iterator's decreases
clause.^{7}
In addition, there is a field:
ghost var _new: set<object>;
to which any objects allocated on behalf of the iterator body get
added. The iterator body is allowed to remove elements from the
_new
set, but cannot by assignment to _new
add any elements.
Note, in the precondition of the iterator, which is to hold upon
construction of the iterator, the in-parameters are indeed
in-parameters, not fields of this
.
It's regrettably tricky to use iterators. The language really
ought to have a foreach
statement to make this easier.
Here is an example showing definition and use of an iterator.
iterator Iter<T>(s: set<T>) yields (x: T)
yield ensures x in s && x !in xs[..|xs|-1];
ensures s == set z | z in xs;
{
var r := s;
while (r != {})
invariant forall z :: z in xs ==> z !in r; // r and xs are disjoint
invariant s == r + set z | z in xs;
{
var y :| y in r;
r, x := r - {y}, y;
yield;
assert y == xs[|xs|-1]; // needed as a lemma to prove loop invariant
}
}
method UseIterToCopy<T>(s: set<T>) returns (t: set<T>)
ensures s == t;
{
t := {};
var m := new Iter(s);
while (true)
invariant m.Valid() && fresh(m._new);
invariant t == set z | z in m.xs;
decreases s - t;
{
var more := m.MoveNext();
if (!more) { break; }
t := t + {m.x};
}
}
Type = DomainType "->" Type
Functions are firstclass values in Dafny. Function types have the form
(T) -> U
where T
is a comma-delimited list of types and U
is a
type. T
is called the function's domain type(s) and U
is its
range type. For example, the type of a function
function F(x: int, b: bool): real
is (int, bool) -> real
. Parameters are not allowed to be ghost.
To simplify the appearance of the basic case where a function's
domain consists of a list of exactly one type, the parentheses around
the domain type can be dropped in this case, as in T -> U
.
This innocent simplification requires additional explanation in the
case where that one type is a tuple type, since tuple types are also
written with enclosing parentheses.
If the function takes a single argument that is a tuple, an additional
set of parentheses is needed. For example, the function
function G(pair: (int, bool)): real
has type ((int, bool)) -> real
. Note the necessary double
parentheses. Similarly, a function that takes no arguments is
different from one that takes a 0tuple as an argument. For instance,
the functions
function NoArgs(): real
function Z(unit: ()): real
have types () -> real
and (()) -> real
, respectively.
The function arrow, ->
, is right associative, so A -> B -> C
means
A -> (B -> C)
. The other association requires explicit parentheses:
(A -> B) -> C
.
Note that the receiver parameter of a named function is not part of
the type. Rather, it is used when looking up the function and can
then be thought of as being captured into the function definition.
For example, suppose function F
above is declared in a class C
and
that c
references an object of type C
; then, the following is type
correct:
var f: (int, bool) > real := c.F;
whereas it would have been incorrect to have written something like:
var f': (C, int, bool) > real := F; // not correct
Outside its type signature, each function value has three properties, described next.
Every function implicitly takes the heap as an argument. No function
ever depends on the entire heap, however. A property of the
function is its declared upper bound on the set of heap locations it
depends on for a given input. This lets the verifier figure out that
certain heap modifications have no effect on the value returned by a
certain function. For a function f: T -> U
and a value t
of type
T
, the dependency set is denoted f.reads(t)
and has type
set<object>
.
The second property of functions stems from the fact that every function
is potentially partial. In other words, a property of a function is its
precondition. For a function f: T -> U
, the precondition of f
for a
parameter value t
of type T
is denoted f.requires(t)
and has type
bool
.
The third property of a function is more obvious—the function's
body. For a function f: T -> U
, the value that the function yields
for an input t
of type T
is denoted f(t)
and has type U
.
Note that f.reads
and f.requires
are themselves functions.
Suppose f
has type T -> U
and t
has type T
. Then, f.reads
is a function of type T -> set<object>
whose reads
and requires
properties are:
f.reads.reads(t) == f.reads(t)
f.reads.requires(t) == true
f.requires
is a function of type T -> bool
whose reads
and
requires
properties are:
f.requires.reads(t) == f.reads(t)
f.requires.requires(t) == true
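These properties let higher-order code state its obligations. For example, a method that applies a function value can require the function's precondition and promise its result; a minimal sketch (the name Apply is illustrative):

```dafny
method Apply(f: int -> int, x: int) returns (y: int)
  requires f.requires(x)  // f may be partial; demand its precondition
  ensures y == f(x)
{
  y := f(x);
}
```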
Dafny also supports anonymous functions by means of lambda expressions. See section 22.9.
Dafny offers two kinds of algebraic datatypes, those defined inductively and those defined coinductively. The salient property of every datatype is that each value of the type uniquely identifies one of the datatype's constructors and each constructor is injective in its parameters.
DatatypeDecl = ( InductiveDatatypeDecl  CoinductiveDatatypeDecl )
InductiveDatatypeDecl_ = "datatype" { Attribute } DatatypeName [ GenericParameters ]
"=" DatatypeMemberDecl { "|" DatatypeMemberDecl } [ ";" ]
DatatypeMemberDecl = { Attribute } DatatypeMemberName [ FormalsOptionalIds ]
The values of inductive datatypes can be seen as finite trees where the leaves are values of basic types, numeric types, reference types, coinductive datatypes, or function types. Indeed, values of inductive datatypes can be compared using Dafny's well-founded < ordering.
An inductive datatype is declared as follows:
datatype D<T> = Ctors
where Ctors
is a non-empty |-separated list of
(datatype) constructors for the datatype. Each constructor has the
form:
C(params)
where params
is a comma-delimited list of types, optionally
preceded by a name for the parameter and a colon, and optionally
preceded by the keyword ghost
. If a constructor has no parameters,
the parentheses after the constructor name can be omitted. If no
constructor takes a parameter, the type is usually called an
enumeration; for example:
datatype Friends = Agnes | Agatha | Jermaine | Jack
For every constructor C
, Dafny defines a discriminator C?
, which
is a member that returns true
if and only if the datatype value has
been constructed using C
. For every named parameter p
of a
constructor C
, Dafny defines a destructor p
, which is a member
that returns the p
parameter from the C
call used to construct the
datatype value; its use requires that C?
holds. For example, for
the standard List
type
datatype List<T> = Nil | Cons(head: T, tail: List<T>)
the following holds:
Cons(5, Nil).Cons? && Cons(5, Nil).head == 5
Note that the expression
Cons(5, Nil).tail.head
is not wellformed, since Cons(5, Nil).tail
does not satisfy
Cons?
.
The names of the destructors must be unique across all the constructors of the datatype. A constructor can have the same name as the enclosing datatype; this is especially useful for single-constructor datatypes, which are often called record types. For example, a record type for black-and-white pixels might be represented as follows:
datatype Pixel = Pixel(x: int, y: int, on: bool)
To call a constructor, it is usually necessary only to mention the
name of the constructor, but if this is ambiguous, it is always
possible to qualify the name of constructor by the name of the
datatype. For example, Cons(5, Nil)
above can be written
List.Cons(5, List.Nil)
As an alternative to calling a datatype constructor explicitly, a
datatype value can be constructed as a change in one parameter from a
given datatype value using the datatype update expression. For any
d
whose type is a datatype that includes a constructor C
that has
a parameter (destructor) named f
of type T
, and any expression t
of type T
,
d[f := t]
constructs a value like d
but whose f
parameter is t
. The
operation requires that d
satisfies C?
. For example, the
following equality holds:
Cons(4, Nil)[tail := Cons(3, Nil)] == Cons(4, Cons(3, Nil))
The datatype update expression also accepts multiple field names, provided these are distinct. For example, a node of some inductive datatype for trees may be updated as follows:
node[left := L, right := R]
TupleType_ = "(" [ Type { "," Type } ] ")"
Dafny builds in record types that correspond to tuples and gives these a convenient special syntax, namely parentheses. For example, what might have been declared as:
datatype Pair<T,U> = Pair(0: T, 1: U)
Dafny provides as the type (T, U)
and the constructor (t, u)
, as
if the datatype's name were “” and its type arguments are given in
round parentheses, and as if the constructor name were “”. Note that
the destructor names are 0
and 1
, which are legal identifier names
for members. For example, showing the use of a tuple destructor, here
is a property that holds of 2tuples (that is, pairs):
(5, true).1 == true
Dafny declares n-tuples where n is 0 or 2 or up. There are no
1-tuples, since parentheses around a single type or a single value have
no semantic meaning. The 0-tuple type, ()
, is often known as the
unit type and its single value, also written ()
, is known as unit.
CoinductiveDatatypeDecl_ = "codatatype" { Attribute } DatatypeName [ GenericParameters ]
"=" DatatypeMemberDecl { "|" DatatypeMemberDecl } [ ";" ]
Whereas Dafny insists that there is a way to construct every inductive
datatype value from the ground up, Dafny also supports
coinductive datatypes, whose constructors are evaluated lazily and
hence allows infinite structures. A coinductive datatype is declared
using the keyword codatatype
; other than that, it is declared and
used like an inductive datatype.
For example,
codatatype IList<T> = Nil | Cons(head: T, tail: IList<T>)
codatatype Stream<T> = More(head: T, tail: Stream<T>)
codatatype Tree<T> = Node(left: Tree<T>, value: T, right: Tree<T>)
declare possibly infinite lists (that is, lists that can be either finite or infinite), infinite streams (that is, lists that are always infinite), and infinite binary trees (that is, trees where every branch goes on forever), respectively.
The paper Coinduction Simply, by Leino and Moskal [20], explains Dafny's implementation and verification of coinductive types. We capture the key features from that paper in this section but the reader is referred to that paper for more complete details and to supply bibliographic references that we have omitted.
Mathematical induction is a cornerstone of programming and program verification. It arises in data definitions (e.g., some algebraic data structures can be described using induction), it underlies program semantics (e.g., it explains how to reason about finite iteration and recursion), and it gets used in proofs (e.g., supporting lemmas about data structures use inductive proofs). Whereas induction deals with finite things (data, behavior, etc.), its dual, coinduction, deals with possibly infinite things. Coinduction, too, is important in programming and program verification, where it arises in data definitions (e.g., lazy data structures), semantics (e.g., concurrency), and proofs (e.g., showing refinement in a coinductive big-step semantics). It is thus desirable to have good support for both induction and coinduction in a system for constructing and reasoning about programs.
Codatatypes and corecursive functions make it possible to use lazily evaluated data structures (like in Haskell or Agda). Copredicates, defined by greatest fixpoints, let programs state properties of such data structures (as can also be done in, for example, Coq). For the purpose of writing coinductive proofs in the language, we introduce colemmas. Ostensibly, a colemma invokes the coinduction hypothesis much like an inductive proof invokes the induction hypothesis. Underneath the hood, our coinductive proofs are actually approached via induction: colemmas provide a syntactic veneer around this approach.
The following example gives a taste of how the coinductive features in Dafny come together to give straightforward definitions of infinite matters.
// infinite streams
codatatype IStream<T> = ICons(head: T, tail: IStream)
// pointwise product of streams
function Mult(a: IStream<int>, b: IStream<int>): IStream<int>
{ ICons(a.head * b.head, Mult(a.tail, b.tail)) }
// lexicographic order on streams
copredicate Below(a: IStream<int>, b: IStream<int>)
{ a.head <= b.head && ((a.head == b.head) ==> Below(a.tail, b.tail)) }
// a stream is Below its Square
colemma Theorem_BelowSquare(a: IStream<int>)
ensures Below(a, Mult(a, a))
{ assert a.head <= Mult(a, a).head;
if a.head == Mult(a, a).head {
Theorem_BelowSquare(a.tail);
}
}
// an incorrect property and a bogus proof attempt
colemma NotATheorem_SquareBelow(a: IStream<int>)
ensures Below(Mult(a, a), a); // ERROR
{
NotATheorem_SquareBelow(a);
}
It defines a type IStream
of infinite streams, with constructor ICons
and
destructors head
and tail
. Function Mult
performs pointwise
multiplication on infinite streams of integers, defined using a
corecursive call (which is evaluated lazily). Copredicate Below
is
defined as a greatest fixpoint, which intuitively means that the
copredicate will take on the value true if the recursion goes on forever
without determining a different value. The colemma states the theorem
Below(a, Mult(a, a))
. Its body gives the proof, where the recursive
invocation of the colemma corresponds to an invocation of the
coinduction hypothesis.
The proof of the theorem stated by the first colemma lends
itself to the following intuitive reading: To prove that a
is below
Mult(a, a)
, check that their heads are ordered and, if the heads are
equal, also prove that the tails are ordered. The second colemma states
a property that does not always hold; the verifier is not fooled by the
bogus proof attempt and instead reports the property as unproved.
We argue that these definitions in Dafny are simple enough to level the
playing field between induction (which is familiar) and coinduction
(which, despite being the dual of induction, is often perceived as eerily
mysterious). Moreover, the automation provided by our SMT-based verifier
reduces the tedium in writing coinductive proofs. For example, it
verifies Theorem_BelowSquare
from the program text given above; no
additional lemmas or tactics are needed. In fact, as a consequence of the
automatic-induction heuristic in Dafny, the verifier will
automatically verify Theorem_BelowSquare even given an empty body.
Just like there are restrictions on when an inductive hypothesis can be invoked, there are restrictions on how a coinductive hypothesis can be used. These are, of course, taken into consideration by our verifier. For example, as illustrated by the second colemma above, invoking the coinductive hypothesis in an attempt to obtain the entire proof goal is futile. (We explain how this works in section 18.2.4.1.) Our initial experience with coinduction in Dafny shows it to provide an intuitive, low-overhead user experience that compares favorably to even the best of today's interactive proof assistants for coinduction. In addition, the coinductive features and verification support in Dafny have other potential benefits. The features are a stepping stone for verifying functional lazy programs with Dafny. Coinductive features have also been shown to be useful in defining language semantics, as needed to verify the correctness of a compiler, so this opens the possibility that such verifications can benefit from SMT automation.
The Dafny programming language supports functions and methods. A function in Dafny is a mathematical function (i.e., it is well-defined, deterministic, and pure), whereas a method is a body of statements that can mutate the state of the program. A function is defined by its given body, which is an expression. To ensure that function definitions are mathematically consistent, Dafny insists that recursive calls be well-founded, enforced as follows: Dafny computes the call graph of functions. The strongly connected components within it are clusters of mutually recursive definitions arranged in a DAG. This stratifies the functions so that a call from one cluster in the DAG to a lower cluster is allowed arbitrarily. For an intra-cluster call, Dafny prescribes a proof obligation that gets taken through the program verifier's reasoning engine. Semantically, each function activation is labeled by a rank: a lexicographic tuple determined by evaluating the function's decreases clause upon invocation of the function. The proof obligation for an intra-cluster call is thus that the rank of the callee is strictly less (in a language-defined well-founded relation) than the rank of the caller. Because these well-founded checks correspond to proving termination of executable code, we will often refer to them as “termination checks”. The same process applies to methods.
Lemmas in Dafny are commonly introduced by declaring a method, stating the property of the lemma in the postcondition (keyword ensures) of the method, perhaps restricting the domain of the lemma by also giving a precondition (keyword requires), and using the lemma by invoking the method. Lemmas are stated, used, and proved as methods, but since they have no use at run time, such lemma methods are typically declared as ghost, meaning that they are not compiled into code. The keyword lemma introduces such a method. Control flow statements correspond to proof techniques: case splits are introduced with if statements, recursion and loops are used for induction, and method calls for structuring the proof. Additionally, the statement:
forall x | P(x) { Lemma(x); }
is used to invoke Lemma(x)
on all x
for which P(x)
holds. If
Lemma(x)
ensures Q(x)
, then the forall statement establishes
forall x :: P(x) ==> Q(x).
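Instantiating this pattern with concrete (illustrative) definitions of P, Q, and Lemma:

```dafny
predicate P(x: int) { x > 0 }
predicate Q(x: int) { x >= 1 }

lemma Lemma(x: int)
  requires P(x)
  ensures Q(x)
{
}

lemma Everywhere()
  ensures forall x :: P(x) ==> Q(x)
{
  // invoke Lemma(x) for each x satisfying P(x)
  forall x | P(x) { Lemma(x); }
}
```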
Each value of an inductive datatype is finite, in the sense that it can be constructed by a finite number of calls to datatype constructors. In contrast, values of a coinductive datatype, or codatatype for short, can be infinite. For example, a codatatype can be used to represent infinite trees.
Syntactically, the declaration of a codatatype in Dafny looks like that of a datatype, giving prominence to the constructors (following Coq). The following example defines a codatatype Stream of possibly infinite lists.
codatatype Stream<T> = SNil | SCons(head: T, tail: Stream)
function Up(n: int): Stream<int> { SCons(n, Up(n+1)) }
function FivesUp(n: int): Stream<int>
decreases 4 - (n - 1) % 5
{
if (n % 5 == 0) then
SCons(n, FivesUp(n+1))
else
FivesUp(n+1)
}
Stream
is a coinductive datatype whose values are possibly infinite
lists. Function Up
returns a stream consisting of all integers upwards
of n
and FivesUp
returns a stream consisting of all multiples of 5
upwards of n
. The self-call in Up
and the first self-call in FivesUp
sit in productive positions and are therefore classified as corecursive
calls, exempt from termination checks. The second self-call in FivesUp
is
not in a productive position and is therefore subject to termination
checking; in particular, each recursive call must decrease the rank
defined by the decreases clause.
Analogous to the common finite list datatype, Stream declares two
constructors, SNil
and SCons
. Values can be destructed using match
expressions and statements. In addition, like for inductive datatypes,
each constructor C
automatically gives rise to a discriminator C?
and
each parameter of a constructor can be named in order to introduce a
corresponding destructor. For example, if xs
is the stream
SCons(x, ys)
, then xs.SCons?
and xs.head == x
hold. In contrast
to datatype declarations, there is no grounding check for
codatatypes; since a codatatype admits infinite values, the type is
nevertheless inhabited.
To define values of codatatypes, one could imagine a “cofunction”
language feature: the body of a “cofunction” could include possibly
never-ending self-calls that are interpreted by a greatest fixpoint
semantics (akin to a CoFixpoint in Coq). Dafny uses a different design:
it offers only functions (not “cofunctions”), but it classifies each
intra-cluster call as either recursive or corecursive. Recursive calls
are subject to termination checks. Corecursive calls may be
never-ending, which is what is needed to define infinite values of a
codatatype. For example, function Up(n)
in the preceding example is defined as the
stream of numbers from n
upward: it returns a stream that starts with n
and continues as the corecursive call Up(n + 1)
.
To ensure that corecursive calls give rise to mathematically consistent definitions, they must occur only in productive positions. This says that it must be possible to determine each successive piece of a codatatype value after a finite amount of work. This condition is satisfied if every corecursive call is syntactically guarded by a constructor of a codatatype, which is the criterion Dafny uses to classify intracluster calls as being either corecursive or recursive. Calls that are classified as corecursive are exempt from termination checks.
A consequence of the productivity checks and termination checks is that, even in the
absence of talking about least or greatest fixpoints of self-calling functions, all functions
in Dafny are deterministic. Since there is no issue of several possible fixpoints,
the language allows one function to be involved in both recursive and corecursive calls,
as we illustrate by the function FivesUp
.
Determining properties of codatatype values may require an infinite
number of observations. To that end, Dafny provides copredicates,
which are function declarations that use the copredicate
keyword.
Self-calls to a copredicate need not terminate. Instead, the value
defined is the greatest fixpoint of the given recurrence equations.
Continuing the preceding example, the following code defines a
copredicate that holds for exactly those streams whose payload consists
solely of positive integers. The copredicate definition implicitly also
gives rise to a corresponding prefix predicate, Pos#
. The syntax for
calling a prefix predicate sets apart the argument that specifies the
prefix length, as shown in the last line; for this figure, we took the
liberty of making up a coordinating syntax for the signature of the
automatically generated prefix predicate (which is not part of
Dafny syntax).
copredicate Pos(s: Stream<int>)
{
match s
case SNil => true
case SCons(x, rest) => x > 0 && Pos(rest)
}
// Automatically generated by the Dafny compiler:
predicate Pos#[_k: nat](s: Stream<int>)
decreases _k
{ if _k == 0 then true else
match s
case SNil => true
case SCons(x, rest) => x > 0 && Pos#[_k-1](rest)
}
Some restrictions apply. To guarantee that the greatest fixpoint always exists, the (implicit functor defining the) copredicate must be monotonic. This is enforced by a syntactic restriction on the form of the body of copredicates: after conversion to negation normal form (i.e., pushing negations down to the atoms), intra-cluster calls of copredicates must appear only in positive positions, that is, they must appear as atoms and must not be negated. Additionally, to guarantee soundness later on, we require that they appear in co-friendly positions, that is, in negation normal form, when they appear under existential quantification, the quantification needs to be limited to a finite range^{8}. Since the evaluation of a copredicate might not terminate, copredicates are always ghost. There is also a restriction on the call graph that a cluster containing a copredicate must contain only copredicates, no other kinds of functions.
A copredicate declaration of P
defines not just a copredicate, but
also a corresponding prefix predicate P#
. A prefix predicate is a
finite unrolling of a copredicate. The prefix predicate is constructed
from the copredicate by
adding a parameter _k of type nat to denote the prefix length,
adding the clause “decreases _k;
” to the prefix predicate (the
copredicate itself is not allowed to have a decreases clause),
replacing in the body of the copredicate every intra-cluster
call Q(args)
to a copredicate by a call Q#[_k - 1](args)
to the corresponding prefix predicate, and then
prepending the body with if _k == 0 then true else
.
For example, for copredicate Pos
, the definition of the prefix
predicate Pos#
is as suggested above. Syntactically, the prefix-length
argument passed to a prefix predicate to indicate how many times to
unroll the definition is written in square brackets, as in Pos#[k](s)
.
In the Dafny grammar this is called a HashCall
. The definition of
Pos#
is available only at clusters strictly higher than that of Pos
;
that is, Pos
and Pos#
must not be in the same cluster. In other
words, the definition of Pos
cannot depend on Pos#
.
Equality between two values of a codatatype is a builtin copredicate.
It has the usual equality syntax s == t
, and the corresponding prefix
equality is written s ==#[k] t
. And similarly for s != t
and s !=#[k] t
.
From what we have said so far, a program can make use of properties of
codatatypes. For example, a method that declares Pos(s)
as a
precondition can rely on the stream s
containing only positive integers.
In this section, we consider how such properties are established in the
first place.
Among other possible strategies for establishing coinductive properties
we take the time-honored approach of reducing coinduction to
induction. More precisely, Dafny passes to the SMT solver an
assumption D(P)
for every copredicate P
, where:
D(P) = ∀ x • P(x) <==> ∀ k • P#[k](x)
In other words, a copredicate is true iff its corresponding prefix predicate is true for all finite unrollings.
In Sec. 4 of the paper Coinduction Simply a soundness theorem of such
assumptions is given, provided the copredicates meet the co-friendly
restrictions. An example proof of Pos(Up(n))
for every n > 0
is
shown here:
lemma UpPosLemma(n: int)
requires n > 0
ensures Pos(Up(n))
{
forall k | 0 <= k { UpPosLemmaK(k, n); }
}
lemma UpPosLemmaK(k: nat, n: int)
requires n > 0
ensures Pos#[k](Up(n))
decreases k
{
if k != 0 {
// this establishes Pos#[k-1](Up(n).tail)
UpPosLemmaK(k-1, n+1);
}
}
The lemma UpPosLemma
proves Pos(Up(n))
for every n > 0
. We first
show Pos#[k](Up(n))
, for n > 0
and an arbitrary k
, and then use
the forall statement to show ∀ k • Pos#[k](Up(n))
. Finally, the axiom
D(Pos)
is used (automatically) to establish the copredicate.
As we just showed, with help of the D
axiom we can now prove a
copredicate by inductively proving that the corresponding prefix
predicate holds for all prefix lengths k
. In this section, we introduce
colemma declarations, which bring about two benefits. The first benefit
is that colemmas are syntactic sugar and reduce the tedium of having to
write explicit quantifications over k
. The second benefit is that, in
simple cases, the bodies of colemmas can be understood as coinductive
proofs directly. As an example consider the following colemma.
colemma UpPosLemma(n: int)
requires n > 0
ensures Pos(Up(n))
{
UpPosLemma(n+1);
}
This colemma can be understood as follows: UpPosLemma
invokes itself
corecursively to obtain the proof for Pos(Up(n).tail)
(since Up(n).tail
equals Up(n+1)
). The proof glue needed to then conclude Pos(Up(n))
is
provided automatically, thanks to the power of the SMT-based verifier.
To understand why the above UpPosLemma
colemma code is a sound proof,
let us now describe the details of the desugaring of colemmas. In
analogy to how a copredicate declaration defines both a copredicate and
a prefix predicate, a colemma declaration defines both a colemma and
prefix lemma. In the call graph, the cluster containing a colemma must
contain only colemmas and prefix lemmas, no other methods or functions.
By decree, a colemma and its corresponding prefix lemma are always
placed in the same cluster. Both colemmas and prefix lemmas are always
ghosts.
The prefix lemma is constructed from the colemma by
adding a parameter _k
of type nat
to denote the prefix length,
replacing in the colemma's postcondition the positive co-friendly
occurrences of copredicates by corresponding prefix predicates,
passing in _k
as the prefix-length argument,
prepending _k
to the (typically implicit) decreases clause of the colemma,
replacing in the body of the colemma every intra-cluster call
M(args)
to a colemma by a call M#[_k - 1](args)
to the
corresponding prefix lemma, and then
making the body's execution conditional on _k != 0
.
Note that this rewriting removes all corecursive calls of colemmas,
replacing them with recursive calls to prefix lemmas. These recursive
calls are, as usual, checked to be terminating. We allow the predeclared
identifier _k
to appear in the original body of the
colemma.^{9}
We can now think of the body of the colemma as being replaced by a
forall call, for every k, to the prefix lemma. By construction,
this new body will establish the colemma's declared postcondition (on
account of the D
axiom, and remembering that only the positive
co-friendly occurrences of copredicates in the colemma's postcondition
are rewritten), so there is no reason for the program verifier to check
it.
The actual desugaring of our colemma UpPosLemma
is in fact the
previous code for the UpPosLemma
lemma except that UpPosLemmaK
is
named UpPosLemma#
and modulo a minor syntactic difference in how the
k
argument is passed.
In the recursive call of the prefix lemma, there is a proof obligation
that the prefix-length argument _k - 1
is a natural number.
Conveniently, this follows from the fact that the body has been wrapped
in an if _k != 0
statement. This also means that the postcondition must
hold trivially when _k = 0
, or else a postcondition violation will be
reported. This is an appropriate design for our desugaring, because
colemmas are expected to be used to establish copredicates, whose
corresponding prefix predicates hold trivially when _k = 0
. (To prove
other predicates, use an ordinary lemma, not a colemma.)
It is interesting to compare the intuitive understanding of the coinductive proof using a colemma with the inductive proof using a lemma. Whereas the inductive proof is performing proofs for deeper and deeper equalities, the colemma can be understood as producing the infinite proof on demand.
NewtypeDecl = "newtype" { Attribute } NewtypeName "="
( NumericTypeName [ ":" Type ] "|" Expression(allowLemma: false, allowLambda: true)
| Type
)
A new numeric type can be declared with the newtype declaration^{10}, for example:
newtype N = x: M | Q
where M
is a numeric type and Q
is a boolean expression that can
use x
as a free variable. If M
is an integerbased numeric type,
then so is N
; if M
is realbased, then so is N
. If the type M
can be inferred from Q
, the “: M
” can be omitted. If Q
is just
true
, then the declaration can be given simply as:
newtype N = M
Type M
is known as the base type of N
.
A newtype is a numeric type that supports the same operations as its
base type. The newtype is distinct from and incompatible with other
numeric types; in particular, it is not assignable to its base type
without an explicit conversion. An important difference between the
operations on a newtype and the operations on its base type is that
the newtype operations are defined only if the result satisfies the
predicate Q
, and likewise for the literals of the
newtype.^{11}
For example, suppose lo
and hi
are integer-based numerics that
satisfy 0 <= lo <= hi
and consider the following code fragment:
var mid := (lo + hi) / 2;
If lo
and hi
have type int
, then the code fragment is legal; in
particular, it never overflows, since int
has no upper bound. In
contrast, if lo
and hi
are variables of a newtype int32
declared
as follows:
newtype int32 = x | -0x80000000 <= x < 0x80000000
then the code fragment is erroneous, since the result of the addition
may fail to satisfy the predicate in the definition of int32
. The
code fragment can be rewritten as
var mid := lo + (hi - lo) / 2;
in which case it is legal for both int
and int32
.
Since a newtype is incompatible with its base type and since all
results of the newtype's operations are members of the newtype, a
compiler for Dafny is free to specialize the runtime representation
of the newtype. For example, by scrutinizing the definition of
int32
above, a compiler may decide to store int32
values using
signed 32-bit integers in the target hardware.
Note that the bound variable x
in Q
has type M
, not N
.
Consequently, it may not be possible to state Q
about the N
value. For example, consider the following type of 8-bit two's
complement integers:
newtype int8 = x: int | -128 <= x < 128
and consider a variable c
of type int8
. The expression
-128 <= c < 128
is not welldefined, because the comparisons require each operand to
have type int8
, which means the literal 128
is checked to be of
type int8
, which it is not. A proper way to write this expression
would be to use a conversion operation, described next, on c
to
convert it to the base type:
-128 <= int(c) < 128
If possible, Dafny will represent values of the newtype using
a native data type for the sake of efficiency. This behavior can
be inhibited, or a specific native data type selected, by
using the {:nativeType}
attribute, as explained in
section 24.1.11.
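For illustration only (the exact argument forms accepted by the attribute are described in section 24.1.11, and the type names used here are examples, not a fixed list), a declaration might look like this:

```dafny
// Request a native 8-bit representation for this newtype, assuming the
// compiler supports a native type whose range covers 0..255.
newtype {:nativeType "byte"} uint8 = x: int | 0 <= x < 256

// Inhibit any native representation; values use the default one.
newtype {:nativeType false} small = x: int | 0 <= x < 100
```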
There is a restriction that the value 0
must be part of every
newtype.^{12}
For every numeric type N
, there is a conversion function with the
same name. It is a partial identity function. It is defined when the
given value, which can be of any numeric type, is a member of the type
converted to. When the conversion is from a real-based numeric type
to an integer-based numeric type, the operation requires that the
real-based argument has no fractional part. (To round a real-based
numeric value down to the nearest integer, use the .Trunc
member,
see Section 6.1.)
To illustrate using the example from above, if lo
and hi
have type
int32
, then the code fragment can legally be written as follows:
var mid := (int(lo) + int(hi)) / 2;
where the type of mid
is inferred to be int
. Since the result
value of the division is a member of type int32
, one can introduce
yet another conversion operation to make the type of mid
be int32
:
var mid := int32((int(lo) + int(hi)) / 2);
If the compiler does specialize the runtime representation for
int32
, then these statements come at the expense of two,
respectively three, runtime conversions.
NatType_ = "nat"
A subset type is a restricted use of an existing type, called the base type of the subset type. A subset type is like a combined use of the base type and a predicate on the base type.
An assignment from a subset type to its base type is always allowed. An assignment in the other direction, from the base type to a subset type, is allowed provided the value assigned does indeed satisfy the predicate of the subset type. (Note, in contrast, assignments between a newtype and its base type are never allowed, even if the value assigned is a value of the target type. For such assignments, an explicit conversion must be used, see Section 19.0.)
Dafny supports one subset type, namely the built-in type nat
,
whose base type is int
.^{13} Type nat
designates the nonnegative subrange of int
. A simple example that
puts subset type nat
to good use is the standard Fibonacci
function:
function Fib(n: nat): nat
{
if n < 2 then n else Fib(n-2) + Fib(n-1)
}
An equivalent, but clumsy, formulation of this function (modulo the
wording of any error messages produced at call sites) would be to use
type int
and to write the restricting predicate in pre and
postconditions:
function Fib(n: int): int
requires 0 <= n; // the function argument must be nonnegative
ensures 0 <= Fib(n); // the function result is nonnegative
{
if n < 2 then n else Fib(n-2) + Fib(n-1)
}
Type inference will never infer the type of a variable to be a
subset type. It will instead infer the type to be the base type
of the subset type. For example, the type of x
in
forall x :: P(x)
will be int
, even if predicate P
declares its argument to have
type nat
.
Stmt = ( BlockStmt | AssertStmt | AssumeStmt | PrintStmt | UpdateStmt
| VarDeclStatement | IfStmt | WhileStmt | MatchStmt | ForallStmt
| CalcStmt | ModifyStmt | LabeledStmt_ | BreakStmt_ | ReturnStmt
| YieldStmt | SkeletonStmt
)
Many of Dafny's statements are similar to those in traditional programming languages, but a number of them are significantly different. This grammar production shows the different kinds of Dafny statements. They are described in subsequent sections.
LabeledStmt_ = "label" LabelName ":" Stmt
A labeled statement is just the keyword label
followed by an
identifier (the label), followed by a colon and a
statement. The label may be referenced in a break statement
to transfer control to the location after that statement.
BreakStmt_ = "break" ( LabelName | { "break" } ) ";"
A break statement breaks out of one or more loops (if the
statement consists solely of one or more break
keywords),
or else transfers control to just past the statement
bearing the referenced label, if a label was used.
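For example, the following sketch labels an outer loop so that a break statement in the inner loop can exit both loops at once; here break break; would have the same effect as break outer;:

```dafny
method HasDuplicate(a: array<int>) returns (dup: bool)
{
  dup := false;
  var i := 0;
  label outer:
  while i < a.Length
  {
    var j := i + 1;
    while j < a.Length
    {
      if a[i] == a[j] {
        dup := true;
        break outer;  // control resumes just past the labeled while loop
      }
      j := j + 1;
    }
    i := i + 1;
  }
}
```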
BlockStmt = "{" { Stmt } "}"
A block statement is just a sequence of statements enclosed by curly braces.
ReturnStmt = "return" [ Rhs { "," Rhs } ] ";"
A return statement can only be used in a method. It is used to terminate the execution of the method. To return a value from a method, the value is assigned to one of the named return values sometime before a return statement. In fact, the return values act very much like local variables, and can be assigned to more than once. Return statements are used when one wants to return before reaching the end of the body block of the method. Return statements can be just the return keyword (where the current values of the out parameters are used), or they can take a list of values to return. If a list is given, the number of values given must be the same as the number of named return values.
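The following method illustrates both forms of the return statement:

```dafny
method MinMax(x: int, y: int) returns (min: int, max: int)
  ensures min <= max
{
  if x < y {
    min, max := x, y;
    return;        // returns the current values of min and max
  }
  return y, x;     // shorthand for: min, max := y, x; return;
}
```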
YieldStmt = "yield" [ Rhs { "," Rhs } ] ";"
A yield statement can only be used in an iterator. See section Iterator types for more details about iterators.
The body of an iterator is a coroutine. The yield statement is used to yield control to the iterator's caller, signaling that a new set of values for the iterator's yield parameters (if any) are available. Values are assigned to the yield parameters at or before a yield statement. In fact, the yield parameters act very much like local variables, and can be assigned to more than once. Yield statements are used when one wants to return new yield parameter values to the caller. Yield statements can be just the yield keyword (where the current values of the yield parameters are used), or they can take a list of values to yield. If a list is given, the number of values given must be the same as the number of named yield parameters.
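As a small sketch, the following iterator yields the natural numbers below n; the yield statement with a value is shorthand for assigning the yield parameter and then yielding:

```dafny
iterator UpTo(n: nat) yields (x: nat)
{
  var i := 0;
  while i < n
  {
    yield i;   // same as: x := i; yield;
    i := i + 1;
  }
}
```

A caller allocates the iterator with new UpTo(5) and advances it by calling MoveNext(), reading the current value from the x member after each successful call.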
UpdateStmt = Lhs { "," Lhs }
( ":=" Rhs { "," Rhs }
| ":|" [ "assume" ] Expression(allowLemma: false, allowLambda: true)
)
";"
The update statement has two forms. The first, and more common, form
allows parallel assignment of right-hand-side values to the
left-hand sides. For example, x, y := y, x
swaps the values
of x
and y
. Of course the common case will have only one
rhs and one lhs.
The form that uses “:|
” assigns to the left-hand-side
variables some values such that the boolean expression on the right-hand side
is satisfied. This can be used to make a choice as in the
following example where we choose an element in a set.
function PickOne<T>(s: set<T>): T
requires s != {}
{
var x :| x in s; x
}
}
Dafny will report an error if it cannot prove that values exist which satisfy the condition.
In addition, though the choice is arbitrary, given identical circumstances the choice will be made consistently.
In the actual grammar two additional forms are recognized for purposes of error detection. The form:
Lhs { Attribute} ;
is assumed to be a malformed call.
The form
Lhs ":"
is diagnosed as a label in which the user forgot the label keyword.
VarDeclStatement = [ "ghost" ] "var" { Attribute }
(
LocalIdentTypeOptional { "," { Attribute } LocalIdentTypeOptional }
[ ":=" Rhs { "," Rhs }
| { Attribute } ":|" [ "assume" ] Expression(allowLemma: false, allowLambda: true)
]
|
"(" CasePattern { "," CasePattern } ")"
":=" Expression(allowLemma: false, allowLambda: true)
)
";"
A VarDeclStatement
is used to declare one or more local variables in a method or function.
The type of each local variable must be given unless the variable is given an initial
value in which case the type will be inferred. If initial values are given, the number of
values must match the number of variables declared.
Note that the type of each variable must be given individually. The following code
var x, y : int;
does not declare both x
and y
to be of type int
. Rather it will give an
error explaining that the type of x
is underspecified.
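To declare both variables as int, give each its own type annotation, or supply initial values from which the types can be inferred:

```dafny
method TypesExample()
{
  var x: int, y: int;   // each variable gets its own type annotation
  var a, b := 0, 1;     // the types of a and b are inferred as int
}
```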
The left-hand side can also contain a tuple of patterns which will be matched against the right-hand side. For example:
function returnsTuple() : (int, int)
{
(5, 10)
}
function usesTuple() : int
{
var (x, y) := returnsTuple();
x + y
}
Guard = ( "*" | "(" "*" ")" | Expression(allowLemma: true, allowLambda: true) )
Guards are used in if
and while
statements as boolean expressions. Guards
take two forms.
The first and most common form is just a boolean expression.
The second form is either *
or (*)
. These have the same meaning. An
unspecified boolean value is returned. The value returned
may be different each time it is executed.
BindingGuard(allowLambda) =
IdentTypeOptional { "," IdentTypeOptional } { Attribute }
":|" Expression(allowLemma: true, allowLambda)
A BindingGuard
is used as a condition in an IfStmt
.
It binds the identifiers declared in the IdentTypeOptional
s.
If there exists one or more assignments of values to the bound identifiers
for which Expression
is true, then the BindingGuard
returns true and the identifiers are bound to values that make the
Expression
true.
The identifiers bound by BindingGuard
are ghost variables
and cannot be assigned to nonghost variables. They are only
used in specification contexts.
Here is an example:
predicate P(n: int)
{
n % 2 == 0
}
method M1() returns (ghost y: int)
requires ∃ x :: P(x)
ensures P(y)
{
if x : int :| P(x) {
y := x;
}
}
IfStmt = "if"
( IfAlternativeBlock
|
( BindingGuard(allowLambda: true)
| Guard
| "…"
)
BlockStmt [ "else" ( IfStmt | BlockStmt ) ]
)
In the simplest form an if
statement uses a guard that is a boolean
expression. It then has the same form as in C# and other common
programming languages. For example
if x < 0 {
x := -x;
}
If the guard is an asterisk then a nondeterministic choice is made:
if * {
print "True";
} else {
print "False";
}
IfAlternativeBlock =
"{" { "case"
(
BindingGuard(allowLambda:false)
| Expression(allowLemma: true, allowLambda: false)
) "=>" { Stmt } } "}" .
The if
statement using the IfAlternativeBlock
form is similar to the
if ... fi
construct used in the book “A Discipline of Programming” by
Edsger W. Dijkstra. It is used for a multi-branch if
.
For example:
if {
case x <= y => max := y;
case y <= x => max := x;
}
In this form the expressions following the case
keyword are called
guards. The statement is evaluated by evaluating the guards in an
undetermined order until one is found that is true
or else all have
evaluated to false
. If none of them evaluate to true
then the if
statement does nothing. Otherwise the statements to the right of =>
for the guard that evaluated to true
are executed.
WhileStmt = "while"
( LoopSpecWhile WhileAlternativeBlock
| ( Guard | "…" ) LoopSpec
( BlockStmt
| "…"
| /* go bodyless */
)
)
WhileAlternativeBlock =
"{" { "case" Expression(allowLemma: true, allowLambda: false) "=>" { Stmt } } "}" .
See section 4.5 for a description of LoopSpec
.
The while
statement is Dafny's only loop statement. It has two general
forms.
The first form is similar to a while loop in a C-like language. For example:
var i := 0;
while i < 5 {
i := i + 1;
}
In this form the condition following the while
is one of these:

- a boolean expression, which is the common case;
- an asterisk (*
), meaning nondeterministically yield either
true
or false
as the value of the condition;
- an ellipsis (...
), which makes the while statement a skeleton
while
statement, a form used in module refinement where the omitted
condition is inherited from the corresponding loop of the module
being refined.

The body of the loop is usually a block statement, but it can also be a
skeleton, denoted by ellipsis, or missing altogether. These forms, too,
are intended for module refinement: unlike an empty block statement,
which says that the loop body does nothing, a skeleton or missing body
is a placeholder to be filled in by a refining module.
The second form uses the WhileAlternativeBlock
. It is similar to the
do ... od
construct used in the book “A Discipline of Programming” by
Edsger W. Dijkstra. For example:
while
decreases if 0 <= r then r else -r;
{
case r < 0 =>
r := r + 1;
case 0 < r =>
r := r - 1;
}
For this form the guards are evaluated in some undetermined order until one is found that is true, in which case the corresponding statements are executed. If none of the guards evaluates to true then the loop execution is terminated.
For some simple loops such as those mentioned previously Dafny can figure
out what the loop is doing without more help. However in general the user
must provide more information in order to help Dafny prove the effect of
the loop. This information is provided by a LoopSpec
. A
LoopSpec
provides information about invariants, termination, and
what the loop modifies. LoopSpecs
are explained in
section 4.5. However the following sections
present additional rationale and tutorial on loop specifications.
While
loops present a problem for Dafny. There is no way for Dafny to
know in advance how many times the code will go around the loop. But
Dafny needs to consider all paths through a program, which could include
going around the loop any number of times. To make it possible for Dafny
to work with loops, you need to provide loop invariants, another kind of
annotation.
A loop invariant is an expression that holds upon entering a loop, and after every execution of the loop body. It captures something that is invariant, i.e. does not change, about every step of the loop. Now, obviously we are going to want to change variables, etc. each time around the loop, or we wouldn't need the loop. Like pre and postconditions, an invariant is a property that is preserved for each execution of the loop, expressed using the same boolean expressions we have seen. For example,
var i := 0;
while i < n
invariant 0 <= i
{
i := i + 1;
}
When you specify an invariant, Dafny proves two things: the invariant holds upon entering the loop, and it is preserved by the loop. By preserved, we mean that assuming that the invariant holds at the beginning of the loop, we must show that executing the loop body once makes the invariant hold again. Dafny can only know upon analyzing the loop body what the invariants say, in addition to the loop guard (the loop condition). Just as Dafny will not discover properties of a method on its own, it will not know any but the most basic properties of a loop are preserved unless it is told via an invariant.
Dafny proves that code terminates, i.e. does not loop forever, by using
decreases
annotations. For many things, Dafny is able to guess the right
annotations, but sometimes it needs to be made explicit. In fact, for all
of the code we have seen so far, Dafny has been able to do this proof on
its own, which is why we haven't seen the decreases annotation explicitly
yet. There are two places Dafny proves termination: loops and recursion.
Both of these situations require either an explicit annotation or a
correct guess by Dafny.
A decreases
annotation, as its name suggests, gives Dafny an expression
that decreases with every loop iteration or recursive call. There are two
conditions that Dafny needs to verify when using a decreases expression: that the expression actually gets smaller with each loop iteration or recursive call, and that it is bounded from below.
Many times, an integral value (natural or plain integer) is the quantity that decreases, but other things can be used as well. In the case of integers, the bound is assumed to be zero. For example, the following is a proper use of decreases on a loop (with its own keyword, of course):
while 0 < i
invariant 0 <= i
decreases i
{
i := i - 1;
}
Here Dafny has all the ingredients it needs to prove termination. The variable i gets smaller each loop iteration, and is bounded below by zero. This is fine, except the loop is backwards from most loops, which tend to count up instead of down. In this case, what decreases is not the counter itself, but rather the distance between the counter and the upper bound. A simple trick for dealing with this situation is given below:
while i < n
invariant 0 <= i <= n
decreases n  i
{
i := i + 1;
}
This is actually Dafny's guess for this situation, as it sees i < n
and
assumes that n - i
is the quantity that decreases. The upper bound of the
loop invariant implies that 0 <= n - i
, and gives Dafny a lower bound on
the quantity. This also works when the bound n
is not constant, such as
in the binary search algorithm, where two quantities approach each other,
and neither is fixed.
If the decreases clause of a loop specified “*”, then no termination check will be performed. Use of this feature is sound only with respect to partial correctness.
In some cases we also must specify what memory locations the loop body
is allowed to modify. This is done using a modifies
clause.
See the discussion of framing in methods for a fuller discussion.
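For example, in the following method the enclosing modifies clause permits the loop body to update the array, and the invariants record what the loop has accomplished so far:

```dafny
method Zero(a: array<int>)
  modifies a
  ensures forall k :: 0 <= k < a.Length ==> a[k] == 0
{
  var i := 0;
  while i < a.Length
    invariant 0 <= i <= a.Length
    invariant forall k :: 0 <= k < i ==> a[k] == 0
  {
    a[i] := 0;    // permitted because a is in the modifies clause
    i := i + 1;
  }
}
```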
MatchStmt = "match" Expression(allowLemma: true, allowLambda: true)
( "{" { CaseStatement } "}"
| { CaseStatement }
)
CaseStatement = CaseBinding_ "=>" { Stmt }
The match
statement is used to do case analysis on a value of inductive
or coinductive type. The form with no leading Ident
is for matching
tuples. The expression after the match
keyword is the (co)inductive
value being matched. The expression is evaluated and then matched against
each of the case clauses.
There must be a case clause for each constructor of the data type.
The identifier after the case
keyword in a case clause, if present,
must be the name of one of the data type's constructors.
If the constructor takes parameters then a parenthesisenclosed
list of identifiers (with optional type) must follow the
constructor. There must be as many identifiers as the constructor
has parameters. If the optional type is given it must be the same
as the type of the corresponding parameter of the constructor.
If no type is given then the type of the corresponding parameter
is the type assigned to the identifier.
When an inductive value that was created using constructor
expression C1(v1, v2)
is matched against a case clause
C2(x1, x2)
, there is a match provided that C1
and C2
are the
same constructor. In that case x1
is bound to value v1
and
x2
is bound to v2
. The identifiers in the case pattern
are not mutable. Here is an example of the use of a match
statement.
datatype Tree = Empty  Node(left: Tree, data: int, right: Tree)
// Return the sum of the data in a tree.
method Sum(x: Tree) returns (r: int)
{
match x {
case Empty => r := 0;
case Node(t1 : Tree, d, t2) => {
var v1 := Sum(t1);
var v2 := Sum(t2);
r := v1 + d + v2;
}
}
}
Note that the Sum
method is recursive yet has no decreases
annotation.
In this case it is not needed because Dafny is able to deduce that
t1
and t2
are smaller (structurally) than x
. If Tree
had been
coinductive this would not have been possible since x
might have been
infinite.
AssertStmt =
"assert" { Attribute }
( Expression(allowLemma: false, allowLambda: true)
| "…"
) ";"
Assert
statements are used to express logical propositions that are
expected to be true. Dafny will attempt to prove that the assertion
is true and give an error if not. Once it has proved the assertion
it can then use its truth to aid in following deductions.
Thus if Dafny is having a difficult time verifying a method,
the user may help by inserting assertions that Dafny can prove,
and whose truth may aid in the larger verification effort.
If the proposition is the ellipsis “…”, the assert statement is a
skeleton form used in module refinement, where the asserted expression
is inherited from the corresponding assertion of the module being refined.
AssumeStmt =
"assume" { Attribute }
( Expression(allowLemma: false, allowLambda: true)
| "…"
) ";"
The Assume
statement lets the user specify a logical proposition
that Dafny may assume to be true without proof. If in fact the
proposition is not true this may lead to invalid conclusions.
An Assume
statement would ordinarily be used as part of a larger
verification effort where verification of some other part of
the program required the proposition. By using the Assume
statement
the other verification can proceed. Then when that is completed the
user would come back and replace the assume
with assert
.
If the proposition is the ellipsis “…”, the assume statement is a
skeleton form used in module refinement, where the assumed expression
is inherited from the corresponding statement of the module being refined.
PrintStmt =
"print" Expression(allowLemma: false, allowLambda: true)
{ "," Expression(allowLemma: false, allowLambda: true) } ";"
The print
statement is used to print the values of a commaseparated
list of expressions to the console. The generated C# code uses
the System.Object.ToString()
method to convert the values to printable
strings. The expressions may of course include strings that are used
for captions. There is no implicit new line added, so to get a new
line you should include “\n” as part of one of the expressions.
Dafny automatically creates overrides for the ToString() method
for Dafny data types. For example,
datatype Tree = Empty  Node(left: Tree, data: int, right: Tree)
method Main()
{
var x : Tree := Node(Node(Empty, 1, Empty), 2, Empty);
print "x=", x, "\n";
}
produces this output:
x=Tree.Node(Tree.Node(Tree.Empty, 1, Tree.Empty), 2, Tree.Empty)
ForallStmt = "forall"
( "(" [ QuantifierDomain ] ")"
| [ QuantifierDomain ]
)
{ [ "free" ] ForAllEnsuresClause_ }
[ BlockStmt ]
The forall
statement executes ensures expressions or a body in
parallel for all quantified values in the specified range.
The use of the parallel
keyword is deprecated. Use
forall
instead. There are several variant uses of the forall
statement. And there are a number of restrictions.
In particular a forall
statement can be classified as one of the following:

- Assign: the forall
statement is used for simultaneous assignment.
The target must be an array element or an object field.
- Call: the body consists of a single call to a ghost method without
side effects.
- Proof: the forall
has ensures
expressions which are effectively
quantified or proved by the body (if present).

An assign forall
statement performs simultaneous assignment.
The following is an excerpt of an example given by Leino in
Developing Verified Programs with Dafny.
When the buffer holding the queue needs to be resized,
the forall
statement is used to simultaneously copy the old contents
into the new buffer.
class {:autocontracts} SimpleQueue<Data>
{
ghost var Contents: seq<Data>;
var a: array<Data>; // Buffer holding contents of queue.
var m: int; // Index of head of queue.
var n: int; // Index just past end of queue.
...
method Enqueue(d: Data)
ensures Contents == old(Contents) + [d]
{
if n == a.Length {
var b := a;
if m == 0 { b := new Data[2 * a.Length]; }
forall (i | 0 <= i < n - m) {
b[i] := a[m + i];
}
a, m, n := b, 0, n - m;
}
a[n], n, Contents := d, n + 1, Contents + [d];
}
}
Here is an example of a call forall
statement and the
callee. This is contained in the CloudMakeConsistentBuilds.dfy
test in the Dafny repository.
forall (cmd', deps', e' | Hash(Loc(cmd', deps', e')) == Hash(Loc(cmd, deps, e))) {
HashProperty(cmd', deps', e', cmd, deps, e);
}
ghost method HashProperty(cmd: Expression, deps: Expression, ext: string,
cmd': Expression, deps': Expression, ext': string)
requires Hash(Loc(cmd, deps, ext)) == Hash(Loc(cmd', deps', ext'))
ensures cmd == cmd' && deps == deps' && ext == ext'
From the same file here is an example of a proof forall
statement.
forall (p | p in DomSt(stCombinedC.st) && p in DomSt(stExecC.st))
ensures GetSt(p, stCombinedC.st) == GetSt(p, stExecC.st)
{
assert DomSt(stCombinedC.st) <= DomSt(stExecC.st);
assert stCombinedC.st == Restrict(DomSt(stCombinedC.st), stExecC.st);
}
More generally the statement
forall x | P(x) { Lemma(x); }
is used to invoke Lemma(x)
on all x
for which P(x)
holds. If
Lemma(x)
ensures Q(x)
, then the forall statement establishes
forall x :: P(x) ==> Q(x).
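The following self-contained sketch shows this pattern; P, Q, and Lemma are left body-less here and stand for arbitrary predicates and a lemma with the indicated specification:

```dafny
predicate P(x: int)
predicate Q(x: int)

lemma Lemma(x: int)
  requires P(x)
  ensures Q(x)

lemma All()
  ensures forall x :: P(x) ==> Q(x)
{
  forall x | P(x) {
    Lemma(x);   // each invocation establishes Q(x) for that x
  }
}
```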
The forall
statement is also used extensively in the desugared forms of
copredicates and colemmas. See section 18.2.
TODO: List all of the restrictions on the forall
statement.
ModifyStmt =
"modify" { Attribute }
( FrameExpression(allowLemma: false, allowLambda: true)
{ "," FrameExpression(allowLemma: false, allowLambda: true) }
| "…"
)
( BlockStmt | ";" )
The modify
statement has two forms which have two different
purposes.
When the modify
statement ends with a semicolon rather than
a block statement its effect is to say that some undetermined
modifications have been made to any or all of the memory
locations specified by the frame expressions.
In the following example, a value is assigned to field x
followed by a modify
statement that may modify any field
in the object. After that we can no longer prove that the field
x
still has the value we assigned to it.
class MyClass {
var x: int;
method N()
modifies this
{
x := 18;
modify this;
assert x == 18; // error: cannot conclude this here
}
}
When the modify
statement is followed by a block statement
we are instead specifying what can be modified in that
block statement. Namely, only memory locations specified
by the frame expressions of the block modify
statement
may be modified. Consider the following example.
class ModifyBody {
var x: int;
var y: int;
method M0()
modifies this
{
modify {} {
x := 3; // error: violates modifies clause of the modify statement
}
}
method M1()
modifies this
{
modify {} {
var o := new ModifyBody;
o.x := 3; // fine
}
}
method M2()
modifies this
{
modify this {
x := 3;
}
}
}
The first modify
statement in the example has an empty
frame expression so it cannot modify any memory locations.
So an error is reported when it tries to modify field x
.
The second modify
statement also has an empty frame
expression. But it allocates a new object and modifies it.
Thus we see that the frame expressions on a block modify
statement only limit what may be modified of existing
memory. It does not limit what may be modified in
new memory that is allocated.
The third modify
statement has a frame expression that
allows it to modify any of the fields of the current object,
so the modification of field x
is allowed.
CalcStmt = "calc" { Attribute } [ CalcOp ] "{" CalcBody "}"
CalcBody = { CalcLine [ CalcOp ] Hints }
CalcLine = Expression(allowLemma: false, allowLambda: true) ";"
Hints = { ( BlockStmt | CalcStmt ) }
CalcOp =
( "==" [ "#" "[" Expression(allowLemma: true, allowLambda: true) "]" ]
| "<" | ">"
| "!=" | "<=" | ">="
| "<==>" | "==>" | "<=="
)
The calc
statement supports calculational proofs using a language feature called program-oriented calculations (poC). This feature was introduced and explained in the Verified Calculations paper by
Leino and Polikarpova[22]. Please see that paper for a more complete explanation
of the calc
statement. We here mention only the highlights.
Calculational proofs are proofs by stepwise formula manipulation as is taught in elementary algebra. The typical example is to prove an equality by starting with a lefthandside, and through a series of transformations morph it into the desired righthandside.
Non-syntactic rules further restrict hints to only ghost and side-effect free statements, as well as impose a constraint that only chain-compatible operators can be used together in a calculation. The notion of chain-compatibility is quite intuitive for the operators supported by poC; for example, it is clear that “<” and “>” cannot be used within the same calculation, as there would be no relation to conclude between the first and the last line. See the paper for a more formal treatment of chain-compatibility.
Note that we allow a single occurrence of the intransitive operator “!=” to
appear in a chain of equalities (that is, “!=” is chain-compatible with
equality but not with any other operator, including itself). Calculations
with fewer than two lines are allowed, but have no effect. If a step
operator is omitted, it defaults to the calculation-wide operator,
defined after the calc
keyword. If that operator is omitted, it defaults
to equality.
Here is an example using calc
statements to prove an elementary
algebraic identity. As it turns out Dafny is able to prove this without
the calc
statements, but it helps to illustrate the syntax.
lemma docalc(x : int, y: int)
ensures (x + y) * (x + y) == x * x + 2 * x * y + y * y
{
calc {
(x + y) * (x + y); ==
// distributive law: (a + b) * c == a * c + b * c
x * (x + y) + y * (x + y); ==
// distributive law: a * (b + c) == a * b + a * c
x * x + x * y + y * x + y * y; ==
calc {
y * x; ==
x * y;
}
x * x + x * y + x * y + y * y; ==
calc {
x * y + x * y; ==
// a = 1 * a
1 * x * y + 1 * x * y; ==
// Distributive law
(1 + 1) * x * y; ==
2 * x * y;
}
x * x + 2 * x * y + y * y;
}
}
Here we started with (x + y) * (x + y)
as the left-hand side
expression and gradually transformed it using distributive,
commutative and other laws into the desired right-hand side.
The justifications for the steps are given as comments, or as
nested calc
statements that prove equality of some subparts
of the expression.
The ==
to the right of the semicolons show the relation between
that expression and the next. Because of the transitivity of
equality we can then conclude that the original lefthandside is
equal to the final expression.
We can avoid having to supply the relational operator between
every pair of expressions by giving a default operator between
the calc
keyword and the opening brace as shown in this abbreviated
version of the above calc statement:
calc == {
  (x + y) * (x + y);
  x * (x + y) + y * (x + y);
  x * x + x * y + y * x + y * y;
  x * x + x * y + x * y + y * y;
  x * x + 2 * x * y + y * y;
}
And since equality is the default operator we could have omitted
it after the calc
keyword.
The purpose of the block statements or the calc
statements between
the expressions is to provide hints to aid Dafny in proving that
step. As shown in the example, comments can also be used to aid
the human reader in cases where Dafny can prove the step automatically.
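As a small illustrative sketch (the lemma name CalcWithHint is invented here, not taken from the manual), a hint can be supplied as a statement block in braces between two lines of a calculation:

```dafny
lemma CalcWithHint(a: int, b: int)
  requires a == b
  ensures a * a == a * b
{
  calc {
    a * a;
    { assert a == b; }  // hint: a ghost, side-effect free statement justifying this step
    a * b;
  }
}
```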
SkeletonStmt =
  "..."
  ["where" Ident {"," Ident } ":="
    Expression(allowLemma: false, allowLambda: true)
    {"," Expression(allowLemma: false, allowLambda: true) }
  ] ";"
The grammar of Dafny expressions follows a hierarchy that reflects the precedence of Dafny operators. The following table shows the Dafny operators and their precedence in order of increasing binding power.
operator      description
--------      -----------
;             in LemmaCall;Expression
<==> , ⇔      equivalence (if and only if)
==> , ⇒       implication (implies)
<== , ⇐       reverse implication (follows from)
&& , ∧        conjunction (and)
|| , ∨        disjunction (or)
! , ¬         negation (not)
==            equality
==#[k]        prefix equality (coinductive)
!=            disequality
!=#[k]        prefix disequality (coinductive)
<             less than
<=            at most
>=            at least
>             greater than
in            collection membership
!in           collection non-membership
!!            disjointness
+             addition (plus)
-             subtraction (minus)
*             multiplication (times)
/             division (divided by)
%             modulus (mod)
-             arithmetic negation (unary minus)
! , ¬         logical negation
Primary Expressions   (see below)
We call the UnaryExpressions that are neither
arithmetic nor logical negation the primary expressions.
They are the most tightly bound.
In the grammar entries below we explain the meaning when the operator for that precedence level is present. If the operator is not present then we just descend to the next precedence level.
Expression(allowLemma, allowLambda) =
EquivExpression(allowLemma, allowLambda)
[ ";" Expression(allowLemma, allowLambda) ]
The “allowLemma” argument says whether or not the expression to be parsed is allowed to have the form S;E where S is a call to a lemma. “allowLemma” should be passed in as “false” whenever the expression to be parsed sits in a context that itself is terminated by a semicolon.
The “allowLambda” argument says whether or not the expression to be parsed is allowed to be a lambda expression. More precisely, an identifier or parenthesis-enclosed comma-delimited list of identifiers is allowed to continue as a lambda expression (that is, continue with a “reads”, “requires”, or “=>”) only if “allowLambda” is true. This affects function/method/iterator specifications, if/while statements with guarded alternatives, and expressions in the specification of a lambda expression itself.
Sometimes an expression will fail to verify unless some relevant fact is known.
In the following example the F_Fails function fails to verify
because the Fact(n) divisor may be zero. But preceding
the expression by a lemma that ensures that the denominator
is not zero allows function F_Succeeds to succeed.
function Fact(n: nat): nat
{
  if n == 0 then 1 else n * Fact(n-1)
}

lemma L(n: nat)
  ensures 1 <= Fact(n)
{
}

function F_Fails(n: nat): int
{
  50 / Fact(n)  // error: possible division by zero
}

function F_Succeeds(n: nat): int
{
  L(n);
  50 / Fact(n)
}
EquivExpression(allowLemma, allowLambda) =
ImpliesExpliesExpression(allowLemma, allowLambda)
{ "<==>" ImpliesExpliesExpression(allowLemma, allowLambda) }
An EquivExpression that contains one or more “<==>”s is
a boolean expression and all the contained ImpliesExpliesExpressions
must also be boolean expressions. In that case each “<==>”
operator tests for logical equality, which is the same as
ordinary equality.
See section 6.0.0 for an explanation of the
<==> operator as compared with the == operator.
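As a sketch (the method name is invented for illustration), for boolean operands equivalence agrees with equality, although the two operators differ in binding power:

```dafny
method EquivExample(a: bool, b: bool)
{
  // for booleans, logical equivalence coincides with ordinary equality
  assert (a <==> b) == (a == b);
}
```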
ImpliesExpliesExpression(allowLemma, allowLambda) =
  LogicalExpression(allowLemma, allowLambda)
  [ ( "==>" ImpliesExpression(allowLemma, allowLambda)
    | "<==" LogicalExpression(allowLemma, allowLambda)
      { "<==" LogicalExpression(allowLemma, allowLambda) }
    )
  ]
ImpliesExpression(allowLemma, allowLambda) =
LogicalExpression(allowLemma, allowLambda)
[ "==>" ImpliesExpression(allowLemma, allowLambda) ]
See section 6.0.2 for an explanation
of the ==>
and <==
operators.
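For illustration (the method name is invented here), the same implication can be written in either direction:

```dafny
method ImpliesExample(x: int)
{
  assert x > 10 ==> x > 5;   // implication
  assert x > 5 <== x > 10;   // the same fact, written as a reverse implication
}
```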
LogicalExpression(allowLemma, allowLambda) =
  RelationalExpression(allowLemma, allowLambda)
  [ ( "&&" RelationalExpression(allowLemma, allowLambda)
      { "&&" RelationalExpression(allowLemma, allowLambda) }
    | "||" RelationalExpression(allowLemma, allowLambda)
      { "||" RelationalExpression(allowLemma, allowLambda) }
    )
  ]
See section 6.0.1 for an explanation
of the && (or ∧) and || (or ∨) operators.
RelationalExpression(allowLemma, allowLambda) =
Term(allowLemma, allowLambda)
[ RelOp Term(allowLemma, allowLambda)
{ RelOp Term(allowLemma, allowLambda) } ]
RelOp =
  ( "==" [ "#" "[" Expression(allowLemma: true, allowLambda: true) "]" ]
  | "<" | ">" | "<=" | ">="
  | "!=" [ "#" "[" Expression(allowLemma: true, allowLambda: true) "]" ]
  | "in"
  | "!in"
  | "!!"
  )
The relation expressions that have a RelOp
compare two or more terms.
As explained in section 6, ==
, !=
, <
, >
, <=
, and >=
and their corresponding Unicode equivalents are chaining.
The in
and !in
operators apply to collection types as explained in
section 9 and represent membership or nonmembership
respectively.
The !! operator represents disjointness for sets and multisets as explained in
sections 9.0 and 9.1.
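A minimal sketch (method name invented for illustration) of the disjointness operator on sets:

```dafny
method DisjointExample()
{
  var a := {1, 2};
  var b := {3, 4};
  assert a !! b;              // no element occurs in both sets
  assert !({1, 2} !! {2, 3}); // 2 occurs in both, so they are not disjoint
}
```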
Note that x ==#[k] y
is the prefix equality operator that compares
coinductive values for equality to a nesting level of k, as
explained in section 18.2.3.0.
Term(allowLemma, allowLambda) =
Factor(allowLemma, allowLambda)
{ AddOp Factor(allowLemma, allowLambda) }
AddOp = ( "+" | "-" )
Terms combine Factors by adding or subtracting.
Addition has these meanings for different types:

- arithmetic addition for numeric types;
- union for sets and multisets;
- concatenation for sequences.

Subtraction is arithmetic subtraction for numeric types, and set or multiset difference for sets and multisets.
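The meanings of + and - on the different types can be sketched as follows (the method name is invented for illustration):

```dafny
method AddSubExample()
{
  assert 2 + 3 == 5;                    // numeric addition
  assert {1, 2} + {2, 3} == {1, 2, 3};  // set union
  assert [1, 2] + [3] == [1, 2, 3];     // sequence concatenation
  assert {1, 2, 3} - {2} == {1, 3};     // set difference
}
```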
Factor(allowLemma, allowLambda) =
UnaryExpression(allowLemma, allowLambda)
{ MulOp UnaryExpression(allowLemma, allowLambda) }
MulOp = ( "*" | "/" | "%" )
A Factor combines UnaryExpressions using multiplication,
division, or modulus. For numeric types these are explained in
section 6.1.
Only * has a non-numeric application. It represents set or multiset
intersection as explained in sections 9.0 and 9.1.
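A brief sketch of these operators (the method name is invented for illustration):

```dafny
method MulExample()
{
  assert 6 * 7 == 42;                      // numeric multiplication
  assert {1, 2, 3} * {2, 3, 4} == {2, 3};  // set intersection
  assert 7 / 2 == 3 && 7 % 2 == 1;         // integer division and modulus
}
```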
UnaryExpression(allowLemma, allowLambda) =
  ( "-" UnaryExpression(allowLemma, allowLambda)
  | "!" UnaryExpression(allowLemma, allowLambda)
  | PrimaryExpression_(allowLemma, allowLambda)
  )
A UnaryExpression
applies either numeric (section 6.1)
or logical (section 6.0) negation to its operand.
PrimaryExpression_(allowLemma, allowLambda) =
  ( MapDisplayExpr { Suffix }
  | LambdaExpression(allowLemma)
  | EndlessExpression(allowLemma, allowLambda)
  | NameSegment { Suffix }
  | SeqDisplayExpr { Suffix }
  | SetDisplayExpr { Suffix }
  | MultiSetExpr { Suffix }
  | ConstAtomExpression { Suffix }
  )
After descending through all the binary and unary operators we arrive at
the primary expressions, which are explained in subsequent sections. As
can be seen, a number of these can be followed by zero or more Suffixes
to select a component of the value.
If allowLambda is false then LambdaExpressions are not
recognized in this context.
LambdaExpression(allowLemma) =
  ( WildIdent
  | "(" [ IdentTypeOptional { "," IdentTypeOptional } ] ")"
  )
  LambdaSpec_
  LambdaArrow Expression(allowLemma, allowLambda: true)

LambdaArrow = ( "=>" | "->" )
See section 4.3 for a description of LambdaSpec
.
In addition to named functions, Dafny supports expressions that define functions. These are called lambda (expression)s (some languages know them as anonymous functions). A lambda expression has the form:
(params) specification => body
where params
is a comma-delimited list of parameter
declarations, each of which has the form x
or x: T
. The type T
of a parameter can be omitted when it can be inferred. If the
identifier x
is not needed, it can be replaced by “_
”. If
params
consists of a single parameter x
(or _
) without an
explicit type, then the parentheses can be dropped; for example, the
function that returns the successor of a given integer can be written
as the following lambda expression:
x => x + 1
The specification
is a list of clauses requires E
or
reads W
, where E
is a boolean expression and W
is a frame
expression.
body
is an expression that defines the function's return
value. The body must be wellformed for all possible values of the
parameters that satisfy the precondition (just like the bodies of
named functions and methods). In some cases, this means it is
necessary to write explicit requires
and reads
clauses. For
example, the lambda expression
x requires x != 0 => 100 / x
would not be wellformed if the requires
clause were omitted,
because of the possibility of divisionbyzero.
In settings where functions cannot be partial and there are no
restrictions on reading the heap, the eta expansion of a function
F: T -> U
(that is, the wrapping of F
inside a lambda expression
in such a way that the lambda expression is equivalent to F
) would
be written x => F(x)
. In Dafny, eta expansion must also account for
the precondition and reads set of the function, so the eta expansion
of F
looks like:
x requires F.requires(x) reads F.reads(x) => F(x)
Lhs =
  ( NameSegment { Suffix }
  | ConstAtomExpression Suffix { Suffix }
  )
A left-hand-side expression is only used on the left-hand
side of an UpdateStmt.
TODO: Try to give examples showing how these kinds of lefthandsides are possible.
Rhs =
  ( ArrayAllocation_
  | ObjectAllocation_
  | Expression(allowLemma: false, allowLambda: true)
  | HavocRhs_
  )
  { Attribute }
An Rhs is either an array allocation, an object allocation,
an expression, or a havoc right-hand-side, optionally followed
by one or more Attributes.
Right-hand-side expressions appear in the following constructs:
ReturnStmt, YieldStmt, UpdateStmt, or VarDeclStatement.
These are the only contexts in which arrays or objects may be
allocated, or in which havoc may be produced.
ArrayAllocation_ = "new" Type "[" Expressions "]"
This allocates a new single- or multi-dimensional array as explained in section 14.
ObjectAllocation_ = "new" Type [ "(" [ Expressions ] ")" ]
This allocates a new object of a class type as explained in section 12.
HavocRhs_ = "*"
A havoc right-hand-side produces an arbitrary value of its associated
type. To get a more constrained arbitrary value the “assign-such-that”
operator (:|) can be used. See section 21.5.
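A minimal sketch of a havoc right-hand-side (the method name is invented for illustration):

```dafny
method HavocExample()
{
  var x: int := *;    // x gets an arbitrary int value
  x := *;             // havoc an existing variable
  assert x * 0 == 0;  // only type-level facts are known about x
}
```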
ConstAtomExpression =
  ( LiteralExpression_
  | FreshExpression_
  | OldExpression_
  | CardinalityExpression_
  | NumericConversionExpression_
  | ParensExpression
  )
A ConstAtomExpression
represents either a constant of some type, or an
atomic expression. A ConstAtomExpression
is never an lvalue. Also, a
ConstAtomExpression
is never followed by an open parenthesis (but could
very well have a suffix that starts with a period or a square bracket).
(The “Also…” part may change if expressions in Dafny could yield
functions.)
LiteralExpression_ =
  ( "false" | "true" | "null" | Nat | Dec
  | charToken | stringToken | "this" )
A literal expression is a boolean literal, a null object reference, an unsigned integer or real literal, a character or string literal, or “this”, which denotes the current object in the context of an instance method or function.
FreshExpression_ = "fresh" "(" Expression(allowLemma: true, allowLambda: true) ")"
fresh(e)
returns a boolean value that is true if
the objects referenced in expression e
were all
freshly allocated in the current method invocation.
The argument of fresh
must be either an object reference
or a collection of object references.
OldExpression_ = "old" "(" Expression(allowLemma: true, allowLambda: true) ")"
An old expression is used in postconditions. old(e)
evaluates to
the value expression e
had on entry to the current method.
CardinalityExpression_ = "|" Expression(allowLemma: true, allowLambda: true) "|"
For a collection expression c, |c| is the cardinality of c. For a
set or sequence the cardinality is the number of elements. For
a multiset the cardinality is the sum of the multiplicities of the
elements. For a map the cardinality is the cardinality of the
domain of the map. Cardinality is not defined for infinite maps.
For more see section 9.
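The cardinality operator on each collection kind can be sketched as follows (the method name is invented for illustration):

```dafny
method CardinalityExample()
{
  assert |{1, 2, 3}| == 3;                // set: number of elements
  assert |[1, 1, 2]| == 3;                // sequence: length
  assert |multiset{1, 1, 2}| == 3;        // multiset: sum of multiplicities
  assert |map[1 := "a", 2 := "b"]| == 2;  // map: cardinality of its domain
}
```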
NumericConversionExpression_ =
( "int"  "real" ) "(" Expression(allowLemma: true, allowLambda: true) ")"
Numeric conversion expressions give the name of the target type
followed by the expression being converted in parentheses.
This production is for int
and real
as the target types
but this also applies more generally to other numeric types,
e.g. newtypes
. See section 19.0.
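A small sketch using the int(...) and real(...) conversion forms described above (the method name is invented; a real-to-int conversion requires the argument to be an integral value):

```dafny
method ConversionExample()
{
  var r: real := real(3);  // int-to-real conversion
  var i: int := int(2.0);  // real-to-int; 2.0 is integral, so this is allowed
  assert r == 3.0 && i == 2;
}
```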
ParensExpression =
"(" [ Expressions ] ")"
A ParensExpression
is a list of zero or more expressions
enclosed in parentheses.
If there is exactly one expression enclosed then the value is just the value of that expression.
If there are zero expressions, or more than one, the result is a tuple value.
See section 18.1.
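As a sketch (the method name is invented for illustration), tuple values are built with parentheses and their components selected with .0, .1, and so on:

```dafny
method TupleExample()
{
  var p := (1, true);      // a 2-tuple
  assert p.0 == 1 && p.1;  // component selection
  var x := (5);            // exactly one expression: just a parenthesized int
  assert x == 5;
}
```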
SeqDisplayExpr = "[" [ Expressions ] "]"
A sequence display expression provides a way to construct a sequence with given values. For example
[1, 2, 3]
is a sequence with three elements in it. See section 9.2 for more information on sequences.
SetDisplayExpr = [ "iset" ] "{" [ Expressions ] "}"
A set display expression provides a way to construct
a set with given elements. If the keyword iset is present
then a potentially infinite set is constructed.
For example
{1, 2, 3}
is a set with three elements in it. See section 9.0 for more information on sets.
MultiSetExpr =
  "multiset"
  ( "{" [ Expressions ] "}"
  | "(" Expression(allowLemma: true, allowLambda: true) ")"
  )
A multiset display expression provides a way to construct a multiset with given elements and multiplicities. For example
multiset{1, 1, 2, 3}
is a multiset with three distinct elements: the number 1 has a multiplicity of 2, and the others a multiplicity of 1.
On the other hand, a multiset cast expression converts a set or a sequence into a multiset as shown here:
var s : set<int> := {1, 2, 3};
var ms : multiset<int> := multiset(s);
ms := ms + multiset{1};
var sq : seq<int> := [1, 1, 2, 3];
var ms2 : multiset<int> := multiset(sq);
assert ms == ms2;
See section 9.1 for more information on multisets.
MapDisplayExpr = ( "map" | "imap" ) "[" [ MapLiteralExpressions ] "]"
MapLiteralExpressions =
Expression(allowLemma: true, allowLambda: true)
":=" Expression(allowLemma: true, allowLambda: true)
{ "," Expression(allowLemma: true, allowLambda: true)
":=" Expression(allowLemma: true, allowLambda: true)
}
A map display expression builds a finite or potentially infinite
map from explicit MapLiteralExpressions
. For example:
var m := map[1 := "a", 2 := "b"];
ghost var im := imap[1 := "a", 2 := "b"];
Note that imap
s may only appear in ghost contexts. See
section 9.3 for more details on maps and imaps.
EndlessExpression(allowLemma, allowLambda) =
  ( IfExpression_(allowLemma, allowLambda)
  | MatchExpression(allowLemma, allowLambda)
  | QuantifierExpression(allowLemma, allowLambda)
  | SetComprehensionExpr(allowLemma, allowLambda)
  | StmtInExpr Expression(allowLemma, allowLambda)
  | LetExpr(allowLemma, allowLambda)
  | MapComprehensionExpr(allowLemma, allowLambda)
  )
EndlessExpression gets its name from the fact that none of its alternate
productions have a terminating symbol to end them; rather, they
all end with an Expression. The various
EndlessExpression alternatives are described below.
IfExpression_(allowLemma, allowLambda) =
"if" Expression(allowLemma: true, allowLambda: true)
"then" Expression(allowLemma: true, allowLambda: true)
"else" Expression(allowLemma, allowLambda)
The IfExpression
is a conditional expression. It first evaluates
the expression following the if
. If it evaluates to true
then
it evaluates the expression following the then
and that is the
result of the expression. If it evaluates to false
then the
expression following the else
is evaluated and that is the result
of the expression. It is important that only the selected expression
is evaluated, as the following example shows.
var k := 10 / x; // error, may divide by 0.
var m := if x != 0 then 10 / x else 1; // ok, guarded
CaseBinding_ =
  "case"
  ( Ident [ "(" CasePattern { "," CasePattern } ")" ]
  | "(" CasePattern { "," CasePattern } ")"
  )

CasePattern =
  ( Ident "(" [ CasePattern { "," CasePattern } ] ")"
  | "(" [ CasePattern { "," CasePattern } ] ")"
  | IdentTypeOptional
  )
Case bindings and patterns are used for (possibly nested)
pattern matching on inductive or coinductive values.
The CaseBinding_
construct is used in
CaseStatement
and CaseExpression
s.
Besides its use in CaseBinding_
, CasePattern
s are used
in LetExpr
s and VarDeclStatement
s.
When matching an inductive or coinductive value in
a MatchStmt
or MatchExpression
, there must be
a CaseBinding_
for each constructor. A tuple is
considered to have a single constructor.
The Ident
of the CaseBinding_
must match the name
of a constructor (or in the case of a tuple the Ident
is
absent and the second alternative is chosen).
The CasePattern
s inside the parenthesis are then
matched against the arguments that were given to the
constructor when the value was constructed.
The number of CasePattern
s must match the number
of parameters to the constructor (or the arity of the
tuple).
The CasePattern
s may be nested. The set of nonconstructorname
identifiers contained in a CaseBinding_
must be distinct.
They are bound to the corresponding values in the value being
matched.
MatchExpression(allowLemma, allowLambda) =
  "match" Expression(allowLemma, allowLambda)
  ( "{" { CaseExpression(allowLemma: true, allowLambda: true) } "}"
  | { CaseExpression(allowLemma, allowLambda) }
  )
CaseExpression(allowLemma, allowLambda) =
CaseBinding_ "=>" Expression(allowLemma, allowLambda)
A MatchExpression
is used to conditionally evaluate and select an
expression depending on the value of an algebraic type, i.e. an inductive
type, or a coinductive type.
The Expression
following the match
keyword is called the
selector. There must be a CaseExpression
for each constructor of
the type of the selector. The Ident
following the case
keyword in a
CaseExpression
is the name of a constructor of the selector's type.
It may be absent if the expression being matched is a tuple since these
have no constructor name.
If the constructor has parameters then in the CaseExpression
the
constructor name must be followed by a parenthesized list of CasePattern
s.
If the constructor has no parameters then the
CaseExpression
must not have a following CasePattern
list.
All of the identifiers in the CasePattern
s must be distinct.
If types for the identifiers are not given then types are inferred
from the types of the constructor's parameters. If types are
given then they must agree with the types of the
corresponding parameters.
A MatchExpression
is evaluated by first evaluating the selector.
Then the CaseClause
is selected for the constructor that was
used to construct the evaluated selector. If the constructor had
parameters then the actual values used to construct the selector
value are bound to the identifiers in the identifier list.
The expression to the right of the =>
in the CaseClause
is then
evaluated in the environment enriched by this binding. The result
of that evaluation is the result of the MatchExpression
.
Note that the braces enclosing the CaseClause
s may be omitted.
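As an illustrative sketch (the datatype Tree and function Sum are invented for this example), a match expression has one case per constructor, with constructor parameters bound by the pattern:

```dafny
datatype Tree = Leaf | Node(left: Tree, value: int, right: Tree)

function Sum(t: Tree): int
{
  match t
  case Leaf => 0                             // constructor with no parameters
  case Node(l, v, r) => Sum(l) + v + Sum(r)  // l, v, r bound to the constructor's arguments
}
```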
QuantifierExpression(allowLemma, allowLambda) =
  ( "forall" | "exists" ) QuantifierDomain "::"
  Expression(allowLemma, allowLambda)

QuantifierDomain =
  IdentTypeOptional { "," IdentTypeOptional } { Attribute }
  [ "|" Expression(allowLemma: true, allowLambda: true) ]
A QuantifierExpression
is a boolean expression that specifies that a
given expression (the one following the “::”) is true for all (for
forall) or some (for exists) combination of values of the
quantified variables, namely those in the QuantifierDomain
.
Here are some examples:
assert forall x : nat | x <= 5 :: x * x <= 25;
(forall n :: 2 <= n ==> (exists d :: n < d && d < 2*n))
or using the Unicode symbols:
assert ∀ x : nat | x <= 5 • x * x <= 25;
(∀ n • 2 <= n ==> (∃ d • n < d && d < 2*n))
The quantifier identifiers are bound within the scope of the
expressions in the QuantifierExpression
.
If types are not given for the quantified identifiers then Dafny attempts to infer their types from the context of the expressions. If this is not possible the program is in error.
SetComprehensionExpr(allowLemma, allowLambda) =
  [ "set" | "iset" ]
  IdentTypeOptional { "," IdentTypeOptional } { Attribute }
  "|" Expression(allowLemma, allowLambda)
  [ "::" Expression(allowLemma, allowLambda) ]
A set comprehension expression is an expression that yields a set
(possibly infinite if iset is used) whose elements
satisfy specified conditions. There are two basic forms.
If there is only one quantified variable, the optional "::" Expression
need not be supplied, in which case it is as if it had been supplied
and the expression consists solely of the quantified variable.
That is,
That is,
set x : T | P(x)
is equivalent to
set x : T | P(x) :: x
For the full form
var S := set x1:T1, x2:T2 ... | P(x1, x2, ...) :: Q(x1, x2, ...)
the elements of S
will be all values resulting from evaluation of Q(x1, x2, ...)
for all combinations of quantified variables x1, x2, ...
such that
predicate P(x1, x2, ...)
holds. For example,
var S := set x:nat, y:nat | x < 2 && y < 2 :: (x, y)
would yield S == {(0, 0), (0, 1), (1, 0), (1, 1)}
The types on the quantified variables are optional and if not given Dafny
will attempt to infer them from the contexts in which they are used in the
P
or Q
expressions.
If a finite set was specified (the “set” keyword was used), Dafny must be able to prove that the result is finite; otherwise the set comprehension expression will not be accepted.
Set comprehensions involving reference types such as
set o: object | true
are allowed in ghost contexts. In particular, in ghost contexts, the check that the result is finite should allow any set comprehension where the bound variable is of a reference type. In non-ghost contexts, it is not allowed, because, even though the resulting set would be finite, it is not pleasant or practical to compute at run time.
StmtInExpr = ( AssertStmt | AssumeStmt | CalcStmt )
A StmtInExpr
is a kind of statement that is allowed to
precede an expression in order to ensure that the expression
can be evaluated without error. For example:
assume x != 0; 10/x
assert, assume and calc statements can be used in this way.
LetExpr(allowLemma, allowLambda) =
  [ "ghost" ] "var" CasePattern { "," CasePattern }
  ( ":=" | { Attribute } ":|" )
  Expression(allowLemma: false, allowLambda: true)
  { "," Expression(allowLemma: false, allowLambda: true) } ";"
  Expression(allowLemma, allowLambda)
A let expression allows binding of intermediate values to identifiers
for use in an expression. The start of the let expression is
signaled by the var keyword. They look much like a local variable
declaration except that the scope of the variable only extends to the
enclosed expression.
For example:
var sum := x + y; sum * sum
In the simple case the CasePattern
is just an identifier with optional
type (which if missing is inferred from the rhs).
The more complex case allows destructuring of constructor expressions. For example:
datatype Stuff = SCons(x: int, y: int)  Other
function GhostF(z: Stuff): int
requires z.SCons?
{
var SCons(u, v) := z; var sum := u + v; sum * sum
}
MapComprehensionExpr(allowLemma, allowLambda) =
  ( "map" | "imap" ) IdentTypeOptional { Attribute }
  [ "|" Expression(allowLemma: true, allowLambda: true) ]
  "::"