Monday, October 18, 2010
Phantom Types In Haskell and Scala

/* Some literate Haskell and Scala to demonstrate 1) phantom types 2) I am a masochist 3) ??? 4) profit! The code is probably best viewed with a syntax colorizer for one language or the other but I've colorized all my comments.

> {-# LANGUAGE EmptyDataDecls #-}
> module RocketModule(test1, test2, createRocket, addFuel, addO2, launch) where

*/
object RocketModule {

/* None of these data types have constructors, so there are no values with these types. That's okay because I only need the types at compile time. Hence "phantom" - ethereal and unreal.

> data NoFuel
> data Fueled
> data NoO2
> data HasO2

*/
  sealed trait NoFuel
  sealed trait Fueled
  sealed trait NoO2
  sealed trait HasO2

/* The Rocket data type takes two type parameters, fuel and o2. But the constructor doesn't touch them. I don't export the constructor so only this module can create rockets.

> data Rocket fuel o2 = Rocket

*/
  final case class Rocket[Fuel, O2] private[RocketModule]()

/* createRocket produces a rocket with no fuel and no o2

> createRocket :: Rocket NoFuel NoO2
> createRocket = Rocket

*/
  def createRocket = Rocket[NoFuel, NoO2]()

/* addFuel takes a rocket with no fuel and returns one with fuel. It doesn't touch the o2

> addFuel :: Rocket NoFuel o2 -> Rocket Fueled o2
> addFuel x = Rocket

*/
  def addFuel[O2](x : Rocket[NoFuel, O2]) = Rocket[Fueled, O2]()

/* Similarly, addO2 adds o2 without touching the fuel

> addO2 :: Rocket fuel NoO2 -> Rocket fuel HasO2
> addO2 x = Rocket

*/
  def addO2[Fuel](x : Rocket[Fuel, NoO2]) = Rocket[Fuel, HasO2]()

/* launch will only launch a rocket with both fuel and o2

> launch :: Rocket Fueled HasO2 -> String
> launch x = "blastoff"

*/
  def launch(x : Rocket[Fueled, HasO2]) = "blastoff"

/* This is just a pretty way of stringing things together, stolen shamelessly from F#. Adding infix operations is a bit verbose in Scala.

> a |> b = b a

*/
  implicit def toPiped[V] (value:V) = new { def |>[R] (f : V => R) = f(value) }

/* Create a rocket, fuel it, add o2, then launch it

> test1 = createRocket |> addFuel |> addO2 |> launch

*/
  def test1 = createRocket |> addFuel |> addO2 |> launch

/* This compiles just fine, too. It doesn't matter which order we put in the fuel and o2

> test2 = createRocket |> addO2 |> addFuel |> launch

*/
  def test2 = createRocket |> addO2 |> addFuel |> launch

// This won't compile - there's no fuel
// > test3 = createRocket |> addO2 |> launch
// def test3 = createRocket |> addO2 |> launch

// This won't compile either - there's no o2
// > test4 = createRocket |> addFuel |> launch
// def test4 = createRocket |> addFuel |> launch

// This won't compile because you can't add fuel twice
// > test5 = createRocket |> addFuel |> addO2 |> addFuel |> launch
// def test5 = createRocket |> addFuel |> addO2 |> addFuel |> launch
}
Friday, August 27, 2010
Why Scala's "Option" and Haskell's "Maybe" types will save you from null
Cedric Beust makes a bunch of claims in his post on Why Scala's "Option" and Haskell's "Maybe" types won't save you from null, commenters say he doesn't get it and he says nobody has argued with his main points. So I thought I'd do a careful investigation to see what, if anything, is being missed.
First, right off the top here: Scala has true blue Java-like null; any reference may be null. Its presence muddies the water quite a bit. But since Beust's article explicitly talks about Haskell he's clearly not talking about that aspect of Scala because Haskell doesn't have null. I'll get back to Scala's null at the end but for now pretend it doesn't exist.
Second, just to set the record straight: "Option" has roots in programming languages as far back as ML. Both Haskell's "Maybe" and Scala's "Option" (and F#'s "Option" and others) trace their ancestry to it.
Now, to the post.
First Objection
in practice, I find that hash tables allowing null values are rare to the point where this limitation has never bothered me in fifteen years of Java.
Really? Let me give one concrete example. You're writing a cache system (in the form of an associative map) in front of a persistent storage with nullable values. Question: how do you differentiate "I haven't cached this value" from "I've cached the value and it was null in the database"? The answer is that you can't use "null" to mean both things, it's ambiguous. That may not "bother" Beust, but I find it irritating that you end up having to use two different mechanisms to say this in Java.
With Maybe/Option you just say that the map holds optional values. A fetch result of None/Nothing means you haven't put that key in your map. A result of Some(None)/Just Nothing means you've got the key but it maps to nothing, and a result of Some(value)/Just value means that the key is in the map and that there's a "real" value there in storage.
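The two-level answer can be sketched in Java as well, using Optional (java.util.Optional arrived in Java 8, well after this post was written; the Cache class here is my own illustration):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// A cache in front of a store whose values may legitimately be null.
// The outer Optional answers "have I cached this key?"; the inner one
// answers "was the stored value null?".
class Cache {
    private final Map<String, Optional<String>> entries = new HashMap<>();

    void put(String key, String valueOrNull) {
        entries.put(key, Optional.ofNullable(valueOrNull));
    }

    // Optional.empty()              -> never cached
    // Optional.of(Optional.empty()) -> cached, and the store held null
    // Optional.of(Optional.of(v))   -> cached, real value v
    Optional<Optional<String>> fetch(String key) {
        return Optional.ofNullable(entries.get(key));
    }
}
```

All three states stay distinguishable because the nesting composes, which is exactly what plain null can't do.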
Save Me!
Back to Beust
The claim that Option eliminates NullPointerException is more outrageous and completely bogus. Here is how you avoid a null pointer exception with the Option class (from the blog):
val result = map.get( "Hello" )
result match {
  case None => print("No key with that name!")
  case Some(x) => print("Found value " + x)
}

See what's going on here? You avoid a NullPointerException by... testing against null, except that it's called None. What have we gained, exactly?
Here is, IMHO, the heart of what people are arguing with Beust about. He thinks this code is no different from the kind of "if (result == null)" code you write in Java. Sure, identical. Except for one, huge, glaring, major, stab you in the face difference: in Java the type system says Map<K, V>#get will return a V. It doesn't say it MIGHT return a V. That's left to the documentation where it's out of reach for the compiler. Here it is in Java.
Map<String, Integer> myStuff = new HashMap<String, Integer>();
myStuff.put("hello", 42);
int result = myStuff.get("goodbye") * 2; // runtime NPE
Java NPEs when you try to use the result of the fetch. On the other hand, when you use a Scala map you get an Option[T] while Haskell's says it returns a Maybe t. The type system says you can't use the underlying value directly. Here's me trying to pull something from a map and manipulate it directly in both Haskell
myStuff = fromList [("hello", 42)]
result = (lookup "goodbye" myStuff) * 2 -- compile time error
And Scala
val myStuff = Map("hello" -> 42)
val result = (myStuff get "goodbye") * 2 // compile time error
In both cases I get a static error.
Now...there are a lot of (IMHO crappy) debates on static vs dynamic typing. But one thing must be very clear: the two situations are at least different. This isn't the same as a Java NPE (or the equivalents in Ruby/Python/whatever). Option/Maybe tell the static type system that you may or may not have what you're looking for.
Blowups Can Happen
Onward.
The worst part about this example is that it forces me to deal with the null case right here. Sometimes, that's what I want to do but what if such an error is a programming error (e.g. an assertion error) that should simply never happen? In this case, I just want to assume that I will never receive null and I don't want to waste time testing for this case: my application should simply blow up if I get a null at this point.
There is a cavernous hole in Haskell Maybe and Scala Option. Beust's post does not talk about it at all, though. The hole is that Haskell and Scala are Turing complete, and that means that the type system cannot prevent you from throwing optionality out if that's what you really want to do. In Haskell
loopForever :: a
loopForever = loopForever -- really, forever

stripOption :: Maybe t -> t
stripOption (Just x) = x
stripOption _ = loopForever

-- or better yet, with fromMaybe from Data.Maybe
stripOption = fromMaybe loopForever
And Scala
final def loopForever[A] : A = loopForever // and ever, amen

def stripOption[T](o : Option[T]) : T = o match {
  case Some(x) => x
  case None => loopForever
}

// or, better still
def stripOption[T](o : Option[T]) : T = o getOrElse loopForever
In their respective languages, the following compile just fine but will cause non-termination at runtime.
result = (stripOption (lookup "goodbye" myStuff)) * 2
val result = stripOption(myStuff get "goodbye") * 2
Of course, if you really want to write that in real code you would replace "loopForever" with throwing an exception. That's exactly what Scala's Option#get and Haskell's Data.Maybe.fromJust do.
result = (fromJust (lookup "goodbye" myStuff)) * 2
val result = (myStuff get "goodbye").get * 2
Why did I write a silly loop? To make a point: Turing completeness punches a hole in the type system. "Throw" style operators exploit essentially the same hole. Every programmer should know this in the bottom of their heart.
Whether using the loopForever version or the exception version, stripOption/get/fromJust is different from null dereferencing. It requires an explicit step on the part of the programmer. The programmer says "I know what I'm doing here, let me continue, blow up if I'm wrong." And, IMHO, that explicit step is easy enough that it kills his objection about being "forced" to deal with null instead of just blowing up. Yes, there's a hoop to jump through, but it's tiny. KABOOM!
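The same explicit hole exists in modern Java's Optional, whose orElseThrow plays the role of Option#get and fromJust (a small sketch of mine; Optional and Map.of postdate this post):

```java
import java.util.Map;
import java.util.Optional;

class Blowup {
    static final Map<String, Integer> myStuff = Map.of("hello", 42);

    // The lookup hands back an Optional<Integer>. To get at the raw int
    // you must explicitly punch through with orElseThrow, which throws
    // NoSuchElementException when the value is absent.
    static int doubled(String key) {
        return Optional.ofNullable(myStuff.get(key)).orElseThrow() * 2;
    }
}
```

Calling Blowup.doubled("hello") gives 84; Blowup.doubled("goodbye") blows up, but only because the programmer explicitly asked for "blow up if absent" semantics.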
Nullable Types
Next point:
When it comes to alleviating the problems caused by null pointer exceptions, the only approach I've seen recently that demonstrates a technical improvement is Fantom (Groovy also supports a similar approach), which attacks the problem from two different angles, static and runtime:
Static: In Fantom, the fact that a variable can be null or not is captured by a question mark appended to its type:
Str   // never stores null
Str?  // might store null

This allows the compiler to reason about the nullability of your code.
One technical quibble. Since Groovy doesn't have static types this static bit isn't relevant to Groovy.
Anyway, the equivalents in Scala and Haskell are String vs Option[String]/Maybe String. More characters, but the same idea. Except that Fantom and Nice and other languages with nullable types don't tend to support saying ??String whereas Option[Option[String]] and Maybe (Maybe String) are totally reasonable. See my caching example above.
Safe Invoke
Runtime: The second aspect is solved by Fantom's "safe invoke" operator, ?.. This operator allows you to dereference a null pointer without receiving a null pointer exception:
// hard way
Str? email := null
if (userList != null) {
  user := userList.findUser("bob")
  if (user != null) email = user.email
}

// easy way
email := userList?.findUser("bob")?.email

Note how this second example is semantically equivalent to the first one but with a lot of needless boilerplate removed.
And in Scala that might look like
userList flatMap (_ findUser "bob") flatMap (_.email)
while Haskell would use
userList >>= findUser "bob" >>= email
There are some more keystrokes indeed, but here's the thing...
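For comparison, the same short-circuiting chain works with Java's Optional.flatMap (the UserList/User types and the findUser/email methods are hypothetical, mirroring Fantom's example):

```java
import java.util.Optional;

class User {
    private final String email;  // may be null in this sketch
    User(String email) { this.email = email; }
    Optional<String> email() { return Optional.ofNullable(email); }
}

class UserList {
    Optional<User> findUser(String name) {
        return name.equals("bob") ? Optional.of(new User("bob@example.com"))
                                  : Optional.empty();
    }
}

class SafeInvoke {
    // Each flatMap step collapses to empty on a missing value,
    // just like ?. propagates null without dereferencing it.
    static Optional<String> emailFor(Optional<UserList> userList, String name) {
        return userList.flatMap(ul -> ul.findUser(name)).flatMap(User::email);
    }
}
```

An absent user list, an unknown user, or a null email all fall out the same way: an empty result, no NPE anywhere.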
Better
So, can someone explain to me how Option addresses the null pointer problem better than Fantom's approach?
Option/Maybe/flatMap/>>= are all ordinary library code. No magic. The types are algebraic data types, an idea that is massively useful. The methods belong to a class of types called Monad. If Option/Maybe don't work for you as-is you can build your own versions. With nullable types and the ?. operator you're stuck with what the language provides. If they don't work for your particular case then you can't create your own variants. Quick, in Fantom extend the ?. operator so that it works for exception-like semantics such that failure carries a reason instead of just computing null. In Haskell and Scala that already exists and it's ordinary library code in a type called Either.
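To show it really is just ordinary library code, here's a minimal Either sketch of my own in Java (Scala's scala.util.Either and Haskell's Data.Either are the real, much richer things):

```java
import java.util.function.Function;

// A minimal Either: Left carries a failure reason, Right a success value.
interface Either<L, R> {
    <R2> Either<L, R2> flatMap(Function<R, Either<L, R2>> f);

    static <L, R> Either<L, R> left(L reason) {
        return new Either<L, R>() {
            public <R2> Either<L, R2> flatMap(Function<R, Either<L, R2>> f) {
                return Either.left(reason);  // failure short-circuits, keeping its reason
            }
            public String toString() { return "Left(" + reason + ")"; }
        };
    }

    static <L, R> Either<L, R> right(R value) {
        return new Either<L, R>() {
            public <R2> Either<L, R2> flatMap(Function<R, Either<L, R2>> f) {
                return f.apply(value);  // success feeds the next step
            }
            public String toString() { return "Right(" + value + ")"; }
        };
    }
}

class EitherDemo {
    static Either<String, Integer> divide(int x, int y) {
        return y == 0 ? Either.left("divide by zero") : Either.right(x / y);
    }
}
```

Chaining works exactly like the Option version, except a failure carries "divide by zero" along instead of silently collapsing to nothing. No compiler support needed.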
Conclusion
In practice, outside of learning, I've never ever gotten the equivalent of an NPE by abusing Haskell's fromJust or Scala's Option#get, not even in testing. On the other hand NPEs are a fact of my life when dealing with nulls in Java et al. And while it's true that testing has revealed the vast majority of potential NPEs (or equivalent) in my life, it certainly hasn't dug up all of them. I've seen plenty of production systems brought to their knees by NPEs.
Exceptions plus a deliberate weakening of the type system allow developers to create "oh, just blow up" code when that's the right answer. But the developer has to explicitly say so.
Fantom/Nice style nullable types plus a "safe invoke" operator are a nice improvement over plain old null. But they are not composable (no ??T) and they are special purpose magic aimed at a very narrow domain instead of general purpose magic such as algebraic data types and higher order functions that are broadly applicable to huge problem domains.
So there you have it, Cedric, what are we missing in your post?
PostScript on Scala and Null
Which brings me back to Scala's null. For interoperability reasons Scala has a full blown Java-like null. But the experience that all the Scala developers I know have is that NPE is mostly only a problem when dealing with existing Java libraries because Scala programmers and libraries avoid it. Plus, Scala's "Option(foo)" promotes foo to either Some(foo) or None depending on whether it's null or not, which at least enables the map/flatMap style of "safe invoke." It's far less than perfect, but about the best that can be done when dealing directly with Java.
For Haskell, null is a non-issue.
Friday, August 21, 2009
Getting to the Bottom of Nothing At All
Except when I'm being a total smart ass or self indulgent emo telling the Ruby community to f' off, most of what I write talks about practical stuff, often with a small dose of theory. But this time it's about theory with a small dose of practice.
Last time I talked about the way programmers in popular C derived languages C++, Java, and C# think of a void function as being one that returns nothing. I contrasted this with the functional view that such functions return something, they just return the unit value (), something that doesn't carry any information. To expand on this I want to talk about actually returning nothing.
In any Turing complete language it is possible to write a function that really truly never returns anything, not even a made-up singleton value like (). In C style languages the simplest such function might contain an infinite loop like
while(true);

For my purposes a better way to explore the idea is with infinite recursion. Pretend your favorite C style language does tail call optimization[1] so we can ignore stack overflow issues and examine the following
X foo(){ return foo(); }
The question is what should the X be? And the answer is that if you plug that line of code into C, C++, C#, or Java then X can be just about any type you want[2].
If that doesn't give you pause then consider this: your type system appears to be lying to you. The function is promising an X but has no hope of delivering. And its quite happy to promise you any X. If you write "String foo(){ return foo(); }" then your type system considers the expression "foo();" to have type String so that you can write "String myFoo = foo();". But guess what, foo doesn't return a String and myFoo will never get a String. Is the type system lying?
The answer is no, of course not. It's just proving something a bit weaker than you might think at first. Instead of proving that "foo() will compute something that is a String" it's proving that "foo() won't compute something that isn't a String." If your "Law of the Excluded Middle" sense just tweaked, then you're on the right page. You'd think there are only two cases to consider: either the function definitely computes a String or it definitely doesn't. But there's this third case where it doesn't compute anything, and since the type checker can't reliably detect non-termination (c.f. Turing Halting Problem) it has to do something a bit odd. First, it assumes that the foo() declaration is correct and does return a String. Then it goes looking for a contradiction such as returning an int. Inside the function the compiler sees that the return is the result of the expression foo(). But the compiler has already assumed that foo() returns a String. There's no contradiction between declared and result so all is happy. The word "tautology" should come to mind right about now. By assuming X is true the type checker has proved that X is true. This weakening is a practical consequence of the Turing Halting Problem and there are at least two good ways to think about it. But first some more examples of the phenomenon.
Exceptions and Exits and ...
I very casually suggested that we ignore stack overflow issues in the infinite recursion above. But exceptions are an important part of this story because they are, in some interesting way, related to non-termination. Consider this function (or its equivalent in your favorite language)

X foo(){ throw new RuntimeException(); }

Once again, X can be any type. And once again foo() does not in fact compute an X.
Clearly these two definitions of foo are different things. Non-termination means we're stuck whereas an exception means we might be able to recover and try something else, or at least notify that there's a problem and shutdown cleanly. None-the-less they both behave similarly. First, they hijack the normal flow of control in such a way that it never returns back to the caller. And second they are both examples of a kind of weakening in the type system since any type can be substituted for X. Formally they are both examples of functions that diverge (functions that return normally are said to converge).
The Type
Here's a third example of a diverging function in Java. Translate as necessary for your language (in Java System.exit(n) stops the program immediately and returns n to the calling process).

X foo(){ System.exit(1); return null; }
Yet another case where foo() won't compute anything, it diverges. In fact, the return statement after the exit call is dead code. However, this example is slightly different from the other examples because we had to create the dead code for the compiler to be happy[3] and, in fact, for a different "X" like int the return value might have to be changed. That bit of dead code is closely related to the heart of this article.
Java can detect some kinds of dead code. If you write "X foo() { throw new RuntimeException(); return null; };" then Java recognizes that the return is unreachable and complains. In my System.exit(1) example Java didn't recognize that the call to exit() will never return so it required a following return statement. Obviously "throw" is a keyword and can get special attention that a mere library function can't but it would be useful to be able to let Java know that, like throw, exit() diverges.
One of the best ways to tell a compiler how a function behaves is by using types and, in type theory, there's a type that expresses just what we need. The type is called bottom (often written ⊥), and while there are different ways to look at the bottom type I'm going to go with a subtyping based view that should be accessible to C++, C#, and Java programmers.
If a language has subtyping and a function says it will return a type "X" then the function is allowed to return a "Y" instead as long as Y is a subtype of X. In my example of a function that just throws an exception the return type could be anything at all. So if we wanted System.exit(1) to indicate that it diverges the same way throw does, then its return type should be a subtype of all types. And indeed, that's exactly what bottom is.[4] bottom is a subtype of String, and int, and File, and List<Foo>, and every other type. Usefully, conventional type hierarchies are written with supertypes above subtypes, which makes a convenient mnemonic for "bottom" which needs to go below everything on such a hierarchy.
Now, if you're used to OO thinking then you expect a value with a certain subtype to in some sense be substitutable everywhere that a supertype is expected. But how can any one object behave like a String, an int, a File, etc? Remember that bottom indicates divergence: an expression with type bottom can never compute a value. So if exit()'s return type was bottom it would be totally safe to write "String foo() { return System.exit(1); }" while another bit of code could have "int bar() {return System.exit(1); }."
Making it Useful, A Divergence to Scala Divergence
Occasionally it might be useful to indicate that a function diverges. Examples are functions like System.exit(1), or functions that always throw an exception, perhaps after logging something or doing some useful calculation to create the exception. But interestingly out of all the statically typed languages that have any following outside of pure research only Scala has an explicit bottom type, which it calls Nothing. The reason Scala has a bottom type is tied to its ability to express variance in type parameters.

For some reason a lot of programmers run screaming into the night when you say "covariance" or "contravariance." It's silly. I won't get into all the details of variance, but I will say that in Scala the declaration "class List[+T] {...}" means that List[Subtype] is a subtype of List[Supertype]. No screaming necessary. And List[+T] brings me to one extremely practical use of bottom - what type should an empty List have?
Well, an empty List can have type List[String] or List[Foo] or List[int]. T can be whatever. And what's a subtype of whatever for all values of whatever? You guessed it: bottom (Nothing). Indeed, Scala has one constant called Nil whose type is List[Nothing]: a subtype of List[String], List[int], and List[whatever]. It all ties up in a bow when you consider that a List[T] has a method called head which returns the first element of the list as type T. But an empty list has no first value, it must throw an exception. And sure enough, head in List[Nothing] has type Nothing.
C# 4.0 is supposed to be getting definition site variance similar to Scala's but using the clever mnemonic keywords "in" and "out". I haven't heard anything yet on whether it will also add a bottom type, but it would make a lot of sense.
Java has usage site variance using wildcards. You can say "List<? extends Supertype> x" to indicate that x can have a List<Supertype> or a List<Subtype>. The bottom type would be useful in Java, too, although not as compelling since wildcards are so verbose people rarely use them even when they would make sense. Plus, Java folk tend to mutate everything and List[Nothing] in part makes sense in Scala because Scala Lists are immutable.
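To make the wildcard version concrete, a small sketch (the Animal/Dog names and the nil helper are mine):

```java
import java.util.Collections;
import java.util.List;

class Animal { }
class Dog extends Animal { }

class Wildcards {
    // A List<Dog> is not a List<Animal>, but it IS a
    // List<? extends Animal>, so this method accepts both.
    static int countAnimals(List<? extends Animal> xs) {
        return xs.size();
    }

    // Java's closest analogue to Scala's Nil: the "works for every
    // element type" effect comes from a generic method rather than
    // a bottom type like List[Nothing].
    static <T> List<T> nil() {
        return Collections.emptyList();
    }
}
```

Wildcards.nil() can be assigned to a List<Animal>, a List<Dog>, or a List<String>, which is the same trick Nothing pulls off once per constant instead of once per call site.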
C++ does not have any simple way to express this kind of variance so the bottom type is even less compelling in C++ than it is in Java.
Back On Track
Haskell and languages in the ML family don't have an explicit bottom type. Their type systems don't have subtyping so adding bottom as a subtype would confuse things. None-the-less, they do have a nice way to express bottom that can be teleported back to Java, C#, and C++ (but not C). Recall that bottom is a subtype of all types. Another way of saying that is that if a function returns type bottom then for all types A, the function returns something compatible with A so why not express that directly? In Haskell the "error" function takes a string and throws an exception.

Prelude> :type error
error :: [Char] -> a

In Haskell, a lower case identifier in type position is always a type parameter, and [Char] means "list of Char", aka String. So for all types a, if "error" doesn't diverge then it will take a String and return an a. That can pretty much be expressed directly in Java
public static <A> A error(String message) { throw new RuntimeException(message); }
or C++
class message_exception : public std::exception {
public:
  explicit message_exception(const std::string& message) : message_(message) {}
  virtual ~message_exception() throw() {}

  virtual const char * what() const throw() { return message_.c_str(); }

private:
  const std::string message_;
};

template <typename A>
A error(const std::string& message) { throw message_exception(message); }
And for either language, usage would be something like
int divide(int x, int y) {
  if (y == 0) {
    return error<int>("divide by zero"); // drop the "<int>" in Java
  } else {
    return x / y;
  }
}
Haskell also has a function called "undefined" that simply throws an exception with the message "undefined." It's useful when you want to get started writing some code without fully specifying it.
Prelude> :type undefined
undefined :: a
The function isn't as interesting as the type, which promises that for any type a "undefined" can compute an a or diverge. Since "undefined" can't possibly just produce a value of any arbitrary type, it has no choice but to diverge. The same idea can be added to Java
public static <A> A undefined() { return error("undefined"); }

or C++
template <typename A> A undefined() { return error<A>("undefined"); }
In either language it might be used as
std::string IDontKnowHowToComputeThis(int input) {
  return undefined<std::string>(); // again, make appropriate changes for Java
}
Given the requirement to write the "return" keyword in C#, Java, and C++ I'm not sure how practical something like a generified error function really is as compared to having it return an exception and making the user write 'throw error("blah")', nor whether "undefined" is that much more useful than just throwing an UndefinedException, but this does illustrate the relationship between the bottom type and functions that, in Haskell terms, compute "forall a.a" or in C#/Java/C++ terms return the type parameter A without taking an A as an argument.
Also, as always care should be taken when transliterating from one language to another. Java would allow a function like "undefined" to return a null instead of diverging. C++ would allow it to return anything at all and would only fail to compile if it were used in an incompatible way. That contrasts with languages like Haskell and ML in which the only way to implement "undefined :: a" is to make it diverge in some form or fashion[5].
The Bottom Value
I've spent some time talking about the bottom type as having no values. But it does have expressions like "undefined()" and that leads to a rather philosophical notion of how to think of bottom as having a value. Sorta. Skip this section if you don't like total gray beard philosophizing.

If you're brave, then stick with me and imagine a subset of your favorite C derived language that does not allow side effects. No mutation, no IO, and no exceptions. In this strange world functions either just compute values or they diverge by not halting. In such a world the order of evaluation mostly isn't important as long as data dependencies are met - in f(a(), b()), you can compute a() before b() or b() before a(), whatever, they can't interfere with each other. Doesn't matter. The only time order of evaluation is forced on us is when there are data dependencies, so "a() + b()" must compute a() and b() (or b() and a()) before their sum can be computed. Nice.

Well, almost. Order of evaluation can matter for expressions that diverge. Let me give an example.
int infiniteLoop() { return infiniteLoop(); }
int f(int x) { return 42; }
int result = f(infiniteLoop());
Because f ignores its argument, if we call "f(infiniteLoop());" there are two possible outcomes. If "infiniteLoop()" is eagerly evaluated before f is called then the program will diverge. On the other hand, if the expression "infiniteLoop()" is lazily remembered as being potentially needed later then f can successfully return 42 and the diverging expression can be forgotten just like it never happened (because it didn't).
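The two outcomes can be simulated directly in Java by passing a thunk instead of an evaluated value; Supplier stands in for the lazily remembered expression (class and method names here are mine):

```java
import java.util.function.Supplier;

class Lazy {
    // Diverges: Java never reaches the end of this method.
    static int infiniteLoop() { while (true) { } }

    // Eager version: the argument is evaluated before the call,
    // so f(infiniteLoop()) would hang forever.
    static int f(int x) { return 42; }

    // Lazy version: the thunk only runs if fLazy asks for x.get(),
    // and it never does, so the divergence never happens.
    static int fLazy(Supplier<Integer> x) { return 42; }
}
```

Calling Lazy.fLazy(() -> Lazy.infiniteLoop()) happily returns 42, while the eager equivalent would never return.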
We've gone to the pain of eliminating side effects from our language so it's a little irritating to have to keep thinking about evaluation order just to deal with divergence, so we perform a little mental trick; a little white lie. Imagine that functions like infiniteLoop above don't get stuck, they compute a value called ⊥, which is the only value of type bottom.
Now, since the bottom type is a subtype of all types and ⊥ is the bottom value, then it follows that every type must be extended with ⊥. Boolean = {⊥, true, false}, int = {⊥, 0, 1, -1, 2, -2, ...}, and unit = {⊥, ()}. That means we need some rules for ⊥ and how it interacts with everything else. In the vast majority of languages including Java, C#, and C++ but also impure functional languages like F# and OCaml doing just about anything with ⊥ can only compute ⊥. In other words, for all functions f, f(⊥) = ⊥. If you write f(infiniteLoop()) in those languages then the result is ⊥. This kind of rule is called "strictness".
In contrast, Haskell is often called a "lazy language," meaning that expressions aren't evaluated until they're needed. That's not quite technically correct. The Haskell spec just says that it is "non-strict." The spec doesn't care when expressions are evaluated so long as programs allow ⊥ to slide as far as possible. An expression like f(infiniteLoop()) must evaluate to 42. Haskell basically forces an expression involving ⊥ to evaluate to ⊥ only when the argument must be used[6]. The distinction between "lazy" and "non-strict" is subtle, but by being "non-strict" rather than "lazy" a Haskell compiler can use eager evaluation anytime it can prove that doing so doesn't change behavior in the face of ⊥. If a function always uses its first argument in a comparison, then Haskell is free to use eager evaluation on that argument. Since Haskell truly does forbid side effects (unlike our imagined neutered language above), the choice of evaluation strategy is up to the compiler and invisible except for performance consequences[7].
C++, Java, and C# have just a tiny bit of non-strictness. In these languages "true || ⊥" is true and "false && ⊥" is false. If these languages were totally strict then "true || ⊥" would be ⊥. Users of these languages call this behavior "short circuiting" and it's done for performance reasons rather than being a philosophical goal, but it's still a curious departure from their normal rules.
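That short-circuit behavior is easy to check; in this sketch of mine, diverge is the stand-in for ⊥:

```java
class ShortCircuit {
    // Never returns: Java's stand-in for the bottom value.
    static boolean diverge() { while (true) { } }

    static boolean demo() {
        // || and && only evaluate their right operand when the left
        // doesn't already decide the answer, so diverge() is never called.
        boolean a = true || diverge();   // true
        boolean b = false && diverge();  // false
        return a && !b;
    }
}
```

If Java were fully strict, both expressions would hang; instead demo() returns normally.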
There you have it. The bottom value ⊥ is a clever mental hack to allow purely declarative functional code to be reasoned about without injecting sequencing into the logic. It allows people to talk about the difference between a purely declarative strict language and a purely declarative non-strict language without getting into details of evaluation order. But since we're talking about languages that aren't so purely declarative, we can take off our philosopher hats and return back to the world where side effects are unrestricted, bottom is a type with no values and divergence means that flow of control goes sideways.
Tuples and Aesthetics
Last time I talked about the unit type and how, if you interpret types as logical propositions, the unit type behaves as a "true" and tuple types act as logical and "∧." I also talked about an algebraic interpretation where unit type acts like 1 and tuple types act like multiplication "×". So the type (A,B) can also be written as A∧B or A×B. The type (A,unit) is isomorphic to A, A×1 = A, and A∧True <=> A.

Bottom has similar interpretations as 0 or False. The type (A,bottom) is isomorphic to the bottom type because you can't compute any values of type (A,bottom). A×0 = 0, and A∧False <=> False. Nice how it all hangs together, eh?
Bottom behaves like False in another way. In logic if you assume that False is true you can prove anything. Similarly in type systems the bottom type allows the programmer to promise to do even the impossible. For instance, here's a Java function signature that promises to convert anything to anything.
public static <A,B> B convert(A arg) {...}

If you ignore dynamic casting and null (they're both weird in different ways) there's no meaningful way to implement that function except by diverging. More on this in an upcoming episode.
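A sketch of the only honest implementation: the body "keeps" its impossible promise by diverging, here via an exception (the message and exception choice are mine):

```java
class Convert {
    // Promises to turn any A into any B. No value can keep that
    // promise, so the body has no choice but to diverge.
    public static <A, B> B convert(A arg) {
        throw new UnsupportedOperationException("no B can come from an arbitrary A");
    }
}
```

The compiler is satisfied because a throw, like bottom, is compatible with every return type.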
Conclusion
I somehow don't think anybody will be running into the office saying "I just read an article on the bottom type, so now I know how to solve our biggest engineering challenge." But the bottom type is still interesting. It's a kind of hole in your static type system that follows inevitably from the Turing Halting Problem[8]. It says that a function can't promise to compute a string, it can only promise to not compute something that isn't a string. It might compute nothing at all. And that in turn leads to the conclusion that in a Turing complete language static types don't classify values (as much as we pretend they do) they classify expressions.

Footnotes
- gcc does do tail call optimization under some circumstances.
- Oddly, in C, C#, and Java X can't be "void" because it's an error to return an expression of type void. C++ does allow void to make writing templates a bit easier.
- Yes, I can make it a void function and not have a return. But again that would illustrate how exit() is different from a throw: throw doesn't require the return type to be void.
- Note I'm saying "subtype" here and not "subclass." int, List<String>, and List<File> are 3 different types. In C++, Java, and C# int doesn't have a class. In Java and C# List<String> and List<File> both come from the same class. Yet bottom must be a subtype of all 3 so it can't be a subclass.
- A plea to my readers: I'm too lazy to see if "forall a.a" is bottom in F# or C#, or if, like Java, null would be allowed. My bet is that null won't be allowed because only Java has the peculiar notion that type parameters must be bound to reference types.
- Very loosely speaking. Please see the Haskell Report for all the gory details about when Haskell implementations are allowed to diverge.
- Haskell has another clever hack where throwing an exception isn't an effect, but catching one is. Since order of evaluation is unspecified in Haskell, the same program with the same inputs could conceivably cause different exceptions. Exceptions are theoretically non-deterministic in Haskell.
- A language that isn't Turing complete can prove termination and does not need a bottom type, but such a language isn't powerful enough to have a program that can interpret the language.
Thursday, July 23, 2009
Void vs Unit
I suspect most people who come to one of the statically typed functional languages like SML, OCaml, Haskell, F#, or Scala will have only seen static typing in one of the popular C derived languages like C++, Java, or C#. Some differences between the type systems become clear quickly. One particular spot is the relationship and difference between void and unit.
So imagine your favorite C derived language. Consider your language's equivalent of the following function definition [1]
void printHello(String name) { printf("hello %s", name); }
What does the word "void" mean? To any C, C++, Java, C# programmer it means that printHello doesn't return anything. Simple.
But a functional programmer[2] sees it quite differently. This article will explain the different point of view and why it's useful.
What's a Function?
It turns out to be very hard to write a single definition of "function" that covers all the ways the word has been used. But to a functional programmer at the very least a function is something that maps inputs to outputs. In the impure functional languages that function may also have an effect like printing "hello." I'll stick with the impure definition in this article.
Now, when a functional programmer talks about mapping inputs to outputs he or she doesn't mean taking a string as an input and outputting it on the console. Instead he or she is talking about arguments and returns of a function. If something is to have any pretense at being a function, or at least a function that returns normally, then it must return something.
printHello clearly does return normally so a functional programmer insists that it returns something. But a functional programmer also must recognize that it doesn't return anything "meaningful."
Thus rather than the concept "void," a functional programmer thinks of unit. In some languages the unit type is written as (), but for clarity I'm going to stick with "unit." Now, unit is a type, conceptually not much different from say "int". But unit, as its name somewhat implies, has exactly one value conventionally represented as an empty pair of parentheses ().
In OO terms () is a "singleton". There's only one instance of it. It's also immutable. In a very real sense it carries no information. You can't write a program that behaves differently based on what a function of type unit returns - it always returns the same ().
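A quick sketch of that claim, using a hand-rolled one-value Unit enum since Java has no built-in unit; the names are mine.

```java
public class OneValue {
    // The whole unit type: exactly one value, conventionally written ().
    enum Unit { unit }

    static Unit greet() { System.out.println("hello"); return Unit.unit; }
    static Unit farewell() { System.out.println("goodbye"); return Unit.unit; }

    public static void main(String[] args) {
        // The two functions have different effects, but their results are
        // indistinguishable: there is only one Unit value to return.
        System.out.println(greet() == farewell());
    }
}
```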
A Performance Diversion
At this point a little guy in the back of your head might be asking "doesn't returning something meaningless just waste cycles?" Well, it's important to distinguish between abstraction and implementation. The unit value, (), is an abstraction that the implementation is free to optimize away. If you write in Scala
def printHello : Unit = {
  println("hello")
  ()
}
Then the compiler will generate the equivalent of the original void based code. If in Scala you write
print(printHello)
Then the compiler might generate something like
printHello
print( () )
Because () is a singleton it doesn't matter at the machine code/bytecode level that it's not "actually" returned. The compiler can figure it out with no performance hit at all.
Making It Useful
Okay, blah, blah, blah, functional think this, unit that. Whatever. What's the point? () is a value with no information, unit is a type with only that value...what's the practical use?
Well, it turns out that it's useful for writing generic[3] code. Java and C++, somewhat famously, do not have first class functions. But they can be faked. Here's the Java
interface Function<Arg, Return> {
   Return apply(Arg arg);
}

Function<String, Integer> stringLength = new Function<String, Integer>() {
   public Integer apply(String arg) {
      return arg.length();
   }
};
In C++ you can do it essentially the same way with a template and operator() overloading[4]
template <typename Arg, typename Result>
struct Function {
   virtual Result operator()(Arg arg) = 0;
};

struct StringLength : Function<string, int> {
   int operator()(string arg) {
      return arg.length();
   }
};

Function<string, int> *stringLength = new StringLength();
C# has a function type and lambdas so it's very easy to write.
Func<string, int> stringLength = s => s.Length;
So, if stringLength is an object of type Function<String, Integer> / Function<string, int> * / Func<string, int> then I can write the following
int result = stringLength.apply("hello"); // Java
int result = (*stringLength)("hello");    // C++
int result = stringLength("hello");       // C#
So here's the question: how would I convert the printHello function earlier into a Function<String, X>/Function<string, X> */Func<string, X>? What type is X?
If you've been playing along so far you recognize that it "should be" void, but Java and C# don't allow it. It's invalid to write Function<String, void> in either language.
Let's pretend you could. After all, C++ will let you write a class PrintHello : Function<std::string, void>. But there's still a problem even in C++. Generic code might expect a result and none of these languages lets you denote a value of type void. For instance (in Java) imagine this simple bit of reusable code.
public static <A,B> List<B> map(List<A> in, Function<A,B> f) {
   List<B> out = new ArrayList<B>(in.size());
   for (A a : in) {
      out.add(f.apply(a));
   }
   return out;
}
C++ has std::transform and C# has Enumerable.Select, which are more or less the same idea.
Anyway, given something like the above calling map/transform/Map with a list of strings and the printHello object should result in a List<void> of the same length as the original list. But what's in the list? The answer, in a functional language, is a bunch of references to (). Tada, the meaningless is useful for writing generic code.
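To make that concrete, here's the earlier map reused unchanged with a unit-returning function object. The Unit enum is the hand-rolled kind, since Java has no built-in unit value; names are mine.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class GenericMap {
    enum Unit { unit }

    interface Function<Arg, Return> { Return apply(Arg arg); }

    // The same generic map as before -- nothing unit-specific about it.
    public static <A, B> List<B> map(List<A> in, Function<A, B> f) {
        List<B> out = new ArrayList<B>(in.size());
        for (A a : in) {
            out.add(f.apply(a));
        }
        return out;
    }

    // printHello returns Unit where Java would have forced void.
    public static final Function<String, Unit> printHello =
        new Function<String, Unit>() {
            public Unit apply(String name) {
                System.out.println("hello " + name);
                return Unit.unit;
            }
        };

    public static void main(String[] args) {
        // One generic map handles it: the result is a list of three units.
        List<Unit> units = map(Arrays.asList("a", "b", "c"), printHello);
        System.out.println(units.size());
    }
}
```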
C++ will allow a Function<std::string, void> but any attempt to call something like map with it will fail to compile. C++ doesn't have a () value so our generic code has become slightly less generic.
The Work Arounds
Java, C++, and C# programmers do solve this problem all the time. One common option is to not use Function<string, void> but some other concept like "Action<string>" to mean a function that takes a string, performs a side effect, and returns nothing useful. Then you essentially duplicate "map" into something called perhaps "foreach" that expects an Action<T> and returns void. That probably makes sense in this case since a list of references to a meaningless value is perhaps silly, but it also means a great deal of code duplication if this kind of higher order programming is common. For instance, you can't write just one compose function that creates a function from two other functions, you also have to write a compose that takes an action and a function to create an action.
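The compose duplication is worth seeing. With a unit value there's only one compose, because an "action" is just a function whose result type is Unit. A sketch, all names mine:

```java
public class Composer {
    enum Unit { unit }

    interface Function<Arg, Return> { Return apply(Arg arg); }

    // One compose covers every result type, Unit included -- no parallel
    // "Action" version is needed.
    public static <A, B, C> Function<A, C> compose(
            final Function<A, B> f, final Function<B, C> g) {
        return new Function<A, C>() {
            public C apply(A a) { return g.apply(f.apply(a)); }
        };
    }

    public static final StringBuilder log = new StringBuilder();

    public static final Function<String, Integer> length =
        new Function<String, Integer>() {
            public Integer apply(String s) { return s.length(); }
        };

    // A side-effecting "action" is just a function returning Unit...
    public static final Function<Integer, Unit> record =
        new Function<Integer, Unit>() {
            public Unit apply(Integer n) {
                log.append("len=").append(n);
                return Unit.unit;
            }
        };

    public static void main(String[] args) {
        // ...so the same compose glues an ordinary function to an action.
        compose(length, record).apply("hello");
        System.out.println(log);
    }
}
```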
Another answer is to make the printHello function object have type Function<String,Object>/Function<string,void*>/Func<string,object> and return a null. That's pretty disgusting.
Perhaps the best option is to create your own Unit enumeration with only a single value. That does leave Unit nullable in Java, but railing against nullability in Java is another blog post.
enum Unit { unit }; // C++, Java, or C#
As for good old C, well, the type system doesn't do parametric polymorphism so generic code in C is problematic no matter what you do. Still, with function pointers it's conceivable you might find a use for a unit value. Typically a C programmer would use 0.
A Quick Aside About Purity
For the most part I've casually used effects everywhere. Most functional languages like SML, OCaml, F#, and Scala are impure and let the programmer get away with that. Some functional languages, e.g. Haskell without unsafe* extensions, do not. They must track effects using the type system. But even with this caveat, the unit type is still useful to, for instance, distinguish a function that fetches a string from the keyboard, which might have type IO String, from one that prints to the console, which might have type String -> IO Unit.
Conclusion
Void and unit are essentially the same concept. They're used to indicate that a function is called only for its side effects. The difference is that unit actually has a value. That means that unit functions can be used generically wherever a generic first class function is required. The C derived languages miss out by treating void functions as functions that don't return anything instead of as functions that don't return anything meaningful.
Next time I'll talk about functions that really, truly return nothing.
Postscript on Aesthetics and Tuples
Mostly I've talked about the practical value of the unit type in this article, but I do love it when convenient practice and beautiful theory line up. So here's an attack from a different direction.
Many functional languages have tuples. A tuple is an ordered fixed size collection of values of (possibly) different types[5]. Tuple types are usually written with parens. For instance, (Foo, Bar) is the type of pairs (2-tuples) with one Foo and one Bar. The same type is sometimes written as Foo × Bar, which makes sense too. You can see the type as a set that consists of a kind of Cartesian product of the Foo and Bar sets. For instance if Foo has only two possible values {f1, f2} and Bar has only two possible values {b1, b2} then the type Foo × Bar has 4 possible values: {(f1, b1), (f1, b2), (f2, b1), (f2, b2)}.
You can have tuples with higher numbers of members: triples, 4-tuples, and so on.
What about a 1-tuple? Well, the type Foo can be seen as a 1-tuple. There would be no useful difference between the type Foo and the type (Foo).
The question to ask is, given that we have 3-tuples, 2-tuples, and 1-tuples does it make sense to have a 0 tuple? The answer, dear reader, is unit. The unit value is a tuple with no members. That's why Haskell calls both the type and the value "()" - it's like the type (A,B) without the A,B. And just as an empty list and empty set aren't quite "nothing" neither is the empty tuple.
And here's another interesting thing, type theorists will also call the unit type "1" (not to be confused with the value 1). To see why, look at the type (Foo, unit) which can also be written as Foo × 1. If Foo only has 2 possible values {f1, f2} then the entire set of possible values for the type Foo × 1 is {(f1, ()), (f2, ())}. It's the same 2 values extended with a little bit of "meaningless" information that can be stripped off or added back in as desired. Formally speaking it's isomorphic to the type Foo. To abuse notation slightly, Foo × 1 = Foo. So unit is not only the type that has only 1 value, it's the type that behaves like a 1 at the type level.
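The counting argument is mechanical enough to check. A sketch with a two-value Foo; the enum names are mine.

```java
public class FooTimesOne {
    enum Unit { unit }      // the "1" type: exactly one value
    enum Foo { f1, f2 }     // the example type with values {f1, f2}

    // |Foo × 1| = |Foo| × |1|: count the possible pairs.
    static int productSize() {
        return Foo.values().length * Unit.values().length;
    }

    // Stripping off the meaningless () recovers the Foo exactly --
    // that's the isomorphism Foo × 1 = Foo.
    static Foo dropUnit(Foo f, Unit u) { return f; }

    public static void main(String[] args) {
        System.out.println(productSize()); // 2 × 1 = 2, same as |Foo|
    }
}
```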
Footnotes
- In C of course it would be char *name, in C++ it might be char * or I might use the string and iostream libraries. Whatever. The differences aren't that relevant to this discussion.
- To save myself a lot of time I'm using phrases like "functional language" to mean the statically typed functional languages that have been influenced to a large extent by ML. Some of what I'm talking about does translate into some dynamically typed languages as well, although they often "pun" things so that there isn't a unit value that's distinct from nil, null, false or some other singleton value.
- I use the phrase "generic" here to mean what the functional community means by "parametrically polymorphic" or just "polymorphic." However, my target audience is more likely to associate polymorphism with OO dispatch, so "generic" it is.
- In C++ you can also do it without the pure virtual base class and just use template structural typing based on operator(), but shortly it'll be easier for me to describe a problem if I use a base class to give the concept a name. I could also allocate on the stack but that slightly confuses my point. I'm going this route to keep things symmetrical among the languages. Besides, this isn't a tutorial on why C++ isn't Java. On a related note it's a shame C++0x won't have concepts.
- This is quite different from a Python tuple which is really just a list.
Wednesday, July 15, 2009
Groovy Does Not Have Optional Static Typing
Every now and then somebody writes that Groovy supports optional static typing. But it doesn't. They've confused the use of type annotations on variables with a static type system.
This article is emphatically not an entry in the long, sad, and mostly insipid static vs. dynamic debate. It is a clarification of a misunderstanding.
Definitions
The word "static" in "static type system" is related to phrases like "static analysis." Basically, without executing the code in question, a static type system analyzes code to prove that certain kinds of misbehaviors won't ever happen.
The word "dynamic" in "dynamic type system" is related to the "dynamic execution" of a program. In contrast to a static type system, a "dynamic type system" automatically tests properties when expressions are being (dynamically) evaluated.[1]
The Proof
With those definitions it's easy to show that adding type annotations to a Groovy program does not make it statically typed. Create a file named test.groovy. In it define a couple of unrelated classes
class Foo {}
class Bar {}
Now write a function that promises to take a Bar and return a Foo. Add a println for fun.
Foo test(Bar x) { return x }
println("Everything's groovy")
Compile it and run it (explicitly compiling isn't necessary, it just reinforces my point)
~/test$ groovyc test.groovy
~/test$ groovy test
Everything's groovy
There were no complaints at all. A static type system would have rejected the program.
Open up test.groovy and make a slight adjustment so that the function is actually called
Foo test(Bar x) { return x }
println("Everything's groovy")
test(new Bar())
Compile and run this
~/test$ groovyc test.groovy
~/test$ groovy test
Everything's groovy
Caught: org.codehaus.groovy.runtime.typehandling.GroovyCastException: Cannot cast object 'Bar@861f24' with class 'Bar' to class 'Foo'
	at test.test(test.groovy:4)
	at test.run(test.groovy:6)
	at test.main(test.groovy)
A runtime exception is finally thrown at the (dynamic) moment when the test function is executed. That's dynamic checking rather than any form of static proof.
Actual Static Typing
Now open a file named test.scala.
class Foo
class Bar
def test(x : Bar) : Foo = x
Compile as a script
~/test$ scala -Xscript test test.scala
(virtual file):1: error: type mismatch;
 found   : this.Bar
 required: this.Foo
object test {
^
one error found
Ta da, static typing. With no attempt to execute my test function the type checker says "I have reason to believe this might go sideways."
The Misunderstanding
There's an oft repeated phrase that "in a statically typed language variables are typed." There's also a common misunderstanding that static typing and type annotations are the same thing. I can show a simple counter example to both misconceptions with one language: Haskell.
Create a file named Test.hs. Here's a function called flatten. It flattens a list of lists one "level" to just be a list.[2]
flatten = foldl (++) []
No variables were harmed during the creation of that function. It's written in what's called "point free" style. Yet with neither variables nor type annotations I can show it's statically typed by defining a bogus main function
main = print $ flatten [1,2,3,4,5,6]
And trying to compile
~/test$ ghc Test.hs -o test

Test.hs:3:24:
    No instance for (Num [a])
      arising from the literal `1' at Test.hs:3:24
    Possible fix: add an instance declaration for (Num [a])
    In the expression: 1
    In the first argument of `flatten', namely `[1, 2, 3, 4, ....]'
    In the second argument of `($)', namely `flatten [1, 2, 3, 4, ....]'
Thus you need neither variables nor annotations to have static typing. A small change fixes things right up
main = print $ flatten [[1,2,3],[4,5,6]]

~/test$ ghc Test.hs -o test
~/test$ ./test
[1,2,3,4,5,6]
It's easy enough to see what type GHC has assigned the function. Just add a module declaration at the very top of the file
module Main (flatten, main) where
And fire up ghci
~/test$ ghci
Loading package base ... linking ... done.
Prelude> :load Test
Ok, modules loaded: Main.
Prelude Main> :type flatten
flatten :: [[a]] -> [a]
Exactly as described, it takes a list of lists and returns a list.
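For comparison, the same flatten written in Java needs explicit annotations everywhere, but the static rejection works the same way; a sketch, with names of my choosing:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FlattenDemo {
    // The same one-level flatten: a fold that concatenates the inner lists.
    public static <A> List<A> flatten(List<List<A>> in) {
        List<A> out = new ArrayList<A>();
        for (List<A> xs : in) {
            out.addAll(xs);
        }
        return out;
    }

    public static void main(String[] args) {
        // flatten(Arrays.asList(1, 2, 3)) would be rejected at compile time,
        // just as GHC rejected the list of bare numbers -- no execution needed.
        System.out.println(flatten(Arrays.asList(Arrays.asList(1, 2, 3),
                                                 Arrays.asList(4, 5, 6))));
    }
}
```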
Groovy Does Have Static Typing
Having shown that Groovy does not have static typing let me show that it does have static typing. :-) At least, it has static typing for some kinds of things. Open up test.groovy again and add this
class MyRunnable implements Runnable {}
Compile that and you get an error
~/test$ groovyc test.groovy
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed,
test.groovy: 8: Can't have an abstract method in a non-abstract class.
   The class 'MyRunnable' must be declared abstract or the method 'void run()' must be implemented.
 @ line 8, column 1.
   class MyRunnable implements Runnable {}
   ^

1 error
The error happens without executing anything so it's a form of static analysis, and it's caused by failure to adhere to the Runnable type's requirements hence it's a static type error. But it's not an optional static type check - you can't turn it off.
Interesting, no?
Conclusion
Whatever you feel positively or negatively about Groovy, please don't perpetuate the myth that it has optional static types. It doesn't. Groovy's optional type annotations are used for dynamic testing, not static proofs. It's just that with type annotations Groovy dynamically tests for "dynamic type" tags attached to objects rather than using its more usual mechanism of dynamically testing for the presence of certain methods. What little static typing Groovy does have is not optional.
Update: Some time after this article was written, a team announced Groovy++, an extension to Groovy that really does have optional static typing.
Footnotes
[1] In a certain formal sense the phrase "dynamic type system" is an oxymoron and "static type system" is redundant. However, the phrases are commonly used and there doesn't seem to be a commonly recognized concise phrase to describe what a "dynamic type system" does. Pedants will argue for "untyped" but that doesn't give me a concept on which to hang the different ways that say Groovy, Ruby, and Scheme automatically deal with runtime properties.
[2] A more general function called join already exists for all monads in the Control.Monad module. It can be defined as "join m = m >>= id"
Tuesday, April 14, 2009
But, But...You Didn't Mutate Any Variables
In a private email, somebody asked of my previous article "Okay, I can see that there must be state since you've created a state machine. But you aren't mutating any variables so where does the state come from?" It's a good question. If you feel you "got it" then skip this article. But if you're still hesitant or think I'm misusing the word "state" please read on.
First, let me remind that I contend that state is 1) something responding to the same inputs differently over time as far as another observer is concerned and 2) an abstraction and not the representation. None-the-less, state always ends up with some representation. For instance, files can represent state. And even users of programs can hold state (quick, I'm in a drunken state and I plan to write a long rant, who am I?).
I want to ignore all but memory based state and show that, ultimately, even the Erlang code must manipulate mutable memory and that message sending effectively leaks a little bit of that detail. Now, on Reddit an offended Erlanger called me a Java fanboy. As any of my loyal readers know, it is true that I think Java is the most incredibly awesome language EVER and I can hardly stop writing glowing things about it. But for this article I'll struggle against my natural instincts and use Ruby, Haskell, assembly, and Erlang.
I'll use Ruby to illustrate some points about state and representation. First the tired old state diagram and then some Ruby

class SockMachineSimple
  def initialize
    @state = :zerocoins
  end

  def insertcoin
    case @state
    when :zerocoins
      @state = :onecoin
      :nothing
    when :onecoin
      @state = :twocoins
      :nothing
    when :twocoins
      :coin
    end
  end

  def pushbutton
    if @state == :twocoins
      @state = :zerocoins
      :tubeofsocks
    else
      :nothing
    end
  end
end
If you don't read Ruby: Ruby control flow structures generally evaluate to the value of their last contained expression, so explicit returns aren't needed. :foo is a symbol (kinda like a string). @state is a field (which is private) named state. I called it state because it so clearly matches the state space specified by the diagram. But here's another way to write SockMachine with a completely different representation
class SockMachineComplex
  def initialize
    @count = 0
  end

  def insertcoin
    if @count % 3 == 2
      :coin
    else
      @count = @count + 1
      :nothing
    end
  end

  def pushbutton
    if @count % 3 == 2
      @count = @count + 1
      :tubeofsocks
    else
      :nothing
    end
  end
end
Instances of this class keep track of the total number of coins or button pushes ever accepted and use the modulus operator to decide what to do about it. The representation is quite different from that of SockMachineSimple, but to anybody using this class the difference is mostly irrelevant - it still conforms to the same state diagram. Internally it has a very large number of states but externally it still has the same three. And here's one last stateful machine
class SockMachineDelegated
  def initialize
    @machine = SockMachineComplex.new
  end

  def insertcoin
    @machine.insertcoin
  end

  def pushbutton
    @machine.pushbutton
  end
end
By looking at the source it would appear that this version never mutates any variables after being initialized, but of course it does, by calling methods on SockMachineComplex. Hiding the manipulation of representation is not the same thing as making something stateless. And now one last example, one that is totally different because it is not stateful.
class SockMachineStateless
  def pushbutton
    machine = SockMachineComplex.new
    machine.insertcoin
    machine.insertcoin
    machine.pushbutton
  end
end
Internally, pushbutton uses a stateful machine but externally it is not stateful. Short of using a debugger you can't observe the state changes in SockMachineStateless's internal machine. Actually, I guess you could monkey patch SockMachineComplex to do some kind of IO or callback to expose the workings so maybe I shouldn't have used Ruby to illustrate my OO points. Too late now.
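The representation-independence point can be made executable. Here's a sketch in Java (to spare Ruby further embarrassment); the class and method names mirror the Ruby above but are otherwise mine:

```java
public class Machines {
    // Representation 1: an explicit three-valued state field.
    static class Simple {
        private String state = "zerocoins";
        String insertcoin() {
            if (state.equals("zerocoins")) { state = "onecoin"; return "nothing"; }
            if (state.equals("onecoin")) { state = "twocoins"; return "nothing"; }
            return "coin";
        }
        String pushbutton() {
            if (state.equals("twocoins")) { state = "zerocoins"; return "tubeofsocks"; }
            return "nothing";
        }
    }

    // Representation 2: a running count plus modular arithmetic.
    static class Complex {
        private int count = 0;
        String insertcoin() {
            if (count % 3 == 2) return "coin";
            count = count + 1;
            return "nothing";
        }
        String pushbutton() {
            if (count % 3 == 2) { count = count + 1; return "tubeofsocks"; }
            return "nothing";
        }
    }

    // Drive both machines with the same inputs; to an outside observer
    // the two representations are indistinguishable.
    static boolean agree() {
        Simple s = new Simple();
        Complex c = new Complex();
        for (int i = 0; i < 3; i++) {          // two coins accepted, third rejected
            if (!s.insertcoin().equals(c.insertcoin())) return false;
        }
        return s.pushbutton().equals(c.pushbutton()); // both vend socks
    }

    public static void main(String[] args) {
        System.out.println(agree());
    }
}
```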
Hiding State
Okay, but that's OO stuff. Erlang is a functional language and the code never appeared to have any mutable variables or nested mutable objects. So what gives? Well, to illustrate Erlang I'm going to use Haskell, another functional language. Haskell's great advantage to this exposition is that it's really, really easy to get at some nice clean assembly.
So here's a very silly Haskell function for determining if a positive Int is even or odd. Its implementation is silly, but it allows me to make a point.
flurb :: Int -> String
flurb n = if n == 0
          then "even"
          else if n == 1
          then "odd"
          else flurb $ n - 2
This is a pure function - it mutates nothing. Yet when compiled with ghc -S -O to get AT&T syntax assembly, the function looks like this
Main_zdwflurb_info:
        movl (%ebp),%eax
        cmpl $1,%eax
        jl .Lczq
        cmpl $1,%eax
        jne .Lczo
        movl $rxA_closure,%esi
        addl $4,%ebp
        andl $-4,%esi
        jmp *(%esi)
.Lczo:
        addl $-2,%eax
        movl %eax,(%ebp)
        jmp Main_zdwflurb_info
.Lczq:
        testl %eax,%eax
        jne .Lczo
        movl $rxy_closure,%esi
        addl $4,%ebp
        andl $-4,%esi
        jmp *(%esi)
GHC has compiled the flurb function down to a loop and the immutable n variable is represented with the mutable eax register [1]. Some pseudo Haskell/assembly mix might help illustrate
flurb n =
    movl n, eax                -- copy n into the eax register
  Main_zdwflurb_info:
    if eax == 0
    then "even"
    else if eax == 1
    then "odd"
    else
      addl $-2, eax            -- decrement eax by 2
      jmp Main_zdwflurb_info   -- jump to the top of the loop
As you can see, the Haskell code does use mutable "variables" at runtime. This is a common optimization technique in functional languages for dealing with direct tail recursion. But all that machinery is hidden from you as a programmer so just like SockMachineStateless the end result is stateless unless you peel back the covers with a debugger.
Finally, Some Damn Erlang
All right, I've written Ruby and Haskell, generated some assembly, and then written in an impossible chimeric language. But my original question was about Erlang. Here's a simple Erlang counter actor
-module(counter).
-export([create/0, increment/1]).

create() -> spawn(fun() -> loop(0) end).

increment(I) ->
   I ! self(),
   receive X -> X end.

loop(N) ->
   receive From ->
      From ! N,
      loop(N + 1)
   end.
Again, no variables are mutated. But if I assume that Erlang does the same kind of direct tail call optimization that Haskell does, the pseudo Erlang/Assembly loop looks like this
loop(N) ->
   movl N, eax        % copy N into the eax register
.looptop:
   receive From ->
      From ! eax,     % send the current value in eax back to the original sender
      inc eax         % increment eax by 1
      jmp .looptop    % jump back to the top of the loop
   end.
It's still a loop like the Haskell one, but there's an important difference. Each message receive sends back the current value of the mutable eax register and then mutates it. So this behaves a bit like SockMachineDelegated - the original code didn't appear to directly manipulate any mutable variables, but there was mutation under the covers, and unlike SockMachineStateless but like SockMachineDelegated this change of state is visible beyond the local confines. [2]
Now, there are other ways to deal with recursion. I don't know Erlang's implementation, but it doesn't matter. Something is being mutated somewhere and that change is being made visible by messages being sent back. Typically non tail calls mutate the stack pointer so that it points to new stack frames that hold the current value of arguments, or then again some arguments might stay in registers. Tail calls that aren't direct tail recursion might mutate the stack region pointed to by an unchanging stack pointer. Whatever, it always involves mutation, and it's always hidden from the programmer when using pure functions. But actor loops are not pure functions. The sends and receives are side effects that can allow a little tiny bit of the mutable machine representation to be mapped to state that an observer can witness.
But Really Where Are The Mutable Variables?
So all that's fine, but it doesn't really show where the state is in my original Erlang code. The code didn't have an N to stick in a mutable register or a mutable stack frame. Here's the core of it.
zerocoins() ->
   receive
      {coin, From} ->
         From ! nothing,
         onecoin();
      {button, From} ->
         From ! nothing,
         zerocoins()
   end.

onecoin() ->
   receive
      {coin, From} ->
         From ! nothing,
         twocoins();
      {button, From} ->
         From ! nothing,
         onecoin()
   end.

twocoins() ->
   receive
      {coin, From} ->
         From ! coin,
         twocoins();
      {button, From} ->
         From ! tubeofsocks,
         zerocoins()
   end.
For this final answer I'm afraid I have to do a bit of hand waving. The explanation is even more abstract than the mutable variable as register/stack local explanation. You see, no matter what, your code always mutates one register: the instruction pointer. Its job is to point to the next instruction to be executed. As the CPU executes instructions the IP moves around, typically just being bumped up a bit for everything but jumps which move it more substantially.
In purely functional code the IP moves around but these moves can't be observed by other parts of the program as they are happening. In other words, in purely functional code, the changing IP is invisible. It's a completely black box unless you use a low level debugger. It's very much like SockMachineStateless where the internals were mutable but invisible to the outside world.
But with message receives the IP can be induced to move around based on things communicated from outside the function. The IP's current instruction can then define in large part what a message received by a process will do. If the IP points at the receive in the "zerocoins()" function then that's one behavior, and if it points at the receive in the "twocoins()" function then that's another. The different behaviors can be observed by other actors via messages sent back to them. When an actor sends a SockMachine a coin or buttonpress message it may indirectly be causing the IP to be mutated. And when it gets back nothing, coin, or tubeofsocks it's really being told, in a very indirect way, something about the value of the IP.
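That hand waving can be modeled concretely. Here's a sketch in Java where each "state" is literally just the function currently handling messages - a stand-in for where the IP is parked. All names are mine.

```java
public class SockBehaviors {
    // A behavior maps a message to a reply plus the next behavior --
    // a pure model of "which receive the instruction pointer is parked at".
    interface Behavior { Result on(String msg); }

    static final class Result {
        final String reply;
        final Behavior next;
        Result(String reply, Behavior next) { this.reply = reply; this.next = next; }
    }

    static Behavior zerocoins() {
        return msg -> msg.equals("coin") ? new Result("nothing", onecoin())
                                         : new Result("nothing", zerocoins());
    }

    static Behavior onecoin() {
        return msg -> msg.equals("coin") ? new Result("nothing", twocoins())
                                         : new Result("nothing", onecoin());
    }

    static Behavior twocoins() {
        return msg -> msg.equals("coin") ? new Result("coin", twocoins())
                                         : new Result("tubeofsocks", zerocoins());
    }

    // Feed a message sequence through, collecting the observable replies.
    static String drive(String... msgs) {
        Behavior current = zerocoins();
        StringBuilder replies = new StringBuilder();
        for (String m : msgs) {
            Result r = current.on(m);
            if (replies.length() > 0) replies.append(" ");
            replies.append(r.reply);
            current = r.next;   // "the instruction pointer moves"
        }
        return replies.toString();
    }

    public static void main(String[] args) {
        System.out.println(drive("coin", "coin", "coin", "button"));
    }
}
```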
Conclusion
State is not the same thing as representation. None-the-less, with an omniscient view of things you can always find the representation of state. It might be in bits stored on a drive, it might be in neurons in a user's head, or it might be in a computer's memory and registers. The representation might not be directly visible in the source you're looking at but hidden in low level machinery. It might just be the instruction pointer pointing at different parts of your code base.
If you equate state with representation you end up with the very un-useful property that everything is stateful since at some level of representation every bit of executing code on your computer involves stateful processes. This view robs you of the ability to think about high level languages, especially purely functional ones like Haskell, at an appropriate level of abstraction.[3]
Purely functional code ensures that that the state represented by registers and stack pointers and instruction pointers is kept local. But side effects like message sends and receives allow that low level machinery to be used as the representation of shared mutable state even if you don't see it in your code.
In the end, it's far more useful in most cases to think of state from the point of view of an observer than the point of view of its implementation. The SockMachine actor was stateful regardless of how that state was represented simply because other actors could observe it changing state. Digging down into how send and receive allow a modicum of access to the instruction pointer just isn't a useful model for thinking about stateful actors normally. So the short answer to the original question is "Who cares how the state was represented? It was shared mutable state."
Footnotes
- Actually, it appears to be moving n back and forth between memory pointed to by ebp and eax, which seems oddly wasteful given that it never branches out of this function. It also does 3 tests when only 2 should be necessary.
- Technically the Erlang code probably can't keep the current value of N in a register when calling receive since receive may cause a task switch to another process, so assume that receive copies registers out to memory before executing then swaps them back in when its done.
- Some people equate state with data, but that's just as bad. Code is data so all code must be stateful - another useless conclusion.