jueves, 12 de noviembre de 2015

Custom pattern-matching and advanced macrology

Hello, and welcome to the 3rd entry in the Lux Devlog, a blog dedicated to the design and implementation of the Lux Programming Language, an upcoming functional language in the Lisp tradition.

Today, I'll talk about 2 really interesting subjects that go hand in hand and that give Lux a lot of its power and expressiveness.

First: Custom pattern-matching


Before explaining the details, I'll talk a bit about the genesis of this feature...

Back in the early days of Lux's design & development, I was thinking about syntax for lists.
Some languages (such as Haskell and JavaScript) offer custom syntax for list or array data-structures since the regular syntax for building data-structures tends to be a bit cumbersome to use, while lists are very commonly used data-structures.

Just consider the difference between writing this:
 [1, 2, 3, 4]  
versus writing this:
 Cons 1 (Cons 2 (Cons 3 (Cons 4 Nil)))  

While designing Lux, I came to the conclusion that adding custom syntax for lists was, in a way, betraying the language. Rather than come up with a quick-and-dirty fix to the problem of lists, I wanted to have consistent and general ways to deal with data-structures, while also having comfortable syntax for lists. In short, I wanted to have my cake and eat it too.

The solution for building lists was pretty easy. In a language with macros, the easiest way to fix these kinds of syntax issues was to write a macro... and that's exactly what I did:

 (@list 1 2 3 4) ## This might surprise some of you, as an earlier version didn't have the @ sign...  

There is also an alternative version, which takes a "tail" as its last argument:

 (@list& 1 2 3 4 (@list 5 6 7 8))  

OK, so that takes care of building lists, but there is still something missing...
There's no point in building data-structures if you can't tear them apart later on. The mechanism for that is pattern-matching. And here is where we encounter our problem...

Macros work well in regular code, but pattern-matching is special. Patterns are not expressions. You're not supposed to evaluate a pattern in order to get something out of it.
However, the syntax for writing patterns turns out to be identical to the syntax for writing (certain kinds of) expressions.

The answer to this problem might seem obvious to a lot of you... and it's also obvious to me (in hindsight). But to a prior version of me, many months ago, it wasn't such an obvious thing. I struggled with it for weeks and even considered just having custom syntax for lists and giving up on the subject...

And then I had my "aha!" moment. If macros give me the syntax to build lists, and that syntax is the same one I need for patterns, then why not just generate patterns with macros too? Sounds obvious, right? (In hindsight...)

But there was still the matter of how to implement it.
Do I traverse the patterns to check for all macros that show up and expand those?
That seems easy, but then I thought of something...

Macros are pretty flexible tools. Building code for making data-structures is just one of the myriad things you can achieve.
But what if my patterns could care about more than just destructuring things?
I can't just expand macros in place whenever I see them, because I'd be assuming every macro is there to generate destructuring code for me.
I need to add more control, and include macros that allow me to do more than just easily decompose data-structures.

And so, the idea for custom pattern-matching was born.

The concept is pretty simple: the pattern-matching macro (case) checks patterns to see if there is a top-level macro invocation. If so, that macro gets invoked with both the pattern and the body that is to be executed for that pattern. Whatever comes out of the macro gets substituted for the pattern and the body, and the macro-expansion process is repeated until no more macros are left.

The beautiful thing is that, since the body is also included in the macro call, you can have pattern-matching macros that transform their bodies or that repeat them an arbitrary amount of times. The power unleashed is very impressive, and I have only scratched the surface of what can be achieved.
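If you want a feel for the mechanics, here's a toy sketch in Python (not Lux's actual implementation; every name here is made up) of that expand-and-repeat loop, using an \or-style macro as the guinea pig:

```python
# A toy model of how `case` might expand pattern-matching macros.
# S-expressions are nested Python lists; a pattern-matching "macro"
# maps (args, body) to a flat list of replacement patterns and bodies.

PATTERN_MACROS = {}

def pattern_macro(name):
    def register(fn):
        PATTERN_MACROS[name] = fn
        return fn
    return register

@pattern_macro("\\or")
def expand_or(args, body):
    # Repeat the body once for every alternative pattern.
    out = []
    for pattern in args:
        out.append(pattern)
        out.append(body)
    return out

def expand_case_branches(branches):
    # branches is a flat list: pattern, body, pattern, body, ...
    result = []
    i = 0
    while i < len(branches):
        pattern, body = branches[i], branches[i + 1]
        if isinstance(pattern, list) and pattern and pattern[0] in PATTERN_MACROS:
            expansion = PATTERN_MACROS[pattern[0]](pattern[1:], body)
            assert len(expansion) % 2 == 0, "macros must emit pattern/body pairs"
            # Re-process the expansion, in case it contains more macro calls.
            result.extend(expand_case_branches(expansion))
        else:
            result.append(pattern)
            result.append(body)
        i += 2
    return result
```

Running the weekend? example through this model turns the single \or branch into two ordinary branches sharing the same body.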

Now, without further ado, it's time for some demonstrations :D

 (def (to-pairs xs)  
   (All [a] (-> (List a) (List (, a a))))  
   (case xs  
     (\ (@list& x1 x2 xs'))  
     (@list& [x1 x2] (to-pairs xs'))  
     _  
     #;Nil))  

The \ macro has the simple task of expanding every macro it finds inside the pattern. It's the simplest of the pattern-matching macros and its use is very common (especially when working with lists).

 (deftype Weekday  
   (| #Sunday  
      #Monday  
      #Tuesday  
      #Wednesday  
      #Thursday  
      #Friday  
      #Saturday))

 (def (weekend? day)  
   (-> Weekday Bool)  
   (case day  
     (\or #Sunday #Saturday)  
     true  
     _  
     false))  

The \or macro repeats the body given to it for every pattern you give it. That way, you can reuse the body whenever you have patterns that involve returning the same result.

 ## This is an actual structure from the lux/meta/ast file:  
 (defstruct #export AST/Eq (Eq AST)  
   (def (= x y)  
     (case [x y]  
       (\template [<tag> <struct>]  
        [[[_ (<tag> x')] [_ (<tag> y')]]  
        (:: <struct> (= x' y'))])  
       ([#;BoolS   Bool/Eq]  
        [#;IntS    Int/Eq]  
        [#;RealS   Real/Eq]  
        [#;CharS   Char/Eq]  
        [#;TextS   Text/Eq]  
        [#;SymbolS Ident/Eq]  
        [#;TagS    Ident/Eq])  

       (\template [<tag>]  
        [[[_ (<tag> xs')] [_ (<tag> ys')]]  
        (and (:: Int/Eq (= (size xs') (size ys')))  
             (foldL (lambda [old [x' y']]  
                      (and old (= x' y')))  
               true  
               (zip2 xs' ys')))])  
       ([#;FormS]  
        [#;TupleS])  

       [[_ (#;RecordS xs')] [_ (#;RecordS ys')]]  
       (and (:: Int/Eq (= (size xs') (size ys')))  
            (foldL (lambda [old [[xl' xr'] [yl' yr']]]  
                     (and old (= xl' yl') (= xr' yr')))  
              true  
              (zip2 xs' ys')))  

       _  
       false)))  

\template is the pattern-matching sibling to the do-template macro. It allows you to reuse the code of the body, with some minor modifications to make it more suitable to each particular case.
After the \ macro, this is probably the most handy pattern-matching macro to have around.
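To make the idea concrete, here's a rough Python model (purely illustrative; Lux's real \template operates on AST nodes, not Python lists) of the substitute-per-row behavior that \template and do-template share:

```python
# A toy model of \template: substitute each row of arguments into a
# pattern/body template, producing one pattern/body pair per row.

def instantiate_template(params, template, rows):
    # params: placeholder names, e.g. ["<tag>", "<struct>"]
    # template: [pattern, body], with placeholders appearing inside
    # rows: one list of arguments per instantiation
    def substitute(form, bindings):
        if isinstance(form, list):
            return [substitute(f, bindings) for f in form]
        return bindings.get(form, form)

    out = []
    for row in rows:
        bindings = dict(zip(params, row))
        for part in template:
            out.append(substitute(part, bindings))
    return out
```

Each row yields a fresh copy of the pattern and the body, with the placeholders swapped out, which is exactly the shape case expects.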

 ## Please forgive the super-contrived example  
 (deftype MyRecord  
   (& #foo Int  
      #bar Int  
      #baz Text))  
   
 (def (sum rec)  
   (-> MyRecord Int)  
   (case rec  
     (\slots [#foo #bar])  
     (i:+ foo bar)))  

\slots is Lux's counterpart to Clojure's :keys destructuring syntax.

 ## Again, sorry for the contrived example...  
 (def type-1 "foo")  
 (def type-2 "bar")  
 (def type-3 "baz")  
   
 (def (process-message message)  
   (-> (, Text Data) (,))  
   (case message  
     (\~ [(~ type-1) data]) (do-something      data)  
     (\~ [(~ type-2) data]) (do-something-else data)  
     (\~ [(~ type-3) data]) (do-another-thing  data)))  

Have you ever wanted to reuse a literal value in a situation that doesn't allow you the use of variables?
That's a bit problematic, as you end up repeating the same literal value over and over again, introducing the risk of bugs should the value ever change.

The \~ macro is there for precisely this purpose. Just tell it what you need inlined and let it work its magic.
Note: It only works with Bool, Int, Real, Char and Text values.

Finally, I've got a nice treat for you guys...

Lux favors eager evaluation over lazy evaluation. However, we all know that sometimes laziness can be useful, and there are even some data-structures that depend on it, such as streams.

Lux offers a type for doing lazy evaluation:

 (deftype #export (Lazy a)  
   (All [b]  
     (-> (-> a b) b)))  

In Lux, Lazy is just like the Cont type in Haskell, except that its arguments are in the reverse order.
Streams are defined in terms of Lazy:

 (deftype #export (Stream a)  
   (Lazy (, a (Stream a))))  

This means that streams are actually functions.

Now... some of you might think "if streams are functions, that means I can't pattern-match against them".
Well, my friend, you're wrong!

 (def (take-s n xs)  
   (All [a] (-> Int (Stream a) (List a)))  
   (if (i:<= n 0)  
     #;Nil  
     (case xs  
       (\stream& x xs')  
       (#;Cons x (take-s (dec n) xs')))))  

The \stream& macro modifies the body so that pattern-matching on streams amounts to running the functions appropriately to extract the values.
Thanks to pattern-matching macros, we can actually pattern-match against functions ^_^ .
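If you want to see the trick outside of Lux, here's a small Python sketch (illustrative names only, not Lux's API) of streams-as-functions: a stream takes a continuation and hands it the head plus the tail stream, which is the kind of call that \stream& generates for you behind the scenes:

```python
# Streams as functions: a stream is a function that takes a
# continuation and calls it with (head, tail-stream). This mirrors
# Lux's (Stream a) = (Lazy (, a (Stream a))) definition.

def iterate(f, x):
    # The stream x, f(x), f(f(x)), ... built lazily.
    def stream(k):
        return k(x, iterate(f, f(x)))
    return stream

def repeat(x):
    # An infinite stream of the same value.
    def stream(k):
        return k(x, repeat(x))
    return stream

def take_s(n, xs):
    # What matching with (\stream& x xs') amounts to: run the function,
    # passing a continuation that receives the head and the tail.
    if n <= 0:
        return []
    return xs(lambda x, tail: [x] + take_s(n - 1, tail))
```

Nothing is computed until take_s actually calls the stream, so the infinite streams above are perfectly safe to build.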

BTW, I have talked about which macros come by default in the Lux standard library, but I haven't shown how they're implemented.
Just so you can get an idea, here's the implementation for the \ macro:

 (defmacro' #export (\ tokens)  
   (case tokens  
     (#Cons body (#Cons pattern #Nil))  
     (do Lux/Monad  
       [pattern+ (macro-expand-all pattern)]  
       (case pattern+  
         (#Cons pattern' #Nil)  
         (wrap (@list pattern' body))  
        
         _  
         (fail "\\ can only expand to 1 pattern.")))  
     
     _  
     (fail "Wrong syntax for \\")))  

Also, I haven't mentioned something very important.
Even though those macros can only work thanks to the case macro macro-expanding the patterns, there are other macros out there which use case in their implementations, and they can also benefit from pattern-matching macros.

Some of those macros are the let macro, the lambda macro, and the do macro.
That's right, you can use custom pattern-matching against the arguments to your functions, or inside your do-notation, or even in good ol' let forms. How cool is that!?

_____________________________________________________________________________

Second: Inter-operating macros


I know I've talked a lot already, but there's one other topic I want to discuss on this post, and that is how to get macros to inter-operate.

As the custom pattern-matching examples show, you can unlock great power when your macros can work together, rather than separately, and Lux already reaps some of that power outside of the boundaries of pattern-matching.

To get macros to play together, you need to do 2 things:

  1. Some macros must perform macro-expansions on the arguments they receive, in order to process the outputs
  2. There must be some kind of "contract" between the outer macro and the inner macros

The first part is pretty obvious: if case didn't do macro-expansion on the patterns it receives, none of the magic would happen.

But the second part is often missed by macro writers in other lisps. Without a common contract, communication becomes impossible and there can be no cooperation.

What's a common contract?


Consider this: whatever your pattern-matching macros generate, some rules must always stand:

  1. They must have an even number of outputs (since you're substituting both the pattern and the body).
  2. For the patterns being generated, they must either be primitive forms suitable for regular pattern-matching, or they must be macro calls to be further expanded.

If either of these 2 rules is broken, the case macro is going to complain about it.

However, this isn't the only common contract macros can have and Lux already has a few macros with their own contracts.

The defsig common contract


You might remember the defsig macro from the last blog post (if not, I advise you to go read it).
What you might not know is that you can actually use other macros inside its body.

Here's a nice example (from the lux/control/number module):

 (defsig #export (Number n)  
   (do-template [<name>]  
     [(: (-> n n n) <name>)]  
     [+] [-] [*] [/] [%])  
   
   (do-template [<name>]  
     [(: (-> n n) <name>)]  
     [negate] [signum] [abs])  
   
   (: (-> Int n)  
     from-int)  
   )  

The Number signature provides simple math operations.
There are already implementations for Int and Real in the standard library.
And, as you can see, I make liberal use of the do-template macro to reduce boiler-plate.

The reason why this works is simple: every member of the signature must take the form:
(: <type> <name>)

Anything that generates forms of that kind is going to be welcomed. You could even implement and use your own macros in there, provided that they generate that kind of code.

defstruct also has a similar contract...

The defstruct common contract


 (defstruct #export Int/Number (Number Int)  
   (do-template [<name> <op>]  
     [(def (<name> x y) (<op> x y))]  
  
     [+ _jvm_ladd]  
     [- _jvm_lsub]  
     [* _jvm_lmul]  
     [/ _jvm_ldiv]  
     [% _jvm_lrem])  
   
   (def (from-int x)  
     x)  
   
   (def (negate x)  
     (_jvm_lmul -1 x))  
   
   (def (abs x)  
     (if (_jvm_llt x 0)  
       (_jvm_lmul -1 x)  
       x))  

   (def (signum x)  
     (cond (_jvm_leq x 0) 0  
       (_jvm_llt x 0) -1  

       ## else  
       1))  
   )  

In this case, what defstruct is looking for is forms that define things.
Note that the def macro being used here is the very same one used to define everything else in Lux.

Pretty cool, huh?

Finally, there's one last piece of macro awesomeness I want to share before we call it quits.
I came up with it fairly recently, so I haven't settled on a name yet.
For now, let's just call it let%.

Before I explain how it works, I'll show the itch it's meant to cure:

 (defstruct #export Json/Read (Read Json)  
   (def (read input)  
     (case (:: JsonNull/Read (read input))  
       (#;Some value)  
       (#;Some (#Null value))  
    
       #;None  
       (case (:: JsonBoolean/Read (read input))  
         (#;Some value)  
         (#;Some (#Boolean value))  
   
         #;None  
         (case (:: JsonNumber/Read (read input))  
           (#;Some value)  
           (#;Some (#Number value))  
   
           #;None  
           (case (:: JsonString/Read (read input))  
             (#;Some value)  
             (#;Some (#String value))  
   
             #;None  
             (case (:: (JsonArray/Read [read]) (read input))  
               (#;Some value)  
               (#;Some (#Array value))  
   
               #;None  
               (case (:: (JsonObject/Read [read]) (read input))  
                 (#;Some value)  
                 (#;Some (#Object value))  
   
                 #;None  
                 #;None))))))  
     ))  

Do you see that? That train-wreck? That monstrosity?

It's from a JSON library for Lux I'm working on.
The Read signature in Lux is for structures that try to parse something out of text. If they fail, you get #None.

As you can see, I'm having to do some testing, to try to figure out what I'm parsing, but the code is ruled by repetitive case expressions where everything is the same, except what parser I'm using and what tag to give to the result.

Surely, there must be a better way of doing it!

First... let's flatten the structure:

 (defstruct #export Json/Read (Read Json)  
   (def (read input)  
     (|> #;None  
         (case (:: (JsonObject/Read [read]) (read input))  
           (#;Some value)  
           (#;Some (#Object value))  
   
           #;None  
           )  
         (case (:: (JsonArray/Read [read]) (read input))  
           (#;Some value)  
           (#;Some (#Array value))  
   
           #;None  
           )  
         (case (:: JsonString/Read (read input))  
           (#;Some value)  
           (#;Some (#String value))  
   
           #;None  
           )  
         (case (:: JsonNumber/Read (read input))  
           (#;Some value)  
           (#;Some (#Number value))  
   
           #;None  
           )  
         (case (:: JsonBoolean/Read (read input))  
           (#;Some value)  
           (#;Some (#Boolean value))  
   
           #;None  
           )  
         (case (:: JsonNull/Read (read input))  
           (#;Some value)  
           (#;Some (#Null value))  
   
           #;None  
           ))  
     ))  

This might not be much, but it's a start.
By using the piping macro |>, I can avoid all the nesting and keep all the tests in the same level.
Now it's even more obvious that every form in there has the same shape, minus the parser and the tag.

Man... wouldn't it be nice if we just had a macro for repeating things, while passing in parameters...

 (defstruct #export Json/Read (Read Json)  
   (def (read input)  
     (|> #;None  
         (do-template [<struct> <tag>]  
           [(case (:: <struct> (read input))  
              (#;Some value)  
              (#;Some (<tag> value))  
   
              #;None  
              )]  
       
      [(JsonObject/Read [read]) #Object]  
      [(JsonArray/Read [read])  #Array]  
      [JsonString/Read          #String]  
      [JsonNumber/Read          #Number]  
      [JsonBoolean/Read         #Boolean]  
      [JsonNull/Read            #Null]))  
         ))  

do-template seems like a pretty wise choice here, doesn't it?
The problem is that it doesn't play well with |>, as |> doesn't do any kind of macro-expansion of its member forms prior to piping them.
Because of that, I can't combine |> with do-template; as awesome as that might be.

let% to the rescue

 (defstruct #export Json/Read (Read Json)  
   (def (read input)  
     (let% [<tests> (do-template [<struct> <tag>]  
                      [(case (:: <struct> (read input))  
                         (#;Some value)  
                         (#;Some (<tag> value))  
   
                         #;None  
                         )]  
             
                      [(JsonObject/Read [read]) #Object]  
                      [(JsonArray/Read [read])  #Array]  
                      [JsonString/Read          #String]  
                      [JsonNumber/Read          #Number]  
                      [JsonBoolean/Read         #Boolean]  
                      [JsonNull/Read            #Null])]  
       (|> #;None  
           <tests>))  
       ))  

let% is meant for those situations in which you want to expand certain parts of the bodies of macros prior to expanding outer macros. With let%, you can bind symbols to arbitrary amounts of forms produced by macro expansions, and those forms are then spliced wherever the symbols appear, further down the line.
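In other words, let% behaves like a bind-then-splice operation over code. Here's a toy Python model (not Lux's implementation; forms are just nested lists here, and the names are made up) of that splicing step:

```python
# A toy model of let%-style splicing: a symbol is bound to a LIST of
# forms, and every occurrence of the symbol gets replaced by all of
# those forms, spliced in-line rather than nested.

def splice(form, bindings):
    if not isinstance(form, list):
        return form
    out = []
    for item in form:
        if isinstance(item, str) and item in bindings:
            # Splice: extend with every bound form, not append one list.
            out.extend(splice(f, bindings) for f in bindings[item])
        else:
            out.append(splice(item, bindings))
    return out
```

This is why a single <tests> symbol can stand in for a whole series of case forms inside the |> pipeline.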

With this, I can do all of my parsing while only focusing on the common logic and the details that change.

Note: let% is just a temporary name until I come up with something better.
Do you have any suggestion? Leave it in the comments.
And, yes, I have already considered with-macro-expansions, but it's too damned long...

_____________________________________________________________________________

That's it for today.
I know it was a long post, but hopefully you'll now understand the amazing power that can be unlocked when you write macros that play well together.

See you next time :D

jueves, 5 de noviembre de 2015

Lux's values, types, signatures & structures

Hello, and welcome to the 2nd entry in the Lux Devlog, a blog dedicated to the design and implementation of the Lux Programming Language, an upcoming functional language in the Lisp tradition.

Today I'll touch 4 topics that are essential to the functioning & structure of Lux programs. You'll also learn about some of the unconventional language design involved in Lux and how to use these features.

Without further ado, let's begin!

Values


Lux gives you 5 different kinds of base values and 3 different kinds of composite values. This is besides having access to whatever types of objects the host-platform provides.

First, we've got the Bool type, with only two member values: true and false.

Second, we've got the Int type. Right now, it ranges over the values in Java's long type.

Third, we've got the Real type. It ranges over the values offered by Java's double type.

Fourth, we've got the Char type. Its syntax is a little unconventional; chars look like these: #"a", #"\n", #"3".

Fifth, we've got the Text type. It's basically just strings, and has "the same conventional syntax everyone is used to".

Regarding the composite values, Lux offers variants, tuples and records (or sums and products, as they're also called; with records being a kind of product).

Tuples always look the same; just a sequence of values encased in brackets, like this one: [1 true "YOLO"].
There's a special tuple called "unit". It's special because all of its instances are identical, and the reason for that is simple: it's empty [].

Variants are structures containing 2 elements: a tag and a value; like this: (#Some 123)
If the value is unit, you can write something like (#None []), or, for convenience, you can just write #None.
If you're trying to associate more than 1 value with a tag, you'll need to use tuples for that. An example is the #Cons tag: (#Cons [head tail])
However (also for convenience), you can omit the brackets and Lux will infer you're using a tuple when it sees more than 1 element following the tag. E.g. (#Cons head tail)

Finally, we have to talk about records. Their syntax looks like this: {#foo 1 #bar true #baz "YOLO"}
The order in which you specify the fields doesn't matter to the compiler.

Records, I must reveal, are special among all of the value-types that I have mentioned, in that they only exist as a syntactic structure. In reality, all records are translated to tuples.

Now that I have talked about these fairly simple & easy concepts, let's complicate things a bit. Let's talk about why the basic data-types have unconventional names, what are tags and how Lux stores variants & tuples.

What's the deal with the names?


You might find it annoying that I'm using the terms Bool, Int, Real, Char and Text for stuff that already has names within the Java platform (Boolean, Integer, Double, Character and String). The reason for me doing so has to do with portability.

Here's the thing: I'm designing Lux so it's easy to port to other platforms. Because of that, I had to make a few choices regarding what can or can't be done in Lux, and how you do it.

One thing you might have noticed if you were careful is that I never mention that Lux has support for floats (only for doubles). The same is true for bytes, shorts & ints (the 32-bit variety). That must look strange, coming from a language with a JVM compiler.

Well... the first thing is that you can use all of those types; Lux just doesn't give you literal syntax for making them, so you have to resort to casting (there are a bunch of casting special forms in Lux, like _jvm_l2i, a.k.a. long to int).

The second thing is that I can't rely on all platforms offering that much variety in terms of datatypes, so Lux was designed to rely on as few data-types as possible.

The funny thing is that, even the amount of types Lux has is too much for some platforms (consider that JavaScript lacks both chars & ints). However, those 5 basic types seemed like a good enough set.

Finally, there's the issue of the names (sorry that it took me so long to actually talk about the subject of this subsection :P )

Since Lux's 5 basic types are meant to shield you from the details of what the host platform provides, there wasn't any reason to base their names on the types of any one platform (remember that, for instance, JS has Number, rather than Int or Float). So, I got creative, I kept the names short and I fixed (in my opinion) some historical mistakes (floats were renamed as reals because a type's name should reflect its meaning, rather than its implementation; and strings were renamed as text because... well... nothing else made sense).

What the hell are tags?


This is a very important question, considering that this is a Lisp we're talking about.
First things first: tags are not Lux's equivalent to Lisp keywords. Lux doesn't even have an equivalent to keywords.

Tags are, in reality, nothing more than glorified ints.
That's it. There you have it, folks. Now you know Lux's deep, dark secret.

Wait... ints? What the heck is going on here!?

Ok, ok, ok. Here's what's going on: you can't use tags before you declare them and when you declare them, they get assigned int values. We'll talk more about declaring tags in the Types section.
Suffice it to say that ints make it possible for Lux to do efficient pattern-matching on variants, since it's faster than comparing text tags, and it also makes it possible to reorder records as tuples, since you always know where every field needs to go.

How does Lux store variants & tuples?


Some of you might be familiar with Scala and think that, just like that language, Lux must create classes every time you define new data-types.
If you think that, you're dead wrong.

You shouldn't feel bad about it, though. I almost went down a similar road in the early days of Lux's design. Eventually, though, I came up with a much simpler model: arrays.

It's very simple, really: n-tuples are just n-arrays, while variants are 2-arrays (index 0 is reserved for the tag, index 1 is for the value).
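For those who like to see the runtime model spelled out, here's a Python sketch of that representation (illustrative only; Lux compiles to host arrays, and Python lists merely stand in for them here):

```python
# Lux's runtime model, sketched with Python lists standing in for
# arrays: an n-tuple is an n-element array; a variant is a 2-element
# array [tag, value], where tags are just ints in declaration order.

NIL, CONS = 0, 1  # what #;Nil and #;Cons get assigned at declaration

nil = [NIL, []]  # the associated value is unit: the empty tuple

def cons(head, tail):
    # (#;Cons head tail) carries its two values as a 2-tuple (2-array).
    return [CONS, [head, tail]]

def to_python_list(xs):
    # Tearing a variant apart is just indexing into the array.
    tag, value = xs
    if tag == NIL:
        return []
    head, tail = value
    return [head] + to_python_list(tail)
```

Pattern-matching then boils down to comparing the int at index 0 and reading the payload at index 1, which is about as cheap as it gets.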

Types


Types are very interesting in Lux, for a few reasons:
  • They are first-class objects
  • They are structural, rather than nominal
  • If you know what you're doing, you can do awesome things with them

First-class types


The trick to do this is to make types using the very same data-types I discussed in the previous section, and to allow the compiler to evaluate those data-structures and incorporate them into the type-checking phase.

To find out how to make types, let's first check out their type:
 (deftype #rec Type  
   (| (#DataT Text (List Type))  
      (#VariantT (List Type))  
      (#TupleT (List Type))  
      (#LambdaT Type Type)  
      (#BoundT Int)  
      (#VarT Int)  
      (#ExT Int)  
      (#UnivQ (List Type) Type)  
      (#ExQ (List Type) Type)  
      (#AppT Type Type)  
      (#NamedT Ident Type)  
   ))  

#DataT is for representing host-platform types. It includes the name of the type and any type parameters it might have (if it is, for instance, a generic type in Java).

#VariantT and #TupleT require a list of all the member types (you might wonder where tags enter the mix, but I'll explain that soon enough).

All functions in Lux are curried, so they only take a single argument. That's the reason behind #LambdaT only having a type for its input, and a type for its output.

#BoundT represents a De Bruijn index, useful when working with both universal and existential quantification (what #UnivQ and #ExQ are for).

Also, the lists of types in #UnivQ and #ExQ represent their captured context (types with multiple parameters are also curried and require context for that).

#VarT is a type variable. The int it holds serves to look up types inside a context when the compiler is doing type-checking.

#ExT is for existential types. Instances can only match when they have the same number.

#AppT is for applying universally-quantified types to parameters.

Finally, #NamedT is a convenient tool for giving names to types (helpful for debugging and documentation purposes).

Now, you might be puzzled that there's a #rec tag in front of the Type definition. The reason is that types are not recursive by default and #rec allows deftype to perform a minor cosmetic rearrangement on your type to make it recursive.

Another thing that deftype does is to (automatically) wrap your type definitions inside #NamedT so you don't have to worry about it.

A final thing that it does is declare the tags referenced in your type. The int values each would get would depend on the order in which they appear, beginning with index 0 (tags cannot have negative values). The next time Lux sees the #TupleT tag, it will know it actually means 2.

Now, quick trivia for the curious:
Q: Does that mean that I can actually use ints instead of tags when writing variants?

A: Yes. In fact, if you check out the lux.lux (prelude) file, that's exactly what I do at the beginning to define types, as the tags for Type & List aren't defined at the very beginning.

Structural, rather than nominal types


This should be obvious just from looking at the definition of Type.
An interesting consequence of this is that, since types are just Lux data-structures, I can pattern-match against them and get useful information.

Doing awesome stuff with types


The Lux compiler stores a lot of useful information regarding types. For instance, you can know the type of every definition in any module that has been compiled, and you can also ask which tags are associated to which types.

You'll see some examples of what that enables in the next section.

Signatures & Structures


Lux handles polymorphism the same way that ML does, via signatures and structures.
Think of signatures as interfaces that describe what a suitable implementation should provide.
Structures are those implementations.

Here is one example:
 (defsig #export (Functor f)  
   (: (All [a b] (-> (-> a b) (f a) (f b)))  
      map))  
 (defstruct #export List/Functor (Functor List)  
   (def (map f fa)  
     (case fa  
       #;Nil          #;Nil  
       (#;Cons a fa') (#;Cons (f a) (map f fa')))))  
One difference between ML languages and Lux is that ML separates signatures and structures from the rest of the language, whereas Lux piggybacks on plain-old types and values in order to implement them.

How? By using record/tuple-types as signatures and records as structures.
The conversion is performed by the defsig and defstruct macros, which only serve to provide a more pleasant syntax for working with signatures & structures.

Without the sugar they provide, our previous example would look like this:
 (deftype (Functor f)  
   (& #map (All [a b] (-> (-> a b) (f a) (f b)))  
      ))  
 (def List/Functor  
  (Functor List)  
  {#map (lambda map [f fa]  
          (case fa  
            #;Nil          #;Nil  
            (#;Cons a fa') (#;Cons (f a) (map f fa'))))  
   })  
_____________________________________________________________________________

Lux offers many ways to work with structures in an easy way, depending on what you're trying to do.

If you want to work with a structure inside a local scope, use the using macro:
 (using List/Monoid  
   (foldL ++ unit list-of-lists))  
If you want to work with a structure in several places throughout a module, use the open macro:
 (open List/Fold)  
If you only want to use a specific element in a structure, use the :: macro:
 (:: Text/Eq (= x y))  
 (:: List/Monoid unit)  
Also, remember when I told you that the Lux compiler stores a lot of type information? Those macros access that information to do their magic. They find out the names of the necessary tags and create all the code to generate lexical bindings, full-blown definitions, or just simple access code.

Bonus content: get@, set@ and update@


Oh... I almost forgot. There's one last bit I want to share with you.
Remember records? They're really nice and all, but in order to access their contents you have to pattern-match on them.

As you can imagine, pattern-matching on a record with 8 fields is not going to be nice, even if you do it in tuple-form.

For that reason, there are 3 simple macros that take care of a lot of the complexity for you.

get@ allows you to access a single field from a record:
 (get@ #;modules state)  
set@ allows you to set a single field from a record:
 (set@ #pl;name "Lux" lang)  
update@ allows you to transform a single field from a record:
 (update@ #;seed (i:+ 1) state)  
Again, these macros access the type information in the compiler to generate all the pattern-matching and record-building code necessary to perform these operations.

_____________________________________________________________________________

I'm sorry that the post was so long, but I wanted to be thorough.

Next week is going to be pretty awesome, as I'm going to talk about custom-pattern matching & advanced macrology in Lux.