27 Jul 2016, 21:20

On (Essentially) Coeffects

Snipped and backdated from a Reddit comment of mine.

That’s an interesting way to put it. It’s almost like const correctness or checked exceptions (done right) for side-effects, since you have to maintain purity in everything you call or otherwise become impure yourself.

there might need to be some sort of distinction between a class containing only pure functions and immutable fields, and one containing procedures and/or mutable fields

Yes, const correctness for types, fields, procedures, functions, and everything would be useful to know what can change what.

for a class to be pure it must contain only functions. Similarly, […] a function can contain only expressions, including other functions

Makes sense. Impure code can call whatever it wants; pure functions can only call pure functions. The only problem is that if you want even a drop of impurity, everything above it becomes impure, and the marker stops conveying much information.

Even if everything was marked as pure or impure, const or mutable, we still don’t know whether an impure procedure is writing to a file, executing a system command, sending a message over sockets, etc. We would want more granularity to know what side-effects procedures are causing down the call tree.

Similar to checked exceptions or dependency injection, an idea is that you could have a separate parameter list for the resources and side-effects a procedure requires. It’s similar to the ideas in /u/DanielWaterworth’s plastic language or free monads. I also talked about this idea a while back in this comment thread for reference.

Example in some made up syntax:

// `class` or whatever you want to call this
class Cache {

  // technically `self` would be mutable the way I wrote this
  fn get(self, key: String)(effects @fs: FileSystem) {
    if (self.contains(key)) {
      return self[key]
    }

    let value = @fs.read(key)
    self[key] = value
    return value
  }
}

let cache = Cache()

// uses the default, global, "real" file system ("business as usual")
let value = cache.get("user/prefs")

// uses dummy file system for unit testing (call reconstructed; `MockFileSystem` is hypothetical)
let testValue = cache.get("user/prefs")(effects @fs: MockFileSystem())
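The same idea maps onto existing languages today by making the effect an explicit capability parameter. A minimal sketch in TypeScript (the `FileSystem` interface and the mock object are invented for illustration):

```typescript
// The "effect" is an explicit capability parameter: callers decide which
// file system implementation the function may touch.
interface FileSystem {
  read(key: string): string;
}

class Cache {
  private store = new Map<string, string>();

  // the required effect is visible right in the signature
  get(key: string, fs: FileSystem): string {
    const cached = this.store.get(key);
    if (cached !== undefined) return cached;
    const value = fs.read(key);
    this.store.set(key, value);
    return value;
  }
}

// dummy file system for unit testing -- no real I/O happens
const mockFs: FileSystem = { read: (key) => `contents of ${key}` };
const cache = new Cache();
console.log(cache.get("config.json", mockFs)); // "contents of config.json"
```

Because the caller chooses the implementation, swapping in the mock for tests needs no globals, monkey-patching, or dependency-injection framework.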

25 Jul 2016, 02:40

On Compile-Time Function Execution

Snipped and backdated from a Reddit comment of mine.

The biggest win for compile-time features is eliminating the need for clunky, external build tools when you need to generate boilerplate code. (See F#’s type providers)

Instead, you can write functions that run at compile-time to generate ASTs for you, rather than munging text source code.

This feature in general is called compile-time function execution (CTFE).


The common scenario is that you have a pre-existing schema you want to generate code from (e.g. an SQL database or a web service using JSON, XML, Protobufs, or an overhyped four-letter acronym from the early-2000s).

CTFE solves lots of problems with “traditional” text-based code-gen:

  • Since you’re generating text instead of an actual AST, there is nothing stopping the tool from generating invalid source code that doesn’t compile. You end up working backward to figure out where you went wrong.

    • …with CTFE, you’re working with the AST of the language (along the lines of Lisp macros that are executed at compile-time). Given a strongly typed language, invalid ASTs are compiler errors in the first place.
  • Generated source code can easily get out of sync with the source schema (e.g. forgot to regen the code after adding a column, etc.)

    • …with CTFE, the schema can be read during development time to allow for code completion. Out of sync code becomes a compiler error, not a run-time error.
  • Control over the generated source code is generally limited or non-existent, or requires dealing with extra configuration files (or some dynamically-loaded library reflection magic).

    • …with CTFE, everything is just part of the code base; you can just pass a value to a function.
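TypeScript has no CTFE, but the text-vs-structured contrast can be sketched at run time with a toy generator (the `Field` schema and both helper functions are invented): the raw-text version happily emits code that won’t parse, while the structured version rejects the bad schema up front, which is the property a typed AST gives you for free at compile time.

```typescript
type Field = { name: string; type: "string" | "number" };

// text-based: nothing stops us from emitting invalid source code
function genText(name: string, fields: Field[]): string {
  return `interface ${name} { ${fields.map(f => `${f.name}: ${f.type};`).join(" ")} }`;
}

// structured: validate the pieces before emitting anything
function genChecked(name: string, fields: Field[]): string {
  const ident = /^[A-Za-z_$][A-Za-z0-9_$]*$/;
  for (const f of fields) {
    if (!ident.test(f.name)) throw new Error(`invalid field name: ${f.name}`);
  }
  return genText(name, fields);
}

const schema: Field[] = [{ name: "id", type: "number" }, { name: "email", type: "string" }];
console.log(genChecked("User", schema)); // interface User { id: number; email: string; }

// the text version cheerfully generates garbage for a bad schema:
console.log(genText("User", [{ name: "user id", type: "number" }]));
```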

Other nice CTFE use cases:

  • “Strongly typed” strings: compile JSON and XML strings into actual values, safe string interpolation
  • Partial evaluation: pre-compute expressions ahead-of-time instead of at runtime (C++’s constexpr)
  • AST metaprogramming: generate types, function bodies, etc.
  • Anything else you might want to do at compile-time…


The major con of implementing CTFE is that it introduces a great deal of complexity into the compiler implementation and needs to be well designed.

  • You end up with a dependency graph in which both compile-time and run-time code can depend on the output of a compile-time function (hopefully you have a keyword that distinguishes which functions you intend to CTFE).

  • Because of that, you have to figure out the order in which to run each compile-time function before compiling the dependent, “regular” code that executes at run-time.

  • CTFE is going to run really slow unless you can run as much as possible in parallel.

13 Jul 2016, 06:03

On Polymorphic Row Types

Snipped and backdated from a Reddit comment of mine.

There are a lot of interesting similarities here. I also found the original paper that the blog post was referencing: “First-class labels for extensible rows” with discussion on Lambda the Ultimate.

I didn’t show it as prominently in the example, but my examples use structural typing along with union and intersection types (examples from my other comments).

It seems to me that what I’m suggesting is effectively the same as using row polymorphism as shown in the blog post. The example from the blog post again:

f :: {a: Number, b: Number | ρ} -> Number
let f x = x with {sum: x.a + x.b}
let answer = f {a: 1, b: 2, c: 100}

// "regular" structural typing
answer :: {a: Number, b: Number, sum: Number}

// row polymorphism
answer :: {a: Number, b: Number, c: Number, sum: Number}

And back to my made up syntax, the example above should be equivalent to the following:

f <'x> :: 'x: { a: Number, b: Number } -> 'x & { sum: Number } 
fn f(x) {
  x with { sum: x.a + x.b }
}
answer :: f('x) when 'x = { a: Number, b: Number, c: Number }
       :: { a: Number, b: Number, c: Number } & { sum: Number }
       :: { a: Number, b: Number, c: Number, sum: Number }
let answer = f { a: 1, b: 2, c: 100 }

To break the important line down:

f <'x> :: 'x: { a: Number, b: Number } -> 'x & { sum: Number } 
  • <'x> declares the type variable 'x in the scope of the type definition
  • the function takes one parameter of type 'x: { a: Number, b: Number }, which essentially uses subtyping to say that the type variable 'x needs to at least have the structure { a: Number, b: Number }.

  • the function’s return type is 'x & { sum: Number } which says the return type must be whatever type variable 'x actually is and, in addition, must also have the structure { sum: Number }.
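For comparison, TypeScript’s structural generics plus intersection types can express nearly the same thing as the made-up syntax above:

```typescript
// The generic parameter X plays the role of the row variable 'x, and the
// return type is the intersection X & { sum: number } from the post.
function f<X extends { a: number; b: number }>(x: X): X & { sum: number } {
  return { ...x, sum: x.a + x.b };
}

const answer = f({ a: 1, b: 2, c: 100 });
console.log(answer.sum); // 3
console.log(answer.c);   // 100: the extra field flows through, as with row polymorphism
```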

13 Jul 2016, 02:40

Using structural types with a lot of type inference

Snipped and backdated from a previous Reddit post of mine.

As /u/x-paste was suggesting in the [Official] Object Orientation Discussion thread about ad-hoc data structures, I still think you can get the niceness and flexibility of just throwing code together while still having strong typing.

That, or have gradual typing to avoid the messy example I wrote for the otherwise-simple eval(): just use any like in TypeScript and don’t bother with being extremely sound and specific.

Or you could still check everything at runtime but be a bit smarter about it, like using guards in Elixir (which runs on Erlang):


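The Elixir snippet didn’t survive the snipping, but the same run-time-check idea can be sketched in TypeScript using user-defined type guards (the Expr type and evalExpr are invented for illustration, echoing the eval() example mentioned above):

```typescript
type Expr =
  | { kind: "num"; value: number }
  | { kind: "add"; left: Expr; right: Expr };

// a user-defined type guard: checks the shape at run time,
// and the compiler narrows the type in the guarded branch
function isNum(e: Expr): e is { kind: "num"; value: number } {
  return e.kind === "num";
}

function evalExpr(e: Expr): number {
  if (isNum(e)) return e.value; // narrowed to the "num" variant here
  return evalExpr(e.left) + evalExpr(e.right);
}

const expr: Expr = {
  kind: "add",
  left: { kind: "num", value: 1 },
  right: { kind: "num", value: 2 },
};
console.log(evalExpr(expr)); // 3
```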
Either way, I’ve thrown together some general ideas and wanted to see what you guys think. I’m questioning how far you really want to be able to take static typing with type inference. I know from the poll that a sizeable number of people were looking for something functional along with OO.

Dealing with Nominal and Structural Typing

I was thinking about a way to “soundly” deal with a type system where you could have both nominal and structural typing and use each where appropriate. Likewise, I wrote an example of how this would work in some fake syntax:


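The original fake-syntax example was also snipped; as a rough stand-in, here is how the two flavors can coexist in TypeScript, where types are structural by default and nominal typing is emulated with a brand (the Point/Meters names are made up):

```typescript
type Point = { x: number; y: number };                 // structural: any matching shape works
type Meters = number & { readonly __unit: "meters" };  // nominal-ish, via a brand
const meters = (n: number) => n as Meters;

function length(p: Point): number {                    // accepts any { x, y } shape
  return Math.hypot(p.x, p.y);
}

const v = { x: 3, y: 4, label: "velocity" };           // extra field is fine structurally
console.log(length(v)); // 5

const d: Meters = meters(10);
// const bad: Meters = 10; // rejected: a plain number is not branded as Meters
```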
Structural Type Inference

I went a bit crazy with how powerful the type system can be and what it offers for free with type inference alone.

I wanted to see if it was possible to get away with inferring the type of something you probably could write in JavaScript or a dynamic language where you’re just passing anonymous data structures everywhere.
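A tiny TypeScript sketch of that style, with anonymous structures everywhere and every type still inferred statically (the names are made up):

```typescript
// no type annotations: inferred as { name: string; scores: number[] }
const user = { name: "Ada", scores: [97, 88] };

// accepts any structure that at least has a scores array
function best<T extends { scores: number[] }>(x: T): number {
  return Math.max(...x.scores);
}

console.log(best(user)); // 97
```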


08 Jul 2016, 23:19

On Mainstream FP

Snipped and backdated from a Reddit comment of mine.

If functional programming is so great, why is it still niche? We have a product that can practically eliminate runtime errors, make refactoring much easier, lighten the testing burden, all while being quite delightful to use. What’s the hold up?

One factor is that we make things artificially hard to learn, sometimes with a seemingly pathological glee.

This is from the “Let’s Be Mainstream!” talk on Elm. We could literally use it as a manifesto for building a functional language that people actually use and actually understand (and that isn’t stuck running on JavaScript).

Seriously, what language already does that?

Hell, we don’t even need to use the word “monad”. What honestly is the benefit of using the word “monad”, other than to “sound smart” and for newcomers to Google it and become even more confused than they already were? (Sure, knowing something is a monad is probably useful to the minuscule group of people who actually understand it.)

Take a look at Elm’s documentation. The entry for Maybe.andThen describes in the simplest terms how to “chain together many computations that may fail.” It explains in plain English how to practically use “monads” without even using the word.
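In the same spirit, the whole idea fits in a few lines of TypeScript without the M-word (andThen here is a hypothetical helper loosely mirroring Elm’s Maybe.andThen, using null in place of Maybe):

```typescript
// chain together computations that may fail: stop at the first null
function andThen<A, B>(x: A | null, f: (a: A) => B | null): B | null {
  return x === null ? null : f(x);
}

const parse = (s: string): number | null => {
  const n = Number(s);
  return Number.isNaN(n) ? null : n;
};
const sqrt = (n: number): number | null => (n < 0 ? null : Math.sqrt(n));

console.log(andThen(parse("16"), sqrt));  // 4
console.log(andThen(parse("-16"), sqrt)); // null (sqrt of a negative fails)
```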

Point being, we don’t need to make things so unnecessarily complicated. You already have Haskell for that.