I wouldn't be the first to think of a language that could be functional and compile to JS, and I probably won't be the last.
A lot of what's good about functional languages doesn't work well with JavaScript when you're working with limited hardware.
If you're someone who finds RAM and CPU to be very cheap today, this post isn't for you.
Let's take a tiny JS snippet and set a little context.
Here we have a very common bit of logic: transform, then filter based on a predicate.
;[1, 2, 3].map(x => x * 2).filter(x => x > 2) // [4,6]
It looks harmless, but let's see a few operations the runtime will be doing: every .map and .filter call allocates a fresh intermediate array, plus the callback functions invoked for each element. And all of that holds memory. It looks fine for 3 items in an array, but what if it was 100 items? Let's add a little more to that: what if this was running on a server and you end up running this transform and filter for every user's request?
You now hold a substantial amount of memory for what looks harmless. This might still be fast depending on what the items inside the array are, but the memory still gets used, so that's that.
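To make that concrete, here's a hypothetical sketch of that pattern on a Node server; getUserItems is a made-up stand-in for real business data:

// hypothetical handler; getUserItems stands in for real per-user data
const http = require('http')

const getUserItems = () => [1, 2, 3] // imagine hundreds of items per user

http.createServer((req, res) => {
  // every request allocates two fresh intermediate arrays plus the callbacks
  const result = getUserItems()
    .map(x => x * 2)
    .filter(x => x > 2)
  res.end(JSON.stringify(result))
}).listen(3000)

Multiply those allocations by your request rate and the garbage collector ends up with real work to do.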
Now, if I had to write the same thing in raw JS, it'd look like so:
const arr = [1, 2, 3]
const result = []
for (let i = 0; i < arr.length; i++) {
  const mul = arr[i] * 2
  if (mul <= 2) continue // keep only values > 2, matching the original filter
  result.push(mul)
}
Memory is now allocated for mul on every iteration. So, the only advantage the JS engine got here was that it didn't have to hold a clone of the entire array in memory and didn't have to run through the array twice.
The second code block might give you a slight perf boost as well, depending on the size of the array.
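If you want to check that on your own hardware, here's a rough sketch of a micro-benchmark; treat the numbers as a starting point rather than proof, since results vary by engine, warmup, and array size (performance is a global in modern Node and browsers):

// rough micro-benchmark sketch; a serious one needs warmup and many samples
const data = Array.from({ length: 100_000 }, (_, i) => i)

let t = performance.now()
const chained = data.map(x => x * 2).filter(x => x > 2)
console.log('map/filter:', performance.now() - t, 'ms')

t = performance.now()
const manual = []
for (let i = 0; i < data.length; i++) {
  const mul = data[i] * 2
  if (mul <= 2) continue
  manual.push(mul)
}
console.log('manual loop:', performance.now() - t, 'ms')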
I still haven't told you why all of this matters, why I'm talking about it, or what building a language that compiles to JS has to do with any of it.
Now, before people come running at me with pitchforks: I'm not saying you should quit writing functional-style JS, I'm just stating a fact. I've personally been a fan of functional programming for a very long time, I still write most of my initial solutions in functional JS, and I have a lot of helpers for exactly that.
But that doesn't change the fact that I end up having to rewrite parts of it to either speed things up or use less memory. There's a whole bunch of optimizations you can do to make things faster, and a lot of them boil down to reducing the amount of work the engine has to do: unneeded function references, heavy recursive functions, object clones, and so on.
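To give one concrete example of the kind of rewrite I mean (the data shape here is made up), a popular functional pattern that clones its accumulator on every iteration can be collapsed into a single mutated object:

const users = [{ id: 1, name: 'a' }, { id: 2, name: 'b' }]

// functional style: the spread allocates a brand-new object per iteration
const byIdSlow = users.reduce((acc, u) => ({ ...acc, [u.id]: u }), {})

// rewritten: one object, mutated in place; same result, far less garbage
const byIdFast = {}
for (const u of users) byIdFast[u.id] = u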
Let me add in a few posts that dive deeper into these concepts.
Moving back to the original reason for this post.
I like the idea of building meta languages that compile to JS or to a faster native backend, and a lot of people have tried to make it work really well, but at the end of the day you add a lot of additional abstraction that might not be apt if you're compiling to JS.
The only exception is probably Nim, since it emits pointer math instead of human-readable JS; it also doesn't support Node.js properly, so that's that.
Here are a few languages built on the concept of "one language for everything", or at least the ones that I've worked with enough.
In all of them, the compiler comes with backends that let you compile to a native binary or to JavaScript.
The reasoning is that you can write your server and client in the same language and compile to different backends to take advantage of each one's strengths.
The problem?
A boatload of JavaScript polyfills.
It's no surprise that JavaScript's functional helpers are limited, and the language has a lot of in-place mutations that need to be wrapped (for example, Date). So the functional language has to implement its own set of helpers around the knick-knacks of JavaScript to keep things immutable where they're supposed to be immutable, and a little of the type information is lost when writing interop code for JavaScript.
This is necessary and not something that can be avoided, unless you're making something like CoffeeScript, which is just an alternative syntax and doesn't introduce functionality that JS doesn't already support.
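To get a feel for what those wrappers look like, take Date: its setters mutate the receiver, so an immutability-preserving layer has to copy first. Here's a minimal hand-rolled sketch (addDays is my own illustration, not any particular language's stdlib):

// Date.prototype.setDate mutates in place, so copy before touching anything
const addDays = (date, days) => {
  const copy = new Date(date.getTime())
  copy.setDate(copy.getDate() + days)
  return copy
}

const today = new Date()
const tomorrow = addDays(today, 1)
console.log(today < tomorrow) // true, and today is untouched

Every such wrapper is a little more code shipped in the final bundle.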
An example might help visualize the additions made by these functional languages.
We'll take the original example with just the .map function:
;[1, 2, 3].map(x => x * 2)
The ReasonML version of this will look like so:
List.map(x => x * 2, [1, 2, 3])
Not that different, is it? Now let's look at the compiled output of the above
var List = require('./stdlib/list.js')
List.map(
function (x) {
return x << 1
},
/* :: */ [1, /* :: */ [2, /* :: */ [3, /* [] */ 0]]]
)
A simple transform requires the List module and its methods from the stdlib to be part of the final distributable package, which may or may not be desirable depending on the size of the business logic. If you want to use the JS version of arrays, you'll have to write a slightly longer version. Notice that the order of the data and the function has changed; this is something Reason handles internally, where the |> operator automatically decides where the data goes, but you have to keep in mind that all ReasonML functions take data last and that the JS native types might not.
Js.Array2.map([|1, 2, 3|], x => x * 2)
The OCaml output would now need the implementation details of the above function from Melange, but that's okay; we can talk about that in a different post if needed.
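If the data-first vs data-last distinction feels abstract, the same idea can be sketched in plain JS (the curried map and filter helpers here are hypothetical): data-last functions are what make pipeline-style composition click.

// data-last, curried: the data slots in at the end, so stages compose
const map = f => xs => xs.map(f)
const filter = p => xs => xs.filter(p)

const pipeline = xs => filter(x => x > 2)(map(x => x * 2)(xs))
console.log(pipeline([1, 2, 3])) // [4, 6]

// data-first (like Js.Array2): convenient to call directly, awkward to pipe
console.log([1, 2, 3].map(x => x * 2)) // [2, 4, 6]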
So then...
When do we use languages like these? It depends.
The conditions might be obvious to some, but I'll still point them out for the sake of it. Notice in the example below how the compiler evaluates the two add calls at compile time and emits plain constants:
let add = (x, y) => x + y
Js.Console.log(add(1, 2))
Js.Console.log(add(2, 3))
// output
function add(x, y) {
return (x + y) | 0
}
console.log(3)
console.log(5)
exports.add = add
At the end of the day, use whatever the fuck you want if it's fun. I wouldn't have been able to write all this if I'd never tried these languages out, so use them and figure out if you like the experience. But know that if you work under tight optimization constraints, compile-to-JS functional languages might not be as good an idea as people make them seem.
That's all for now, Adios