In programming, there are a lot of benefits to using immutable values, but the
benefits are unrelated to whether variables can be reassigned. Immutable values
are data that is guaranteed to never change (as opposed to mutable data, which
can change). Using immutable values helps you:
- Safely store values in a cache so the same value can be reused later.
- Use trees or lists with shared subcomponents.
- Preserve invariants in data structures like hash tables.
- Store unchanging historical records like undo logs.
- Share data between threads safely without synchronization primitives.
In essence, immutability is a guarantee that can make it easier to build complex
programs with fewer bugs.
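To make the sharing point concrete, here is a minimal sketch in Javascript (the
names are my own, just for illustration) of an immutable linked list: because
frozen nodes can never change, two lists can safely share the same tail.

```javascript
// A minimal immutable linked list: every node is frozen at creation,
// so it is guaranteed to never change.
function cons(head, tail) {
    return Object.freeze({head, tail});
}

const shared = cons(2, cons(3, null)); // the list (2, 3)
const listA = cons(1, shared);         // the list (1, 2, 3)
const listB = cons(9, shared);         // the list (9, 2, 3)

// Both lists share the same tail, and neither can be corrupted by
// changes made through the other, because no changes are possible.
```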
There is a separate, but related concept that I will refer to as “variability”.
Variability is not a property of data, but a property of the symbols we use as
placeholders to represent data. In a programming language, a symbol might be
either a constant, like PI, which will hold the same value for the entire
lifetime of a program, or a variable, like x, which can represent various
different values over the course of a program’s lifetime. Some languages also
have something which I will call a “local constant”: a symbol that represents
the same value for the duration of the block where it was defined (its lexical
scope). In functional programming languages, local constants are called “let
bindings”, but this term has become hopelessly confusing, because Javascript
and Rust both use the keyword let to declare symbols that can take on new
values over time.
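To make the distinction concrete, here is a small illustration in Javascript,
where const declares a local constant and let declares a variable:

```javascript
const PI = 3.14159; // a local constant: this symbol can never be rebound
let x = 1;          // a variable: this symbol can take on new values
x = 2;              // fine
// PI = 3;          // TypeError: assignment to a constant

// Variability is separate from immutability: this symbol can't be
// rebound, but the data it points to is still mutable.
const point = {x: 0, y: 0};
point.x = 10;       // allowed: the object itself changed
```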
Continue reading…
Packrat parsing is a relatively new approach to parsing code introduced by Bryan
Ford in his 2002 Master’s thesis. Packrat parsers
are capable of very efficiently parsing a class of grammars called Parsing
Expression Grammars (PEGs). I covered PEGs in more depth in my previous post, PEGs and the Structure of
Languages, so if you’re new to the topic, I recommend you begin there. The
packrat parser is based on earlier work on recursive descent parsers, dating
back to the 70s, but focused specifically on Parsing Expression Grammars. In
this post, I’ll attempt to demystify packrat parsers by walking through a
complete packrat parser implementation that fits in a few dozen lines of
Javascript code.
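The core trick, in brief: a packrat parser is a recursive descent parser that
memoizes the result of applying each rule at each input position, which is what
makes linear-time parsing possible. Here is a tiny sketch of that idea (my own
simplification, not the implementation from the post):

```javascript
// Sketch of packrat memoization: wrap a parsing function so its result
// at each input position is computed at most once per parse.
// (Assumes a single fixed input string for the lifetime of the cache.)
function memoize(parseFn) {
    const cache = new Map(); // position -> parse result
    return (input, pos) => {
        if (!cache.has(pos)) cache.set(pos, parseFn(input, pos));
        return cache.get(pos);
    };
}

// A tiny rule: match one digit, returning the position after it, or null.
const digit = memoize((input, pos) =>
    /[0-9]/.test(input[pos] || "") ? pos + 1 : null);

console.log(digit("42", 0)); // 1
console.log(digit("42", 0)); // 1 (this time served from the cache)
```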
Continue reading…
When I first began working on my programming language Nomsu, one of the
first challenges I faced was how to get the computer to see code written in my
language as more than just a long sequence of letters. Code has
structure, and finding that structure is called parsing. After spending
a long time digging through different approaches to the problem, one
lesser-known (though increasingly popular) approach called Parsing Expression
Grammars became my favorite, and allowed me to make my own programming language.
This post will explain what Parsing Expression Grammars (PEGs) are, and a bit of
background on the problem they solve. There are some very good PEG libraries
available (I first found out about PEGs through the Lua library LPEG), and in a follow-up post, I’ll even explain how to
implement a PEG parser from scratch. Hopefully, after reading this post, you’ll
have enough understanding to try defining your own language (programming or
otherwise) with a Parsing Expression Grammar.
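As a small preview, here is what a PEG for simple arithmetic expressions might
look like (notation along the lines of Ford’s, though exact syntax varies a bit
between libraries):

```
Expr   <- Term (("+" / "-") Term)*
Term   <- Factor (("*" / "/") Factor)*
Factor <- Number / "(" Expr ")"
Number <- [0-9]+
```

The / operator is ordered choice: try the first option, and fall back to the
next only if it fails, which is a big part of what makes PEGs unambiguous.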
Continue reading…
C can be a very fun language to program in, if you can set up your project well.
A lot of what I know about managing C projects was learned the hard way by
experimentation and studying other people’s code. This post is meant to be the
guide I never had: how to organize and polish nontrivial C projects, set up
Makefiles for building them, and write manpages for documentation.
Continue reading…
I’d like to present a handful of programming concepts that I find incredibly
useful, all of which can be implemented in under 20 lines of code. These
techniques can be used in a wide range of contexts, but here, I’ll show how they
can all build off of each other to produce a full physics engine in just 75
lines of code.
Continue reading…
Noise functions are a way to produce continuous pseudorandom values across one
or more dimensions. The simplest example is a one-dimensional noise function
that makes the y-value wobble up and down smoothly, but unpredictably, as you
move along the x-axis. It’s also possible to produce 2-dimensional noise, where
the noise value depends on the (x,y) coordinates you’re sampling.
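Here is a minimal sketch of one simple variety, 1-dimensional value noise: hash
each integer x position to a pseudorandom value, then smoothly interpolate
between neighboring values (the hash here is a common quick-and-dirty trick,
not necessarily what the post uses):

```javascript
// Deterministically hash an integer to a pseudorandom value in [0, 1).
function hash(n) {
    const s = Math.sin(n * 127.1) * 43758.5453;
    return s - Math.floor(s);
}

// Smooth interpolation curve: eases in and out between 0 and 1.
function smoothstep(t) {
    return t * t * (3 - 2 * t);
}

// 1D value noise: smooth but unpredictable as x varies.
function noise1d(x) {
    const i = Math.floor(x);
    const t = smoothstep(x - i);
    return hash(i) * (1 - t) + hash(i + 1) * t;
}

console.log(noise1d(3.5)); // same input always gives the same output
```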
Continue reading…
Contour lines, also known as isolines, are lines drawn
along paths of constant elevation. They are often used when drawing maps
to indicate the slopes of hills and mountains.
But they can also be used to visualize functions that map (x,y) values to z
values, or to find the boundaries of implicit surfaces. One of the common algorithms for finding
contour lines is Marching Squares. The algorithm assumes
that you have a set of elevation data points spaced out at regular intervals on
a grid, and generates contour lines for each square of the grid. A simpler
algorithm, called Meandering Triangles, divides the data points into triangles
rather than squares, which reduces the number of cases the algorithm has to
handle. Here, I’ll give a full explanation of how the Meandering Triangles
algorithm works, with some examples to show it in action.
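The heart of the algorithm, condensed into a sketch (my own paraphrase with
hypothetical names, not the post’s code): for each triangle, find the edges
where the elevation crosses the contour threshold, and connect the two crossing
points with a line segment.

```javascript
// Given one triangle with vertices {x, y, z} (z is elevation), return
// the contour segment where elevation crosses the threshold, or null.
function triangleContour(a, b, c, threshold) {
    // If the threshold crosses edge p-q, interpolate the crossing point.
    function crossing(p, q) {
        if ((p.z < threshold) === (q.z < threshold)) return null;
        const t = (threshold - p.z) / (q.z - p.z);
        return {x: p.x + t * (q.x - p.x), y: p.y + t * (q.y - p.y)};
    }
    const points = [crossing(a, b), crossing(b, c), crossing(c, a)]
        .filter(p => p !== null);
    // A contour line either misses the triangle entirely or enters
    // through one edge and exits through another.
    return points.length === 2 ? points : null;
}
```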
Continue reading…
Consider this problem: You have a list of items and a list of their weights. You
want to randomly choose an item, with probability proportional to the item’s
weight. For these examples, the algorithms will take a list of weights and
return the index of the randomly selected item.
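As a baseline, here is probably the simplest such algorithm, a linear scan:
pick a random number between 0 and the total weight, then walk the list
subtracting weights until the running total goes negative (a sketch of one
common approach, not necessarily the post’s final answer):

```javascript
// Linear-scan weighted choice: O(n) time per selection.
function weightedChoice(weights) {
    const total = weights.reduce((sum, w) => sum + w, 0);
    let remaining = Math.random() * total;
    for (let i = 0; i < weights.length; i++) {
        remaining -= weights[i];
        if (remaining < 0) return i;
    }
    return weights.length - 1; // guard against floating-point rounding
}

console.log(weightedChoice([1, 2, 7])); // returns 2 about 70% of the time
```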
Continue reading…