Bash and POSIX shells are fairly error-prone languages to write in. There are so many details, especially about quoting rules and word splitting, that you need to be aware of to write scripts which don’t fall over at the slightest weirdness in the input.
If you want to escape this on your own computer, you can install a non-standard shell like Fish (which I highly recommend); but Bash and similar shells continue to be widely available by default on all sorts of Unix-like environments. Hence, despite the numerous downsides, many important and complex software components like curl | sh installers, build configure tools, and OS startup and deployment scripts take the form of shell scripts, because that’s the best thing they can rely on to be installed.
I think it’s worth asking: why do we still write shell manually? I did a bit of digging but couldn’t find any popular or active languages designed to compile to Bash.1 What kinds of features can we imagine having in such a language? Let’s think about it! After all, safer languages are all the rage nowadays, and language adoption can’t be hard: web dev people even adopted a language just to make JS a bit prettier ;)
- Sensible splitting by default
- Command substitutions split by lines
- Declarative parameters
- Defer blocks and parameter-less closures
- Raising and catching errors
- First-class arrays, for closures or otherwise
- Environment isolation
- Explicit external program use
- Portable wrappers for common programs
Sensible splitting by default
To use an array variable such as $@ in Bash properly, so that it expands to its original contents and doesn’t get re-split, you have to put it in quotes, like "$@".
In Fish, variables expand properly by default and you have to explicitly do joining and splitting if you want the other behaviour. Let’s have that!
We can also treat all variables as arrays; so let’s drop the brackets from array syntax and just give multiple words in an assignment:
x=(a b c) # shell
x = a b c # our lang
This isn’t a problem to compile, since "${var[@]}" in Bash is the same as "$var" if var is either an array of length 1 or not an array at all.
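For example, every variable use could compile to the quoted array expansion (a sketch; the compiled output is just what a compiler might plausibly emit):

# our lang
x = a b c
cp $x dest/

# shell - one possible compilation: every use becomes the quoted array form
x=(a b c)
cp "${x[@]}" dest/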
Command substitutions split by lines
The output of command substitutions in Fish is only split on newlines by default, unlike in other shells where it gets normal word splitting. This is usually much closer to the expected behaviour, since many tools have line-based output. This is easy to compile as a change to IFS.
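As a sketch, a substitution like files = $(ls) could compile to something along these lines (Bash-flavoured; on Bash 4+ the compiler could use mapfile -t instead):

# shell - split command output on newlines only
old_ifs=$IFS
IFS=$'\n'
files=($(ls))       # word splitting now happens only at newlines
IFS=$old_ifs        # (set -f would also be needed to stop glob expansion of the results)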
Declarative parameters
As an appeal to convention, I suppose, shell function definitions have the appearance of accepting bracketed arguments, but don’t.
# shell
my_func() { # why the () ?? nothing can go in there
# ...
}
To actually parse arguments you have to write some kind of loop involving case/esac and shift. You also might forget to handle special parameter conventions like -- to stop processing options.
Let’s define functions the same way we call and document them:
# our lang
def clear_cache [-f/--force] [-s SIZE] LOCATION {
# ...
}
The compiler can parse the parameter mini-language and generate the parsing loop and any variables for the options (a sketch of such generated code appears below). If we add a way to annotate documentation, we could even generate --help behaviour for the script as a whole with some kind of directive:
# in the file
%options ? "Do the thing" \
[-t SECS ? "stop after a given amount of time"] \
[-v ? "increase logging verbosity"]
Then, when using the script:
$ ./myscript -h
Usage: myscript [-t SECS] [-v]
Do the thing
Options:
  -t SECS  stop after a given amount of time
  -v       increase logging verbosity
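For illustration, the code generated for that directive might look roughly like this (a sketch, not a specification; helper and variable names are invented and error handling is elided):

# shell
usage() {
    echo "Usage: myscript [-t SECS] [-v]"
    echo "Do the thing"
    echo
    echo "Options:"
    echo "  -t SECS  stop after a given amount of time"
    echo "  -v       increase logging verbosity"
}

verbose=0
while [ $# -gt 0 ]; do
    case "$1" in
        -t) secs="$2"; shift 2 ;;
        -v) verbose=$((verbose + 1)); shift ;;
        -h) usage; exit 0 ;;
        --) shift; break ;;
        -*) usage >&2; exit 2 ;;
        *)  break ;;
    esac
done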
Defer blocks and parameter-less closures
It’s often necessary to clean up resources like temporary files after a script or function. In Bash, you can set traps to call commands when the shell exits or returns from a function, which gets us part-way there. However, return traps don’t nest. Also, trap commands are expanded when they execute, not when they are set, whereas a closure would capture values at the point it is created.
The good news is we can represent parameter-less closures in shell as array variables containing the function and any bound values.
# our lang
local name = "Jane"
closure {
local greeting = "Hello"
echo "$greeting, $name"
}
The code in { } can be compiled to some function called block0, which has its free variables as parameters; the closure is then represented as a variable cl0=(block0 "$name"). We call it by running the command "${cl0[@]}".
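Concretely, the compiler might emit something like this for the snippet above (block0 and cl0 stand for whatever names the compiler generates):

# shell
block0() {                 # free variable `name` becomes a parameter
    local name="$1"
    local greeting="Hello"
    echo "$greeting, $name"
}

name="Jane"
cl0=(block0 "$name")       # the closure: function name plus bound values
"${cl0[@]}"                # calling the closure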
To defer a closure so it runs whenever a function exits, we compile code that pushes the name of the closure variable onto a global defer stack at the point where the defer statement is issued, and then compile code to pop and run it before any return. Globally, we have an exit trap set to go through the defer stack and run all the closures in reverse order.
# example defer
def my_func {
tmpdir = $(mktemp -d)
defer { rm -r $tmpdir }
# ...
}
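A rough sketch of the machinery this could compile to (illustrative only; namerefs need Bash 4.3+, and the run-before-return handling is simplified):

# shell
DEFER_STACK=()

run_closure() {
    local -n _cl="$1"      # nameref to the closure variable (Bash 4.3+)
    "${_cl[@]}"
}

run_defers() {             # run deferred closures in reverse order
    local i
    for (( i=${#DEFER_STACK[@]}-1; i>=0; i-- )); do
        run_closure "${DEFER_STACK[i]}"
    done
    DEFER_STACK=()
}
trap run_defers EXIT       # global exit trap empties the defer stack

block1() { rm -r "$1"; }   # body of the defer block, tmpdir bound as $1
my_func() {
    local tmpdir
    tmpdir=$(mktemp -d)
    cl1=(block1 "$tmpdir")
    DEFER_STACK+=(cl1)     # push the closure variable's name
    # ...
    run_defers             # compiled in before any return (simplified)
}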
To compile closures, we need to do scope analysis and figure out when a variable is declared vs when it is just set. In the example above, we need to work out that greeting is declared in the closure before being referred to, and so is not free at that point. This adds complications to how variables in conditional blocks of code like if, for, and while should work.
There are many options for distinguishing declarations: local, var, let, my, := instead of =, or even reverse it and use nonlocal to “declare” a variable as inherited, like Python. I don’t know which is most ergonomic for a shell-like language.
Raising and catching errors
Shells provide set -e, which is designed to make error handling nicer: it makes the shell exit whenever a command fails. But it has quite a few non-obvious behaviours, and it doesn’t easily admit catching an error and continuing.
Since we’re compiling the shell code, we can just append || return $? to the end of a line whenever we need to “throw” an error from a function, or || exit $? outside one. This can be the default, or enabled by a safe mode.
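For instance, each compiled statement could get an error check appended (the command names here are just placeholders):

# shell
make_thing() {
    mkdir "$1" || return $?     # inside a function: "throw" to the caller
    # ...
}
make_thing "$dir" || exit $?    # at the top level: abort the script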
Ignoring errors can be done by manually writing || true, but then we’d lose the return code. Also, catching the first error in a { } compound block might be desirable, but blocks aren’t functions (unless we make them) so we can’t return from them. Maybe we need a special catch VARNAME { } syntax which compiles to a function and puts the caught exit code in the variable we name. Or it could be easily done manually by the programmer with a nice enough closure syntax.
Another option is compiling && between all commands, instead of newlines, which gives us full flexibility over the control flow. Each input line would need to go in its own compound block so that any && or || written by the programmer are grouped properly and don’t merge with the compiler’s.
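A sketch of that output, with placeholder commands; each source line gets its own { } group so the programmer’s own operators stay together:

# shell
{ step_one; } &&
{ step_two || echo "fallback"; } &&
{ step_three; }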
See also: Oil Fixes Shell’s Error Handling.
First-class arrays, for closures or otherwise
In the previous example of passing variables to a closure by cl0=(block0 "$name"), you may have noticed that this won’t work if name is an array variable. Even if we write "${name[@]}", this will just expand and merge with the surrounding array, so we lose it as a single variable. We need a way to put arrays as values into other arrays.
Newer Bash has declare -n, which declares a variable as a reference, i.e. containing another variable name, which all accesses and assignments pass through to. However, I don’t think this helps, since that’s a property of the variable, not the value.
If we don’t want to expose first-class arrays to the programmer, we could just save each argument to the block into its own variable:
# shell
cl0v0=("${name[@]}")
cl0=(block0 cl0v0)
We’d have to modify how we compiled closure blocks so the first thing they did was retrieve the variable from the given name. I think declare -n would work here, but we’d need an alternative for shells without it.
# shell
block0() {
local -n name="$1"
# ...
}
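One possible fallback for shells without namerefs is an eval-based copy (a sketch; it trusts that $1 holds a plain variable name):

# shell
block0() {
    eval "local -a name=(\"\${$1[@]}\")"   # copy the named array into a local
    # ...
}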
Environment isolation
For extra predictability, we can add file directives which isolate various external inputs:
- Only allow the given environment variables.

  %env HOME USER MY_OPTION1

  We can compile this to a loop over the output of export -p which unsets all other variables (a sketch follows after this list).

- Configure the available programs.

  %prog python python3

  This could create a command-line option --prog:python ... which accepts a path, defaulting to python3, and makes it so any python command in the script is overridden by the given program. We can compile checks right at the start of the script which error out if any specified programs don’t exist. We could have an option whereby no external commands are allowed unless declared by %prog.

- Configure the starting directory.

  %working-dir script|inherit|temp

  Unless it’s relevant, it’s annoying to have to think about where you run a script from, and it’s easy to overlook writing e.g. cd "$(dirname "$0")". We can force the user to choose the initial working directory. If they don’t choose, we could pick a default option, or start in some non-existent directory by compiling something like:

  # shell
  d=$(mktemp -d)
  cd "$d"
  rmdir "$d"
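As a rough idea, the %env directive could compile to something like this (a sketch; the parsing of export -p output is simplified and Bash-specific):

# shell - unset every exported variable not in the allow list
allowed=" HOME USER MY_OPTION1 "
while read -r _ _ decl; do               # lines look like: declare -x NAME="value"
    name=${decl%%=*}
    case "$allowed" in
        *" $name "*) ;;                  # keep it
        *) unset "$name" 2>/dev/null ;;  # drop everything else, ignoring readonly vars
    esac
done < <(export -p)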
Explicit external program use
Maybe we can force the programmer to distinguish external programs from functions, e.g. by some prefix:
# our lang
@mv my-file my-dir
Then we have freedom to add built-ins without worrying about shadowing commands.
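For instance, the prefixed form could simply compile to Bash’s command builtin, which bypasses any function or alias with the same name (one possible compilation, not the only one):

# shell
command mv my-file my-dir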
Portable wrappers for common programs
Core programs like grep, mkdir, ln, etc. can differ in what features they have, making the environment a shell script runs in even more unpredictable. To work around this, we could have a set of built-in functions which smooth out differences and implement common denominator flags.
The extreme of this would be to implement them in Perl where possible, which is a standard, portable language that I imagine is nearly as widespread as shells. Hell, the whole language could just target Perl.
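As a flavour of what a Perl-backed wrapper might look like (a hypothetical helper; it assumes Perl with the core File::Temp module is available):

# shell
portable_mktemp_d() {
    # prefer the real mktemp, fall back to Perl where it is missing or limited
    mktemp -d 2>/dev/null ||
        perl -MFile::Temp=tempdir -e 'print tempdir("shlangXXXXXX", TMPDIR => 1)'
}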
I think making robust shell scripts easier and faster to write has some value to it, and only a few features are needed to get quite far in that direction.