At least nowadays LLMs can rewrite Bash to JS/Python/Ruby pretty quickly.
#!/usr/bin/env rad
---
Dev automation script.
---
args:
    build b bool    # Build the project
    test t bool     # Run tests
    lint l bool     # Run linter
    run r bool      # Start dev server
    release R bool  # Release mode
    filter f str?   # Test filter pattern

    filter requires test

if build:
    mode = release ? "--release" : ""
    print("Building ({release ? 'release' : 'debug'})...")
    $`cargo build {mode}`

if lint:
    print("Linting...")
    $`cargo clippy -- -D warnings`

if test:
    f = filter ? "-- {filter}" : ""
    print("Running tests{filter ? ' (filter: {filter})' : ''}...")
    $`cargo test {f}`

if run:
    bin = release ? "target/release/server" : "target/debug/server"
    $`./{bin}`
Usage: ./dev -b (build), ./dev -blt -f "test_auth" (build, lint, test auth), ./dev -r (just run).
Actively being developed!
- when ls started quoting filenames with spaces (add -N)
- when perl stopped being installed by default in CentOS and AlmaLinux (had to add dnf install -y perl)
- when egrep alias disappeared (use grep -E)
Your fault: http://mywiki.wooledge.org/ParsingLs
The best way is a scripting language with locked-down dependency spec inside the script. Weirdly .NET is leading the way here.
bash is glue and for me, glue code must survive the passage of time. The moment you use a high-level language for glue code it stops being glue code.
Such as?
One good example is `uuidgen`
That's neither a standard CLI utility nor a bash builtin.
I suppose it can be nice if you are already in a JS environment, but wouldn't the author's need be met by just putting their shell commands into a .sh file? This way is more than a little over-engineered with little benefit in return for that extra engineering.
The reasons (provided by the author) for creating a Make.ts file are completely met by popping your commands into a .sh file.
With the added advantage that I don't need to care about what else needs to be installed on the build system when I check out a project.
I just don't see the advantages.
The use-case, as per the author's stated requirements, was to do away with pressing up arrow or searching history.
Exactly what benefit does Make.ts provide over Make.sh in this use-case? I mean, I didn't choose what the use-case is, the author did, and according to the use-case chosen by him, this is horribly over-engineered, horribly inefficient, much more fragile, etc.
For both scripts, everything interesting is installed via Nix, so there's little reliance on special-casing various distros' built-in package managers.
In both cases, all scripts have to pass ShellCheck to "build". They can't be deployed or committed with obvious parse errors or ambiguities around quoting or typos in variable names.
In the case of the scripts that are tools for developers, the Bash interpreter, coreutils, and all external commands are provided by Nix, which hardcodes their full path into the scripts. The scripts don't care if you're on Linux or macOS; they don't even care what's on your PATH (or if it's empty). They embrace "modern" Bash features and use whatever CLI tools provide the most readable interface.
Is it my favorite language? No. But it often has the best ROI, and portability and most gotchas are solved pretty well if you know what tools to use, especially if your scripts are simple.
As soon as you have state accumulating somewhere, branching, or loops, it becomes chaotic too quickly.
The thrust of the article could be summarized as: if you type more than one command into the shell, make a script.
If you combine that with relative working folders it’s very easy to manage large projects.
And you can get shell completion, which is extra nice.
Just integrate fzf into your shell and use ctrl-r to instantly summon a fuzzy shell history search and re-execute any command from your history!
I cannot imagine going back to using a terminal without this.
I still write plenty of scripts if I need to repeat multi command processes but for one liners just use fzf to reexecute it.
Also in a shared project you can ignore script files with .git/info/exclude instead of .gitignore so you don’t have to check in your personal exclusion patterns to the main branch.
Seriously, people: if you use a terminal, you need the following tools to dominate the shell:
ripgrep, zoxide, fzf, fd
I made a function called y that is like the z function but is git worktree / jj workspace aware. So useful!
"I want to be clear here, I am not advocating writing “proper” scripts, just capturing your interactive, ad-hoc command to a persistent file."
What's the difference? Why not version control it and share it with colleagues? Imagine writing a unit test to test a new feature, then deleting it when done: what a waste. Ok, it's not exactly the same because you aren't using these scripts to catch regressions, but all of that useful learning and context can be reused.
I don't think the language you use for scripting is too important as long as the runtime is pinned and easily available on all engineers' machines, perhaps using a toolchain manager like... mise[3].
[1] https://mise.jdx.dev/tasks/ [2] https://mise.jdx.dev/shell-aliases.html [3] https://mise.jdx.dev/dev-tools/
You've written 2,438 comments at HN and only two of those messages include "Linux". Two! And the most recent of those two comments includes this gem:
"fighting my way through a Linux CLI is exactly the kind of thing I use Chatgpt for professionally."
Maybe you shouldn't be telling more capable technologists what languages they should avoid.
Obviously I was using hyperbole. But I can say for sure that my life as a developer and second-rate sysadmin improved when I adopted as policy that I would never ever write any Bash script. Not even for a supposed one-liner, since I could just write that in ZX and avoid any temptation to write (and badly maintain) a Bash script.
import { $ } from 'zx'
await $`echo my one liner`
Because I'm hardcoding directory paths.
Because I'm assuming things are set up a particular way: the way they are on my machine.
Because this is hardcoded to a particular workflow that I'm using here and now, and that's it.
Because I do not want to be responsible for it after no longer needing it.
Because I don't want to justify it.
Because I'm hard-coding things that shouldn't be checked in.
Because I don't want to be responsible for establishing the way we do things based on this script.
Given the choice between starting with an almost-working script or starting from scratch, I’ll take the former; it might save a few hours.
My colleagues and I don’t do this 100% of the time, but I never regret it and always appreciate it when others do.
The major thing to be concerned about there is leaking things like hard-coded secrets and that's where something like .env files can come in handy and knowing your tools to make use of them. Deno (as the running example) makes using .env files easy enough by adding the `--env` flag to your `deno run` shebang/task-line and then using `Deno.env` like any other environment variable. (Then don't forget to .gitignore your .env files.)
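A minimal sketch of what that looks like (the file layout and the API_TOKEN name here are made up for illustration; note Deno also wants --allow-env before it will read the variable):
#!/usr/bin/env -S deno run --env --allow-env --allow-net
// .env (gitignored) might contain: API_TOKEN=super-secret
const token = Deno.env.get("API_TOKEN");
if (!token) throw new Error("API_TOKEN is not set");
const res = await fetch("https://example.com/api", {
  headers: { Authorization: `Bearer ${token}` },
});
console.log(res.status);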
Another quite standard way of saving your command history in a file that I have seen used in all ecosystems is called "make", which even saves you a few characters when you have to type it, and at least people don't have to discover your custom system, autocomplete works out of the box, etc.
I quite like make or just as a task runner, since the syntax / indentation / etc overhead is a lot lower. I haven't yet tried to introduce it in any JS based projects though, because it adds yet another tool.
One very big upside of package.json for me is that we use pnpm, which has a very sophisticated way of targeting packages with --filter (like "run tests for packages that had modifications compared to master, and all their transitive dependents", which is often exactly what you want to do)
Like yeah it's totally reasonable that they go that route, but please just let me pass a command that can be executed without having to wrap it in a package.json script
Most Deno tasks though, more so than a lot of npm scripts in my experience, tend to just be `deno run …` commands (the shebang line in the article) to a script in a directory like `_scripts/` rather than written as CLI commands.
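So a typical task ends up looking something like this (the path and permissions are illustrative, not taken from the article):
#!/usr/bin/env -S deno run --allow-run
// _scripts/build.ts (hypothetical path): runnable directly via the shebang,
// or mapped in deno.json as  "tasks": { "build": "deno run --allow-run _scripts/build.ts" }
const child = new Deno.Command("cargo", {
  args: ["build", "--release"],
  stdout: "inherit",
  stderr: "inherit",
}).spawn();
Deno.exit((await child.status).code);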
TL;DR: Make is a very nice tool for gathering the "auxiliary" scripts needed for a project in a language-agnostic manner. It's better than setup.py and package.json precisely because it provides a single interface for projects of both kinds.
[1] Which is worth knowing so you can avoid both features like the plague.
Coming from a web background, my usual move is to put all scripts in the package.json, if present. I'd use make for everything, but it's overkill for a lot of stuff and is non-standard in a lot of the domains I work in.
Same!
My usual move used to be putting everything in a Makefile, but after getting traumatized time and time again by ever-growing complexity, I've started to embrace Just (https://github.com/casey/just), which is basically just a simpler Make. I tend to work across teams a lot, and make/just seems easier for people to spot at a glance than scripts inside of a package.json, which mostly frontend/JavaScript/TypeScript people know to look at.
But in the end I think it matters less specifically what you use, as long as you have one entrypoint that collects everything, could be a Makefile, Justfile or package.json, as long as everything gets under the same thing. Could be a .sh for all I care :)
Instead, I now swear by atuin.sh, which just remembers every command I've typed. It's sort of bad, since I never actually get nice scripts, just really long commands, but it gets you 50% of the way there with 0 effort. When leaving my last job, I even donated my (very long) atuin history to my successor, which I suspect was more useful than any document I wrote.
My only hot tip: atuin overrides the up-arrow by default, which is really annoying, so do `atuin init zsh --disable-up-arrow` to make it only run on Ctrl-R.
What about this instead: select any number of lines, in any file, and pass it through to the shell. You get convenience of text editing, file management, and shell’s straightforwardness.
(This approach was tried and cemented in Acme, a text editor from Bell Labs.)
Though, I generally run these scripts using bun (and the corresponding `$` in bun) - basically the same thing, but I just prefer bun over deno
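For comparison, my Bun version of that kind of one-liner looks roughly like this (my own sketch, not from the article):
#!/usr/bin/env bun
import { $ } from "bun";
// Bun Shell's $ works much like zx/dax's; .text() captures stdout as a string.
const branch = (await $`git rev-parse --abbrev-ref HEAD`.text()).trim();
console.log(`on branch ${branch}`);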
If you work in PowerShell you can start out in the terminal, then when you've got whatever you need working you can grab the history (get-history) and write it to a file, which I've always referred to as a `sample`. Then when it becomes important enough that other people ask me about it regularly I refactor the `sample` into a true production grade `script`. It often doesn't start out with a clear direction and creating a separate file is just unnecessary ceremony when you can just tinker and export later when the `up-enter` pattern actually appears.
With your shell's vi mode, it's even better L -> k k k
Or search them with /
And if you are proficient with vim, you can edit your previous one-line really fast
(Remap/swap CapsLock with Escape system-wide. It's just a GUI setting on Linux and macOS, and a registry key away on Windows.)
Instead of juggling dashboards and collections of requests, or relying on your shell history as Matklad mentions, you have it in a file that you can commit and plug into CI. Win-win.
At some point, that testing shell script can be integrated into your codebase using your working language and build tooling.
People like Postman because it's easy to share credentials and config, and easy(ish) to give to less technical people, but the cliff for pulling that stuff into code is often annoying.
"Postman but actually it's a jupyter-style notebook with your credentials" would be cool, although I don't know exactly what that would look like.
Edit: zero-dependency Python.
Stopped using python for scripting for this reason
I've started using Python for many more tasks after I discovered this feature. I'm primarily a JS/TS developer, but the ability to write a "standalone" script that can pull in third-party dependencies without affecting your current project is a massive productivity boost.
That convenience you think boosts productivity is a short-term thing; using comments for dependency management is an anti-pattern imo
life() {
python3 << EOF
print(42)
EOF
}
I don't understand why you wouldn't want your scripts in your Git - but I guess OP's context is different from mine.
Anyway, what kills this for me is the need to add await before every command.
Historically we had to use pip which was super janky. Uv solves most of pip's issues but you still do have to deal with venvs and one issue it doesn't solve is that you can't do imports by relative file path which is something you always end up wanting for ad-hoc scripting. You can use relative package paths but that's totally different.
Just add the targeted path to sys.path, or write your own import handler; importlib might help there. But true, out of the box, imports in python3 are a bit wacky for more flexible usage.
No, they don't. Tooling is fine with those things.
I’m not sure about that. All those ‘await’s and parentheses really kill my mojo. Why do you find it better than Python?
I said already - the main reason is you can import files by relative file path.
You can get close to the Deno UX with uv and a script like this:
#!/usr/bin/env -S uv run --script
#
# /// script
# requires-python = ">=3.12"
# dependencies = ["httpx"]
# ///
import httpx
print(httpx.get("https://example.com"))
But you still have to deal with the venv e.g. for IDE support, linting and so on. It's just more janky than Deno.
I wish someone would make a nice modern scripting language with arbitrary precision integers, static types, file path imports, third party dependencies in single files, etc. Deno is the closest thing I've found but in spite of how good Typescript is there are still a ton of Javascript warts you can't get away from (`var`, `==`, the number format, the prototype system, janky map/reduce design, etc.)
iwr https://example.com
You also have arbitrary precision integers and all the other stuff from .NET:
$b = [BigInt]::Parse('10000000000000000000000000000000000000000000000000')
...this is the same sort of 'works for me' philosophy as in Matklad's post though, it's so heavily opinionated and personalized that I don't expect other people to pick it up, but it makes my day-to-day work a lot easier (especially since I switch multiple times between macOS, Linux and Windows on a typical day).
I'm not sure if Bun can do it too, but the one great thing about Deno is that it can directly import without requiring a 'manifest file' (e.g. package.json or deno.json), e.g. you can do something like this right in the code:
import { Bla } from 'jsr:@floooh/bla^1';
This is just perfect for this type of command line tool.
The article used dax instead, which also looks fine! https://github.com/dsherret/dax
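If I remember the dax README right, usage is nearly identical to the zx snippet above, just imported from JSR (treat the exact specifier and flags as approximate):
#!/usr/bin/env -S deno run --allow-run
import $ from "jsr:@david/dax";
await $`echo my one liner`;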
One of my projects actually uses bun shell to call some rust binary in a website itself and I really liked this use case.
Then I realised how powerful it was that I could create tasks with dependencies (ie: when a task requires the user to have jq installed, you can add that to the mise.toml) which makes the tasks beautifully shareable across a team. The only tool they need to have installed is mise, and mise handles everything else for them.
...and that was also the one concrete example where it makes sense to have extra dependency and abstraction layer on top of a shell script:)
say you know TS and even if you walk back to where $ is defined, can you tell immediately why $`ls {dir}` gets executed and not just logged?
so how does it get executed?
unless it was just an example and you are supposed to switch in $ from some third party library... which is another dependency in addition to deno... and which can be shai-huluded anytime or you may be offline and cannot install it when you run the script?
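It gets executed because `$` is a tagged template function: JavaScript calls it with the literal string parts plus the interpolated values, and the function itself spawns the process. A stripped-down sketch of the idea (nothing like the real zx/dax implementations, and with none of their quoting or escaping care):
async function $(parts: TemplateStringsArray, ...values: unknown[]): Promise<string> {
  // Reassemble the command text from the literal parts and interpolations.
  const cmd = parts.reduce((out, part, i) =>
    out + part + (i < values.length ? String(values[i]) : ""), "");
  // Naive whitespace split; real libraries handle quoting properly.
  const [bin, ...args] = cmd.trim().split(/\s+/);
  const { stdout } = await new Deno.Command(bin, { args }).output();
  return new TextDecoder().decode(stdout);
}

const dir = "/tmp";
console.log(await $`ls ${dir}`); // the tag function runs, so nothing gets "just logged"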
No repetitive short sentences, no "Not X, just Y." patterns, and lots of opinionated statements, written confidently in the first person.
Please more of this.
Same, I'm caring less about "Yeah, I've learned something new" and more about "Yeah, this sounds like I'm reading the thoughts of a human, how refreshing" which is a sad state of affairs.
I've adopted my own writing style because of this too, used to be very careful about spelling and grammar, very nitpicky, but have now stopped doing that, because people started calling my perfectly spelled responses LLM-generated...
This is annoying to say the least, just because there is no "made with love by ChatGPT" stamp on LLM-produced stuff (which is far from being bad BTW)
nature? In the horizon line—the lightning-harrowed bough—the canyon's pink striation—the pupil of the goat
— @ctrlcreep
https://twitter.com/ctrlcreep/status/1808321708627317061