tired: ls |wc -l
wired: c=0; for x in *; do ((c++)); done; echo $c
inspired: whatever @mirabilos suggests
Highest system load I've seen (while the system was still responsive, that is):
Thu Apr 16 05:02:25 PM CDT 2026 83.95 (6.995 per processor (12))
Was batch converting ~250 jpegs into avif, running 16 processes concurrently.
#UnixShell makes it stupidly easy. :D
(hint: ncpus=$(lscpu -bp |grep -c "^[0-9]"); while (( $(jobs -p |wc -l) >= ncpus )); do sleep 1; done ;)
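Fleshed out as a full loop (a sketch — I'm assuming ImageMagick's `magick` as the converter here; swap in avifenc or whatever you actually use):

```shell
#!/usr/bin/env bash
# Throttled batch convert: keep at most $ncpus background jobs in flight.
ncpus=$(lscpu -bp 2>/dev/null | grep -c '^[0-9]') || ncpus=4  # fallback if no lscpu

for jpg in *.jpg; do
  [[ -e $jpg ]] || continue                        # glob matched nothing
  # Wait until a job slot frees up
  while (( $(jobs -p | wc -l) >= ncpus )); do sleep 1; done
  magick "$jpg" "${jpg%.jpg}.avif" &               # hypothetical converter call
done
wait   # let the final batch finish
```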
Something that I'd love to be able to do, but haven't figured out how, yet:
You can run somecommand >log.txt 2>&1 to get STDOUT and STDERR together (order matters: writing it as 2>&1 >log.txt would leave STDERR on the terminal), but you won't know which is which.
You can run somecommand 2>err.txt >log.txt to get STDOUT and STDERR separate, but you won't know the timing, or which error messages happened between which STDOUT messages.
I'd like to figure out some way to combine both, so you end up with a file like this:
1: this was a STDOUT message
1: this was a STDOUT message
1: this was a STDOUT message
2: this was a STDERR message
2: this was a STDERR message
1: this was a STDOUT message
2: this was a STDERR message
1: this was a STDOUT message
1: this was a STDOUT message
1: this was a STDOUT message
1: this was a STDOUT message
1: this was a STDOUT message
2: this was a STDERR message
2: this was a STDERR message
Any ideas? @mirabilos?
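Closest I can sketch myself (hedged: an approximation, not a known-good answer): tag each stream with its own sed via process substitution, appending to the same file. Caveat: the two seds are concurrent writers, so nearby lines can still land out of order, and the taggers may still be flushing after the main command returns.

```shell
#!/usr/bin/env bash
# Tag stdout lines "1: " and stderr lines "2: ", merged into one log.
# Interleaving is best-effort only: two concurrent writers, pipe buffering.
logdemo() {               # stand-in for the real command
  echo "this was a STDOUT message"
  echo "this was a STDERR message" >&2
}
logdemo \
  1> >(sed 's/^/1: /' >> combined.txt) \
  2> >(sed 's/^/2: /' >> combined.txt)
```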
Wrote a #shell function without using ls inside of $( ), so my inner @mirabilos won't harass me. XD
#slightly easier wireguard command
function wg {
  local dir file profile= profiledir= parm=${1:-} statustext
  #Find profile dir
  for dir in {,/usr/local}/etc/wireguard; do
    if [[ -d $dir ]]; then
      profiledir=$dir
      break
    fi
  done
  #Find config file
  if [[ -n $profiledir ]]; then
    for file in "$profiledir"/*.conf; do
      if [[ -e $file ]]; then
        profile=${file##*/}       #strip leading path
        profile=${profile%.conf}  #strip trailing .conf
        break
      fi
    done
  fi
  [[ -n $profile ]] || profile=proton
  statustext="wireguard profile $profile"
  case ${parm,,} in
    up|on)    doas wg-quick up "$profile";;
    down|off) doas wg-quick down "$profile";;
    status)   echo -en "$statustext _______\r"
              echo -en "$statustext "
              ifconfig |grep -q "^$profile:" && echo enabled || echo disabled;;
    *)        warn "wg usage: wg up|down|status";;
  esac
}
Hmm, seems ${foo,,} for lower case conversion is #bash-only. I wonder if I should use tr instead.
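A tr-based version would be plain POSIX sh, so it also works in #ksh and #mksh (mksh additionally has `typeset -l`, if memory serves):

```shell
#!/bin/sh
# POSIX-portable lowercasing, instead of bash-only ${parm,,}
tolower() {
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]'
}
tolower 'UP'   # → up
```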
I'm not saying you should all go out and set up aliases to enable your various typos and slip-ups, but I honestly don't know why I didn't do this 25 years ago.
alias cd..='cd ..'
Recursion usually scares me a bit, but it worked out nicely here:
#convert "cx"-style Esperanto notation to native accents (ĉ)
function eaccent {
  if [[ ${1:-} ]]; then
    echo "$*" |eaccent
  else
    sed 's/cx/ĉ/g; s/gx/ĝ/g; s/hx/ĥ/g; s/jx/ĵ/g; s/sx/ŝ/g; s/ux/ŭ/g; s/C[xX]/Ĉ/g; s/G[xX]/Ĝ/g; s/H[xX]/Ĥ/g; s/J[xX]/Ĵ/g; s/S[xX]/Ŝ/g; s/U[xX]/Ŭ/g'
  fi
}
#bash #unix #shell #scripts #scripting #UnixShell #ShellScripting #Esperanto
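Sanity check, inlining the same lowercase substitutions on the classic Esperanto phrase:

```shell
echo 'ehxosxangxo cxiujxauxde' \
|sed 's/cx/ĉ/g; s/gx/ĝ/g; s/hx/ĥ/g; s/jx/ĵ/g; s/sx/ŝ/g; s/ux/ŭ/g'
# → eĥoŝanĝo ĉiuĵaŭde
```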
Quick tip if you ever need to add a linebreak to the end of a file manually for whatever reason, `echo >> filename` works.
When you're dealing with plaintext data in a suspicious context, consider using `cat -v` where you'd usually use `cat`.
```
-v, --show-nonprinting
use ^ and M- notation, except for LFD and TAB
```
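E.g. a stray carriage return (the usual DOS-line-ending suspect) becomes visible as ^M:

```shell
printf 'evil\r\n' | cat -v
# → evil^M
```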
File this under #shell #functions I should have written years ago:
function grepc {
  #Do a grep -c, but skip files with no results
  #(assumes multiple file args, so grep prefixes each count with "file:")
  grep -c "$@" |grep -v ':0$'
}
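Quick check of what that filter does, with hypothetical throwaway files:

```shell
printf 'a\nb\na\n' > one.txt
printf 'b\n' > two.txt
grep -c 'a' one.txt two.txt | grep -v ':0$'
# → one.txt:2   (two.txt's ":0" line is dropped)
```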
P.S., the body of the parent #toot was created by a simple #shell #function:
function apod {
  #Today's NASA Astronomy Picture of the Day info-fetcher
  curl -sL 'https://apod.nasa.gov/apod/archivepix.html' \
  |grep -m1 "[0-9][0-9]:" \
  |sed 's/^/Date: /;
        s|: *<a href="|\nURL: https://apod.nasa.gov/apod/|;
        s/">/\nTitle: /; s/<.*$//'
  echo
  echo "#NASA #Astronomy #PictureOfTheDay"
}
#bash #ksh #mksh #shellScripting #unix #UnixShell #WebScraping #Scraping #HTML