Print the first column from all rows:
awk '{print $1}'
Print the first and third columns from all rows:
awk '{print $1, $3}'
The comma between the column parameters in the previous command inserts a space between the output columns. However, you can change this behavior and use your own formatting:
awk '{print $1 " --- " $3}'
By default, awk will parse a row into columns using whitespace as the delimiter. The delimiter can be changed with the -F command line switch.
For example, change the delimiter to a colon:
awk -F: '{print $2}'
The field delimiter can be almost anything, such as an equal sign (-F=) or a period (-F.).
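For instance, on a typical Linux system /etc/passwd is colon-delimited, so the following prints every username on the system:
awk -F: '{print $1}' /etc/passwd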
There are plenty of situations where you might need to print the last column from a given row without knowing how many columns that row has. The built-in variable NF, which holds the number of fields in the current row, can be used to solve this.
awk '{print $NF}'
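A quick illustration, using rows of different widths:
printf 'a b c\nd e\n' | awk '{print $NF}'
Will output:
c
e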
Another built-in variable is NR, which always contains the current row number. This can be used to do things such as printing the last column from only the first row:
awk 'NR==1 {print $NF}'
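A related pattern, assuming the first row is a header you want to skip, prints the first column of every other row:
awk 'NR>1 {print $1}'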
If you want to print particular columns only from rows that match certain conditions, you can pass a regular expression:
awk '/regular-expression-to-match/ {print $1}'
You can also invert your regular expression match by putting an exclamation mark before the pattern:
awk '!/regular-expression-to-match/ {print $1}'
Grab only the HTTP headers, instead of the entire HTTP response and content, from a URL:
curl -I $URL
To ensure you see the entire request/response chain, add the -L command line switch to follow redirects:
curl -IL $URL
If you maintain a lot of domain redirects, it is important to make sure they continue to work. The following curl command can be used to loop through a text file containing URLs to check (a sketch of the loop follows the example below). The output for each check is the final HTTP status code and URL.
curl -sL -w "%{http_code} %{url_effective}\\n" "$URL" -o /dev/null
For example, to verify https://old-site.com redirects to https://new-site.com:
curl -sL -w "%{http_code} %{url_effective}\\n" "https://old-site.com" -o /dev/null
Will output:
200 https://new-site.com
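A minimal sketch of the loop mentioned above, assuming a file named urls.txt containing one URL per line:
while read -r url; do
  curl -sL -w "%{http_code} %{url_effective}\n" "$url" -o /dev/null
done < urls.txt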
If you do local web development, you probably have to frequently access your website or web application from localhost. In most situations this shouldn’t be a problem, but for those situations where it is (such as testing Apache VirtualHosts or nginx Server Blocks), you can easily change the Host header with curl to tell the website or web application where you want to go.
curl -i -H 'Host: example.com' $URL
Alternatively, temporarily add a record in your workstation’s /etc/hosts file pointing the domain name of the URL you are accessing to 127.0.0.1.
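For example, assuming the site is example.com, the temporary /etc/hosts entry would be:
127.0.0.1 example.com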
Instead of changing your /etc/hosts file, force a curl request to go through an IP address that differs from what the hostname resolves to in public DNS.
curl -IL $HOSTNAME --resolve $HOSTNAME:$PORT:$SPECIFIC_IP
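A hypothetical example, forcing requests for example.com through the documentation IP address 203.0.113.10:
curl -IL https://example.com --resolve example.com:443:203.0.113.10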
Create file curlloop.sh with the following content (the %header{} write-out variable requires curl 7.84 or newer):
#!/bin/bash
url_path="${1}"
request_count=1
while true
do
  # Print the request number, then the final status code, total time,
  # Cloudflare cache status, and effective URL for each request.
  echo -n "${request_count}: "
  curl -sL -w "%{http_code} %{time_total} %header{cf-cache-status} %{url_effective}\\n" "${url_path}" -o /dev/null
  sleep .5
  ((request_count++))
done
Set the executable permission:
chmod +x curlloop.sh
Run the shell script with the following command:
./curlloop.sh refcli.com
Example output:
1: 200 0.341181 DYNAMIC https://refcli.com/
2: 200 0.158135 DYNAMIC https://refcli.com/
3: 200 0.169257 DYNAMIC https://refcli.com/
4: 200 0.179654 DYNAMIC https://refcli.com/
5: 200 0.185863 DYNAMIC https://refcli.com/
POST plain text data to URL:
curl -X POST -d 'plain-text' -H "Content-Type:text/plain" $URL
POST JSON data to URL:
curl -X POST -d '{"key1":"value1","key2":"value2"}' -H "Content-Type:application/json" $URL
Create a file called payload, put valid JSON in it, and reference it with @ in the following curl command:
curl -d @./payload -H "X-Auth: $TOKEN" "https://api.example.com/api/query"
If you are using the curl command within a shell script and want to pass it JSON from a Bash variable in the same script, use the following method (the -d @- argument tells curl to read the request body from standard input):
PAYLOAD='
[{
"auth": {
"identity": {
"methods": ["password"],
"password": {
"user": {
"name": "USERNAME",
"domain": {
"id": "default"
},
"password": "PASSWORD"
}
}
},
"scope": {
"project": {
"name": "PROJECT",
"domain": {
"id": "default"
}
}
}
}
}]'
RESULT=$(curl -s -d @- -H "X-Auth: $TOKEN" "https://api.example.com/api/query" <<< "$PAYLOAD")
Another method you can use is a here document. This eliminates the need for the $PAYLOAD Bash variable.
RESULT=$(curl -d @- -H "X-Auth: $TOKEN" "https://api.example.com/api/query" <<PAYLOAD
[{
"auth": {
"identity": {
"methods": ["password"],
"password": {
"user": {
"name": "USERNAME",
"domain": {
"id": "default"
},
"password": "PASSWORD"
}
}
},
"scope": {
"project": {
"name": "PROJECT",
"domain": {
"id": "default"
}
}
}
}
}]
PAYLOAD)
Many of the tools used today are built on top of some sort of API. Those tools abstract all of the granular details and information that comes through an API request and response. Sometimes it is useful to see what goes on behind the scenes, especially when it comes time to troubleshoot.
The following example uses the curl
command to query the OpenStack API to show the details about a particular hypervisor managed by OpenStack Nova.
Before you can do anything with any API, you need a token. With OpenStack it’s no different. OpenStack Keystone is the identity and authentication service that is used to generate tokens for end users and services.
The Keystone v2 API is quickly being deprecated. I have only included it for posterity’s sake, so jump to the v3 API below to generate a token:
curl -s \
-X POST https://10.240.0.100:5000/v2.0/tokens \
-H "Content-Type: application/json" \
-d '{"auth": {"tenantName": "TENANT", "passwordCredentials":{"username": "USERNAME", "password": "PASSWORD"}}}'
What follows is the Keystone v3 API. You will need a valid username and password already stored in OpenStack Keystone to generate a token:
curl -i \
-H "Content-Type: application/json" \
-d '
{ "auth": {
"identity": {
"methods": ["password"],
"password": {
"user": {
"name": "USERNAME",
"domain": { "id": "default" },
"password": "PASSWORD"
}
}
},
"scope": {
"project": {
"name": "PROJECT",
"domain": { "id": "default" }
}
}
}
}' \
https://10.240.0.100:5000/v3/auth/tokens
This will return a lot of JSON. Unfortunately, I did not make a copy of that JSON output when I originally created these notes. However, within that JSON will be a token that will be used in all of the subsequent commands. Let’s assume the token returned is 1234567890abcdefghijklmnopqrstuv.
Now that a token has been generated, I can begin querying the OpenStack Nova API for details on the particular hypervisor. For this example, I want details about compute01.local.lan, and I’m going to need its id:
curl -H "X-Auth-Token:1234567890abcdefghijklmnopqrstuv" http://10.240.0.100:8774/v3/os-hypervisors/compute01.local.lan/search | python -m json.tool
Take note, the curl command is going to output JSON, which can be difficult to read. I am piping the output to python -m json.tool to make it easier to read.
The above command returns the following JSON:
{
"hypervisors": [
{
"hypervisor_hostname": "compute01.local.lan",
"id": 1,
"state": "up",
"status": "enabled"
}
]
}
Now that I have the id, I can query the details about compute01.local.lan:
curl -H "X-Auth-Token:1234567890abcdefghijklmnopqrstuv" http://10.240.0.100:8774/v3/os-hypervisors/1 | python -m json.tool
The above command returns the following JSON which provides all of the details for that particular hypervisor:
{
"hypervisor": {
"cpu_info": "{\"vendor\": \"Intel\", \"model\": \"SandyBridge\", \"arch\": \"x86_64\", \"features\": [\"ssse3\", \"pge\", \"avx\", \"clflush\", \"sep\", \"syscall\", \"vme\", \"dtes64\", \"tsc\", \"xsave\", \"vmx\", \"xtpr\", \"cmov\", \"pcid\", \"est\", \"pat\", \"monitor\", \"smx\", \"lm\", \"msr\", \"nx\", \"fxsr\", \"tm\", \"sse4.1\", \"pae\", \"sse4.2\", \"pclmuldq\", \"acpi\", \"tsc-deadline\", \"mmx\", \"osxsave\", \"cx8\", \"mce\", \"mtrr\", \"rdtscp\", \"ht\", \"dca\", \"lahf_lm\", \"pdcm\", \"mca\", \"pdpe1gb\", \"apic\", \"sse\", \"pse\", \"ds\", \"pni\", \"tm2\", \"aes\", \"sse2\", \"ss\", \"pbe\", \"de\", \"fpu\", \"cx16\", \"pse36\", \"ds_cpl\", \"popcnt\", \"x2apic\"], \"topology\": {\"cores\": 6, \"threads\": 2, \"sockets\": 2}}",
"current_workload": 0,
"disk_available_least": 752,
"free_disk_gb": 878,
"free_ram_mb": 117633,
"host_ip": "10.240.0.200",
"hypervisor_hostname": "compute01.local.lan",
"hypervisor_type": "QEMU",
"hypervisor_version": 2000000,
"id": 1,
"local_gb": 971,
"local_gb_used": 93,
"memory_mb": 128897,
"memory_mb_used": 11264,
"os-pci:pci_stats": [],
"running_vms": 5,
"service": {
"disabled_reason": null,
"host": "compute01",
"id": 25
},
"state": "up",
"status": "enabled",
"vcpus": 24,
"vcpus_used": 6
}
}
To interact with Google Cloud APIs using curl, you have to pass a token in the Authorization request header. This is tedious to do manually every time, and the token will eventually expire, so you can create a command line alias to make the process easier. The generated token maps to the Google account you are currently authenticated with in gcloud (verify which account is authenticated with the gcloud config list command).
The following command is copied from the Google Cloud Run documentation on authenticating developers, services, and end users:
alias gcurl='curl --header "Authorization: Bearer $(gcloud config config-helper --format=value\(credential.access_token\) --force-auth-refresh)"'
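Then call an authenticated endpoint as usual; for example, against a hypothetical Cloud Run service URL:
gcurl https://my-service-abc123-uc.a.run.app/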
From OS X 10.9 Mavericks to macOS 14 Sonoma, flush the DNS cache then restart DNS responder:
sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder
If you are uploading a picture to a public website, it would be wise to scrub any exif properties, especially GPS exif properties. You can do this with the following command:
exiftool -all= picture.jpg
If you are in a directory with many pictures that you want to scrub the exif data from, you can use a wildcard to process them all:
exiftool -all= *.jpg
You can quickly figure out if a picture is missing a particular exif property by running the following command. In this example, I want to see if my picture has the DateTimeOriginal exif property set:
exiftool -filename -r -if '(not $datetimeoriginal)' /path/to/picture.jpg
If you have a folder of pictures to check, or even a folder containing more folders of pictures, you can simply replace /path/to/picture.jpg with /path/to/picture/directory/:
exiftool -filename -r -if '(not $datetimeoriginal)' /path/to/picture/directory/
The picture does not have the DateTimeOriginal exif property if its file name is returned.
Ryan M. provides more insight into finding and fixing images with no exif dates.
If your picture was taken on June 29, 2007 at 1:38:55 PM, you can add the CreateDate exif property to your picture with the following command:
exiftool -createdate="2007:06:29 13:38:55" /path/to/picture.jpg
I had a situation where many of my pictures did not have the CreateDate exif property but they did have the DateTimeOriginal exif property. I wanted the CreateDate exif property to have the same value as the DateTimeOriginal exif property. ExifTool’s if functionality makes this easy to fix:
exiftool '-createdate<datetimeoriginal' -r -if '(not $createdate and $datetimeoriginal)' /path/to/picture/directory/
ExifTool will always copy the original picture and then make its modifications. If you want it to overwrite the original picture, add -overwrite_original_in_place to the exiftool command line.
Print media information about a file:
ffmpeg -i INPUT
Convert a MOV file to an MP4 container by copying the streams. Assuming the source video codec is compatible with the MP4 container, no re-encoding is necessary and there is no loss in quality.
ffmpeg -i INPUT.mov -c:v copy -c:a copy OUTPUT.mp4
Convert a video to NTSC DVD format:
ffmpeg -i INPUT -target ntsc-dvd OUTPUT.mp4
Adding the faststart flag to a video file allows it to begin playing in a compatible web browser as soon as possible, instead of waiting for the entire file to download.
ffmpeg -i INPUT -c:a libfaac -c:v copy -movflags faststart OUTPUT
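Note that libfaac was removed from ffmpeg years ago; on modern builds, the built-in AAC encoder accomplishes the same thing:
ffmpeg -i INPUT -c:a aac -c:v copy -movflags faststart OUTPUT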
Strip the video stream and keep only the audio:
ffmpeg -i INPUT -vn -c:a copy OUTPUT
Strip the audio stream and keep only the video:
ffmpeg -i INPUT -an -c:v copy OUTPUT
Refer to post: Correct Smartphone Video Orientation and How To Rotate iOS and Android Videos with ffmpeg.
Recursively set all files to mode 644:
find . -type f -exec chmod 644 {} \;
Recursively set all directories to mode 755:
find . -type d -exec chmod 755 {} \;
Find files modified within the last minute:
find . -type f -mmin -1
Find files accessed within the last 5 days and list them with access times:
find . -type f -atime -5 -exec ls -ltu {} \;
Find the 10 most recently modified files:
find . -type f -printf '%T@ %p\n' | sort -n | tail -10
The command will return a UNIX timestamp with fractional seconds for each file. You can see the human readable time a file was last modified by running the following command:
date -r "<ABSOLUTE PATH OF FILE>"
On macOS, find the 10 most recently modified files:
find . -type f -print0 | xargs -0 stat -f "%m %N" | sort -rn | head -10
The command will return a UNIX timestamp for each file. It can be converted to human readable time by using the following command:
date -r <UNIXTIMESTAMP>
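On Linux with GNU coreutils, the equivalent conversion is:
date -d @<UNIXTIMESTAMP>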
Set your global git identity:
git config --global user.name "Firstname Lastname"
git config --global user.email "you@example.com"
This is not a command you should run on a git repo being shared. Only run this command if you have a git repo that only you work in, and want to change the author and committer name and email on every commit. Every commit’s hash will be recalculated.
git filter-branch -f --env-filter "
GIT_AUTHOR_NAME='New Author Name'
GIT_AUTHOR_EMAIL='New Author Email'
GIT_COMMITTER_NAME='New Committer Name'
GIT_COMMITTER_EMAIL='New Committer Email'
" HEAD
Amend the most recent commit:
git commit --amend
Undo the last commit but keep its changes in the working tree:
git reset HEAD~1
Use extreme caution running the following command; it undoes the last commit and discards its changes:
git reset --hard HEAD~1
Normally you wouldn’t git commit with an empty message. But if you’re editing Gists from GitHub on your workstation, this is very useful:
git commit -a --allow-empty-message -m ''
If you deleted a commit using git reset --hard HEAD~1 and need it back, you can recover it with the following commands:
git fsck --lost-found
One or more dangling commits should be returned.
At the very least, you will need to know the first 7 characters of your deleted commit’s SHA hash. You should have this short SHA hash somewhere in your terminal scrollback. If you don’t have it, and you have many dangling commits, you can run git show <SHA HASH> on each dangling commit to figure out the right one to recover.
Once you have the 7 character short SHA or the entire SHA hash, merge it into your current branch with the following command:
git merge <SHA HASH>
Remove untracked files and directories:
git clean -f -d
Remove untracked and ignored files and directories:
git clean -f -x -d
If the commit is on a local branch:
git branch --contains <COMMIT>
If the commit is on a remote tracking branch:
git branch -a --contains <COMMIT>
If the commit is in a tag:
git tag --contains <COMMIT>
If you have been developing in a branch that contains a lot of commits you would rather not merge into the master branch, you can merge and squash all those commits into one commit by using the following commands:
git checkout master
git merge --squash dev
git commit -a -m "Commit message"
Delete a remote branch:
git push origin --delete <BRANCH>
Show a condensed, one line per commit log:
git log --oneline
Show a decorated commit graph across all branches:
git log --all --graph --decorate --oneline --abbrev-commit
Show commit counts per author:
git shortlog -sn
Run the following command to obtain the 7 character short SHA for a particular commit, perhaps to use as a Docker image tag:
git rev-parse --short <SHA>
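For the currently checked out commit, HEAD works in place of an explicit SHA:
git rev-parse --short HEAD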
List all physical drives. The -a command line switch specifies the Adapter ID; in this example, all Adapter IDs.
MegaCli64 -PdList -aAll
Show information about all logical drives. The -a command line switch specifies the Adapter ID; in this example, all Adapter IDs.
MegaCli64 -LDInfo -Lall -aAll
Get the number of logical drives. The -a command line switch specifies the Adapter ID; in this example, all Adapter IDs.
MegaCli64 -LdGetNum -aAll
Count the adapters:
MegaCli64 -adpCount
Start blinking the locator LED on a physical drive. The -a command line switch specifies the Adapter ID; in this example, Adapter ID 0.
MegaCli64 -PdLocate -start -physdrv[<ENCLOSURE>:<DRIVE>] -a0
Stop blinking the locator LED on a physical drive. The -a command line switch specifies the Adapter ID; in this example, Adapter ID 0.
MegaCli64 -PdLocate -stop -physdrv[<ENCLOSURE>:<DRIVE>] -a0
The following physical drive was Unconfigured(good):
Enclosure Device ID: 4
Slot Number: 16
Device Id: 154
Sequence Number: 1
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
Raw Size: 140014MB [0x11177328 Sectors]
Non Coerced Size: 139502MB [0x11077328 Sectors]
Coerced Size: 139392MB [0x11040000 Sectors]
Firmware state: Unconfigured(good)
SAS Address(0): 0x500000e117951c52
SAS Address(1): 0x0
Connected Port Number: 0(path0)
Inquiry Data: FUJITSU MBE2147RC D905D304PB30AAGJ
When looking at all of the physical drives by running MegaCli64 -PdList -aAll, Array #: 0, 1, 2, 3, 4 were missing a hotspare.
The following command added the Unconfigured(good) physical drive above as a dedicated hotspare:
The -a command line switch specifies the Adapter ID; in this example, Adapter ID 0.
MegaCli64 -PDHSP -Set -Dedicated -Array0,1,2,3,4 -PhysDrv [4:16] -a0
A macOS specific command to perform a network speed test from your terminal:
networkQuality -v
Scan a hostname endpoint to find the TLS ciphers it supports:
nmap --script ssl-enum-ciphers -p 443 $HOSTNAME
Print the contents of an SSL certificate:
openssl x509 -text -noout -in cert.pem
View the SSL certificate for any protocol using SSL/TLS with the following command:
openssl s_client -showcerts -connect FQDN:PORT
To see more documentation on s_client run the following command:
man s_client
Verify an SSL certificate against a CA bundle. bundle.pem could contain Intermediate Certificate(s) and/or a Root Certificate provided by your Certificate Authority.
openssl verify -CAfile bundle.pem cert.pem
Check a Private Key. The output will be RSA key ok if the Private Key is valid.
openssl rsa -check -noout -in key.pem
If the hashes output by the following commands match, then the Private Key corresponds to the SSL certificate.
openssl x509 -modulus -noout -in cert.pem | openssl sha256
openssl rsa -modulus -noout -in key.pem | openssl sha256
Same commands but in a single line with string matching:
[[ "$(openssl x509 -modulus -noout -in cert.pem | openssl sha256)" == "$(openssl rsa -modulus -noout -in key.pem | openssl sha256)" ]] && echo "MATCH" || echo "NO MATCH"
If your server can serve different TLS certificates based on the cipher suites supported by the client, verify each type of TLS certificate is being served with the following commands.
Connect with TLS 1.2 and only send RSA cipher suites:
echo | openssl s_client -tls1_2 -cipher aRSA -servername "$HOSTNAME" -connect "$HOSTNAME":443 2>/dev/null
Connect with TLS 1.2 and only send ECDSA cipher suites:
echo | openssl s_client -tls1_2 -cipher aECDSA -servername "$HOSTNAME" -connect "$HOSTNAME":443 2>/dev/null
Generate a Private Key for the Root Certificate (you will be prompted to input a passphrase):
openssl genrsa -des3 -out myCA.key 2048
Generate a Root Certificate (you will be prompted for the passphrase of the previously created Private Key):
openssl req -x509 -new -nodes -key myCA.key -sha256 -days 1825 -out myCA.pem
Generate a Private Key for the Server CSR:
openssl genrsa -out server.key 2048
Create the Server CSR:
openssl req -new -key server.key -out server.csr
Create the Certificate Extension Config by creating file server.ext with the following contents:
subjectAltName = @alt_names
[alt_names]
DNS.1 = self-signed.example.com
Finally, create the Server Certificate with Extensions:
openssl x509 -req -in server.csr -CA myCA.pem -CAkey myCA.key -CAcreateserial -out server.crt -days 365 -sha256 -extfile server.ext
Read the Server Certificate to verify it was created as desired:
openssl x509 -text -noout -in server.crt
Extract the Private Key from a PFX file:
openssl pkcs12 -in file.pfx -nocerts -out key.pem
If the Private Key is password protected, remove the password with the following command:
openssl rsa -in key.pem -out key-nopass.pem
Extract the certificates from a PFX file:
openssl pkcs12 -in file.pfx -nokeys -out certs.pem
Create file certcheck.sh with the following content:
#!/bin/bash
hostname="$1"
echo | openssl s_client -showcerts -servername "$hostname" -connect "$hostname":443 2>/dev/null | openssl x509 -serial -issuer -dates -subject -ext subjectAltName -noout
Set the executable permission:
chmod +x certcheck.sh
Run the script with the following command:
./certcheck.sh www.example.com
Print the partition table:
parted /dev/sdb print
Print partition table in sectors:
parted /dev/sdb unit s print
Print the partition table, including free space:
parted /dev/sdb print free
Print partition table free space in sectors:
parted /dev/sdb unit s print free
Unlike fdisk, every parted command executes in real time. This introduces much more room for human error that could cause data loss. I am not, nor is anyone else, responsible for any potential data loss when using parted.
First, if needed, create a partition table label:
parted /dev/sdb mklabel gpt
Second, create the primary partition:
parted /dev/sdb mkpart primary 0 100%
After running the above command you will more than likely see the following warning message:
Warning: The resulting partition is not properly aligned for best performance.
To dig into why this occurs, and a possible solution, I suggest you read through how to align partitions for best performance using parted.
However, as suggested in the comments on that blog post, a quicker way to ensure parted aligns the partition properly is to use percentages instead of exact values for the START and END parameters:
parted /dev/sdb mkpart primary 0% 100%
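You can then confirm the alignment with parted's align-check subcommand (here checking partition 1 for optimal alignment):
parted /dev/sdb align-check optimal 1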
Find and remove all instances of TEXT:
sed 's/TEXT//g'
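A quick demonstration:
echo 'fooTEXTbar' | sed 's/TEXT//g'
Will output:
foobar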
Find and remove all spaces:
sed -e 's/ //g'
Delete all blank lines:
sed '/^$/d' <FILE>
Remove leading whitespace:
sed 's/^[ \t]*//' <FILE>
Remove trailing whitespace:
sed 's/[ \t]*$//' <FILE>
Remove leading and trailing whitespace:
sed 's/^[ \t]*//;s/[ \t]*$//' <FILE>
Insert a line containing REPLACEWITH before each line matching PATTERN, editing the file in place:
sed -i '/PATTERN/i REPLACEWITH' <FILE>
Change BRIDGE_HOTPLUG=yes to BRIDGE_HOTPLUG=no no matter what it is already set to:
sed '/BRIDGE_HOTPLUG=/ s/=.*/=no/' /etc/default/bridge-utils
Change PermitRootLogin no to PermitRootLogin yes no matter what it is already set to:
sed '/PermitRootLogin / s/ .*/ yes/' /etc/ssh/sshd_config
Change date=2015 to date="2015":
echo 'date=2015' | sed 's/\(=[[:blank:]]*\)\(.*\)/\1"\2"/'
Join any line beginning with a space to the previous line:
sed -e :a -e '$!N;s/\n //;ta' -e 'P;D' <FILE>
Extract the href values from an HTML file:
sed -n 's/.*href="\([^"]*\).*/\1/p' <FILE.html>
A macOS specific command to print the current status of login and background items. The output will be similar to, but more detailed than, what is shown in System Settings > General > Login Items:
sfltool dumpbtm
A macOS specific command to convert images via the command line:
sips -s format jpeg -s formatOptions best <INPUT_IMAGE_FILE> --out <OUTPUT_IMAGE_FILE>
A macOS specific command to view details about devices connected to the USB bus (the same details are presented in System Report):
system_profiler SPUSBDataType
Filter tcpdump to display the packet containing the SNI header from the connecting client and make it human readable in the terminal output:
tcpdump -i <NETWORK_INTERFACE> -s 1500 '(tcp[((tcp[12:1] & 0xf0) >> 2)+5:1] = 0x01) and (tcp[((tcp[12:1] & 0xf0) >> 2):1] = 0x16)' -nnXSs0 -ttt
Read from a file:
while read -r line; do
  echo "$line"
done < temp.txt
Read from a file, but parse it before piping into the while loop:
while read -r line; do
  curl -sL -w "%{http_code} %{url_effective}\\n" "https://example.com$line" -o /dev/null
done < <(cut -d' ' -f1 static/_redirects)
Read from a here document:
while read -r line; do
  echo "$line"
done << EOF
LINE-1
LINE-2
LINE-3
EOF
Look up the ASN for an IP address:
whois -h whois.cymru.com $IP_ADDRESS
Create a command alias in your preferred shell environment, alias asn="whois -h whois.cymru.com", and then use asn $IP_ADDRESS instead.
Run a command in parallel using xargs. In the following example, -n 1 specifies that one argument from the input should be used for each command, -P 5 specifies the maximum number of parallel processes, and urls.txt contains a newline delimited list of URLs to use as arguments for each run of COMMAND:
cat urls.txt | xargs -n 1 -P 5 COMMAND
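For example, substituting curl for COMMAND to check 5 URLs at a time:
cat urls.txt | xargs -n 1 -P 5 curl -sI -o /dev/null -w "%{http_code} %{url_effective}\n"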