Useful Bash One-Liners
Here’s a loosely organized pile o’ shell quickies I kept googling for now and then, until I finally decided to write ’em down.
[accordion] [spoiler title=”Shady shit” style=”fancy”]Set custom mtime/atime on a file.
NOTE: Keep in mind, this does not affect ctime. To change ctime you may need to change the system time first, or use some other trick. Here’s a cool one: “Faking the Uptime in Linux”.
touch -t 201303120234 /tmp/oldfile
touch -d '-1 year' /tmp/oldfile
Change a file’s mtime/atime based on its current timestamp
touch -r /tmp/oldfile -d '-1 year' /tmp/oldfile
Hide command from w (it will still show up in ps output, though)
perl -e '$0 = "fake_command"; system("real_command");'
Clear your Bash history
cat /dev/null > ~/.bash_history && history -c && exit
Share contents of current directory via HTTP:
cd /tmp ; python -m SimpleHTTPServer <port>
Create a tunnel from localhost:2001 to remotehost:80
ssh -f -N -L2001:localhost:80 remotehost
Create a tunnel from localhost:2001 to remotehost:80 via bridgehost
ssh -f -N -L2001:remotehost:80 bridgehost
Tunnel your SSH connection via intermediary:
ssh -t reachable_host ssh unreachable_host
Output your microphone to other computer’s speaker:
dd if=/dev/dsp | ssh username@host dd of=/dev/dsp
Securely delete files and directories:
yum -y install coreutils srm wipe
shred -zvu -n 5 /tmp/dir1/secret_file
wipe -rfi /tmp/dir1/*
srm -vz /tmp/dir1/*
for i in `seq 1 5`; do cat /dev/random > /tmp/dir1/secret_file; done && /bin/rm -f /tmp/dir1/secret_file
Using Bash brace expansion to generate commands with no spaces
{nc,-v,-i1,-w1,google.com,443}
Connection to google.com 443 port [tcp/https] succeeded!
[/spoiler] [spoiler title=”Awk/sed/tr stuff” style=”fancy”]
Replace newlines with commas (join all lines into one)
sed ':a;N;$!ba;s/\n/,/g'
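For example, with a hypothetical three-line input:
printf 'a\nb\nc\n' | sed ':a;N;$!ba;s/\n/,/g'
a,b,c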
Remove leading spaces and tabs
sed 's/^[ \t]*//'
Remove single spaces only (leave multiple spaces):
sed 's/\([^ ]\) \([^ ]\)/\1\2/g'
Move first line to the end of list
sed '1,1{H;1h;d;};$G'
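For example, with made-up input:
printf '%s\n' one two three | sed '1,1{H;1h;d;};$G'
two
three
one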
Show allocated disk space:
df -klP -t xfs -t ext2 -t ext3 -t ext4 -t reiserfs | grep -oE ' [0-9]{1,}( +[0-9]{1,})+' | awk '{sum_used += $1} END {printf "%.0f GB\n", sum_used/1024/1024}'
Show used disk space:
df -klP -t xfs -t ext2 -t ext3 -t ext4 -t reiserfs | grep -oE ' [0-9]{1,}( +[0-9]{1,})+' | awk '{sum_used += $2} END {printf "%.0f GB\n", sum_used/1024/1024}'
Summarizing line data with awk:
# Sample data
ID1,223
ID2,124
ID3,125
ID2,400
ID1,345
ID4,876
ID2,243
ID4,287
ID1,376
ID3,765

# Add up the values in the second column
awk -F"," '{s+=$2}END{print s}' temp

# Add up the values in the second column only for ID2
awk -F, '$1=="ID2"{s+=$2;}END{print s}' temp
v="ID2"; awk -F, -v v="${v}" '$1==v{s+=$2;}END{print s}' temp

# List unique values in the first column
awk -F, '{a[$1];}END{for (i in a)print i;}' temp

# Add up values in the second column for each ID
awk -F, '{a[$1]+=$2;}END{for(i in a)print i", "a[i];}' temp

# Add up values in the second column for each ID and print total
awk -F, '{a[$1]+=$2;x+=$2}END{for(i in a)print i", "a[i];print "Total,"x}' temp

# Print the maximum second-column value for each group
awk -F, '{if (a[$1] < $2)a[$1]=$2;}END{for(i in a){print i,a[i];}}' OFS=, temp

# Print the number of occurrences for each ID
awk -F, '{a[$1]++;}END{for (i in a)print i, a[i];}' temp

# Print the first entry for each ID
awk -F, '!a[$1]++' temp

# Concatenate values for each ID
awk -F, '{if(a[$1])a[$1]=a[$1]":"$2; else a[$1]=$2;}END{for (i in a)print i, a[i];}' OFS=, temp
Extract URLs:
sed -n 's/.*href="\([^"]*\)".*/\1/p'
Preserve symlinks when using sed -i:
cd /etc/httpd/conf.d && sed -i --follow-symlinks 's/192.168.1/192.168.2/g' *.conf
Append each string with a consecutive number:
awk -vRS=string '{$0=n$0;ORS=RT}++n'
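For example, with GNU awk (RT is gawk-specific) and X as the separator string, each occurrence of X gets followed by a consecutive number:
echo "fooXbarXbaz" | awk -vRS=X '{$0=n$0;ORS=RT}++n'
fooX1barX2baz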
Flush awk buffers when piping from STDIN for continuous output:
| awk '{print $1; fflush();}'
Print fields set in a Shell variable:
fields="1 3 4" command | awk -v fields="${fields}" 'BEGIN{ n = split(fields,f) } { for (i=1; i<=n; ++i) printf "%s%s", $f[i], (i<n?OFS:ORS) }'
Print lines if third field is unique (PPID in this example):
ps -ef | grep [s]plunk | awk '!seen[$3]++'
Similar to above, but print second field (PID) if third field (PPID) is unique:
ps -ef | grep [s]plunk | awk '!seen[$3]++ {print $2}'
Show the primary IP of a local machine:
ifconfig | sed -rn 's/127.0.0.1//;s/.*inet (addr:)?(([0-9]*\.){3}[0-9]*).*/\2/p'
Verify that local machine’s IP matches DNS:
if [ "$(ifconfig | sed -rn 's/127.0.0.1//;s/.*inet (addr:)?(([0-9]*\.){3}[0-9]*).*//p')" == "$(dig +short $(host -TtA $(hostname -s) | grep "has address" | awk '{print $1}'))" ]; then echo 0 ; else echo 1 ; fi
Show primary NIC:
route | grep -m1 ^default | awk '{print $NF}'
Show prefix (netmask in CIDR notation):
ip addr show "$(route | grep -m1 ^default | awk '{print $NF}')" | grep -w inet | grep -v 127.0.0.1 | awk '{ print $2}' | cut -d "/" -f 2
Show broadcast address:
ip addr show "$(route | grep -m1 ^default | awk '{print $NF}')" | grep -w inet |grep -v 127.0.0.1|awk '{ print $4}'
Show local machine’s network in CIDR notation:
eval $(ipcalc -np $(ifconfig $(route | grep -m1 ^default | awk '{print $NF}') | sed -n "s/inet addr:\([^ ]*\).*Mask:\([^ ]*\).*/\1 \2/p")) ; echo $NETWORK/$PREFIX
Calculate a sum from stdout and do math
| awk '{ SUM += $1} END { print ( SUM/1024 )"MB" }'
Calculate allocated and used local filesystem storage
df -klP -t ext2 -t ext3 -t ext4 -t reiserfs | grep -oE ' [0-9]{1,}( +[0-9]{1,})+' | awk '{sum_alloc +=$1; sum_used += $2} END {printf "%.2f / %.2f (GB)\n", sum_alloc/1024/1024, sum_used/1024/1024}'
Find gaps in numerical sequences
awk '$1!=p+1{print p+1"-"$1-1}{p=$1}'
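For example, with a made-up sequence that has two gaps:
printf '%s\n' 1 2 3 7 8 11 | awk '$1!=p+1{print p+1"-"$1-1}{p=$1}'
4-6
9-10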
Grepping with awk
echo "514/tcp open shell" | awk 'match($1,"^[0-9]+/[a-z]+") && match($2,"open") {print $1,$2,$3}'
Grepping with awk on a specific column
ls -l | awk '$3 == "root"'
Grepping with sed and also printing the headers (first line)
sed '1p;/pattern/!d'
Extract lines between unique tags using sed. Sample input file:
cat /tmp/testfile.txt
# Header 1
Line 11
Line 12
# Header 2
Line 21
Line 22
Line 23
# Header 3
Line 31
Line 32
Line 33

sed -n '/# Header 2/{:a;n;/# Header 3/b;p;ba}' /tmp/testfile.txt
Line 21
Line 22
Line 23
Extract lines contained within the second set of <header></header> tags using sed. Sample input file:
cat /tmp/testfile2.txt
<header>
Line 11
Line 12
</header>
<header>
Line 21
Line 22
Line 23
</header>
<header>
Line 31
Line 32
Line 33
</header>

sed -n '\|<header>|{:n;\|</header>|!{N;bn};y|\n| |;p}' /tmp/testfile2.txt | sed -n '2{p;q}'
<header> Line 21 Line 22 Line 23 </header>
Delete lines between two tags not including the tags:
sed "/<tag open>/,/<\/tag close>/{//!d}"
Delete lines between two tags including the tags:
sed "/<tag open>/,/<\/tag close>/d"
Delete all lines after a tag, not including the tag itself:
sed '/<tag close>/q'
Delete lines 12 through 23:
sed "12,23d"
Remove dupes, spaces, and extra colons from the Bash PATH
PATH=$(xargs -d: -n1 <<<${PATH} | sed 's/ //g' | sort -u | xargs | sed 's/\b*//g;s/ /:/g')
Remove duplicate words in a line:
awk '{ while(++i<=NF) printf (!a[$i]++) ? $i FS : ""; i=split("",a); print "" }'
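For example:
echo "one two two three one" | awk '{ while(++i<=NF) printf (!a[$i]++) ? $i FS : ""; i=split("",a); print "" }'
one two three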
Remove duplicate lines in a file without sorting:
awk '!a[$0]++'
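For example:
printf '%s\n' b a b c a | awk '!a[$0]++'
b
a
c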
Print number of characters for each line in a file:
awk '{ print length($0)"\t"$0; }' file.txt
Insert a Unicode character at a specific column position in a file:
sed -r -e 's/^.{15}/&\xe2\x86\x92/' file.txt
Replace multiple newlines with a single newline
sed '/^$/N;/^\n$/D' file.txt
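For example, creating a throw-away file.txt with runs of blank lines and squeezing them down to one:
printf 'a\n\n\nb\n\nc\n' > file.txt
sed '/^$/N;/^\n$/D' file.txt
a

b

c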
Preserve the original search string and add to it.
Example 1: replace every [0-9]. with [0-9]..
ls | sed -e 's/\([0-9]\.\)/\1./g'
Example 2: enclose every four-digit number followed by a dot in parentheses, i.e. 2014. becomes (2014).
| sed -e 's/\([0-9]\{4\}\)\./(\1)./g'
Merge every two adjacent lines (sed wins):
awk 'NR%2{printf $0" ";next;}1' # or sed 'N;s/\n/ /'
Get hard drive model and size:
for i in `fdisk -l 2>/dev/null | egrep -o "/dev/sd[a-z]" | sort -u` ; do hdparm -I ${i} 2>/dev/null; done | egrep "Model|size.*1000" | awk -F: '{print $NF}' | awk 'NR%2{printf $0" ";next;}1'
Identify server’s primary IP address:
/sbin/ifconfig | sed -rn 's/127.0.0.1//;s/.*inet (addr:)?(([0-9]*\.){3}[0-9]*).*/\2/p'
Print all fields but first:
awk '{$1=""; print $0}'
Print all fields but last:
awk '{$NF=""; print $0}'
Print all fields but last and preserve field delimiters:
awk -F'/' -v OFS='/' '{$NF=""; print $0}'
Print all fields but the first two:
awk '{$1=$2=""; print $0}'
Print fields from 9th to last:
awk '{ s = ""; for (i = 9; i <= NF; i++) s = s $i " "; print s }'
Comment-out a line in a file containing a regex match:
sed -re '/REGEX/ s/^#*/#/' -i /tmp/file
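For example, with a throw-away file:
echo "ServerName www.example.com" > /tmp/file
sed -re '/ServerName/ s/^#*/#/' -i /tmp/file
cat /tmp/file
#ServerName www.example.com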
Uncomment a line in a file containing a regex match:
sed -re '/REGEX/ s/^#*//' -i /tmp/file
Convert upper- to lower-case with tr and sed:
tr '[:upper:]' '[:lower:]'
sed -e 's/\(.*\)/\L\1/'
Convert to “Title Case”:
sed 's/.*/\L&/; s/[a-z]*/\u&/g'
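For example:
echo "useful BASH one-liners" | sed 's/.*/\L&/; s/[a-z]*/\u&/g'
Useful Bash One-Liners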
Insert “E” into string at position #3:
sed -r -e 's/^.{3}/&E/'
Print text between the first occurrence of tag “foo” and the last occurrence of tag “bar”:
sed -n '/foo/{:a;N;/^\n/s/^\n//;/bar/{p;s/.*//;};ba};'
Prepend a shell variable to a string using awk:
| awk -v var="${shell_var}" '{print var$0}'
Round a number to the nearest multiple of 10:
awk '{print sprintf("%.0f",$0/10)*10}'
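For example, 47 rounds to 50 (and 43 would round down to 40):
echo 47 | awk '{print sprintf("%.0f",$0/10)*10}'
50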
[/spoiler] [spoiler title=”Sequences and Combinations” style=”fancy”]
# Brace expansion
echo {a,c}{a,c}
aa ac ca cc
echo {a..c}{a..c}
aa ab ac ba bb bc ca cb cc
echo {'word1 ','word2 '}{'word1, ','word2, '}
word1 word1, word1 word2, word2 word1, word2 word2,

# For loop with brace expansion
charset={a,b}; group=2; rep=; for ((i=0; i<${group}; i++)); do rep="${charset}${rep}"; done; eval echo ${rep}
aa ab ba bb
charset={1..3}; group=2; rep=; for ((i=0; i<${group}; i++)); do rep="${charset}${rep}"; done; eval echo ${rep}
11 12 13 21 22 23 31 32 33

charset=({a..c} {A,Z} {0..2})
permute(){
  (($1 == 0)) && { echo "$2"; return; }
  for i in "${charset[@]}"
  do
    permute "$(($1 - 1))" "${2}${i}"
  done
}
permute 3 | tail -5
22A
22Z
220
221
222

# Crunch word list generator
v="3.6" && wget -O /tmp/crunch-${v}.tgz https://downloads.sourceforge.net/project/crunch-wordlist/crunch-wordlist/crunch-${v}.tgz && \
cd /tmp && tar xvfz crunch-${v}.tgz && cd crunch-${v} && make && make install
crunch 3 3 ab 2>/dev/null
aaa
aab
aba
abb
baa
bab
bba
bbb
crunch 0 0 -p abc 2>/dev/null
abc
acb
bac
bca
cab
cba
# With Unicode characters
echo | crunch 0 0 -p яйца 2>/dev/null
айця
айяц
ацйя
ацяй
аяйц
[/spoiler] [spoiler title=”xargs & parallel” style=”fancy”]
SSH to prdweb001 through prdweb007 and look up the OS version and the number of CPUs. The number of parallel processes for xargs is set to the number of CPU cores. The ts command comes from the moreutils package.
seq 1 7 | xargs -P$(grep -c proc /proc/cpuinfo) -I% bash -c "ssh -qtT -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=3 -o BatchMode=yes prdweb00% 'head -1 /etc/issue;grep -c proc /proc/cpuinfo' | ts prdweb00%:" | sort
prdweb001: 8
prdweb001: CentOS release 5.10 (Final)
prdweb002: 2
prdweb002: CentOS release 5.10 (Final)
...
Query name servers ns1.krazyworks.com through ns8.krazyworks.com for the krazyworks.com SOA record, grab the 10-digit serial, and verify that all name servers have the same DNS zone serial number. This can be useful for identifying name servers that are not updating in a timely fashion.
seq 1 8 | xargs -P8 -I% dig +nsid -t SOA krazyworks.com @ns%.krazyworks.com | grep -oP "[0-9]{10}" | sort -u | wc -l
Similar to above, but the names of the name servers are non-sequential. The number of xargs threads is set to the number of array elements.
declare -a nsarray=('sojwiu01' 'kwjsiu01' 'sljhuw01' 'hwikdj01' 'lskwid01' 'ldhuwy01' 'sjducn01' 'vfjqod01')
printf '%s\n' ${nsarray[@]} | xargs -P$(printf '%s\n' ${#nsarray[@]}) -I% dig +nsid -t SOA krazyworks.com @%.krazyworks.com.local | grep -oP "[0-9]{10}" | sort -u | wc -l
[/spoiler] [spoiler title=”wget & curl” style=”fancy”]
Download tar.gz and uncompress with a single command:
wget -q https://domain.com/archive.tar.gz -O - | tar xz
Download tar.bz2 and uncompress with a single command:
wget -q https://domain.com/archive.tar.bz2 -O - | tar xj
Download in the background, limit bandwidth to 200 KB/s, do not ascend to the parent URL, download only newer files, do not create new directories, download only htm, html, php, and pdf files, and wait 5 seconds between retrievals.
wget -b --limit-rate=200k -np -N -m -nd --accept=htm,html,php,pdf --wait=5 "${url}"
Download recursively, span multiple hosts, convert links to local, limit recursion level to 4, fake “mozilla” user agent, ignore “robots” directives.
wget -r -H --convert-links --level=4 --user-agent=Mozilla "${url}" -e robots=off
Generate a list of broken links:
wget --spider -o broken_links.log --wait 2 -r -p "${url}" -e robots=off
Download new PDFs from a list of URLs:
wget -r --level=1 -H --timeout=2 -nd -N -np --accept=pdf -e robots=off -i urls.txt
Save and use authentication cookie:
wget -O ~/.domain_cookie_tmp "https://domain.com/login.cgi?login=${username}&password=${password}"
grep "^cookie" ~/.domain_cookie_tmp | awk -F'=' '{print $2}' > ~/.domain_cookie
wget -c --no-cookies --header="Cookie: enc=`cat ~/.domain_cookie`" -i "${url_file}" -nc
Use wget with an anonymous proxy:
export http_proxy=proxy_server:port
wget -Y -O /tmp/yahoo.htm "http://www.yahoo.com"
Use wget with an authorized proxy:
export http_proxy=proxy_server:port
wget -Y --proxy-user=${username} --proxy-passwd=${password} \
-O /tmp/yahoo.htm "http://www.yahoo.com"
Make a local mirror of a Web site:
wget -U Mozilla -m -k -D ${domain} --follow-ftp \ --limit-rate=50k --wait=5 --random-wait -np "${url}" -e robots=off
Download images from a Web site:
wget -r -l 0 -U Mozilla -t 1 -nd -D ${domain} \ -A jpg,jpeg,gif,png "${url}" -e robots=off
Download and run a remote script:
bash <(curl -s0 http://remote_server.domain.com/test.sh)
Same as above, but start a python Web server instance on the remote server first:
# On the remote_server:
d=/var/adm/bin && f=${d}/test.sh && echo -e '#!/bin/bash\necho "This is a test"' > ${f} && chmod 755 ${f}
cd ${d} && python -m SimpleHTTPServer 81
# On the local_server:
bash <(curl -s0 http://remote_server.domain.com:81/test.sh)
This is a test
[/spoiler] [spoiler title=”Shell arrays” style=”fancy”]
Declare and populate array manually:
declare -a a=('first element' 'second element' 'fifth element')
Store output of shell commands in an array
IFS=$'\n' ; a=($(command1 | command2)) ; unset IFS
Output contents of an array one element per line
printf '%s\n' ${a[@]}
Output contents of an array one element per line when elements contains spaces
for ((i = 0; i < ${#a[@]}; i++)) ; do echo "${a[$i]}" ; done
Read contents of a file into an array
old_IFS=$IFS ; IFS=$'\n' ; a=($(grep ERROR /var/log/messages))
printf '%s\n' "${a[@]}"
IFS=$old_IFS
[/spoiler] [spoiler title=”Sorting and somesuch” style=”fancy”]
Sort by IP address
sort -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4
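For example (the addresses are made up):
printf '%s\n' 192.168.1.10 192.168.1.2 10.0.0.5 | sort -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4
10.0.0.5
192.168.1.2
192.168.1.10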
Sort while ignoring the header line
# one-line header; data file
(head -n 1; sort) < file.txt
# one-line header; read from pipe
command | (read -r; printf "%s\n" "$REPLY"; sort)
# three-line header; read from pipe
command | (for i in $(seq 3); do read -r; printf "%s\n" "$REPLY"; done; sort)
Find missing lines between two files
diff --new-line-format="" --unchanged-line-format="" <(sort file1) <(sort file2)
See if file1 is different from file2:
diff -q file1 file2
Files file1 and file2 differ
See difference between two files side-by-side:
diff --side-by-side file1 file2
654               <
987               <
123               <
321                 321
                  > 987
789                 789
                  > 456
                  > 123
Similar to above but sort the files first and compare only the unique lines:
diff --side-by-side <(sort -u file1) <(sort -u file2)
123     123
321     321
654   | 456
789     789
987     987
Find lines in two files containing matching fields:
awk 'FNR==NR{a[$1];next}($1 in a){print}' file2 file1
Print every line from file1 that is also in file2 (requires the moreutils package):
combine file1 and file2
Print lines from file1 that are not in file2:
combine file1 not file2
Same as above, but reading from STDIN:
cat file1 | combine - not file2
Same as above, but using grep:
grep -Fxv -f file2 file1
Print lines that are unique to file1 or file2:
combine file1 xor file2
List unique words in a file and count their frequency:
tr -c a-zA-Z '\n' < /file.txt | sed '/^$/d' | sort | uniq -i -c | sort -rn
Print text from the last occurrence of a tag line to the end of file
tac ${yourfile} | grep "${yourtag}" -m 1 -B 9999 | tac
[/spoiler] [spoiler title=”Comparing apples and oysters” style=”fancy”]
Check if variable is an integer:
[[ ${var} =~ ^-?[0-9]+$ ]]
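For example, wrapping the check in an if statement:
var="-42"
if [[ ${var} =~ ^-?[0-9]+$ ]] ; then echo "integer" ; else echo "not an integer" ; fi
integer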
Check if variable is an integer or a decimal:
[[ "${var}" =~ ^-?[0-9]+(\.[0-9]+)?$ ]]
Check if a file contains non-ASCII characters:
if LC_ALL=C grep -q '[^[:print:][:space:]]' ${f}; then echo "non-ASCII"; fi
Setting a default value for a shell variable:
# If $1 is unset or null, set var to "/tmp"
var=${1:-"/tmp"}
If variable not set, show error message and exit:
var=${1:?"Missing argument"}
If variable not set, show error message, run a command, and exit
var=${1:?"Missing argument" $(date)}
Find the length of a variable:
l=${#var}
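For example:
var="taco cat" && echo ${#var}
8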
Remove patterns from the variable’s value:
var="/dir1/dir2/file.txt" && echo "${var#/dir1/dir2/}" && echo "${var##*/}" file.txt file.txt var="/dir1/dir2/file.txt" && echo "${var%/*}" && echo "${var%%/file*}" /dir1/dir2 /dir1/dir2
Find/replace strings in the value of a variable:
var="kitty cat" && echo "${var/kitty/taco}" taco cat
Extract a substring from the value of a variable:
var="taco cat" && echo "${var:0:4}" taco
Check if the value of a variable contains a string that matches a REGEX (Bash v3):
var="taco cat" && [[ ${var} =~ ^[acot]{4} ]] && echo yes || echo no
yes
Rename multiple files using patterns:
[root@ncc1701:/tmp/poi] # ls
file_01_2013_archive.tgz  file_03_2013_archive.tgz  file_05_2013_archive.tgz  file_07_2013_archive.tgz  file_09_2013_archive.tgz
file_02_2013_archive.tgz  file_04_2013_archive.tgz  file_06_2013_archive.tgz  file_08_2013_archive.tgz  file_10_2013_archive.tgz
[root@ncc1701:/tmp/poi] # mmv "*_2013_*" '#1_2017_#2'
[root@ncc1701:/tmp/poi] # ls
file_01_2017_archive.tgz  file_03_2017_archive.tgz  file_05_2017_archive.tgz  file_07_2017_archive.tgz  file_09_2017_archive.tgz
file_02_2017_archive.tgz  file_04_2017_archive.tgz  file_06_2017_archive.tgz  file_08_2017_archive.tgz  file_10_2017_archive.tgz
Compare two tarballs:
diff <(tar -tvf file1.tgz | sort) <(tar -tvf file2.tgz | sort)
See if there’s anything in a tarball that’s missing from the server:
tar -dz --file=file1.tgz -C /base_path
[/spoiler] [spoiler title=”Looking for things” style=”fancy”]
Identify first and last occurrence of an error message in /var/log/logname*
zgrep -h "error message" `find /var/log/ -type f -name "logname*" | sort -V | sed '1,1{H;1h;d;};$G'` | sed -n '1p;$p'
List and search tarball contents:
tar -ztvf file1.tgz '*pattern*'
Show OS release:
grep -m1 -h [0-9] /etc/{*elease,issue} 2>/dev/null | head -1
Find ASCII files
find . -type f -exec grep -Iq . {} \; -and -print
Find ASCII files and extract IP addresses
find . -type f -exec grep -Iq . {} \; -exec grep -oE "(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)" {} /dev/null \;
Find broken symlinks
find . -type l -xtype l
Find files containing specified keywords:
grep -m1 -lrnw "/path/name/" -E -e "keyword1|keyword2" 2>/dev/null | sed -r 's/(^Binary file | matches$)//g' | sort -u

# Example:
IFS=''; grep -m1 -lrnw "/etc/sysconfig/" -E -e "`hostname -s`" 2>/dev/null | sed -r 's/(^Binary file | matches$)//g' | sort -u | while read line ; do file "${line}" ; done | column -s: -t
/etc/sysconfig/network                            ASCII text
/etc/sysconfig/networking/profiles/default/hosts  ASCII English text
/etc/sysconfig/rhn/systemid                       XML document text
/etc/sysconfig/rhn/systemid.old                   XML document text
/etc/sysconfig/rhn/systemid.save                  XML document text
Use the find command with a regex
find "$(pwd)" -type f -regextype posix-extended -regex '^.*\.(mkv|avi|mp4|mov|qt|wmv|mng|webm|flv|vob|ogg|ogv|rm|mpg|mpeg|ts4)$'
Limit find to current mount and exclude NFS
find . -mount ! -fstype nfs
Find files modified on a specific date:
find /etc/ -newermt 2016-03-04 ! -newermt 2016-03-05 -ls
Find files modified on a specific date using an older version of find:
touch /tmp/mark.start -d "2017-01-26 23:59"
touch /tmp/mark.end -d "2017-01-27 23:59"
find /etc -newer /tmp/mark.start ! -newer /tmp/mark.end
Fuzzy matching with fzf:
git clone --depth 1 https://github.com/junegunn/fzf.git ~/.fzf
~/.fzf/install
find /etc -type f | fzf
Find ten largest open files:
lsof / | awk '{ if($7 > 1048576) print $7/1048576 "MB" " " $9 " " $1 }' | sort -n -u | tail
Find ten largest files without crossing mountpoints:
find / -xdev -type f | xargs du | sort -r -n -k 1 | head -n 10 | awk '{ split( "KB MB GB" , v ); s=1; while( $1>1024 ){ $1/=1024; s++ } print int($1) v[s]"\t"substr($0, index($0,$2))}'
Show the size of all subfolders in the current directory:
du -h --max-depth=1
Find ten largest files in a directory:
du -sh /var/log/* | sort -hr | head -10
Find ten largest files owned by the oracle user, modified in the past five days; don’t cross mountpoints, don’t search NFS:
for i in \
$(find /opt -mount ! -fstype nfs -type f -user oracle -mtime -5 -printf '%s %p\n' 2>/dev/null | sort -nr | head -10 | \
awk '{ s = ""; for (i = 2; i <= NF; i++) s = s $i " "; print s }')
do
  ls -alh "${i}"
done
Find the latest file in a filesystem
find . -type f -printf '%T@ %p\n' | sort -n | tail -1 | cut -f2- -d" "
Continuous grep on I/O redirection
netstat -T -tupac | grep --line-buffered "1699/java"
Another method for continuous grep (and other commands) using unbuffer (yum -y install expect)
unbuffer netstat -T -tupac | grep "1699/java"
Grep with zero-width lookarounds (lookbehind/lookahead)
Extract a four-digit number enclosed in parentheses from the string 2013 Monkeys in 1999 (2014).txt:
echo "2013 Monkeys in 1999 (2014).txt" | grep -oP "(?<=\()[0-9]{4}(?=\))"
Count occurrences of multiple patterns with a single grep
grep -EIho "pattern1|pattern2|pattern3" | sort | uniq -c
Grep with a pattern file:
grep -v -f pattern_file
Count word frequency in the English translation of Tolstoy’s “War and Peace”, excluding 5000 most common English words:
i=5000
t=/tmp/tolstoy_war_peace.txt
p=/tmp/count_1w.txt
wget -q -O ${t} http://www.gutenberg.org/cache/epub/2600/pg2600.txt
wget -q -O ${p} http://norvig.com/ngrams/count_1w.txt
head -${i} ${p} | awk '{print " "$1"$"}' > ${p}_${i}
tr -c a-zA-Z '\n' < ${t} | egrep -v "^(ll|ve)$" | sed '/^$/d' | \
sort | uniq -ic | sort -rn | grep -iv -f ${p}_${i} | more
A follow-up to the above: get a Wikipedia definition for the top 10 most common words in “War and Peace”:
d=/var/adm/bin
if [ ! -d ${d} ] ; then mkdir -p ${d} ; fi
wget -q -O ${d}/wped.php https://raw.githubusercontent.com/mevdschee/wped/master/wped.php
sed -i "s/'limit'=>3,/'limit'=>1,/g" ${d}/wped.php
chmod 755 ${d}/wped.php
ln -s ${d}/wped.php /usr/bin/wped
for w in `tr -c a-zA-Z '\n' < ${t} | egrep -v "^(ll|ve)$" | sed '/^$/d' | sort | \
uniq -ic | sort -rn | grep -iv -f ${p}_${i} | awk '{print $NF}' | head -10` ; do \
wped ${w} ; done | grep -v "Search results"
Find 20 most frequently-used shell commands:
tr "\|\;" "\n" < ~/.bash_history | sed -e "s/^ //g" | cut -d " " -f 1 | sort | uniq -c | sort -rn | head -20
Checking if file is older than so many seconds:
if [ `expr $(date +%s) - $(stat -c %Y ${testfile})` -gt ${threshold_seconds} ]
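A complete example (the file name and threshold are arbitrary):
testfile=/var/log/messages ; threshold_seconds=3600
if [ `expr $(date +%s) - $(stat -c %Y ${testfile})` -gt ${threshold_seconds} ] ; then
  echo "${testfile} has not been modified in the last hour"
fi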
Running diff on files from remote servers:
diff <(ssh -qtT server01 "sudo su - root -c 'fdisk -l 2>/dev/null | grep ^Disk'") <(ssh -qtT server02 "sudo su - root -c 'fdisk -l 2>/dev/null | grep ^Disk'")
[/spoiler] [spoiler title=”Formatting output” style=”fancy”]
Prepend comma-separated stdout with a header and arrange in columns
| (echo "COLUMN1_HEADER COLUMN2_HEADER COLUMN3_HEADER" && cat) | column -s',' -t
Print a horizontal line:
rule () {
  printf -v _hr "%*s" $(tput cols) && echo ${_hr// /${1--}}
}
rule -
Print a horizontal line with a message:
rulem () {
  if [ $# -eq 0 ]; then
    echo "Usage: rulem MESSAGE [RULE_CHARACTER]"
    return 1
  fi
  printf -v _hr "%*s" $(tput cols) && echo -en ${_hr// /${2--}} && echo -e "\r\033[2C$1"
}
Right-align text:
alias right="printf '%*s' $(tput cols)"
[/spoiler] [spoiler title=”Process Control” style=”fancy”]
Background and disown a foreground process:
CTRL-Z; disown -h %1; bg 1; logout
Background and disown any process, including another user’s:
kill -TSTP $PID && kill -CONT $PID
[/spoiler] [spoiler title=”System checks” style=”fancy”]
Find dead system services
for i in $(chkconfig --list | grep "`runlevel | awk '{print $NF}'`:on" | awk '{print $1}' | sort) ; do /sbin/service ${i} status 2>&1 | egrep "not [a-z]{1,}ing|[kpsd][ea]d\b"; done
Find active system services that shouldn’t be running (i.e. were started manually)
for i in $(chkconfig --list | grep "`runlevel | awk '{print $NF}'`:off" | awk '{print $1}' | sort) ; do /sbin/service ${i} status 2>&1 | egrep "is running"; done
[/spoiler] [spoiler title=”Network stuff and SSH” style=”fancy”]
Nmap subnet scan
nmap -sn 192.168.122.0/24 -oG - | awk '$4=="Status:" && $5=="Up" {print $2}'
Basic nmap port scan
nmap -T4 -F -oG - 192.168.122.112 | grep "\bopen\b"
Check host port access using only Bash:
s="$(cat 2>/dev/null < /dev/null > /dev/tcp/${target_ip}/${target_port} & WPID=$!; sleep 3 && kill $! >/dev/null 2>&1 & KPID=$!; wait $WPID && echo 1)" ; s="${s:-0}"; echo "${s}" | sed 's/0/2/;s/1/0/;s/2/1/'
mtr – traceroute and ping combined:
mtr google.com
Mount a remote folder through SSH:
sshfs name@server:/path/to/folder /path/to/mount/point
To unmount the previous:
fusermount -u /path/to/mount/point
Download a website recursively with wget:
wget --random-wait -r -p -e robots=off -U Mozilla www.example.com
Start an SMTP server:
python -m smtpd -n -c DebuggingServer localhost:1025
Shutdown a Windows machine
net rpc shutdown -I IP_ADDRESS -U username%password
Run a local script/complex command on remote machines via SSH with passwordless sudo
for i in 1 2 3; do ssh -qTt host0$i "echo $(base64 -w0 /tmp/script01.sh) | base64 -d 2>/dev/null| sudo bash"; done
[/spoiler] [spoiler title=”MySQL tricks” style=”fancy”]
Non-locking queries
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED ; SELECT * FROM some_table; COMMIT;
Select distinct with count
SELECT column1, COUNT(*) AS column1_count FROM some_table GROUP BY column1 ORDER BY column1_count DESC
Evade “myisamchk: error: myisam_sort_buffer_size is too small”
myisamchk --sort_buffer_size=2G -r -f table_name.MYI
Select via shell script when column names have spaces
SELECT \`Column One\`,\`Column Two\`,\`Column Three\`
Select and replace spaces in values with underscores
SELECT REPLACE(column1, ' ', '_'),REPLACE(column2, ' ', '_')
Correct MySQL grant syntax
mysql -u${user} -p${passwd}
CREATE DATABASE ${tbl_name} ;
GRANT ALL PRIVILEGES ON ${tbl_name}.* TO ${user}@'%' IDENTIFIED BY 'password' WITH GRANT OPTION ;
quit
[/spoiler] [spoiler title=”Sync and Backup” style=”fancy”]
Backup a remote folder to the local machine with TAR/SSH:
ssh username@hostname tar czf - /folder/ > /target/hostname_folder.tgz
Backup a local folder to a remote server with TAR/SSH:
tar zcvf - /folder | ssh username@hostname "cat > /target/folder.tgz"
Restore remote backup to the local machine with TAR/SSH:
cd / && ssh username@hostname "cat /target/folder.tgz" | tar zxvf -
[/spoiler] [spoiler title=”Hotkeys” style=”fancy”]
ALT-.          # Previous command's final parameter
CTRL-r         # History reverse incremental search
TAB            # Complete unambiguous command
TAB TAB        # List all possible completions
ALT-*          # Insert all possible completions
CTRL-ALT-e     # Inline alias, history, and shell expansion
CTRL-x CTRL-e  # Load current command into default text editor
CTRL-a         # Move cursor to the beginning of the line
CTRL-e         # Move cursor to the end of the line
CTRL-w         # Delete previous word
CTRL-k         # Delete to the end of the line
CTRL-u         # Delete to the beginning of the line
CTRL-y         # Paste last deleted command
ALT-y          # Cycle through last deleted commands and paste
[/spoiler] [spoiler title=”Productivity shortcuts” style=”fancy”]
Run the last command as root:
sudo !!
Save changes to a read-only file in vi:
:w !sudo tee %
Change to the previous working directory:
cd -
Run the previous command replacing first “foo” with “bar”:
^foo^bar^
Run the previous command replacing all “foo” with “bar”:
!!:gs/foo/bar
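For example (Bash echoes the substituted command before running it; the file names are just an illustration):
cat /etc/hosts
^hosts^passwd^
cat /etc/passwd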
Quickly backup or copy a file:
cp -p file.txt{,_`date +'%Y-%m-%d_%H%M%S'`}
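With a hypothetical file.txt, the brace expansion above expands into something like:
cp -p file.txt file.txt_2017-06-01_142530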
Find config files in /etc and make compressed backup tarball in /etc/backup
t=/etc/backup ; d=`date +'%Y-%m-%d_%H%M%S'` ; mkdir -p ${t}/${d}
find /etc -not -path "${t}/${d}*" -type f \( -name "*\.conf*" -o -name "*\.cfg" -o -name "*\.cnf" \) -execdir /bin/cp -pf {} /etc/backup/${d}/{} \;
cd ${t} ; tar cfz ${d}.tgz ${d} ; /bin/rm -rf ${d}
Find the last command that begins with “foo”, but don’t run it:
!foo:p
Capture video of a Linux desktop:
ffmpeg -f x11grab -s wxga -r 25 -i :0.0 -sameq /tmp/out.mpg
Empty a file or create a new empty file:
> file.txt
Tweet from the shell:
curl -u user:pass -d status='Tweeting from the shell' http://twitter.com/statuses/update.xml
Quickly access ASCII table:
man 7 ascii
Manual timer:
time read
Execute a command in a sub-shell
(cd /tmp && ls)
List 10 most often used commands:
history | awk '{a[$2]++}END{for(i in a){print a[i] " " i}}' | sort -rn | head
[/spoiler] [spoiler title=”Package Management” style=”fancy”]
List all files contained in all packages that have “httpd” in their name:
rpm -ql $(rpm -qa | grep httpd)
Execute a command at midnight:
echo cmd | at midnight
Compare a remote file with a local file:
ssh user@host cat /path/to/remotefile | diff /path/to/localfile -
Display currently mounted file systems nicely:
mount | column -t
[/spoiler] [spoiler title=”Pipes and Redirects” style=”fancy”]
Get individual exit code of each piped command in a chain
ls /var/www/icons | grep gif | ls -als >/dev/null ; echo ${PIPESTATUS[*]}
0 0 0
Redirect STDOUT to multiple commands:
echo "something anything" | tee >(sed 's/some/any/g') >(sed 's/thing/one/g') >(sed 's/any/some/g')
something anything
anything anything
someone anyone
something something
Similar to above, but using pee (requires the moreutils package):
echo "something anything" | pee "sed 's/some/any/g'" "sed 's/thing/one/g'" "sed 's/any/some/g'"
anything anything
someone anyone
something something
[/spoiler] [spoiler title=”Clever Loops” style=”fancy”]
Dynamically constructed variable names
i=1 ; eval "$(echo var${i})"=value ; eval echo $(echo $`eval echo "var${i}"`)
[/spoiler] [spoiler title=”System performance” style=”fancy”]
Create and mount a temporary RAM partition:
mount -t tmpfs -o size=1024m tmpfs /mnt
Top for files:
watch -d -n 1 'df; ls -FlAt /path'
Display the top ten running processes sorted by memory usage:
ps aux | sort -nk +4 | tail # or ps aux | awk '{if ($5 != 0 ) print $2,$5,$6,$11}' | sort -k2rn | head -10 | column -t
Free memory pagecache, dentries, and inodes:
free -m && sync && echo 3 > /proc/sys/vm/drop_caches && free -m
Find processes constantly in wait state:
for i in `seq 1 1 10`; do ps -eo state,pid,cmd | grep "^D"; echo "----"; sleep 5; done
[/spoiler] [spoiler title=”Fun stuff” style=”fancy”]
Watch Star-Wars via telnet:
telnet towel.blinkenlights.nl
Steam Locomotive
yum -y install sl >/dev/null 2>&1; sl
Read your fortune
yum -y install fortune >/dev/null 2>&1; fortune
Read things in reverse
yum -y install util-linux-ng >/dev/null 2>&1; date | rev
Prime factorization
yum -y install coreutils >/dev/null 2>&1; echo 5 10 10234 | factor
Talking cow
yum -y install cowsay >/dev/null 2>&1; cowsay `date`
Echo string over and over until CTRL-C
yes I am awesome
Matrix-like effect for the terminal screen
if [ `yum -y install cmatrix >/dev/null 2>&1; echo $?` -ne 0 ]
then
  cd /tmp
  wget -q http://www.asty.org/cmatrix/dist/cmatrix-1.2a.tar.gz
  tar xzf cmatrix-1.2a.tar.gz
  cd cmatrix-1.2a
  ./configure
  make install
fi
cmatrix
[/spoiler] [/accordion]