
Infosec Tools

7 June 2019 at 16:55

A list of information security tools I use for assessments, investigations and other cybersecurity tasks.

Also worth checking out is CISA’s list of free cybersecurity services and tools.

Jump to Section


OSINT / Reconnaissance

Network Tools (IP, DNS, WHOIS)

Breaches, Incidents & Leaks

FININT (Financial Intelligence)

  • GSA eLibrary - Source for the latest GSA contract award information

GEOINT (Geographical Intelligence)

HUMINT (Human & Corporate Intelligence)

  • No-Nonsense Intel - List of keywords which you can use to screen for adverse media, military links, political connections, sources of wealth, asset tracing, etc.
  • CheckUser - Check desired usernames across social network sites
  • CorporationWiki - Find and explore relationships between people and companies
  • Crunchbase - Discover innovative companies and the people behind them
  • Find Email - Find email addresses from any company
  • Info Sniper - Search property owners, deeds & more
  • Library of Leaks - Search documents, companies and people
  • LittleSis - Who-knows-who at the heights of business and government
  • NAMINT - Shows possible name and login search patterns
  • OpenCorporates - Legal-entity database
  • That’s Them - Find addresses, phones, emails and much more
  • TruePeopleSearch - People search service
  • WhatsMyName - Enumerate usernames across many websites
  • Whitepages - Find people, contact info & background checks

IMINT (Imagery/Maps Intelligence)

MASINT (Measurement and Signature Intelligence)

SOCMINT (Social Media Intelligence)

Email

Code Search

  • grep.app - Search across a half million git repos
  • PublicWWW - Find any alphanumeric snippet, signature or keyword in a web page’s HTML, JS and CSS code
  • searchcode - Search 75 billion lines of code from 40 million projects

Scanning / Enumeration / Attack Surface


Offensive Security

Exploits

  • Bug Bounty Hunting Search Engine - Search for writeups, payloads, bug bounty tips, and more…
  • BugBounty.zip - Your all-in-one solution for domain operations
  • CP-R Evasion Techniques
  • CVExploits - Comprehensive database for CVE exploits
  • DROPS - Dynamic CheatSheet/Command Generator
  • Exploit Notes - Hacking techniques and tools for penetration testings, bug bounty, CTFs
  • ExploitDB - Huge repository of exploits from Offensive Security
  • files.ninja - Upload any file and find similar files
  • Google Hacking Database (GHDB) - A list of Google search queries used in the OSINT phase of penetration testing
  • GTFOArgs - Curated list of Unix binaries that can be manipulated for argument injection
  • GTFOBins - Curated list of Unix binaries that can be used to bypass local security restrictions in misconfigured systems
  • Hijack Libs - Curated list of DLL Hijacking candidates
  • Living Off the Living Off the Land - A great collection of resources to thrive off the land
  • Living Off the Pipeline - CI/CD lolbin
  • Living Off Trusted Sites (LOTS) Project - Repository of popular, legitimate domains that can be used to conduct phishing, C2, exfiltration & tool downloading while evading detection
  • LOFLCAB - Living off the Foreign Land Cmdlets and Binaries
  • LoFP - Living off the False Positive
  • LOLBAS - Curated list of Windows binaries that can be used to bypass local security restrictions in misconfigured systems
  • LOLC2 - Collection of C2 frameworks that leverage legitimate services to evade detection
  • LOLESXi - Living Off The Land ESXi
  • LOLOL - A great collection of resources to thrive off the land
  • LOLRMM - Remote Monitoring and Management (RMM) tools that could potentially be abused by threat actors
  • LOOBins - Living Off the Orchard: macOS Binaries (LOOBins) is designed to provide detailed information on various built-in macOS binaries and how they can be used by threat actors for malicious purposes
  • LOTTunnels - Living Off The Tunnels
  • Microsoft Patch Tuesday Countdown
  • offsec.tools - A vast collection of security tools
  • Shodan Exploits
  • SPLOITUS - Exploit search database
  • VulnCheck XDB - An index of exploit proof of concept code in git repositories
  • XSSed - Information on and an archive of Cross-Site-Scripting (XSS) attacks

Red Team

  • ArgFuscator - Generates obfuscated command lines for common system tools
  • ARTToolkit - Interactive cheat sheet, containing a useful list of offensive security tools and their respective commands/payloads, to be used in red teaming exercises
  • Atomic Red Team - A library of simple, focused tests mapped to the MITRE ATT&CK matrix
  • C2 Matrix - Select the best C2 framework for your needs based on your adversary emulation plan and the target environment
  • ExpiredDomains.net - Expired domain name search engine
  • Living Off The Land Drivers - Curated list of Windows drivers used by adversaries to bypass security controls and carry out attacks
  • Unprotect Project - Search Evasion Techniques
  • WADComs - Curated list of offensive security tools and their respective commands, to be used against Windows/AD environments

Web Security

  • Invisible JavaScript - Execute invisible JavaScript by abusing Hangul filler characters
  • INVISIBLE.js - A super compact (116-byte) bootstrap that hides JavaScript using a Proxy trap to run code

Security Advisories

  • CISA Alerts - Providing information on current security issues, vulnerabilities and exploits
  • ICS Advisory Project - DHS CISA ICS Advisories data visualized as a Dashboard and in Comma Separated Value (CSV) format to support vulnerability analysis for the OT/ICS community

Attack Libraries

A more comprehensive list of Attack Libraries can be found here.

  • ATLAS - Adversarial Threat Landscape for Artificial-Intelligence Systems is a knowledge base of adversary tactics and techniques based on real-world attack observations and realistic demonstrations from AI red teams and security groups
  • ATT&CK
  • Risk Explorer for Software Supply Chains - A taxonomy of known attacks and techniques to inject malicious code into open-source software projects.

Vulnerability Catalogs & Tools

Risk Assessment Models

A more comprehensive list of Risk Assessment Models and tools can be found here.


Blue Team

CTI & IoCs

  • Alien Vault OTX - Open threat intelligence community
  • BAD GUIDs EXPLORER
  • Binary Edge - Real-time threat intelligence streams
  • Cloud Threat Landscape - A comprehensive threat intelligence database of cloud security incidents, actors, tools and techniques. Powered by Wiz Research
  • CTI AI Toolbox - AI-assisted CTI tooling
  • CTI.fyi - Content shamelessly scraped from ransomwatch
  • CyberOwl - Stay informed on the latest cyber threats
  • Dangerous Domains - Curated list of malicious domains
  • HudsonRock Threat Intelligence Tools - Cybercrime intelligence tools
  • InQuest Labs - Indicator Lookup
  • IOCParser - Extract Indicators of Compromise (IOCs) from different data sources
  • Malpuse - Scan, Track, Secure: Proactive C&C Infrastructure Monitoring Across the Web
  • ORKL - Library of collective past achievements in the realm of CTI reporting.
  • Pivot Atlas - Educational pivoting handbook for cyber threat intelligence analysts
  • Pulsedive - Threat intelligence
  • ThreatBook TI - Search for IP address, domain
  • threatfeeds.io - Free and open-source threat intelligence feeds
  • ThreatMiner - Data mining for threat intelligence
  • TrailDiscover - Repository of CloudTrail events with detailed descriptions, MITRE ATT&CK insights, real-world incidents references, other research references and security implications
  • URLAbuse - Open URL abuse blacklist feed
  • urlquery.net - Free URL scanner that performs analysis for web-based malware

URL Analysis

Static / File Analysis

  • badfiles - Enumerate bad, malicious, or potentially dangerous file extensions
  • CyberChef - The cyber swiss army knife
  • DocGuard - Static scanner that takes a structural-analysis approach to detecting malicious files
  • dogbolt.org - Decompiler Explorer
  • EchoTrail - Threat hunting resource used to search for a Windows filename or hash
  • filescan.io - File and URL scanning to identify IOCs
  • filesec.io - Latest file extensions being used by attackers
  • Kaspersky TIP
  • Manalyzer - Static analysis on PE executables to detect undesirable behavior
  • PolySwarm - Scan Files or URLs for threats
  • VirusTotal - Analyze suspicious files and URLs to detect malware

Dynamic / Malware Analysis

Forensics

  • DFIQ - Digital Forensics Investigative Questions and the approaches to answering them

Phishing / Email Security


Assembly / Reverse Engineering


OS / Scripting / Programming

Regex


Password


AI

  • OWASP AI Exchange - Comprehensive guidance and alignment on how to protect AI against security threats

Assorted

OpSec / Privacy

  • Awesome Privacy - Find and compare privacy-respecting alternatives to popular software and services
  • Device Info - A web browser security testing, privacy testing, and troubleshooting tool
  • Digital Defense (Security List) - Your guide to securing your digital life and protecting your privacy
  • DNS Leak Test
  • EFF | Tools from EFF’s Tech Team - Solutions to the problems of sneaky tracking, inconsistent encryption, and more
  • Privacy Guides - Non-profit, socially motivated website that provides information for protecting your data security and privacy
  • Privacy.Sexy - Privacy related configurations, scripts, improvements for your device
  • PrivacyTests.org - Open-source tests of web browser privacy
  • switching.software - Ethical, easy-to-use and privacy-conscious alternatives to well-known software
  • What’s My IP Address? - A number of interesting tools including port scanners, traceroute, ping, whois, DNS, IP identification and more
  • WHOER - Get your IP

Jobs

  • infosec-jobs - Find awesome jobs and talents in InfoSec / Cybersecurity

Conferences / Meetups

Infosec / Cybersecurity Research & Blogs

Funny

Walls of Shame

  • Audit Logs Wall of Shame - A list of vendors that don’t prioritize high-quality, widely-available audit logs for security and operations teams
  • Dumb Password Rules - A compilation of sites with dumb password rules
  • The SSO Wall of Shame - A list of vendors that treat single sign-on as a luxury feature, not a core security requirement
  • ssotax.org - A list of vendors that have SSO locked up in a subscription tier that is more than 10% more expensive than the standard price
  • Why No IPv6? - Wall of shame for IPv6 support

Other

Dynamization of Jekyll

25 August 2022 at 20:23

Jekyll is a framework for creating websites/blogs using static plain-text files. Jekyll is used by GitHub Pages, which is also the current hosting provider for Shellsharks.com. I’ve been using GitHub Pages since the inception of my site and for the most part have no complaints. With that said, a purely static site has some limitations in terms of the types of content one can publish/expose.

I recently got the idea to create a dashboard-like page which could display interesting quantitative data points (and other information) related to the site. Examples of these statistics include the total number of posts, the age of my site, when my blog was last updated, the overall word count across all posts, etc… Out of the box, Jekyll is limited in its ability to generate this information in a dynamic fashion. The Jekyll-infused GitHub Pages engine generates the site via an inherent pages-build-deployment Git Action (more on this later) upon commit. The site then stays static until the next build. As such, it has limited native ability to update content in between builds/manual commits.

To solve for this issue, I’ve started using a variety of techniques/technologies (listed below) to introduce more dynamic functionality to my site (and more specifically, the aforementioned statboard).

Jekyll Liquid

Though not truly “dynamic”, the Liquid* templating language is an easy, Jekyll-native way to generate static content in a quasi-dynamic way at site build time. As an example, if I wanted to denote the exact date and time that a blog post was published, I might first try to use the Liquid template {{ site.time }}. What this actually ends up giving me is a timestamp for when the site was built (e.g. 2025-05-04 08:05:27 -0400), rather than the last-updated date of the post itself. So instead, I can harness the post’s custom front matter, such as “updated:”, and access that value using the tag {{ page.updated }} (so we get, __).

One component on the (existing) Shellsharks statboard calculates the age of the site using the last-updated date of the site (maintained in the change log), minus the publish date of the first-ever Shellsharks post. Since a static, Jekyll-based, GitHub Pages site is only built (and thus only updated) when I actually physically commit an update, this component will be out of date if I do not commit at least daily. So how did I solve for this? Enter Git Actions.

* Learn more about the tags, filters and other capabilities of Liquid here.
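To make the build-time vs. per-post distinction concrete, here is a minimal sketch (the front-matter values are hypothetical; the date filter is standard Liquid/strftime formatting):

```liquid
---
title: "Example post"
updated: 2022-08-25 20:23:00 -0400
---
Site built at: {{ site.time }}
Post last updated: {{ page.updated | date: "%-d %B %Y" }}
```

The first tag renders whenever the site is rebuilt; the second renders whatever the post’s own front matter says, regardless of build time.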

JavaScript & jQuery

Before we dive into the power of Git Actions, it’s worth mentioning the ability to add dynamism by simply dropping straight-up, in-line JavaScript directly into the page/post Markdown (.md) files. Remember, Jekyll produces .html files directly from static, text-based files (like Markdown). So the inclusion of raw JS syntax will translate into embedded, executable JS code in the final, generated HTML files. The usual rules for in-page JS apply here.

One component idea I had for the statboard was to have a counter of named vulnerabilities. So how could I grab that value from the page? At first, I tried fetching the DOM element with the id in which the count was exposed. However this failed because fetching that element alone meant not fetching the JS and other HTML content that was used to actually generate that count. To solve for this, I used jQuery to load the entire page into a temporary <div> tag, then iterated through the list (<li>) elements within that div (similar to how I calculate it on the origin page), and then finally set the dashboard component to the calculated count!

// Load the target page into a detached <div>, count the <li> elements
// within it, then write the count into the dashboard component.
$('<div></div>').load('/infosec-blogs', function () {
  var blogs = $("li", this).length;
  $("#iblogs").html(blogs);
});
Additional notes on the use of JS and jQuery
  • I used Google’s Hosted Libraries to reference jQuery <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.6.0/jquery.min.js"></script>.
  • Be wary of adding JS comments // in Markdown files, as I noticed the Jekyll parsing engine doesn’t do a great job of new-lining, and thus everything after a comment will end up being commented.
  • When using Liquid tags in in-line JS, ensure quotes (‘’,””) are added around the templates so that the JS code will recognize those values as strings (where applicable).
  • The ability to add raw, arbitrary JS means there is a lot of untapped capability to add dynamic content to an otherwise static page. Keep in mind though that JS code is client-side, so you are still limited in that typical server-side functionality is not available in this context.

Git Actions

Thanks to the scenario I detailed in the Jekyll Liquid section, I was introduced to the world of Git Actions. Essentially, I needed a way to force an update/regeneration of my site so that one of my statically generated Liquid tags would update at some minimum frequency (in this case, at least daily). After some Googling, I came across this action which allowed me to do just that! In short, it forces a blank build using a user-defined schedule as the trigger.

# File: .github/workflows/refresh.yml
name: Refresh

on:
  schedule:
    - cron:  '0 3 * * *' # Runs every day at 3am

jobs:
  refresh:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger GitHub pages rebuild
        run: |
          curl --fail --request POST \
            --url https://api.github.com/repos/${{ github.repository }}/pages/builds \
            --header "Authorization: Bearer $USER_TOKEN"
        env:
          # You must create a personal token with repo access as GitHub does
          # not yet support server-to-server page builds.
          USER_TOKEN: ${{ secrets.USER_TOKEN }}

In order to get this Action going, follow these steps…

  1. Log into your GitHub account and go to Settings (in the top right) –> Developer settings –> Personal access tokens.
  2. Generate new token and give it full repo access scope (More on OAuth scopes). I set mine to never expire, but you can choose what works best for you.
  3. Navigate to your GitHub Pages site repo, ***.github.io –> Settings –> Secrets –> Actions section. Here you can add a New repository secret where you give it a unique name and set the value to the personal access token generated earlier.
  4. In the root of your local site repository, create a .github/workflows/ folder (if one doesn’t already exist).
  5. Create a <name of your choice>.yml file where you will have the actual Action code (like what was provided above).
  6. Commit this Action file and you should be able to see run details in your repo –> Actions section within GitHub.
Additional Considerations for GitHub Actions
  • When using the Liquid tag {{ site.time }} with a Git Action triggered build, understand that it will use the time of the server which is generating the HTML, in this case the GitHub servers themselves, which means the date will be in UTC (Conversion help).
  • Check out this reference for information on how to specify the time zone in the front matter of a page or within the Jekyll config file.
  • GitHub Actions are awesome and powerful, but there are limitations to be aware of. Notably, it is important to understand the billing considerations. Free-tier accounts get 2,000 minutes/month while Pro-tier accounts (priced at about $44/user/year) get 3,000.
  • For reference, the refresh action (provided above) was running (for me) at about 13 seconds per trigger. This means you could run that action over 9,000 times without exceeding the minute cap for a Free-tier account.
  • With the above said, also consider that the default pages-build-deployment Action used by GitHub Pages to actually generate and deploy your site upon commit will also consume those allocated minutes. Upon looking at my Actions pane, I am seeing about 1m run-times for each build-and-deploy action trigger.
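The back-of-the-envelope math behind that estimate, using the ~13-second refresh figure above:

```shell
# Free tier: 2,000 Actions minutes/month = 120,000 seconds.
# The refresh action runs in about 13 seconds per trigger.
echo $(( 2000 * 60 / 13 ))   # max refresh-only runs per month: 9230
```

In practice the ~1-minute pages-build-deployment runs eat into the same pool, so the real ceiling is lower, but a once-daily refresh is nowhere near the cap.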

What’s Next

I’ve only just started to scratch the surface of how I can further extend and dynamize my Jekyll-based site. In future updates to this guide (or in future posts), I plan to cover more advanced GitHub Action capabilities as well as how else to add server-side functionality (maybe through serverless!) to the site. Stay tuned!


Creamy Emacs

By: mms
3 May 2025 at 22:00

Some two, maybe three years ago, I stopped using the so-called ‘dark mode’ on my computers. With a strong light (aka The Sun) from behind, a monitor with a dark background becomes a mirror, so it was more a case of light mode being forced on me than anything else. But it grew on me, and now I see no reason to use a dark theme anywhere. It looks better, it feels better, it is better.

Then I discovered joshua stein and his monochrome color theme. It looked cool, and why not? There was a ready-made emacs theme: almost-mono. And again, the experiment was a success: bolds and italics are all I need to not feel lost. Having a full rainbow on the screen was yet another distraction I didn’t need. So I used the white variant of almost-mono, since the cream variant was not to my liking.

The colors used in that theme became a nuisance. Why should my strings be green? Why is notmuch a rainbow? Time to go my own way. I didn’t want to copy jcs’s theme 1 to 1, so I tried to create a new theme from scratch. I ended up with a cream color scheme, so task successfully failed.

Emacs makes making themes a breeze: just M-x customize-create-theme and you can modify the faces with a few clicks.

Since I used only two colors:

  • #f2ebd6 as the light color, and
  • #352f19 as the dark one

There were very few decisions to be made. I just went through the faces I encountered and set them to use those colors. If I see a new face, I can easily add it to the theme and voila - one more place with less chaos in my life.
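For reference, what customize-create-theme produces is an ordinary deftheme file; with only two colors it boils down to something like this (a sketch: the theme name is borrowed from the post, and the face list is abridged and of my own choosing):

```elisp
(deftheme cremacs
  "A two-colour cream theme: #352f19 text on a #f2ebd6 background.")

(custom-theme-set-faces
 'cremacs
 '(default ((t (:background "#f2ebd6" :foreground "#352f19"))))
 ;; Collapse syntax highlighting into the base face; keep only
 ;; weight/slant as differentiators.
 '(font-lock-string-face ((t (:inherit default))))
 '(font-lock-comment-face ((t (:inherit default :slant italic))))
 '(bold ((t (:weight bold)))))

(provide-theme 'cremacs)
```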

All hail the cream. All hail the cremacs. Guess I’ll update the color scheme of this site sometime soon.

FreshRSS 1.26.2

By: Alkarex
3 May 2025 at 22:27

This is a security-focussed release for FreshRSS 1.26.x, addressing several CVEs (thanks @Inverle) 🛡

A few highlights ✨:

  • Implement JSON string concatenation with & operator
  • Support multiple JSON fragments in HTML+XPath+JSON mode (e.g. JSON-LD)
  • Multiple security fixes with CVEs
  • Bug fixes

Notes ℹ:

  • Favicons will be reconstructed automatically when feeds get refreshed. After that, you may need to refresh your Web browser as well.

This release has been made by @Alkarex, @Frenzie, @hkcomori, @loviuz, @math-GH
and newcomers @dezponia, @glyn, @Inverle, @Machou, @mikropsoft

Full changelog:


Notes - Setting up my Debian desktops

3 May 2025 at 02:00

I have been following Zak’s posts about setting up their Debian desktop on a fresh install and I thought I would join in with mine. This is how I set up a new Debian desktop with the base packages, apps, and configurations, which gives me a fully functional desktop in about 20 minutes. I will then need to make some minor changes once everything is installed to tweak things I haven’t scripted yet. In total it takes me about an hour to have a brand new desktop fully set up and configured the way I like it.

Below are some of my raw notes on how I get the base setup and ready to use.

Step one: Install dotfile manager

I use chezmoi to manage all my dotfiles (config files). I have one repo for desktops and another for servers. In the script below, I install chezmoi via snap and then pull the desktop repo (URL redacted) so all of my customizations and configurations are automatically applied.

⚠️ ⚠️ ⚠️ Since I self-host Forgejo, I am not concerned about sending sensitive files to my dotfile repo since it is not accessible outside of my LAN. This means I choose to use chezmoi to pull ssh keys and other files with private information. I would never push these files to a repo on Github or similar publicly hosted services. If someone has hacked into my LAN, I got bigger problems than my dotfiles.

I install this using snap. So, I have to install snapd first, then chezmoi.

sudo apt install snapd git
sudo snap install chezmoi --classic

Pulling the dotfiles is simple:

chezmoi init --apply https://[REDACTED]

I have chezmoi sync the dotfiles for:

  • ~/.config
    • fdroidcl
    • helix
    • lazydocker
    • lazygit
    • fish
    • glow
    • tilda
    • shaarli
    • stew
    • termscp
    • tmux
    • yazi
  • ~/.fonts
  • ~/.icons
  • ~/.themes
  • ~/spotdl
  • ~/scripts
  • ~/.ssh
  • ~/.local/bin
  • ~/.bashrc

Since it pulls my configuration for fish, I also have all my aliases which point some of the programs to their custom configuration file.

Stew

Some programs I choose to run as the binary straight from the dev’s GitHub repo, and stew makes it simple to find, install, and keep them updated.

Using chezmoi I can sync the configuration for stew so I don’t have to manually find and install the binaries for each install. Then, I sync $HOME/.local/bin because that is the location of the actual binaries. Doing this means when I sync my dotfiles, it also pulls down the actual binaries and they are ready to run on the new PC install.

Step two: Run the installs script

For a while I was running this script manually. But, thanks to all around awesome individual Robert, AKA IrgndSonDepp on Mastodon, I now have it run after chezmoi pulls my dotfiles.

By adding this script to the dotfiles repo and naming it run_once_<script_name> it will automatically be executed after the dotfiles are pulled.

What about Ansible or NixOS?

I know there are other ways to accomplish this, including Ansible or NixOS.

If I can accomplish the same thing with Ansible, why change? I am clearly doing it just fine with chezmoi.

As far as NixOS, I’m not ready for that. I like Debian. I like how I install packages. I’m not interested in a declarative desktop. I don’t want to manage everything with a declarative file. It is not for me and I like it that way.

The installs script

After the script is finished I still need to log into services and do some configuration that can’t be scripted in some GUI apps.

This is the run_once script I have:

#!/bin/bash

# Create directories in user's home directory
mkdir -p ~/{bin,tmp,remote_systems,apps} 
mkdir -p ~/.local/bin

# Update package lists
sudo apt update

# Install packages
sudo apt install -y fish wget curl nfs-common cifs-utils unzip libvirt-daemon bridge-utils virtinst libvirt-daemon-system libguestfs-tools libosinfo-bin qemu-system virt-manager iperf3 gdu tmux gdebi xz-utils rsync speedtest-cli podman ufw wakeonlan flameshot tilda baobab gnome-disk-utility neofetch terminator nmap bat ncat pandoc tig ack asciidoctor catimg highlight ffmpeg sshfs btop duf smartmontools aptly file ffmpegthumbnailer unar jq poppler-utils fd-find ripgrep tilda fdroidcl 

# Install flatpak
sudo apt install -y flatpak
# Enable the flathub repo.
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
# Allow offline flatpak installs.
flatpak remote-modify --collection-id=org.flathub.Stable flathub

# Flatpak installs
flatpak install -y --noninteractive flathub \
    org.videolan.VLC \
    io.mpv.Mpv \
    org.geany.Geany \
    com.github.tchx84.Flatseal \
    io.github.flattool.Warehouse \
    md.obsidian.Obsidian \
    org.kde.kdenlive \
    org.gimp.GIMP \
    org.audacityteam.Audacity \
    fr.handbrake.ghb \
    org.keepassxc.KeePassXC \
    com.github.paolostivanin.OTPClient \
    org.mozilla.Thunderbird \
    net.minetest.Minetest \
    io.freetubeapp.FreeTube \
    org.mozilla.firefox \
    com.makemkv.MakeMKV \
    com.transmissionbt.Transmission \
    com.github.zocker_160.SyncThingy \
    com.bitwarden.desktop \
    com.tomjwatson.Emote \
    org.gnome.PowerStats \
    org.localsend.localsend_app \
    xyz.armcord.ArmCord \
    org.kiwix.desktop \
    org.cryptomator.Cryptomator \
    io.github.mpobaschnig.Vaults \
    org.raspberrypi.rpi-imager \
    com.discordapp.Discord \
    org.kde.okular

# Python3-pip and pipx
sudo apt install -y python3-pip pipx
pipx ensurepath

# Glow for markdown rendering in the terminal.
sudo mkdir -p /etc/apt/keyrings

curl -fsSL https://repo.charm.sh/apt/gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/charm.gpg

echo "deb [signed-by=/etc/apt/keyrings/charm.gpg] https://repo.charm.sh/apt/ * *" | sudo tee /etc/apt/sources.list.d/charm.list

sudo apt update && sudo apt install glow

# gping for ping with graphs.
sudo wget -O /usr/share/keyrings/azlux-archive-keyring.gpg https://azlux.fr/repo.gpg

echo "deb [signed-by=/usr/share/keyrings/azlux-archive-keyring.gpg] http://packages.azlux.fr/debian/ stable main" | sudo tee /etc/apt/sources.list.d/azlux.list

sudo apt update && sudo apt install gping -y

# Install timeshift for root backups.
sudo apt install -y timeshift

# Install deja-dup for $HOME backups.
sudo apt install -y deja-dup

# Set fish as default shell, add to $PATH, and update completions.
chsh -s /usr/bin/fish
fish -c 'fish_add_path ~/.local/bin; fish_add_path ~/bin; fish_update_completions' &

# Install Distrobox
wget -qO- https://raw.githubusercontent.com/89luca89/distrobox/main/install | sudo sh

# Install Docker from their script
cd ~/tmp && curl -fsSL https://get.docker.com -o get-docker.sh && sudo sh get-docker.sh && sudo usermod -aG docker $USER

# HiSHtory setup
curl https://hishtory.dev/install.py | python3 -
hishtory init [REDACTED]

# Install snap packages
sudo snap refresh
sudo snap install helix --classic
sudo snap install marksman
sudo snap install bash-language-server --classic
sudo snap install fast
sudo snap install bottom

# Configure bottom
sudo snap connect bottom:mount-observe && sudo snap connect bottom:hardware-observe && sudo snap connect bottom:system-observe && sudo snap connect bottom:process-control

# Misc.
# Shaarli CLI client to PUT/GET bookmarks from the terminal.
pipx install shaarli-client
# Terminal effects when running a command or ssh connections. 
pipx install terminaltexteffects
# Download Spotify playlists from YouTube.
pipx install spotdl
# YT-DLP install and config. 
sudo curl -L https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp -o /usr/local/bin/yt-dlp
sudo chmod a+rx /usr/local/bin/yt-dlp
# Install Tailscale 
curl -fsSL https://tailscale.com/install.sh | sh

# Enable UFW firewall and allow traffic from my LAN.
sudo ufw enable
sudo ufw allow from 192.168.1.0/24
sudo ufw reload

# Add entries to /etc/hosts
echo "Adding custom entries to /etc/hosts..."
cat <<EOF | sudo tee -a /etc/hosts
[REDACTED]

EOF

if [ $? -eq 0 ]; then
    echo "Successfully added entries to /etc/hosts."
else
    echo "Failed to add entries to /etc/hosts. This may require sudo privileges."
fi

# The deed is done.
echo "Installation complete!"
echo "Fish shell is now set as the default shell."
echo "Log out and log back in to use Docker without sudo."
echo "UFW firewall has been enabled."

After the base is ready

Once this is all done, I need to go around and log into apps and make some desktop appearance changes.

To be honest, I am a heavy user of Clonezilla and I have a clone of my desktop on a 120GB drive. When I have a new desktop to setup, I’ll use Clonezilla to write that clone to the new HDD, reboot, and then use GParted Live to expand the partitions appropriately.

I like doing it this way because then all I have to do after is change the static IP on the box and I’m completely setup.

However, I went through the process of a dotfile repo and an install script for when my clone is too far out of date or I want to start completely over. Both ways work fine and I just pick which one is best for the scenario.

- - - - -

Thank you for reading! If you would like to comment on this post you can start a conversation on the Fediverse. Message me on Mastodon at @cinimodev@masto.ctms.me. Or, you may email me at blog.discourse904@8alias.com. This is an intentionally masked email address that will be forwarded to the correct inbox.

Thank you for following this blog with RSS. Keep supporting the open web! RSS puts you in charge of how to collect, read, and archive information.

Installing postmarketOS with full disk encryption on a OnePlus 6

4 May 2025 at 09:50

tl;dr: I installed postmarketOS, an “alternative” Linux-based operating system, on a OnePlus 6 phone, and it rocks.

What is postmarketOS

postmarketOS is a fantastic, much-needed, project.

On one level, postmarketOS is a Linux-based replacement for the default operating system on some phones. (Note: unlike LineageOS, and other similar alternative Android installations, postmarketOS is not Android.)

But, more importantly, postmarketOS is a way of reducing eWaste, and giving new life to devices which have been rendered unsupported by their manufacturers.

We believe that hardware should be used sustainably and not thrown away just because its creator no longer supports it. Your device should not track you or help others collect and sell your data, and it should not constantly demand your attention in order to show you advertisements. Your device is yours, and you should be able to use it as you wish. These are the goals that drive postmarketOS.

We can’t afford to churn through technology for the next shiniest thing, and a project which helps keep devices useful is vital.

Long live postmarketOS, and major applause for every one of you who works on, or has worked on, it.

Why a OnePlus 6?

I first used postmarketOS a few years ago, with an old tablet, and I’ve been yearning for an opportunity to try it again, but on more modern, more powerful, hardware. And the OnePlus 6 is well supported.

A fedi friend - thanks Justine! (Justine’s snac instance) - kindly sold me her OnePlus 6, so that I could give this a go.

Installing postmarketOS on a OnePlus 6 without full disk encryption

When I first got the device, I just wanted to give postmarketOS a try, so I used the pmOS WebUSB in-browser installer with Chromium (since it doesn’t work with Firefox).

It was incredibly slick. Very, very easy, and it resulted in a working postmarketOS installation with virtually no effort from me.

Genuinely, this is superb, and makes postmarketOS accessible to a much broader range of people who may not be comfortable with a Linux terminal.

The default password is 147147.

While I didn’t want an installation without full disk encryption, this experience was enough to convince me to try the seemingly more complicated installation to get full disk encryption working.

If you don’t want full disk encryption, and you have a supported device, this really is the way to go (IMHO).

Installing postmarketOS on a OnePlus 6 with full disk encryption

One cannot use the browser-based installer to get an installation with full disk encryption, and I wanted full disk encryption.

To do that, one needs to use pmbootstrap.

This was… more challenging.

On the first evening that I tried, I got postmarketOS installed, but I had terrible trouble with Unl0kr and full disk encryption: duplicate virtual keyboards were sending multiple sets of keypresses. Even once I had worked out that I didn’t need to use the button at the top of the screen to enable a keyboard (I could just use the invisible keyboard already on screen, present but not showing), entering the passphrase didn’t do anything.

Figuring that that couldn’t be the answer, I tried again on another evening and that time, it worked.

How I got there was informed by the wiki, but not by following it exactly. My feeling - and I could be wrong - is that the wiki might be more of an information dump than a proper guide. At least, it seems to contain a lot of steps which I did not need.

Or perhaps I just installed it wrongly and it will come back to bite me :)

Here’s what I did:

I used Power + Volume Up to enter fastboot.

Then, I just double-checked that it was unlocked, even though that seemed obvious from the fact that the browser-based installation worked. (As an aside, I’m not sure if there is a way of locking the bootloader again afterwards, for security.)

I ran

fastboot oem unlock

And it said that it was unlocked. Excellent.

I already had pmbootstrap installed from a previous project with an aging Galaxy Tab (postmarketOS worked, but it was very slow, to the point of not being usable for my needs, sadly).

But I wiped that installation, removed the config file (~/.config/pmbootstrap.cfg), then reinstalled pmbootstrap again, from git. That way, I knew I’d have an up-to-date version.
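That reset can be scripted. A minimal sketch of the same steps, assuming the pmbootstrap GitLab clone URL and a ~/.local/bin symlink as described in the pmbootstrap README (neither is taken from this post, so verify before use):

```shell
# Hedged sketch: forget the old pmbootstrap configuration and get a fresh
# checkout from git. Guarded so it does nothing destructive without git
# or network access.
rm -f ~/.config/pmbootstrap.cfg                # drop the old config
if command -v git >/dev/null && [ ! -d pmbootstrap ]; then
    git clone --depth 1 \
        https://gitlab.postmarketos.org/postmarketOS/pmbootstrap.git \
        || echo "clone failed (no network?)"
fi
# Put the git checkout's entry point on PATH:
mkdir -p ~/.local/bin
if [ -e pmbootstrap/pmbootstrap.py ]; then
    ln -sf "$PWD/pmbootstrap/pmbootstrap.py" ~/.local/bin/pmbootstrap
fi
```

Running the git checkout directly like this means a plain `git pull` later keeps it current.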

To start the configuration, I used

pmbootstrap init

This was quite straightforward, with the on-screen instructions. I chose the edge release, for oneplus enchilada, and I picked gnome-mobile.

At this point, I found it a little hard to follow the instructions to work out what I should do next.

I used

pmbootstrap install --fde

That gave me the option to set the user password and disk encryption password (two separate passwords).

The output of that told me to next run

pmbootstrap flasher flash_rootfs

The first time I did that, I got an error:

Sending 'userdata' (2330166 KB)                    FAILED (remote: 'Requested download size is more than max allowed

I restarted the bootloader (i.e. back into fastboot), then I ran the command again. (It turns out that this is covered in the wiki. I wish I’d noticed that sooner :))
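For reference, restarting back into fastboot can also be done from the host rather than with the device buttons. A guarded sketch (the commands are standard android-tools fastboot usage, not something I have lifted from the wiki, so treat it as an assumption):

```shell
# Hedged sketch: reboot an attached device back into the bootloader, after
# which the failed flash command can simply be run again. Guarded so it is
# a harmless no-op when no device is attached.
if command -v fastboot >/dev/null && fastboot devices 2>/dev/null | grep -q .; then
    fastboot reboot bootloader          # back into fastboot mode
    result="rebooted into bootloader"
else
    result="no device in fastboot mode detected"
fi
echo "$result"
```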

It completed successfully:

Sending sparse 'userdata' 1/4 (733013 KB)          OKAY [ 16.708s]
Writing 'userdata'                                 OKAY [  0.001s]
Sending sparse 'userdata' 2/4 (782236 KB)          OKAY [ 25.289s]
Writing 'userdata'                                 OKAY [  0.001s]
Sending sparse 'userdata' 3/4 (762948 KB)          OKAY [ 19.869s]
Writing 'userdata'                                 OKAY [  0.001s]
Sending sparse 'userdata' 4/4 (51968 KB)           OKAY [ 32.629s]
Writing 'userdata'                                 OKAY [  0.001s]
Finished. Total time: 94.531s

I then ran

pmbootstrap flasher flash_kernel

Again, I got an error, but a different one:

fastboot: error: Failed to identify current slot

Again, I restarted the bootloader (i.e. back into fastboot), then I ran the command again.

Same error.

I ran the command again without rebooting. This time it got further - it started to send the boot files - and then hung.

I ran the command again. This time, it showed “waiting for device”. So I restarted the bootloader, and it completed rapidly.

This felt like progress!

I rebooted the device:

fastboot reboot

noting that the wiki expressly said to reboot using fastboot rather than the device buttons, to ensure that everything was saved.

I then pressed a key on the device to reboot… and it booted into postmarketOS, with a working Unl0kr setup (no invisible / duplicate keyboards!), to enter my full disk encryption password.

Success! It worked.

I had a working postmarketOS installation, with full disk encryption, which I could decrypt.
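One way to double-check the encryption from a shell on the phone itself (over SSH or a local terminal) is to look for a dm-crypt mapping, which is what an unlocked LUKS setup presents. This relies on generic Linux behaviour rather than anything postmarketOS-specific, so treat it as an assumption:

```shell
# Hedged sketch: list block devices and look for a "crypt" TYPE entry.
# Safe to run on any Linux box; on an unencrypted machine it simply
# reports that no mapping was found.
if command -v lsblk >/dev/null; then
    check=$(lsblk -o NAME,TYPE 2>/dev/null | grep -w crypt \
        || echo "no crypt mapping found")
else
    check="lsblk not available"
fi
echo "$check"
```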

Next steps

I spent some time yesterday evening setting it up.

It connected to Wi-Fi immediately, and I started installing some software.

I sent my first toot from it.

I’m so impressed.

I’m going to keep on playing with it, and using it, before I write up much more but, in a few days, I hope to add some thoughts about how I am getting on with it.

Even with full disk encryption, I don’t know enough about postmarketOS security to feel comfortable swapping my main phone (running GrapheneOS) to it yet. Perhaps I’ll learn more about it, and get comfortable, but right now, I’m not in that place.

For now, I am chuffed to bits that I’ve got it to work - thank you so much to everyone who has spent time on the postmarketOS project - and I am very much looking forward to getting to grips with it properly.

This isn’t just a fun technical project - although it is a fun technical project - it is part of an important movement of recognising and mitigating the impact of electronic waste and excess consumerism, and giving perfectly functional hardware a new lease of life, without ads or trackers.

Now wouldn’t it be nice if there was a postmarketOS for “smart” TVs…
