Creating a Secure WordPress Website to resist censorship attempts

As a queer person on the internet, an issue I often see happen to our community is people manipulating the anti-abuse processes on social media websites to silence and censor people they don’t like. Sites like Twitter often rely on automation and will suspend you based on the number of reports and a few other easily manipulated metrics. I’ve found the best way to keep producing content that won’t disappear forever, and to help people find me again, is to create a personal website that is secure from take-down attempts and tell my followers to bookmark it. That said, if you aren’t careful with how you create, manage, and host your website, it is at risk of being censored by people engaging in targeted harassment, defeating the purpose of having it in the first place. This post has some tips on how to create a website that’s safe from targeted harassment.

Why WordPress?

WordPress is the most popular publishing software on the internet. It is easy to use, has a huge number of premade themes and plugins for nearly any use case, and has a gigantic community. Like any software it has its issues, but they can be worked around with proper website management.

Follow basic Cybersecurity advice

Use a password manager and have unique passwords for every account. This reduces the risk that someone can guess one of your passwords and break into all of your accounts. Set up two-factor authentication whenever it is available to you. Keep your computer and phone’s software up to date to prevent software exploits and virus attacks. Before worrying about the security of your WordPress website, protecting your personal devices should be your priority.

I recommend a .IS Domain Name

The .IS registry ISNIC operates the most secure domain name registry on the internet. As long as you create an account directly with them, your domain names are very resistant to targeted attacks. They have some of the most reasonable policies toward abuse management and usually will not get involved for content reasons.

A domain name registrar like GoDaddy is not as safe to use, as they are more likely to be manipulated into suspending a zone for content reasons or if their support staff are harassed enough.

If a .IS domain name will not work for your use case, consider a .COM or .ORG domain name purchased from Cloudflare or EasyDNS. In my experience both registrars are better equipped to handle actual abuse cases without being harassed into suspending random people’s zones.

Pick your DNS Provider carefully

Your DNS provider (the name servers for your website) is an equally important decision to your domain name and the associated registrar. In the event they have an issue, it can take between two days and a week for your domain name registrar to update the NS records on their end to point to a new provider. Many hosting providers (for example WordPress.com and Kinsta) offer DNS hosting; if yours does not, you will need to choose a reliable provider.

You’ll need to do your own research. Services I’ve looked at that seem promising are DNSMadeEasy, NSOne, and Amazon Route 53 although all of these services do incur an additional monthly fee and aren’t good for all use cases.

Use a Managed WordPress Hosting Provider

Unless you are an expert and are ready to deal with the full-time job of managing WordPress security, I would recommend using a managed WordPress hosting provider. WordPress.com is also a good choice if plugin access is not as important to you.

By using a managed WordPress hosting provider you ensure that the people hosting your website are experts in WordPress and able to help you when something goes wrong. While you could just set up your own server running WordPress, the benefit of managed hosting is that their support and security teams are ready to help you and to protect your website. The drawback is that this type of hosting is considerably more expensive. The benefits easily make up for the additional cost.

Research the companies you are using ahead of time

Beyond specific product or platform suggestions, just doing your own research is probably the best advice I can give. You’ll want to make sure that whatever company you choose to use has a good reputation in hosting WordPress websites. You’ll want to research how they’ve handled abuse issues in the past as a reference for whether you’ll be treated fairly. By taking this step now you protect yourself from potential attacks in the future.

Proposed change to the HTML Standard for anchor tags

This blog post is a proposed change to the HTML Standard for anchor tags. I’m not sure how to write or submit an RFC to the Internet Engineering Task Force and/or whoever manages the HTML Living Standard. Feel free to email me if you’d like to discuss this with me.

Problem: The HTML Standard for anchor tags lets you show one URL but send users to another

Currently you can write a tag like <a href="https://accounts.googlee.co/signin">https://accounts.google.com/signin</a>, displaying a legitimate URL while sending users to a look-alike URL. Many users will just see the google.com part of the display text and trust that the link is safe. This is the drawback of trusting the display text of a link. I believe it does more harm than good to allow website designers to write an anchor tag which displays a different URL than the one it links to.

Proposed change

I propose that when displaying an anchor tag, if the display text includes http:// or https://, or includes both a . and a /, the display text should be ignored and the URL displayed instead. It may also be ideal to display the URL if characters outside an allow list are used, as unusual charsets could be used to circumvent this change and threat actors are creative. I’ll leave that decision to security professionals.
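As a rough illustration only (not a proposal for any specific rendering engine’s code), here is a minimal Python sketch of the check, assuming a hypothetical helper that decides whether the href should be rendered in place of the display text:

def looks_like_a_url(display_text):
    """Return True if the display text appears to be a URL and should
    therefore be replaced with the real href when the link is rendered."""
    text = display_text.strip().lower()
    if "http://" in text or "https://" in text:
        return True
    # Per the proposal, a dot plus a slash also counts as "URL-like"
    return "." in text and "/" in text


# Example using the phishing link from above: the display text pretends to be
# google.com, so the real (look-alike) href is what gets rendered instead.
href = "https://accounts.googlee.co/signin"
display_text = "https://accounts.google.com/signin"
rendered = href if looks_like_a_url(display_text) else display_text
print(rendered)  # https://accounts.googlee.co/signin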

Risks of the proposed change

This may expose URLs intended to be hidden from users. For example, Twitter uses t.co links but then displays the actual link in a Tweet. Twitter would need to modify their platform to either track link clicks in another way (for example a JavaScript callback that runs on clicking the link to submit telemetry data) or use something like https://twitter.com/redirect=https://google.com&tweet_id=123456 to accommodate a change to the anchor tag specification. It could similarly affect other short-link/click-tracking services.

Benefits of proposed change

The benefit is that display text in links could no longer be used as a malware or phishing vector, and users would be able to trust that the link they see is legitimate. Internet users should be able to trust that the link they see is the link they go to. While a website can always use its own redirector to hide the final destination, it is in the best interest of internet users that a fake URL cannot be shown in place of the real one.

Conclusion

I hope web designers, developers, and software vendors discuss this change and see how changes to the HTML standard for anchor tags could improve web security and reduce phishing and malware attacks.

Exfiltrating MyBB Attachments

Recently I accidentally deleted an important SSH key and lost access to a server running MyBB that I manage for a friend. As a result I could not make new backups to download attachments, avatars, and other files I needed from the server. Unfortunately the automated backups to Wasabi Cloud had stopped after an API key was deleted from the account, and due to bucket lifecycle rules (enforced retention + autodelete) the old backups were gone. It was a very bad situation, and had I not acted quickly I would’ve lost everything. I logged into the MyBB admin panel and downloaded a database backup. I happened to be learning Python recently, and rather than engaging in an expensive data recovery exercise that may or may not find the private key I was looking for, I decided to write a program to try and exfiltrate the attachments. When I started this project I had no idea if or how I would pull it off. The alternative was losing everything, so I gave it a try.

The VPS Provider’s support would not help me

Unfortunately, due to maintenance, a broken user (and likely admin) control panel, and possibly internal privacy policies, the VPS provider was unable to provide me with a disk image or to attach an SSH key. Needless to say, after a few hours of chatting back and forth it seemed to be a lost cause. I was on my own and a sitting duck until I found a solution or they fixed their tools (if they ever fix them).

What data needs to be pulled from the VPS before I can delete it and create a new one?

To restore the site I needed, at a minimum, the following data. With the exception of the theme images (which we can get from MyBB.com) and a database backup (which we can get from the admin panel), everything has to be downloaded over SSH because the admin panel does not give you a way to download it. Since I was locked out of SSH, there would have been significant data loss if I had taken the database backup and given up. Here’s a list of what I needed to get from the server.

  • A copy of MyBB and any plugins (we can get all of these from MyBB.com)
  • A database backup (we have this)
  • All custom images for the theme provided by the developer (we can get those from https://community.mybb.com/mods.php?action=view&pid=56)
  • All custom images we added
  • All user attachments (this is split into a .attach file and a thumbnail where applicable)
  • All user avatars
  • All user group star images
  • All custom smilies (custom images that act like Emoji)

How was I going to pull this off?

When I became aware of the issue my first attempts were to use the VPS provider’s password reset tool and then open the VNC console. Since their panel is broken due to the maintenance, I had no way to attach a new SSH key. Debian package updates also ran automatically, and OpenSSH was configured to only allow login by SSH key. There is a vsftpd service running; if I could guess the username (it hasn’t been used in years) I might have been able to guess the password programmatically, assuming I used Tor or something to get around fail2ban. I could try to generate SSH keys over and over and see if I generate the same key, however with our current understanding of the laws of physics cracking an RSA key is impossible within my lifetime. This is a case where my own security had backfired. I was locked out and there was nothing I could do to get back in… right? But if there’s a will, well, there’s Catgirl sitting at their computer screen finding a solution to seemingly impossible problems. OpenSSH and vsftpd are probably not viable attack surfaces, so how do I get the data I need out? I decided to attack what I know the most about: web applications. I decided to look for weaknesses in MyBB and the server configuration and exploit them until I got every last bit of data I needed.

To start, I logged into the admin panel and checked the current MyBB version and any plugins we were using; unfortunately all were the newest versions. My initial thought was to exploit a security vulnerability in MyBB and install a PHP script that lets me download files from the server and write to normally restricted locations. Could I find a new vulnerability that’s not public yet? It’s possible, but I was on a time crunch and didn’t have time to play. So I went ahead and downloaded the database backup in case there was anything of interest there I could pull.

At my workstation I imported the database backup into a local MySQL install. This allowed me to freely query a copy of the database. I started looking around and found a few tables of interest.

Table name          Content of interest
mybb_users          File location of avatar images
mybb_usergroups     File location of user group star images
mybb_smilies        File location of custom smilie images
mybb_attachments    File location of attachments (the .attach file) and of the corresponding thumbnail file

I looked at the structure of each table and found the different file paths I needed. Due to how they were inserted and what placeholders existed, some parsing of each string would be necessary. Based on row counts there were about 9,000 attachments and 1,000 avatars, as well as 5 user group star images and 25 custom smilies I needed to download.

I checked whether I could access attachment files directly to determine next steps. I found out that even though I was locked out of the server, I could download any file directly if I knew the file location and name, and my database backup happens to have this information. Because I can download attachments directly if I know their file paths, I do not need to find a Local File Inclusion vulnerability in MyBB to download them. Normally users download a file through attachment.php?aid=123 so they never see the actual path. The MyBB developers decided not to restrict direct access to .attach files, which saved me a lot of time. At that point I opened up my text editor and started writing a program to automate the process.

Design decisions

To get a new backup as soon as possible I chose Python because it would be fast to write. I used the requests library as an easy way to pull files and the mysql.connector library as an easy way to directly query the database. Some of the database code is tricky to read as it returns a list of tuples, so you have to keep track of the order you requested data in. That said, it was fast to write and served its purpose without much research. I was able to copy, paste, and edit example MySQL SELECT statements from a Python MySQL tutorial without reading it too closely. This is a case where speed mattered more than elegance; I needed a backup as soon as humanly possible. After a quick Google search I found an answer on StackOverflow which explained how to create a file and directory if they do not exist. I didn’t see the Python 3.2 solution or I would’ve used it instead, as it’s shorter and easier to read. I chose to store each file inside a folder called backup/ and to mirror the layout of the root directory for simplicity. In the end everything was successfully backed up.
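For reference, here is a minimal sketch of the shorter Python 3.2+ approach mentioned above, which replaces the whole existence check and try/except block with a single call (the path shown is a made-up example):

import os

# exist_ok=True creates the directory tree and silently succeeds if it already exists
filename = "./backup/uploads/example/1234_sample.attach"
os.makedirs(os.path.dirname(filename), exist_ok=True)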

The final program

After about four hours of programming my Python program was complete, including a test run, and I was able to pull all of the data I needed. It is not perfectly elegant, but it is reasonably error safe and has user-configurable counters in the event that the program freezes during execution. It was enough for my purposes and was written on a time crunch. In its current state it does not support MyBB installations in a subdirectory without a few manual edits.

import mysql.connector
import requests
import os
import errno

def backup_attachments_and_thumbnails():
    db.execute("SELECT aid, attachname, thumbnail FROM " + db_prefix + "attachments")

    myresult = db.fetchall()

    for x in myresult:
        # Download Attachment
        if x[0] > last_attachment:
            filename = "./backup/uploads/" + x[1]
            if not os.path.exists(os.path.dirname(filename)):
                try:
                    os.makedirs(os.path.dirname(filename))
                except OSError as exc:  # Guard against race condition
                    if exc.errno != errno.EEXIST:
                        raise
            url = "https://" + forum_name + "/uploads/" + x[1]
            r = requests.get(url)
            print("Writing file: " + str(x[0]) + " at: " + x[1])
            with open(filename, 'wb') as f:
                f.write(r.content)
                f.close()
        # Download Thumbnail
        if x[0] > last_attachment:
            filename = "./backup/uploads/" + x[2]
            if x[2] != "SMALL" and x[2] != "":
                if not os.path.exists(os.path.dirname(filename)):
                    try:
                        os.makedirs(os.path.dirname(filename))
                    except OSError as exc:  # Guard against race condition
                        if exc.errno != errno.EEXIST:
                            raise
                url = "https://" + forum_name + "/uploads/" + x[2]
                r = requests.get(url)
                print("Writing thumbnail: " + str(x[0]) + " at: " + x[2])
                with open(filename, 'wb') as f:
                    f.write(r.content)
                    f.close()


def backup_avatars():
    db.execute("SELECT uid, avatar FROM " + db_prefix + "users")

    myresult = db.fetchall()

    for x in myresult:
        # Download Avatars
        if x[0] > last_avatar and x[1].startswith("./"):
            # The avatar is stored like "./uploads/avatars/avatar_803.jpg?dateline=1603426821" in SQL
            filename = "./backup/" + x[1][10:].split("?")[0]  # Remove the ?dateline= from filenames
            if not os.path.exists(os.path.dirname(filename)):
                try:
                    os.makedirs(os.path.dirname(filename))
                except OSError as exc:  # Guard against race condition
                    if exc.errno != errno.EEXIST:
                        raise
            url = "https://" + forum_name + x[1][1:]
            r = requests.get(url)
            print("Writing avatar: " + str(x[0]) + " at: " + filename)
            with open(filename, 'wb') as f:
                f.write(r.content)
                f.close()

def backup_smilies():
    db.execute("SELECT sid, image FROM " + db_prefix + "smilies")

    myresult = db.fetchall()

    for x in myresult:
        # Download Smilies
        if x[0] > last_smile:
            filename = "./backup/" + x[1]
            if not os.path.exists(os.path.dirname(filename)):
                try:
                    os.makedirs(os.path.dirname(filename))
                except OSError as exc:  # Guard against race condition
                    if exc.errno != errno.EEXIST:
                        raise
            url = "https://" + forum_name + "/" + x[1]
            r = requests.get(url)
            print("Writing smile: " + str(x[0]) + " at: " + filename)
            with open(filename, 'wb') as f:
                f.write(r.content)
                f.close()

def backup_usergroup_images():
    db.execute("SELECT gid, starimage FROM " + db_prefix + "usergroups")

    myresult = db.fetchall()

    for x in myresult:
        # Download User Group Images
        if x[0] > last_usergroup_image and x[1] != "":
            filename = "./backup/" + x[1]
            if not os.path.exists(os.path.dirname(filename)):
                try:
                    os.makedirs(os.path.dirname(filename))
                except OSError as exc:  # Guard against race condition
                    if exc.errno != errno.EEXIST:
                        raise
            url = "https://" + forum_name + "/" + x[1]
            r = requests.get(url)
            print("Writing usergroup image: " + str(x[0]) + " at: " + filename)
            with open(filename, 'wb') as f:
                f.write(r.content)
                f.close()

if __name__ == '__main__':
    # Setup the database connection
    mybb = mysql.connector.connect(
        host="localhost",
        user="root",
        database="mybb"
    )
    db = mybb.cursor()

    # Domain name and the database prefix
    db_prefix = "mybb_"
    forum_name = "example.com"

    # If for some reason downloading fails you can edit these with the last downloaded ID and restart the program without having to start over
    last_attachment = 0
    last_avatar = 0
    last_usergroup_image = 0
    last_smile = 0

    # Run backup procedures
    backup_attachments_and_thumbnails()
    backup_avatars()
    backup_smilies()
    backup_usergroup_images()

Conclusions

Never give up. There’s usually a way to solve a problem. Even the best of systems have their flaws and if you discover them you can use it to your advantage to solve the most difficult challenges.

How to persist Redux State after closing and reopening a React application

The default behavior after closing a React application, or any website for that matter, is for the page’s local state to be lost. In a Single Page Application framework such as React this could mean you are logged out or certain setting changes are lost. This blog post explains how to persist state using the npm package redux-persist; it assumes that you have a React application which manages its state with Redux.

Install the package into your application

Luckily there is an npm package for this problem called redux-persist. This package works by taking your Redux reducer and persisting its data into storage of some form. This could be an API response (for advanced users) or, by default, the browser’s local storage. You will need to install the package before you can start using it. Use the package manager you prefer; I use npm so I will run npm install redux-persist.

Update your React root component

For most people this file is called App.js; you’ll need to update it to use redux-persist’s PersistGate. This is a wrapper component which prevents the rest of the application from loading until state is restored from storage. For those who load state from an API, you can provide an optional loading component. For simplicity I did not include react-router in my example, however it should be nested inside of the PersistGate if you need to use it. Your React root component should look something like this.

import React from 'react';
import ReactDOM from 'react-dom';
import * as serviceWorker from './serviceWorker';
import {Provider} from 'react-redux';
import { PersistGate } from 'redux-persist/integration/react';
import { store, persistor } from './store';
import App from './App';
import './style/App.css';

ReactDOM.render(
    <Provider store={store}>
      <PersistGate loading={null} persistor={persistor}>
        <App />
      </PersistGate>
    </Provider>,
  document.getElementById('root')
);

// If you want your app to work offline and load faster, you can change
// unregister() to register() below. Note this comes with some pitfalls.
// Learn more about service workers: https://bit.ly/CRA-PWA
serviceWorker.unregister();

Update your reducer

You will need to use redux-persist’s persistReducer() method to wrap your reducer; this allows it to load your persisted state after loading the initial state. As recommended by the Redux developers I use the Redux Toolkit; if you haven’t used it before I highly recommend checking it out. As a brief overview, the slice is an object managed by the Redux Toolkit which exports your actions and your reducer. redux-persist does not care where the reducer comes from, only that you wrap it with persistReducer(). The end result will look something like this.

import { configureStore } from '@reduxjs/toolkit';
import { persistStore, persistReducer } from 'redux-persist'
import storage from 'redux-persist/lib/storage'
import  slice  from "./slice";

const persistConfig = {
    key: 'root',
    storage,
    blacklist: [
        'featureFlags',
        'temporaryDataSuchAsPasswords',
    ]
};

const reducer = slice.reducer;
const actions = slice.actions;

const persistedReducer = persistReducer(persistConfig, reducer);

export const store = configureStore({ reducer: persistedReducer });
export const persistor = persistStore(store);

export const { actionNamesGoHere } = actions;

Complete

Once your application loads through the PersistGate component and the reducer is wrapped with redux-persist’s persistReducer() method, you’re all done and your app will now persist its Redux state into local storage. For more information check out the README file which explains storage and persistor options for advanced use cases.

What to do if you lose your Mastodon instance’s environment variables file

One of the most important files loaded when running a Mastodon instance is your environment variables file, commonly named .env.production. This file includes several secrets that, if lost, will break your ability to easily migrate your instance even if you have a recent files/database backup. This post explains how to recover your instance so you don’t have to start over and upset your users.
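For context, the file looks roughly like the excerpt below (variable names come from a standard Mastodon setup, values are placeholders); OTP_SECRET in particular is tied to the two-factor auth data that the recovery steps later in this post clear out:

# Excerpt of a typical .env.production (placeholder values)
LOCAL_DOMAIN=example.social
SECRET_KEY_BASE=xxxx
OTP_SECRET=xxxx
VAPID_PRIVATE_KEY=xxxx
VAPID_PUBLIC_KEY=xxxx
SMTP_SERVER=smtp.example.com
SMTP_LOGIN=mastodon@example.com
SMTP_PASSWORD=xxxx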

Follow all of the normal steps to migrate over to a new server

I’m assuming that if you’ve lost this file you are restoring from backups. Follow the steps in the Mastodon Documentation to reinstall Mastodon and import the database/files. Before you can start any sort of recovery effort you’ll need to cover your bases. This will be the easiest part of disaster recovery.

Disable two-factor-auth for all users

Since you’ve lost your cryptographic secrets, two-factor auth tokens are now invalid. You will have to tell your users what happened and ask them to remove their account from their authenticator apps and add it back by enabling two-factor auth again in account settings. Unfortunately there is not an easy command to do this. Instead you will have to drop into a PostgreSQL console and update the users table. Be careful with this console and take a backup first. If anything goes wrong you’ll want to be able to recover.

sudo -u postgres psql mastodon_production

UPDATE users SET encrypted_otp_secret='';
UPDATE users SET encrypted_otp_secret_iv='';
UPDATE users SET encrypted_otp_secret_salt='';
UPDATE users SET otp_required_for_login=false;

Backup the environment variables file

Once your service is recovered, make sure to back up your environment variables file so you don’t have this problem again. Within a few hours your instance should be back up and running as normal without any major disruptions.

LGBTQIA.is outage has been resolved (mostly)

Hi everyone, I have good news. The outage on LGBTQIA.is has been resolved and the site is almost fully operational again. This post describes the recovery progress and steps.

Steps Taken

So far I’ve been able to reinstall Mastodon, restore the PostgreSQL database, regenerate feeds, and reconnect Wasabi S3 for image/video uploads. Additionally I can message accounts on other instances without issue (federation works again).

Some things are temporarily broken

Unfortunately, due to a misconfiguration in the backups system the .env.production file was lost. As a result certain data is permanently lost, as I no longer have the encryption keys to decrypt it. This means that you will need to re-enable two-factor auth; I have cleared two-factor auth data from the database. This approach worked for my account and has been applied to all accounts in the database. Additionally I need to generate a new set of SMTP credentials to fix account confirmation and password reset emails. Finally, status lengths are limited to 500 characters until I can reapply my changes to the limit.

Things will get better over the next week

Unfortunately everything won’t immediately be back to normal and a lot of debugging and fixing is still going to occur at random intervals. Please expect temporary outages as I make configuration/code changes. I’m working to quickly resolve the situation and will do my best to get things back to normal ASAP. Meanwhile enjoy the instance and know that your data remains safe and secure.

Still having issues?

If you are still having issues accessing your account for some reason please send an email to me@lunorian.is and I will get back to you as soon as possible to assist in recovering your account.

LGBTQIA.is Temporary Outage

Hey everyone, I’m posting this on my Twitter and redirecting the main site to it shortly but I wanted to give an explanation on why the instance is down for the next day or two.

What happened?

A former friend of mine who hosted the site on their colocation in Chicago no longer wishes to associate with me or host my services. They requested I move the site elsewhere. For those who know who this person is please do not harass them, they’re not obligated to provide me with free services and I would not expect anyone to do so. It’s sad how things turned out with them but I’m moving on. I already have a new server purchased and do not need additional donations to continue running the instance.

Within the next few days I will restore the database backup I made to a new server, although given the nature of the instance it may take a few days to get everything back up and running. The service configuration is extremely complex and this task alone will take several hours.

Was there any data loss?

All media content is stored on my personal Wasabi Cloud account; this includes uploaded images and videos as well as daily database backups. None of your data is at risk, and a command was run to erase the data from the now-old server.

What’s your current estimate for when the service is restored?

I am hoping that within a day or two things can go back to normal with the instance and users can be happy again. Until then stay safe and apologies for the inconvenience. Things will be fixed as soon as possible.

When to use React’s built-in state management or a state management library

When writing data in React, state must be passed from the top downwards, with methods passed down if you need to change data above a component. This approach can become difficult when you need to pass data several components down, often through components that don’t need it. This post contains my thoughts on state management and state management libraries in React.

React’s built-in state management is often all you need

Before using a state management library you should carefully consider whether you need one. Often the built-in state management React provides is enough for your application and you don’t need a state management library. Until you need features such as state change tracking, the ability to share data across the application with a deeply nested component, or more advanced state debugging tools, React’s built-in state management is usually enough for most small applications.

What is Redux?

What many React developers call Redux is actually Redux with React-Redux, the official Redux bindings for React. Redux is a predictable state container consisting of a single central application state and a set of predefined actions handled by a reducer. It is a complex approach to managing application state, and while it’s been made easier with the official Redux Toolkit, the developers of Redux have the following to say on when you should use it: “In general, use Redux when you have reasonable amounts of data changing over time, you need a single source of truth, and you find that approaches like keeping everything in a top-level React component’s state are no longer sufficient.” (https://redux.js.org/faq/general#when-should-i-use-redux). Redux solves two problems for React developers: the lack of a global application state, and providing a way to track and debug state changes through the Redux Dev Tools and its time travel debugger, a tool that lets you watch the application change a little at a time to track down bugs. However, to solve one problem you must accept another. There is additional boilerplate code you have to add to your application, and Redux is hard to learn due to its many difficult-to-grasp concepts. As React continues to grow and mature, developers are given more options than React state and Redux. Wait to use Redux until you know you need it.

What is React Context?

React Context is another way to store state in React that needs to be passed across the component tree without the prop drilling approach. Similar to Redux, you have a provider element near the top of your component tree which passes data down to the rest of your application and is accessed by adding additional boilerplate code to your components. Many developers will find it a simpler approach than Redux and opt for it when they need a global state. It does not require you to add additional npm packages to your application and gives developers the potential of greater performance. There are tradeoffs, however: you cannot use the Redux Dev Tools with React Context, meaning you give up time travel debugging as well as libraries built to work with Redux. For example, Redux Persist lets you cache application state in local storage, but this library would not work with React Context. That said, a similar library could be written and may already exist.

Which should I use?

This depends on the needs of your application, and your needs as a developer. I’ve personally used Redux on smaller applications such as my Password Generator and had great success with it. The answer to this question is unique to the application that you are building and to you as a developer.

How to setup a private Tor Bridge with OBFS4Proxy

The Tor Network is a powerful tool for browsing the internet anonymously and evading online censorship and web filters. Unfortunately some organizations have developed technology that makes it difficult to access the Tor Network. This tutorial explains how to create your own private Tor Bridge to bypass these restrictions.

This tutorial focuses on configuring a Tor bridge server; it does not explain the basics of running a Linux server and it assumes that you have basic Linux shell knowledge.

If you have not used the Linux shell before I recommend completing Codecademy’s Command Line Tutorial. Consider checking out the DigitalOcean Community for more tutorials.

Step 1: Leasing a server…

Before you can set up a Tor Bridge you will need to lease a server from somewhere. Ideally this server is located in a democracy with strong digital privacy laws. The United States and most European countries are a good fit for this.

I can’t afford a server, they sound expensive, what can I do about this?

You do not have to pay for an expensive server. Cloud computing providers such as Google Cloud, Microsoft Azure, and Amazon Web Services all offer a generous free tier. These free tiers include a small Virtual Machine and some bandwidth. It’s not enough to create a high traffic relay but should be enough for a small bridge for you and a few friends.

Why should I set up my own bridge instead of using a working public one?

If the public bridges distributed through BridgeDB or Email work for you then use those. There are more people using them and depending on the observer you are more likely to blend in with the crowd than when you are the only user connecting to an IP Address.

The drawback is that public bridges are slower, and some powerful censors have the majority of public bridges blocked off. China, for example, has been extremely effective at this.

You also might not want to take away precious, limited bandwidth from users for whom a public bridge is the only option for connecting to the Tor Network.

Step 2: Considerations

Your Tor Bridge should draw as little attention as possible from outside observers. Below are a few things you should consider during setup.

  1. Use a public cloud provider such as Google Compute Engine or Microsoft Azure. While WHOIS data reveals the IP Address is a cloud customer and not the provider themselves, there are too many legitimate websites running on public cloud providers to justify blocking the entire provider. It would cause too much collateral damage.
  2. The OBFS4 protocol is designed to look like an encrypted TLS stream. TLS over TCP port 443 (used for HTTPS) is one of the most common use cases. Therefore I’ve chosen to access the bridge through port 443; a random four-digit port number might draw undesired attention from an active network observer. The main challenge here is not running Tor or the OBFS4 process as root while still allowing it to use a privileged port number below 1024. I chose to use iptables for port forwarding.
  3. By setting up a Tor Bridge on a public cloud provider, you give them the ability to create a complete history of every time you connect to the Tor Network and how much data you transferred. The cloud provider cannot see what you did while connected to Tor, however, you are still sharing timestamps of every time you connect to Tor as well as the amount of data transferred while connected to Tor. Everyone’s threat model is different, consider this when deciding if a public cloud provider is safe for you to use. Is this type of metadata worth protecting? What is the cost to protect it? How severe are the consequences if you fail to protect it?
  4. There are a few adversaries who will block an entire network (Russia blocked Google and AWS IP addresses in an effort to block Telegram) just to block one app. Whenever possible, set up a few bridges across different networks with a diverse range of IP addresses, locations, and subnets instead of just one. This again depends on how much cash you have to burn to prevent your adversary from knowing you use Tor and stopping the connections. For most adversaries, a single obfs4 bridge at any ISP should suffice.
  5. There are some theoretical attacks where someone controlling both your middle node and exit node could create a fingerprint of which bridges you use and attempt to correlate traffic with behavioral information.

Step 3: Spin-up your server

Create your server (I recommend Debian as the operating system) and connect to it over SSH.

Step 4: Initial configuration checklist

Before doing anything else you should make sure that the following tasks are taken care of:

  1. All system updates are installed.
  2. SSH Key Authentication is configured, Password Authentication should be disabled.
  3. Look over the various suggestions at https://www.digitalocean.com/community/questions/best-practices-for-hardening-new-sever-in-2017 and apply the ones you feel are relevant to you. Remember less can be more – decide what’s best for you.
  4. It might be worth whitelisting IP Addresses allowed to connect to SSH.

Step 5: Install any necessary software packages

If you are following along with this tutorial’s suggestions and are using Debian, you only need to run sudo apt-get install tor obfs4proxy. On some providers you may also need to run sudo apt-get install iptables, which we’ll use later in the tutorial for some port forwarding shenanigans.

For additional security you may wish to configure the Official Tor Project Debian Repos and compile obfs4proxy from source.

Step 6: Configure the bridge

First things first, let’s move the default torrc to a sample file so it doesn’t interfere with anything we configure. Run sudo mv /etc/tor/torrc /etc/tor/torrc.sample in your SSH shell.

Now run sudo nano /etc/tor/torrc and start writing out your torrc. The torrc is one of the simplest configuration files I’ve seen; it’s pretty straightforward. Take a look at the following example:

##
# OBFS4 Tor Bridge Configuration
##
ExitPolicy reject *:*
RunAsDaemon 1
ORPort xxxx
BridgeRelay 1
PublishServerDescriptor 0
ServerTransportPlugin obfs3,obfs4 exec /usr/bin/obfs4proxy
ServerTransportListenAddr obfs4 0.0.0.0:xxxx
ExtORPort auto
ContactInfo xxxx
Nickname xxxx

You need to pick random port numbers for the ORPort and OBFS4 port, along with setting a Nickname and providing a contact email address. You can safely leave the nickname and contact email address as xxxx since it’s a private bridge; if you were running a public bridge you would want to set a recognizable nickname and email address so people can contact you if something isn’t working quite right on your bridge.

Something to consider: to minimize the enumeration risks of running a bridge I recommend picking completely random port numbers for your ORPort and OBFS4 port. While it’s not perfect and a full port scan could still reveal that you are running a bridge, the risk of detection by your adversary drops.

Your final torrc file will look something like the following:

##
# OBFS4 Tor Bridge Configuration
##
ExitPolicy reject *:*
RunAsDaemon 1
ORPort 8817
BridgeRelay 1
PublishServerDescriptor 0
ServerTransportPlugin obfs3,obfs4 exec /usr/bin/obfs4proxy
ServerTransportListenAddr obfs4 0.0.0.0:2888
ExtORPort auto
ContactInfo Nathaniel Suchy <me@nsuchy.me>
Nickname nsuchy

Next up we will need to add a few firewall rules to allow you to access the bridge from port 443.

sudo iptables -I INPUT 1 -p tcp --dport 443 -j ACCEPT
sudo iptables -I INPUT 1 -p tcp --dport 2888 -j ACCEPT
sudo iptables -A PREROUTING -t nat -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 2888

Finally, run sudo service tor restart in your SSH Terminal and you’re ready to configure your client (Tor Browser).

Step 7: Configure your client (Tor Browser)

Finally, it’s time to configure Tor Browser to connect to your bridge. First things first, open Tor Browser and go to “Tor Network Settings”, check the box “Tor is censored in my country”, click “Provide a bridge I know”, and paste your bridge line.

What’s my bridge line?

To get your bridge line run sudo cat /var/lib/tor/pt_state/obfs4_bridgeline.txt in your SSH Terminal. Your response should look like the following:

Bridge obfs4 <IP ADDRESS>:<PORT> <FINGERPRINT> cert=<CERT INCLUDED HERE> iat-mode=0

<IP Address> will be the IP Address provided to you by your service provider. <PORT> will be 443. <FINGERPRINT> can be found by running sudo cat /var/lib/tor/fingerprint in your SSH Terminal – the response will be your bridge’s nickname followed by a space and a string of text, your bridge line should only include that string of text (leave out the nickname and space). Finally, <CERT INCLUDED HERE> was already provided when getting your bridge line, no further action is required here. Your final bridge line will look like the following:

Bridge obfs4 1.2.3.4:443 A1B2C3D4E5F6G7H8I9JK0 cert=A1B2C3D4E5F6G7H8I9JK0 iat-mode=0

You can now give Tor Browser your bridge line and connect to the Tor Network unrestricted.

My bridge is stuck on connecting…

For security reasons most large cloud providers have a strict default set of firewall rules (AWS calls them “security groups”; check your provider’s documentation for details). You will need to allow traffic on TCP 443 for your Tor bridge to work.

I’m still stuck

For more detailed information on configuring Tor Bridges, check the following resources…

If you are still stuck and can’t get your bridge working, consider joining Tor’s IRC Chat #tor at irc.oftc.net and someone there will help you.

Did changes in Chromium version 80 weaken cookie and password encryption?

This post elaborates on my question on the Information Security Stack Exchange and further lists information of concern to me. In Chromium version 80 and up, rather than passing cookies to the Windows Data Protection API (DPAPI) directly, they’re encrypted with a stronger encryption algorithm and only the encryption key is protected through the API. This post additionally serves as my notes on what I’ve found so far.

How cookie encryption in Chromium version 80 and up works…

A stronger encryption algorithm is used and Windows Data Protection API encrypts the key that’s stored in the local state file.

Starting Chrome 80 version, cookies are encrypted using the AES256-GCM algorithm, and the AES encryption key is encrypted with the DPAPI encryption system, and the encrypted key is stored inside the ‘Local State’ file.

Arun (https://stackoverflow.com/questions/60230456/dpapi-fails-with-cryptographicexception-when-trying-to-decrypt-chrome-cookies/60611673#60611673)
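To make that layering concrete, the read path looks roughly like the Python sketch below. This is a sketch under assumptions rather than authoritative Chromium code: it assumes pywin32 and pycryptodome are installed, a default Chrome profile location, and the commonly documented on-disk format (the AES key stored base64-encoded behind a “DPAPI” prefix in Local State, and cookie values prefixed with “v10” followed by a 12-byte nonce and ending with a 16-byte GCM tag):

import base64
import json
import os

import win32crypt  # pywin32
from Crypto.Cipher import AES  # pycryptodome

# Chromium 80+ stores the AES-256-GCM key, protected by DPAPI, in the Local State file
local_state_path = os.path.join(os.environ["LOCALAPPDATA"],
                                "Google", "Chrome", "User Data", "Local State")
with open(local_state_path, "r", encoding="utf-8") as f:
    local_state = json.load(f)

encrypted_key = base64.b64decode(local_state["os_crypt"]["encrypted_key"])
encrypted_key = encrypted_key[5:]  # strip the "DPAPI" prefix
# CryptUnprotectData returns a (description, data) tuple; index 1 is the raw AES key
aes_key = win32crypt.CryptUnprotectData(encrypted_key, None, None, None, 0)[1]

def decrypt_cookie_value(encrypted_value, key):
    # "v10"-prefixed values: 3-byte version, 12-byte nonce, ciphertext, 16-byte GCM tag
    nonce = encrypted_value[3:15]
    ciphertext, tag = encrypted_value[15:-16], encrypted_value[-16:]
    cipher = AES.new(key, AES.MODE_GCM, nonce=nonce)
    return cipher.decrypt_and_verify(ciphertext, tag).decode("utf-8")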

Based on my testing and what I have read (Encrypted cookies in Chrome) (DPAPI fails with CryptographicException when trying to decrypt Chrome cookies), the protection scope appears to have changed from CurrentUser to LocalMachine. My concern here is that another user on the machine, if they were to either bypass file system permissions or simply take out the hard drive and copy another user’s Chrome profile folder, would be able to use their own Windows credentials and access to DPAPI to access the other user’s cookie and password storage. My blog post How to read encrypted Google Chrome cookies in C# shows the process of decrypting cookies in Chromium 80 as well as how it contrasts with version 79 and lower.

How the Windows Data Protection API works with scopes…

The Windows Data Protection API (DPAPI) takes a byte array and encrypts it using a key derived from your Windows credentials. You can pass the byte array back to DPAPI later on when you need to access the encrypted contents. Because the data is encrypted, another user on the system (or someone who pulled the hard drive from your computer) cannot access your encrypted cookie and password data. There are two scopes of note. The CurrentUser scope is your account, meaning that only your account has permission to decrypt the data. The LocalMachine scope is more open: any account on your computer has permission to decrypt the data. (See the DataProtectionScope enum.)
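As an aside, the same distinction can be demonstrated with a short, hedged Python sketch using pywin32 (this swaps the .NET DataProtectionScope enum for the underlying Win32 flag; CRYPTPROTECT_LOCAL_MACHINE is 0x4 in wincrypt.h):

import win32crypt  # pywin32

CRYPTPROTECT_LOCAL_MACHINE = 0x4  # wincrypt.h flag behind DataProtectionScope.LocalMachine

secret = b"example secret"

# Flags=0 means CurrentUser scope: only this Windows account can decrypt the blob
user_blob = win32crypt.CryptProtectData(secret, "demo", None, None, None, 0)

# LocalMachine scope: any account on this computer can decrypt the blob
machine_blob = win32crypt.CryptProtectData(secret, "demo", None, None, None,
                                           CRYPTPROTECT_LOCAL_MACHINE)

# CryptUnprotectData returns a (description, data) tuple
print(win32crypt.CryptUnprotectData(user_blob, None, None, None, 0)[1])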

The Microsoft Windows API Docs has the following to say about how the Windows Data Protection API treats data protection scopes.

Typically, only a user with logon credentials that match those of the user who encrypted the data can decrypt the data. In addition, decryption usually can only be done on the computer where the data was encrypted. However, a user with a roaming profile can decrypt the data from another computer on the network. If the CRYPTPROTECT_LOCAL_MACHINE flag is set when the data is encrypted, any user on the computer where the encryption was done can decrypt the data. The function creates a session key to perform the encryption. The session key is derived again when the data is to be decrypted.

https://docs.microsoft.com/en-us/windows/win32/api/dpapi/nf-dpapi-cryptprotectdata#return-value

Why the encryption process was changed in Chromium version 80…

The Chromium team published a design document titled DPAPI inside the sandbox, in which they outline the issue that they’re unable to access DPAPI from within the Chromium sandbox and need an improved solution to keep user data secure. This document outlines their plan on what to change, the risks it poses, and how they would implement it. It’s well worth a read.

After a discussion on Twitter with developers of Brave Browser and a member of the Chromium Security Team I was sent a link to a commit: Rework os_crypt on Windows to not always need access to DPAPI which shows the exact changes made to Chromium (note: you’ll need an understanding of basic programming concepts and C++ to read this commit).

I don’t think the protection scope was intentionally changed but I could be wrong…

If the protection scope was not changed (or not intentionally changed), why did DPAPI previously require the CurrentUser scope to decrypt data, while now the LocalMachine scope works? I do not see anything in that commit which would indicate an intentional change. Implementation details are tricky and I am not a C++ programmer, so I could be reading the changes wrong. (Searching the commit with Ctrl+F for terms such as “current”, “user”, “local”, and “machine” didn’t find anything of interest.) It’s unclear why it worked the way it did before and I’m still looking for answers.

Further areas of research…

My BraveCookieReaderDemo was only the start of my research. My next steps include the following:

  • Setup a Virtual Machine with two restricted users running Chromium 79. Take the profiles and attempt to decrypt each other’s data through Windows Data Protection API. Record the testing and results.
  • Setup a Virtual Machine with two restricted users running Chromium 80. Take the profiles and attempt to decrypt each other’s data through Windows Data Protection API.
  • Compare differences between cookie and password encryption, also compare when a Google account is vs is not active. Passwords have different treatment and might not have the same issues.
  • Put together public code demos that demonstrate risks with encrypted cookies and passwords.