No, Google isn’t banning adblockers in Google Chrome

Over the past year there has been panic that Google plans to ban the use of ad blockers in Google Chrome. This is largely caused by misleading blog posts claiming that Google Chrome’s Manifest V3 deprecates the blocking webRequest API for everything except the enterprise deployments that require it. Adblockers today use this API to intercept every outgoing request and cancel the ones that would load advertisements, so Google Chrome never loads or renders them. This approach has worked well for the past several years in extensions such as AdBlock Plus and uBlock Origin. The problem is that without strict limits on this API, a Chrome extension can abuse its privileged access to a page to steal data such as credit card numbers and passwords.

Google Chrome could put strict limits on which extensions are still allowed to access the API, although that limits competition: smaller developers wouldn’t be able to take a new approach to adblocking without jumping through huge hurdles. It’s also possible that Google will allow a limited set of allowlisted extensions to continue using the webRequest API for the time being. The API is not going away, just being limited, so Google could make an exception for specific extensions, although I do not agree with this approach and believe allowlisting the big adblockers would do more harm than good. Google seeks to improve the security of Google Chrome; these changes are not about adblockers but have resulted in a controversy.
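To make that concrete, here is a minimal sketch of how an adblocker uses the blocking webRequest API under Manifest V2. The ad host is a hypothetical placeholder; real extensions match requests against large filter lists instead.

chrome.webRequest.onBeforeRequest.addListener(
  (details) => {
    // The extension sees every request before it is sent and may cancel it.
    // 'ads.example.com' stands in for an entry from a real filter list.
    if (details.url.includes('ads.example.com')) {
      return { cancel: true };
    }
    return {};
  },
  { urls: ['<all_urls>'] },
  ['blocking'] // this blocking mode is what Manifest V3 restricts
);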

Google controls the distribution platform

It’s tempting to blame Google’s business team and claim they want to remove the ability to use an adblocker in Google Chrome, but even if they want that, they do not need to remove an API to do so. Google controls the Chrome Web Store and can simply stop signing updates for, remotely disable, and stop distributing ad-blocking browser extensions. Why would Google go through the trouble of removing an API when they can easily ban adblockers from the Chrome Web Store and delete existing ones from users’ Chrome installs? The ability to instantly ban adblocking on Google Chrome already exists, and Google has not used this power.

A new approach to adblocking

Google is building a new API for extensions which allows them to pass a list of content to block; Google Chrome itself will then perform the blocking without allowing the adblocker to view page content. This is similar to Apple’s Content Blocker API, which does the same thing. The current proof of concept has some limitations, such as a cap on the number of rules an extension can add, which need to be addressed before these changes take effect. There is some concern that Google will limit the ability to block Google’s own ads, but this is unlikely. Even if they did, forks of Google Chrome (such as Brave Browser and Microsoft Edge) exist whose developers have stated they will not disable the webRequest API.
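For a sense of what this looks like in practice, here is a sketch of a declarative blocking rule in the style of the proposed declarativeNetRequest API. The URL filter is a hypothetical placeholder, and the exact schema may change before the API ships.

{
  "id": 1,
  "priority": 1,
  "action": { "type": "block" },
  "condition": {
    "urlFilter": "||ads.example.com^",
    "resourceTypes": ["script", "image"]
  }
}

The browser evaluates rules like this one itself and blocks matching requests, so the extension never needs to read the contents of the page.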

Adblocking will probably be faster and safer in the end

There are a lot of doomsday predictions where Google ends the ability to use adblockers, but these scenarios are unlikely. Rather, I predict that adblocking on Google Chrome will become faster and safer for users. Remember that adblockers are a security product, and they should encourage changes to browser architecture and APIs that protect users even when it requires significant changes to their own product. Google has provided over a year of advance notice, so this change is not going to suddenly destroy adblockers as long as an update over the next year or two is prepared to work with the new API. I think there’s a lot to look forward to in the future of adblockers on Google Chrome.

How to persist Redux State after closing and reopening a React application

The default behavior after closing a React application, or any website for that matter, is for the page’s local state to be lost. In a Single Page Application framework such as React this could mean you are logged out or certain setting changes are lost. This blog post explains how to persist state using the npm package redux-persist; it assumes that you have a React application which manages its state with Redux.

Install the package into your application

Luckily there is an npm package for this problem called redux-persist. This package works by taking your Redux reducer and persisting its data into some form of storage; by default this is the browser’s local storage, though advanced users can persist to an API instead. You will need to install the package with the package manager you prefer before you can start using it.
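I use npm, so for me the install step is a single command; substitute the equivalent command if you prefer Yarn or another package manager.

npm install redux-persist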

Update your React root component

For most people this file is called App.js; you’ll need to update it to use redux-persist’s PersistGate. This is a wrapper component which prevents the rest of the application from loading until state is restored from storage. For those who load state from an API, you can provide an optional loading component. For simplicity I did not include react-router in my example; however, it should be nested inside of the PersistGate if you need to use it. Your React root component should look something like this.

import React from 'react';
import ReactDOM from 'react-dom';
import * as serviceWorker from './serviceWorker';
import { Provider } from 'react-redux';
import { PersistGate } from 'redux-persist/integration/react';
import App from './App';
import { store, persistor } from './store';
import './style/App.css';

ReactDOM.render(
  <Provider store={store}>
    <PersistGate loading={null} persistor={persistor}>
      <App />
    </PersistGate>
  </Provider>,
  document.getElementById('root')
);

// If you want your app to work offline and load faster, you can change
// unregister() to register() below. Note this comes with some pitfalls.
// Learn more about service workers: https://bit.ly/CRA-PWA
serviceWorker.unregister();

Update your reducer

You will need to use redux-persist’s persistReducer() method to wrap your reducer; this allows redux-persist to restore your persisted state on top of the initial state. As recommended by the Redux developers I use the Redux Toolkit; if you haven’t used it before I highly recommend checking it out. As a brief overview, a slice is an object managed by the Redux Toolkit which exports your actions and your reducer. redux-persist does not care where the reducer comes from, only that you wrap it with persistReducer(). The end result will look something like this.

import { configureStore } from '@reduxjs/toolkit';
import { persistStore, persistReducer } from 'redux-persist';
import storage from 'redux-persist/lib/storage';
import slice from './slice';

const persistConfig = {
    key: 'root',
    storage,
    blacklist: [
        'featureFlags',
        'temporaryDataSuchAsPasswords',
    ]
};

const reducer = slice.reducer;
const actions = slice.actions;

const persistedReducer = persistReducer(persistConfig, reducer);

export const store = configureStore({ reducer: persistedReducer });
export const persistor = persistStore(store);

export const { actionNamesGoHere } = actions;

Complete

Once your application loads through the PersistGate component and the reducer is wrapped with redux-persist’s persistReducer() method, you’re all done and your app will now persist its Redux state into local storage. For more information check out the README file, which explains storage and persistor options for advanced use cases.
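As one example of those options, and assuming the same store setup as above, you can swap the default local storage engine for the session storage engine that redux-persist ships, so persisted state clears when the browser session ends:

import storageSession from 'redux-persist/lib/storage/session';

const persistConfig = {
    key: 'root',
    storage: storageSession,
};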

What to do if you lose your Mastodon instance’s environment variables file

One of the most important files loaded when running a Mastodon instance is your environment variables file, commonly named .env.production. This file includes several secrets that, if lost, will break your ability to easily migrate your instance even if you have a recent files/database backup. This post explains how to recover your instance so you don’t have to start over and upset your users.

Follow all of the normal steps to migrate over to a new server

I’m assuming that if you’ve lost this file you are restoring from backups. Follow the steps in the Mastodon Documentation to reinstall Mastodon and import the database and files; you’ll need to cover these bases before you can start any sort of recovery effort. This will be the easiest part of disaster recovery.

Disable two-factor-auth for all users

Since you’ve lost your cryptographic secrets, the two-factor-auth secrets stored in the database can no longer be decrypted and are now invalid. You will have to tell your users what happened and ask them to remove their account from their authenticator apps and add it back by enabling two-factor-auth again in account settings. Unfortunately there is not an easy command to do this; rather, you will have to drop into a postgresql console and overwrite the affected columns in the users table. Be careful with this console and take a backup first. If anything goes wrong you’ll want to be able to recover.
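One way to take that backup is with pg_dump before running any updates; the output path below is just an example.

sudo -u postgres pg_dump -Fc mastodon_production > /tmp/mastodon_production.dump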

sudo -u postgres psql mastodon_production

UPDATE users SET encrypted_otp_secret='';
UPDATE users SET encrypted_otp_secret_iv='';
UPDATE users SET encrypted_otp_secret_salt='';
UPDATE users SET otp_required_for_login=false;

Backup the environment variables file

Once your service is recovered, make sure to back up your environment variables file so you don’t have this problem again. Within a few hours your instance should be back up and running as normal without any major disruptions.
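For instance, a simple way to keep a copy off the server is to scp it somewhere safe; the source path assumes a standard Mastodon install, and the destination host and path are hypothetical.

scp /home/mastodon/live/.env.production backup@backup-host:mastodon-backups/env.production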

LGBTQIA.is outage has been resolved (mostly)

Hi everyone, I have good news: the outage on LGBTQIA.is has been resolved and the site is almost fully operational again. This post describes the recovery progress and steps.

Steps Taken

So far I’ve been able to reinstall Mastodon, restore the postgresql database, regenerate feeds, and reconnect Wasabi S3 for image/video uploads. Additionally, I can message accounts on other instances without issue (federation works again).
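For anyone recovering a similar setup, the feed regeneration step was done with Mastodon’s tootctl tool; the command below assumes a standard install under /home/mastodon/live.

RAILS_ENV=production bin/tootctl feeds build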

Some things are temporarily broken

Unfortunately, due to a misconfiguration in the backup system, the .env.production file was lost. As a result certain data is permanently lost, as I no longer have the encryption keys to decrypt it. This means that you will need to re-enable two-factor-auth; I have cleared the old two-factor-auth data from the database. This approach worked for my account and has been applied to all accounts in the database. Additionally, I need to generate a new set of SMTP credentials to fix account confirmation and password reset emails. Finally, status lengths are limited to 500 characters until I can reapply my changes to the limit.

Things will get better over the next week

Unfortunately everything won’t immediately be back to normal, and a fair amount of debugging and fixing will still occur at random intervals. Please expect temporary outages as I make configuration and code changes. I’m working to quickly resolve the situation and will do my best to get things back to normal ASAP. Meanwhile, enjoy the instance and know that your data remains safe and secure.

Still having issues?

If you are still having issues accessing your account for some reason please send an email to me@lunorian.is and I will get back to you as soon as possible to assist in recovering your account.

LGBTQIA.is Temporary Outage

Hey everyone, I’m posting this on my Twitter and redirecting the main site to it shortly, but I wanted to give an explanation of why the instance is down for the next day or two.

What happened?

A former friend of mine who hosted the site on their colocation in Chicago no longer wishes to associate with me or host my services, and they requested I move the site elsewhere. For those who know who this person is, please do not harass them; they’re not obligated to provide me with free services and I would not expect anyone to do so. It’s sad how things turned out with them, but I’m moving on. I have already purchased a new server and do not need additional donations to continue running the instance.

Within the next few days I will restore the database backup I made to a new server, although given the nature of the instance it may take a while to get everything back up and running. The service configuration is extremely complex and will take several hours on its own.

Was there any data loss?

All media content is stored on my personal Wasabi Cloud account; this includes uploaded images and videos, as well as daily database backups. None of your data is at risk, and a command was run to erase the data from the old server.

What’s your current estimate for when the service is restored?

I am hoping that within a day or two things can go back to normal with the instance and users can be happy again. Until then, stay safe, and apologies for the inconvenience. Things will be fixed as soon as possible.

“If you go Mac you never go back” Apple’s biggest lie…

If you’re an Apple user or have friends who are, you’ve probably heard the line “If you go Mac you never go back” at some point. I am going to discuss that argument and explain some of the reasons consumers leave Apple’s ecosystem every day.

FOMO: Fear of missing out

This statement has been spread by consumers alone to create a fear of missing out. People buy Apple products because they do not want to miss out on experiences with their friends and family. From the iPhone, to the iPad, to the Mac, Apple creates something that hardware alone cannot: unique experiences through software. Whether these are good or bad for consumers is a topic of debate. The statement has created a cult-like mindset amongst Apple’s users which is not productive or healthy, and it attacks the rest of the industry for the sole purpose of switching everyone to the Mac ecosystem.

Apple gives users limited hardware choices that cannot meet all use cases

I’m what you may call a power user. I write software for a living, and as a result many of the tools I use do not run well on the Mac’s limited hardware. Aside from buying a Mac Pro, you get low-power Intel processors designed to give consumers good battery life over raw compute power. This is a balanced approach which is great for users who want to use their devices for social media, video streaming, and simple office tasks. It begins to fail when you have more complex tasks that require more powerful processors and graphics cards, such as video editing and rendering. And while it has improved over the past several years, the Mac line of computers is by design not ready to provide a pleasant gaming experience.

Unfortunate misconceptions that consumers have

The most common misconception I hear when recommending that people buy a Microsoft Windows device is that it’s vulnerable to viruses. While it is true that Microsoft Windows is a more popular target for malware developers, you can get a virus on a Mac. Here’s a helpful list of recent viruses targeting the Mac ecosystem of products. There have also been several recent advances in Microsoft Windows that improve its security and protect its users from malware. There’s also the misconception that Windows computers are very slow. Older ones can be when too much is running, but modern processors can handle larger workloads, and Windows users can buy more expensive processors for larger, more compute-intensive workloads.

Users do and will go back from the Mac

Due to my growing needs, I use Windows computers more than I use Mac computers. It’s sad to see that consumers have created what amounts to a stigma around Mac users considering going back to Windows machines. From video production, to gaming, to systems programming, there are a lot of tasks the Mac is not an ideal platform for, and people leave the Mac all the time for platforms that better meet their needs. Saying “you never go back” isn’t true, as often you do, or at least adopt an additional platform for certain tasks.

Why address this line? Why not leave Apple users who say this be?

I write this article because I heard someone say “If you go Mac you never go back” today; the line itself creates misconceptions, is wrong, and is somewhat infuriating. People should be free to use the devices they love and no one should stop them, and this includes Apple users wanting to use Apple devices. That said: I want to see the peer pressure for people to switch platforms go away.