Did changes in Chromium version 80 weaken cookie and password encryption?

This post elaborates on my question on the Information Security Stack Exchange and collects my notes and findings so far. In Chromium version 80 and up, rather than passing cookies to the Windows Data Protection API (DPAPI) directly, they're encrypted with a stronger encryption algorithm and only the encryption key is protected through the API.

How cookie encryption in Chromium version 80 and up works…

A stronger encryption algorithm is used, and the Windows Data Protection API encrypts only the key, which is stored in the 'Local State' file.

Starting Chrome 80 version, cookies are encrypted using the AES256-GCM algorithm, and the AES encryption key is encrypted with the DPAPI encryption system, and the encrypted key is stored inside the ‘Local State’ file.

Arun (https://stackoverflow.com/questions/60230456/dpapi-fails-with-cryptographicexception-when-trying-to-decrypt-chrome-cookies/60611673#60611673)

Based on my testing and what I have read (Encrypted cookies in Chrome) (DPAPI fails with CryptographicException when trying to decrypt Chrome cookies), the protection scope appears to have changed from CurrentUser to LocalMachine. My concern is that another user on the machine who bypassed file system permissions, or simply pulled the hard drive and copied another user's Chrome profile folder, could use their own Windows credentials and access to DPAPI to decrypt that user's cookie and password storage. My blog post How to read encrypted Google Chrome cookies in C# shows the decryption process for cookies in Chromium 80 and how it contrasts with version 79 and lower.

How the Windows Data Protection API works with scopes…

The Windows Data Protection API (DPAPI) takes a byte array and encrypts it using a key derived from your Windows credentials. You can pass the byte array back to DPAPI later when you need to access the encrypted contents. Because the data is encrypted, another user on the system (or someone who pulled the hard drive from your computer) cannot access your encrypted cookie and password data. There are two scopes of note: the CurrentUser scope means only your account has permission to decrypt the data, while the LocalMachine scope is more open, allowing any account on your computer to decrypt it. (See DataProtectionScope Enum)
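
To make the scopes concrete, here is a minimal C# sketch using .NET's ProtectedData wrapper around DPAPI. The secret value is a placeholder of my own; the point is that data protected with DataProtectionScope.CurrentUser can only be decrypted under the same Windows account, while LocalMachine-scoped data decrypts under any account on the machine.

using System;
using System.Security.Cryptography;
using System.Text;

class DpapiScopeDemo
{
    static void Main()
    {
        byte[] secret = Encoding.UTF8.GetBytes("example cookie value");

        // CurrentUser: only this Windows account can decrypt the result.
        byte[] userScoped = ProtectedData.Protect(secret, null, DataProtectionScope.CurrentUser);

        // LocalMachine: any account on this computer can decrypt the result.
        byte[] machineScoped = ProtectedData.Protect(secret, null, DataProtectionScope.LocalMachine);

        // Decryption must name the scope the data was protected with.
        byte[] roundTrip = ProtectedData.Unprotect(userScoped, null, DataProtectionScope.CurrentUser);
        Console.WriteLine(Encoding.UTF8.GetString(roundTrip));
    }
}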

The Microsoft Windows API docs have the following to say about how the Windows Data Protection API treats data protection scopes.

Typically, only a user with logon credentials that match those of the user who encrypted the data can decrypt the data. In addition, decryption usually can only be done on the computer where the data was encrypted. However, a user with a roaming profile can decrypt the data from another computer on the network. If the CRYPTPROTECT_LOCAL_MACHINE flag is set when the data is encrypted, any user on the computer where the encryption was done can decrypt the data. The function creates a session key to perform the encryption. The session key is derived again when the data is to be decrypted.

https://docs.microsoft.com/en-us/windows/win32/api/dpapi/nf-dpapi-cryptprotectdata#return-value

Why the encryption process was changed in Chromium version 80…

The Chromium team published a design document titled DPAPI inside the sandbox. In it, they outline the issue that they're unable to access DPAPI from within the Chromium sandbox and need an improved solution to keep user data secure. The document outlines their plan for what to change, the risks it poses, and how they would implement it. It's well worth a read.

After a discussion on Twitter with developers of Brave Browser and a member of the Chromium Security Team, I was sent a link to a commit, Rework os_crypt on Windows to not always need access to DPAPI, which shows the exact changes made to Chromium (note: you'll need an understanding of basic programming concepts and C++ to read this commit).

I don’t think the protection scope was intentionally changed but I could be wrong…

If the protection scope was not changed (or not intentionally changed), why did DPAPI require the CurrentUser scope to decrypt data previously, while now the LocalMachine scope works? I don't see anything in that commit indicating an intentional change, though implementation details are tricky and I am not a C++ programmer, so I could be reading the changes wrong. (Ctrl+F searches of the commit for terms such as "current", "user", "local", and "machine" turned up nothing of interest.) It's unclear why it worked the way it did before, and I'm still looking for answers.

Further areas of research…

My BraveCookieReaderDemo was only the start of my research. My next steps include the following:

  • Set up a virtual machine with two restricted users running Chromium 79. Take the profiles and attempt to decrypt each other's data through the Windows Data Protection API. Record the testing and results.
  • Set up a virtual machine with two restricted users running Chromium 80. Take the profiles and attempt to decrypt each other's data through the Windows Data Protection API.
  • Compare the differences between cookie and password encryption, including when a Google account is and is not signed in. Passwords are treated differently and might not have the same issues.
  • Put together public code demos that demonstrate risks with encrypted cookies and passwords.

How to read encrypted Google Chrome cookies in C#


Recently at work I needed to write a few bots/scrapers for websites that do not have an official API or bot support. Emulating browser-based logins without triggering anti-bot checks is challenging, so to get around this issue we log in from a web browser on the Windows Server and copy its cookies from the SQLite database that stores them. This blog post explains how to read encrypted Google Chrome cookies in C# programs.

Reading cookies from Google Chrome (or other web browsers installed on the system) is controversial in some programming communities, given the risk of this knowledge being used in malware. As a result, some communities are not inclined to answer this question on ethical grounds. I see the knowledge as a tool, and how you use it is your decision.

System Requirements

This blog post is written assuming you have Google Chrome (or a fork such as Brave Browser) built on Chromium 80 or newer installed. It's also written for Windows users, as the Windows Data Protection API is used to protect cookies: although the code targets .NET Core, it won't run on macOS or Linux without significant changes because DPAPI doesn't exist on those platforms. I've only tested this code on Windows 10 and Windows Server 2019. I did most of my testing on Brave (then switched the paths to Google Chrome on the server).

How cookies were encrypted in Chrome version 79 and lower

Prior to the release of Google Chrome version 80, the software relied directly on the Windows Data Protection API to encrypt and decrypt the value of cookies. Any time you needed to encrypt or decrypt a cookie, you would pass the value to the Windows Data Protection API and await its response.

The encryption is designed to prevent other users on the same computer from copying your cookies and using them to access your online accounts. Your Windows password and some other local data are used to derive a key for the Windows Data Protection API (DPAPI), and without your Windows password only a local administrator could access data protected with DPAPI.

You were able to use the following code snippet to decrypt a cookie in Chrome 79 and lower. You'll of course need to fetch the cookie from SQLite, although that's outside the scope of this blog post.

using System.Security.Cryptography;
...
// DPAPI decrypts the cookie value directly, under the CurrentUser scope.
byte[] plaintext = ProtectedData.Unprotect(cookie.EncryptedValue, null, DataProtectionScope.CurrentUser);
...

How Google Chrome version 80 changes the cookie encryption process

According to Arun on StackOverflow: “Starting Chrome 80 version, cookies are encrypted using the AES256-GCM algorithm, and the AES encryption key is encrypted with the DPAPI encryption system, and the encrypted key is stored inside the ‘Local State’ file.”.

This means that passing a cookie to DPAPI directly no longer works. Instead, only the encryption key is encrypted using DPAPI. To decrypt a cookie's encrypted value you need to read the encryption key from the 'Local State' file, decrypt it with DPAPI, and then use other tools to perform AES256-GCM decryption. These changes were made to improve the security of the Chromium platform, although they break many third-party tools that rely on data from Chromium databases.

C# has a thriving package ecosystem, so finding packages to handle this was easy. Your resulting code should look something like the following…

using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Linq;
using Microsoft.EntityFrameworkCore;
using System.Security.Cryptography;
using Newtonsoft.Json.Linq;
using Org.BouncyCastle.Crypto;
using Org.BouncyCastle.Crypto.Engines;
using Org.BouncyCastle.Crypto.Modes;
using Org.BouncyCastle.Crypto.Parameters;

namespace BraveBrowserCookieReaderDemo
{
    public class BraveCookieReader
    {
        public IEnumerable<Tuple<string, string>> ReadCookies(string hostName)
        {
            if (hostName == null) throw new ArgumentNullException("hostName");

            using var context = new BraveCookieDbContext();

            var cookies = context
                .Cookies
                .Where(c => c.HostKey.Equals(hostName))
                .AsNoTracking();

            // Big thanks to https://stackoverflow.com/a/60611673/6481581 for answering how Chrome 80 and up changed the way cookies are encrypted.

            // The AES key lives in the 'Local State' JSON file in the User Data folder.
            string encKey = File.ReadAllText(System.Environment.GetEnvironmentVariable("LOCALAPPDATA") + @"\BraveSoftware\Brave-Browser\User Data\Local State");
            encKey = JObject.Parse(encKey)["os_crypt"]["encrypted_key"].ToString();
            // The Base64-decoded key is prefixed with the 5-byte literal "DPAPI"; strip it before calling Unprotect.
            var decodedKey = System.Security.Cryptography.ProtectedData.Unprotect(Convert.FromBase64String(encKey).Skip(5).ToArray(), null, System.Security.Cryptography.DataProtectionScope.LocalMachine);

            foreach (var cookie in cookies)
            {
                var data = cookie.EncryptedValue;

                // Encrypted values begin with the 3-byte version tag "v10".
                var decodedValue = _decryptWithKey(data, decodedKey, 3);

                yield return Tuple.Create(cookie.Name, decodedValue);
            }
        }


        private string _decryptWithKey(byte[] message, byte[] key, int nonSecretPayloadLength)
        {
            const int KEY_BIT_SIZE = 256;
            const int MAC_BIT_SIZE = 128;
            const int NONCE_BIT_SIZE = 96;

            if (key == null || key.Length != KEY_BIT_SIZE / 8)
                throw new ArgumentException(String.Format("Key needs to be {0} bit!", KEY_BIT_SIZE), "key");
            if (message == null || message.Length == 0)
                throw new ArgumentException("Message required!", "message");

            using (var cipherStream = new MemoryStream(message))
            using (var cipherReader = new BinaryReader(cipherStream))
            {
                // Skip the non-secret version tag ("v10"), then read the 12-byte GCM nonce.
                var nonSecretPayload = cipherReader.ReadBytes(nonSecretPayloadLength);
                var nonce = cipherReader.ReadBytes(NONCE_BIT_SIZE / 8);
                var cipher = new GcmBlockCipher(new AesEngine());
                var parameters = new AeadParameters(new KeyParameter(key), MAC_BIT_SIZE, nonce);
                cipher.Init(false, parameters);
                // The remainder of the message is the ciphertext plus the 16-byte GCM tag.
                var cipherText = cipherReader.ReadBytes(message.Length - nonSecretPayloadLength - NONCE_BIT_SIZE / 8);
                var plainText = new byte[cipher.GetOutputSize(cipherText.Length)];
                try
                {
                    var len = cipher.ProcessBytes(cipherText, 0, cipherText.Length, plainText, 0);
                    cipher.DoFinal(plainText, len);
                }
                catch (InvalidCipherTextException)
                {
                    return null;
                }
                return Encoding.UTF8.GetString(plainText); // Cookie values are UTF-8 text.
            }
        }
    }
}

Solution

You can access my full and final solution on GitHub (@irlcatgirl/BraveCookieReaderDemo), where I used the techniques discussed in this post to write a complete application that reads cookies and their encrypted values from Brave Browser (a privacy-friendly fork of Google Chrome). It also covers the details not explained here, such as using EF Core to access the browser's SQLite database and creating a temporary copy of it. I hope you found this post informative and helpful.
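
For readers who don't want to dig through the repository, here is a minimal sketch of what that EF Core plumbing might look like. The column names (host_key, name, encrypted_value) match Chromium's cookies table, but the entity name, temp-copy location, and simplified key mapping are my assumptions, not necessarily what the repo does.

using System.IO;
using Microsoft.EntityFrameworkCore;

namespace BraveBrowserCookieReaderDemo
{
    public class Cookie
    {
        public string HostKey { get; set; }
        public string Name { get; set; }
        public byte[] EncryptedValue { get; set; }
    }

    public class BraveCookieDbContext : DbContext
    {
        public DbSet<Cookie> Cookies { get; set; }

        protected override void OnConfiguring(DbContextOptionsBuilder options)
        {
            // Work on a temp copy; the browser holds a lock on the live database.
            var source = System.Environment.GetEnvironmentVariable("LOCALAPPDATA")
                + @"\BraveSoftware\Brave-Browser\User Data\Default\Cookies";
            var temp = Path.Combine(Path.GetTempPath(), "Cookies");
            File.Copy(source, temp, overwrite: true);
            options.UseSqlite($"Data Source={temp}");
        }

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            // Map to the column names Chromium actually uses in its cookies table.
            modelBuilder.Entity<Cookie>(e =>
            {
                e.ToTable("cookies");
                e.HasKey(c => new { c.HostKey, c.Name }); // Simplified; real uniqueness also involves the path column.
                e.Property(c => c.HostKey).HasColumnName("host_key");
                e.Property(c => c.Name).HasColumnName("name");
                e.Property(c => c.EncryptedValue).HasColumnName("encrypted_value");
            });
        }
    }
}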


Easily paginating your EntityFramework Core Queries with C# Generics

Recently I had the challenge of paginating a web application I wrote, as the tables displaying data were getting quite long and I needed a way to display things more cleanly. This post details how I solved the problem using C# generics, and it includes plenty of code snippets so you can follow along in your own application.

The code shown in this post was written for my client Universal Layer and is released by them under a BSD 3-Clause License.

What is pagination?

Pagination is the process of taking a collection of objects and splitting it into pages. For example, you might have a book which contains hundreds of pages but only fifty words per page. You cannot put an entire book on a single piece of paper, and you should not attempt the equivalent in computer software. Rather, you should put a set amount of words on each page, create a list of the pages, and have a way to easily switch between them. A physical book does this by binding its pages together; pagination in software works similarly by binding the pages of data into an easy-to-use object.

How to paginate in computer software

Pagination requires three pieces of information: a collection of data to paginate, the number of results per page (a limiter), and the specific page being requested. For example, with 100 items at 10 results per page there are 10 pages, and requesting page 3 means skipping the first 20 items and taking the next 10. Object-oriented languages such as C# make this easy: you can pass the data to an object's constructor, have it run the calculations on your behalf, and expose the results as read-only properties.

The properties generated by the constructor are as follows:

  • Item Count: the number of items in the collection. When passing an IQueryable this means the number of rows in a database table.
  • Page Count: the number of items divided by the number of results per page, rounded up.
  • Skip: the number of items to skip in the SQL query.
  • Take: the number of items to select in the SQL query.
  • Page of Results: the selected page of items.
  • First Page: the number of the first page.
  • Last Page: the number of the last page.
  • Next Page: the number of the next page.
  • Current Page: the number of the current page.
  • Previous Page: the number of the previous page.

Security Considerations for User Configurable Pagination

Some developers may allow users to choose the number of results per page in a table, API, etc. Be sure to set a reasonable maximum number of results per page and enforce it in your backend code. Failure to add a safe maximum limit could allow large queries that overwhelm the database server, resulting in a denial-of-service vulnerability.
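
A minimal way to enforce that limit, assuming the PagedResults<Customer> service shown later in this post and an arbitrary ceiling of 100 that I chose for illustration:

// Clamp user-supplied input to a sane range before building the query.
// The ceiling of 100 is an example value; pick what your database can comfortably serve.
resultsPerPage = Math.Clamp(resultsPerPage, 1, 100);
var page = new PagedResults<Customer>(_Context.Customers, pageNumber, resultsPerPage);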

Solution

I decided that the best way to solve this problem was to pass the necessary data to the constructor and have it do all of the math and fill in the properties.

From there, getting data from the object's properties is much easier than calculating it in each controller, and it results in shorter code.

The final result of my efforts was a generic class; the code for it is below.

PagedResults.cs

// This code is released by Universal Layer under a BSD 3-Clause License
// https://github.com/ulayer/PagedResults.cs
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;

namespace Your.Application.Models
{
    public class PagedResults<T>
    {
        public int ItemCount { get; }
        public int PageCount { get; }
        
        public int Skip { get; }
        public int Take { get; }
        
        public IEnumerable<T> PageOfResults { get; }
        
        public int FirstPage { get; }
        public int LastPage { get; }
        
        public int NextPage { get; }
        public int CurrentPage { get; }
        public int PreviousPage { get; }

        public PagedResults(IQueryable<T> results, int pageNumber, int resultsPerPage)
        {
            ItemCount = results.Count();
            PageCount = (int) Math.Ceiling((double) ItemCount / resultsPerPage);
            
            Skip = (pageNumber - 1) * resultsPerPage;
            Take = resultsPerPage;
            
            PageOfResults = results.Skip(Skip).Take(Take).ToList();
            
            FirstPage = 1;
            LastPage = PageCount == 0 ? 1 : PageCount; // An empty collection still has one (empty) page.
            
            NextPage = Math.Min(pageNumber + 1, LastPage);
            CurrentPage = pageNumber;
            PreviousPage = Math.Max(pageNumber - 1, FirstPage);
        }
    }
}

I also worked on a few examples for this post of how you could take advantage of this class. I hope you find them useful when integrating this class into your applications.

Within a service

It is a well-accepted design pattern to call EF Core from within a service. It's still possible to get an IQueryable by injecting EF Core's context into your page, but you shouldn't do this when services exist. By asking a service for data from the database you keep your Razor Pages code-behind more organized: even if several methods on the EF Core context must be called to get the desired data, that complexity doesn't clutter your code-behind.

Where we initially created a generic class, we can now use it as a PagedResults<Customer> class to paginate data from the Customer class. The Customer class can come from anywhere; the important thing is that the collection of Customers is passed to our PagedResults<Customer> class as an IQueryable<T> (EF Core does this for you). The helpful thing about a generic class is that we can change the type whenever we need paged results for a new data type, without having to write an additional PagedResults class to handle it.

public PagedResults<Customer> GetPagedResults(int pageNumber, int resultsPerPage)
{
    return new PagedResults<Customer>(_Context.Customers, pageNumber, resultsPerPage);
}

Passing the results to a RazorPages Partial View

I came up with the following partial view for use in Razor Pages. It requires that the model of the calling page have a property called PagedResults. The partial view reads from the model dynamically, so if the property doesn't exist in the expected model your program will throw an exception. As long as PagedResults exists in the page's model, you can inject the pagination anywhere in your view as <partial name="Shared/_Pagination" />.

It's important to note that this is a partial view, not a full Razor Page. To keep things simple, Shared/_Pagination.cshtml exists but Shared/_Pagination.cshtml.cs does not.

<nav aria-label="Page navigation example">
    <ul class="pagination">
        @if (@Model.PagedResults.CurrentPage != 1) // Show a link to the first page as well as previous page as long as we are not on the first page.
        {
            <li class="page-item"><a href="./@Model.PagedResults.FirstPage" class="page-link">First</a></li>
            <li class="page-item">
                <a href="./@Model.PagedResults.PreviousPage" class="page-link">
                    <span aria-hidden="true">&laquo;</span>
                    <span class="sr-only">Previous</span>
                </a></li>
        }
        
        @{ var pageCount = @Model.PagedResults.PageCount; }
        
        @for (int i = 1; i <= pageCount && i < 10; i++)
        {
            var currentPage = @Model.PagedResults.CurrentPage;
            if (pageCount > 10)
            {
                var activePage = ((currentPage - 5) + i);
                var active = activePage == currentPage ? "active" : string.Empty;
                if (activePage <= (pageCount - 1) && (activePage > 0))
                {
                    <li class="page-item @active"><a href="./@activePage" class="page-link">@activePage</a></li>
                }
            }
            else
            {
                var active = i == currentPage ? "active" : string.Empty;
                <li class="page-item @active"><a href="./@i" class="page-link">@i</a></li>
            }
        }

        @if (@Model.PagedResults.CurrentPage != @Model.PagedResults.LastPage)
        {
            <li class="page-item"><a href="./@Model.PagedResults.NextPage" class="page-link">
                <span aria-hidden="true">&raquo;</span>
                <span class="sr-only">Next</span>
            </a></li>
            <li class="page-item"><a href="./@Model.PagedResults.LastPage" class="page-link">Last</a></li>
        }
    </ul>
</nav>


How to use Tor as your System DNS Resolver

Recently I posted criticism of Mozilla's new DNS over HTTPS feature, given that they disabled its primary security functionality. The user isn't even warned and can be secretly spied on. This blog post details how to use Tor as your system DNS resolver, with instructions for each operating system plus instructions for disabling Firefox's dangerous DNS over HTTPS implementation. If you'd like to read why Firefox's implementation of DNS over HTTPS is harmful, you may read my previous blog post.

Note for Firefox Users

By default, Mozilla has DNS over HTTPS enabled on networks that do not request the feature be disabled. Visit about:config and set network.trr.mode to 5 to turn the feature off completely. I do not trust Mozilla's implementation, and you shouldn't either.

Why not use Tor Browser?

Where possible you should download Tor Browser and use it instead. Unfortunately, many websites block the Tor network or show its users a large number of CAPTCHAs (imagine having to check "I'm not a robot" every few minutes; that's the reality for many Tor Browser users).

This alternative solution at least can't be quietly switched off by an uncomfortable network administrator the way Firefox's DNS over HTTPS can, and website owners still see your real IP address, which reduces the number of CAPTCHAs you'll encounter. I will emphasize that it is not as private as the Tor Browser Bundle; please keep this in mind if you use this approach.

How to use Tor as your System DNS Resolver on Windows 10

At this time the tooling available on Windows 10 is not in a state where I'm comfortable writing out steps, as I am unsure of several of the security implications. As a temporary workaround, I recommend buying a Raspberry Pi, setting it up with Linux and a DNS resolver, and following the steps below for using Tor on Linux.

How to use Tor as your System DNS Resolver on macOS

Step 0) Install the Homebrew Package Manager

Open the Terminal app on macOS and run the following command: /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)". Follow the prompts and let the package manager install itself. This may take a few minutes to download and configure everything, as Homebrew relies on the Xcode developer tools, which can be quite large.

Step 1) Install the Tor and DNSMasq Homebrew Packages

To get started you will need to run two commands: brew install tor and brew install dnsmasq. These install packages for Tor and for DNSMasq (a lightweight DNS forwarder/proxy).

Step 2) Enable Tor’s DNS Resolver

Open /usr/local/etc/torrc with a text editor of your choice. I recommend running nano as root to avoid any permission issues, so run sudo nano /usr/local/etc/torrc and add the line DNSPort 9053 at the bottom. Then run brew services restart tor to restart the Tor service and reload the configuration. This also ensures the resolver is enabled.
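
The torrc addition is a single line. Port 9053 is the value used throughout this post; any free unprivileged port would work.

# /usr/local/etc/torrc
# Expose Tor's DNS resolver on localhost port 9053.
DNSPort 9053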

Step 3) Configure DNSMasq

You will need to configure DNSMasq to send your DNS queries to the Tor DNS resolver, as it runs on a non-standard port. To do this, run nano /usr/local/etc/dnsmasq/dnsmasq.conf and add the following lines at the bottom of the file: no-resolv, which stops dnsmasq from reading upstream DNS servers out of /etc/resolv.conf, and server=127.0.0.1#9053, which forwards queries to Tor. Save the file and run sudo brew services restart dnsmasq (since dnsmasq listens on a privileged port (a port below 1024), it must run as root or as a user with special permissions; this is the standard configuration for dnsmasq on macOS systems).
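
For reference, here is what those dnsmasq.conf additions look like together:

# /usr/local/etc/dnsmasq/dnsmasq.conf
# Don't read upstream DNS servers from /etc/resolv.conf.
no-resolv
# Forward every query to Tor's DNS resolver on port 9053.
server=127.0.0.1#9053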

How to use Tor as your System DNS Resolver on Linux

Configuring Tor as your System DNS Resolver on Linux is a bit complex. These instructions only have Debian and Ubuntu in mind. If you use a different Linux distribution you’ll need to do your own research to get things working.

Install Tor

For security reasons you should always install Tor from the Tor Project's official repository; the version in the Ubuntu/Debian apt repos is outdated at best. Follow the Tor Project's documentation to add their apt repository and install the tor package, then configure it:

  • Open /etc/tor/torrc with a text editor of your choice. I recommend running nano as root to avoid any permission issues, so run sudo nano /etc/tor/torrc and add the line DNSPort 9053 at the bottom. Then run sudo service tor restart to restart the Tor service and reload the configuration. This also ensures the resolver is enabled.

Install dnsmasq to accept requests and forward them to the Tor DNS Resolver

You will need to configure DNSMasq to send your DNS queries to the Tor DNS resolver, as it runs on a non-standard port. To do this, run sudo nano /etc/dnsmasq.conf and add the following lines at the bottom of the file: no-resolv, which stops dnsmasq from reading upstream DNS servers out of /etc/resolv.conf, and server=127.0.0.1#9053, which forwards queries to Tor. Save the file and run sudo service dnsmasq restart. I recommend binding dnsmasq to specific interfaces and using the IP address 127.0.0.54 to avoid conflicts with other services running on your machine.
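
Putting that together, here is a sketch of the additions to /etc/dnsmasq.conf. listen-address is dnsmasq's standard option for binding to a specific address, and 127.0.0.54 follows the suggestion above; adjust both to taste.

# /etc/dnsmasq.conf
# Bind to a dedicated loopback address to avoid conflicts on port 53.
listen-address=127.0.0.54
# Don't read upstream DNS servers from /etc/resolv.conf.
no-resolv
# Forward every query to Tor's DNS resolver.
server=127.0.0.1#9053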

Remove systemd-resolved and have network manager use dnsmasq instead

Newer versions of Ubuntu have integrated systemd-resolved, a built-in caching DNS resolver, into systemd. This can cause problems with our DNS setup, so it's best to disable it where possible. These instructions are adapted from an answer on AskUbuntu; I've tested them on my personal computer but didn't write or research them. Be aware that this will break some corporate VPN clients (see the LaunchPad issue).

  • Run sudo systemctl disable systemd-resolved and sudo systemctl stop systemd-resolved in a terminal.
  • Next run sudo nano /etc/NetworkManager/NetworkManager.conf and add the following line under the [main] section: dns=default.
  • Run sudo rm /etc/resolv.conf and then sudo systemctl restart NetworkManager. Don't worry; this will create a new resolv.conf file.

Final Steps

Be sure to go into your network settings and set your DNS resolver to 127.0.0.54, and then things will work as expected.
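
If your distribution uses NetworkManager, you can make the same change from the terminal. This is a sketch; "Wired connection 1" is a placeholder for whatever name nmcli connection show lists on your machine.

# Use our local dnsmasq instance and ignore DNS servers pushed by DHCP.
sudo nmcli connection modify "Wired connection 1" ipv4.dns 127.0.0.54 ipv4.ignore-auto-dns yes
# Re-activate the connection so the change takes effect.
sudo nmcli connection up "Wired connection 1"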