Algebra

Fundamentals

    Binary Exponentiation
    Factoring Exponentiation
    Euclidean algorithm for computing the greatest common divisor
    Extended Euclidean Algorithm
    Linear Diophantine Equations

Fibonacci Numbers

Math behind why greedy works

Among all representations with the minimum number of Fibonacci numbers, we want to find the lexicographically largest one.

Facts about shortest sequence:

  • uses each Fibonacci number at most once, since fibo[i] * 2 = fibo[i + 1] + fibo[i - 2] lets a repeated term be replaced by two non-adjacent ones
  • never uses two consecutive Fibonacci numbers, since fibo[i] + fibo[i + 1] = fibo[i + 2]

Then: with no duplicates and no adjacent terms, the identities below show that the remaining Fibonacci numbers cannot sum up to the next one, so the greedy choice must take the biggest Fibonacci number that fits (a greedy sketch follows the identities).

fibo[0] + fibo[2] + fibo[4] + ... + fibo[2n] = fibo[2n + 1] - 1

fibo[1] + fibo[3] + fibo[5] + .... + fibo[2n-1] = fibo[2n] - 1
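A minimal sketch of the greedy construction described above; the helper name zeckendorf is an assumption of this sketch, not part of the original notes. It repeatedly takes the largest Fibonacci number that still fits.

def zeckendorf(n):
    fibo = [1, 2]                        # distinct Fibonacci numbers
    while fibo[-1] <= n:
        fibo.append(fibo[-1] + fibo[-2])
    res = []
    for f in reversed(fibo):             # greedily take the largest term that fits
        if f <= n:
            res.append(f)
            n -= f
    return res

# zeckendorf(100) == [89, 8, 3]: no term repeats and no two terms are consecutive Fibonacci numbers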

Prime numbers

Sieve of Eratosthenes

# Sieve of Eratosthenes, O(n log log n)
sieve = [False, False] + [True] * (n - 1)   # sieve[i] is True iff i is prime
for i in range(2, int(n**0.5) + 1):
    for j in range(i * i, n + 1, i):        # cross out multiples of i starting at i*i
        sieve[j] = False
primes = [i for i, p in enumerate(sieve) if p]
  • find all primes less than or equal to n (alternative: trial division against the primes found so far; simpler, but slower than the sieve)
primes = []
for d in range(2, n + 1):
    for p in primes:
        if d % p == 0:    # d has a smaller prime divisor
            break
    else:                 # no smaller prime divides d, so d is prime
        primes.append(d)

    Linear Sieve
    Primality tests

Integer factorization

  • trial division

The smallest divisor (greater than 1) of any integer is always prime.

# O(sqrt(n))
def factorization(n):
    res = []
    for d in range(2, int(n**0.5) + 1):
        while n % d == 0:
            res.append(d)
            n //= d
    if n > 1:             # whatever remains is a prime factor larger than sqrt(n)
        res.append(n)
    return res
  • simple sieve of Eratosthenes + dp (memoized recursion on the smallest prime factor)
from functools import cache

sieve = [False, False] + [True] * (n - 1)   # sieve[i] is True iff i is prime
for i in range(2, int(n**0.5) + 1):
    for j in range(i * i, n + 1, i):
        sieve[j] = False

@cache
def factorization(num):
    if num == 1: return []              # 1 has no prime factors
    if sieve[num]: return [num]         # num is prime
    for p in range(2, num + 1):         # find the smallest prime factor of num
        if sieve[p] and num % p == 0:
            return [p] + factorization(num // p)

iterative implementation:

sieve = [False, False] + [True] * (n - 1)
for i in range(2, int(n**0.5) + 1):
    for j in range(i * i, n + 1, i):
        sieve[j] = False
primes = [p for p in range(len(sieve)) if sieve[p]]
res = []
for d in primes:
    if d * d > n: break        # only primes up to sqrt(n) are needed
    while n % d == 0:
        res.append(d)
        n //= d
if n > 1:
    res.append(n)              # the leftover n is itself prime
  • Fermat's factorization method
  • Pollard's p - 1 method
  • Pollard's rho algorithm
  • Floyd's cycle-finding algorithm
  • Brent's algorithm

Number-theoretic functions

Euler's totient function

Number of divisors / sum of divisors

Lagrange's four-square theorem

Every natural number can be represented as the sum of at most four perfect squares. See Wikipedia for the proof.
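A brute-force sanity check of the statement for small numbers (illustration only, not an efficient algorithm; the helper name min_squares is an assumption of this sketch):

from functools import lru_cache

@lru_cache(maxsize=None)
def min_squares(n):
    # minimum number of perfect squares summing to n, by memoized brute force
    if n == 0:
        return 0
    best = n                             # n ones is always a valid decomposition
    s = 1
    while s * s <= n:
        best = min(best, 1 + min_squares(n - s * s))
        s += 1
    return best

# Lagrange's theorem says the minimum never exceeds 4:
assert all(min_squares(i) <= 4 for i in range(1, 200))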

Modular arithmetic

Modular Inverse

A modular multiplicative inverse of an integer $a$ is an integer $x$ such that $a \cdot x$ is congruent to $1$ modulo some modulus $m$. To write it in a formal way: we want to find an integer $x$ so that

$a \cdot x \equiv 1 \mod m.$

We will also denote $x$ simply with $a^{-1}$.

It can be proven that the modular inverse exists if and only if $a$ and $m$ are relatively prime (i.e. $\gcd(a, m) = 1$).

Finding the Modular Inverse using Binary Exponentiation

pow(a,m-2,m)  # m must be prime/Fermat's little theorem
  • For an arbitrary (but coprime) modulus $m$: $a^{\phi(m) - 1} \equiv a^{-1} \mod m$ (Euler's theorem); see the sketch after this list
  • For a prime modulus $m$: $a^{m - 2} \equiv a^{-1} \mod m$ (Fermat's little theorem)
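A small sketch of the arbitrary-coprime-modulus case via Euler's theorem; the simple O(sqrt(m)) totient below and the helper names are assumptions of this sketch.

def phi(m):
    # Euler's totient via trial-division factorization
    result = m
    p = 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def mod_inverse_euler(a, m):
    return pow(a, phi(m) - 1, m)         # valid only when gcd(a, m) == 1

# mod_inverse_euler(3, 10) == 7, since 3 * 7 = 21 ≡ 1 (mod 10)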

Finding the Modular Inverse using Extended Euclidean algorithm

Use the identity equation (a linear Diophantine equation in two variables)

$a \cdot x + m \cdot y = 1$

Take both sides modulo $m$: the term $m \cdot y$ vanishes, leaving $a \cdot x \equiv 1 \mod m$, so $x$ is the modular inverse of $a$.

The function below solves $a \cdot x + b \cdot y = \gcd(a, b)$ for given $a$ and $b$; when $\gcd(a, b) = 1$ this is exactly the equation above.

def extended_euclidean(a, b):
    if b == 0:
        return 1, 0, a                       # a * 1 + b * 0 = a = gcd(a, 0)
    else:
        x, y, gcd = extended_euclidean(b, a % b)
        return y, x - (a // b) * y, gcd      # rearrange b*x + (a % b)*y = gcd

Why this works: if $x, y$ satisfy $b \cdot x + (a \bmod b) \cdot y = g$, then substituting $a \bmod b = a - \lfloor a/b \rfloor \cdot b$ and regrouping gives $a \cdot y + b \cdot (x - \lfloor a/b \rfloor \cdot y) = g$, which is exactly the pair returned for $(a, b)$.
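A usage sketch built on extended_euclidean above; the wrapper name mod_inverse is a hypothetical helper, not from the original notes.

def mod_inverse(a, m):
    x, y, g = extended_euclidean(a, m)
    if g != 1:
        return None                      # the inverse exists only when gcd(a, m) == 1
    return x % m                         # normalize x into the range [0, m)

# mod_inverse(3, 11) == 4, since 3 * 4 = 12 ≡ 1 (mod 11)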

Linear Congruence Equation

Chinese Remainder Theorem

Factorial modulo p

Discrete Log

Primitive Root

Discrete Root

Montgomery Multiplication

Number systems

Balanced Ternary

Gray code

Note how the Gray code construction prepends a 0 to each code of the previous level and a 1 to the reversed previous level (see the third, recursive solution below).

class Solution:
    def circularPermutation(self, n: int, s: int) -> List[int]:
        graycode = [i ^ (i >> 1) for i in range(1 << n)]    # standard n-bit Gray code
        i = graycode.index(s)
        return graycode[i:] + graycode[:i]                  # rotate so the sequence starts at s
    
class Solution:
    def circularPermutation(self, n: int, s: int) -> List[int]:
        return [s ^ i ^ (i >> 1) for i in range(1<<n)]
    
class Solution:
    def circularPermutation(self, n: int, start: int) -> List[int]:
        # my construct of consecutive bits add 0 and 1 to previous solution
        # this is the same as gray code...
        def dp(i):
            if i == 1: return ['0','1']
            res,rev = [],[]
            for code in dp(i-1):
                res.append('0'+code)
                rev.append('1'+code)
            return res + rev[::-1]
        res = dp(n)
        res = [int(b,2) for b in res]
        i = res.index(start)
        return res[i:] + res[:i]

It's not super obvious why $n \oplus (n \gg 1)$ and $(n+1) \oplus ((n+1) \gg 1)$ would differ by exactly 1 bit.

XOR is commutative and associative, so it suffices to prove that $n \oplus (n \gg 1) \oplus (n+1) \oplus ((n+1) \gg 1)$ is a power of two.

$\big(n \oplus (n \gg 1)\big) \oplus \big((n+1) \oplus ((n+1) \gg 1)\big) = \big(n \oplus (n+1)\big) \oplus \big((n \gg 1) \oplus ((n+1) \gg 1)\big) = (2^k - 1) \oplus (2^{k-1} - 1) = 2^{k-1}$, where $k$ is the number of low bits flipped when incrementing $n$ (the trailing ones plus the zero just above them).
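A quick empirical check of the one-bit-difference claim for a small n (an illustration, not a proof):

n = 10
gray = [i ^ (i >> 1) for i in range(1 << n)]
assert all(bin(gray[i] ^ gray[i + 1]).count('1') == 1 for i in range(len(gray) - 1))
assert bin(gray[-1] ^ gray[0]).count('1') == 1   # the sequence is circular as well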

Finding inverse Gray code

Given Gray code $g$, restore the original number $n$, i.e. given $g = n \oplus (n \gg 1)$, find $n$.

We will move from the most significant bits to the least significant ones (the least significant bit has index 1 and the most significant bit has index $k$). The relation between the bits $n_i$ of number $n$ and the bits $g_i$ of number $g$:

$n = n_k n_{k-1} \ldots n_1$ in binary representation, $n_i \in \{0, 1\}$

Rewrite $n = g \oplus (n \gg 1)$ to get:

\begin{align} n_k &= g_k, \\ n_{k-1} &= g_{k-1} \oplus n_k = g_k \oplus g_{k-1}, \\ n_{k-2} &= g_{k-2} \oplus n_{k-1} = g_k \oplus g_{k-1} \oplus g_{k-2}, \\ n_{k-3} &= g_{k-3} \oplus n_{k-2} = g_k \oplus g_{k-1} \oplus g_{k-2} \oplus g_{k-3}, \\ &\vdots \end{align}

def gray_to_dec(g):
    # XOR together all right-shifts of g; bit i of res accumulates g_k ⊕ ... ⊕ g_i = n_i as derived above
    res = 0
    while g:
        res ^= g
        g >>= 1
    return res

Miscellaneous

Enumerating submasks of a bitmask

Given a bitmask $m$, you want to efficiently iterate through all of its submasks, that is, masks $s$ in which only bits that were included in mask $m$ are set.

Consider the implementation of this algorithm, based on tricks with bit operations:

sub = m                      # start from the full mask m
while sub:
    print(sub)
    sub = (sub - 1) & m      # next smaller submask of m

The above code visits all non-empty submasks of $m$, without repetition, and in descending order (the loop stops before visiting the empty submask $0$). Suppose we have a current bitmask $s$, and we want to move on to the next bitmask. By subtracting one from the mask $s$, we remove the rightmost set bit, and all bits to the right of it become 1. Then we remove all the "extra" one bits that are not included in the mask $m$ and therefore can't be a part of a submask.

Iterating through all masks with their submasks. Complexity $O(3^n)$

In many problems, especially those that use bitmask dynamic programming, you want to iterate through all bitmasks and for each mask, iterate through all of its submasks:

for m in range(1 << n):
    get_all_submasks(m)      # enumerate all submasks of m as shown above

Let's prove that the inner loop will execute a total of $O(3^n)$ iterations.

First proof: Consider the $i$-th bit. There are exactly three options for it:

  1. it is not included in the mask $m$ (and therefore not included in submask $s$),
  2. it is included in $m$, but not included in $s$, or
  3. it is included in both $m$ and $s$.

As there are a total of $n$ bits, there will be $3^n$ different combinations.

Second proof: Note that if mask $m$ has $k$ enabled bits, then it will have $2^k$ submasks. As we have a total of $\binom{n}{k}$ masks with $k$ enabled bits (see binomial coefficients), then the total number of combinations for all masks will be:

$\sum_{k=0}^n \binom{n}{k} \cdot 2^k$

To calculate this number, note that the sum above is equal to the expansion of $(1+2)^n$ using the binomial theorem. Therefore, we have $3^n$ combinations, as we wanted to prove.
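An empirical count of the iterations for a small $n$, including the empty submask of every mask (an illustration of the bound, reusing the submask-enumeration trick from above):

n = 10
total = 0
for m in range(1 << n):
    sub = m
    while True:
        total += 1                       # visit the pair (m, sub)
        if sub == 0:
            break
        sub = (sub - 1) & m
assert total == 3 ** n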

Arbitrary-Precision Arithmetic

Fast Fourier transform

Operations on polynomials and series

Continued fractions