LW - Wittgenstein and ML — parameters vs architecture by Cleo Nardo
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Wittgenstein and ML — parameters vs architecture, published by Cleo Nardo on March 24, 2023 on LessWrong.
Status: a brief distillation of Wittgenstein's book On Certainty, using examples from deep learning and GOFAI, plus discussion of AI alignment and interpretability.
"That is to say, the questions that we raise and our doubts depend on the fact that some propositions are exempt from doubt, are as it were like hinges on which those turn."
Ludwig Wittgenstein, On Certainty
1. Deep Learning
Suppose we want a neural network to detect whether two children are siblings based on photographs of their faces. The network will receive two n-dimensional vectors v1 and v2 representing the pixels in each image, and will return a value y(v1,v2)∈R which we interpret as the log-odds that the children are siblings. So the model has type-signature Rn+n→R.
There are two ways we can do this.
We could use an architecture yA(v1,v2)=σ(v1^T A v2+b), where
σ is the sigmoid function
A is an n×n matrix of learned parameters,
b∈R is a learned bias.
This model has n^2+1 free parameters.
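As a minimal sketch of this first architecture (plain numpy standing in for a deep-learning framework; all names and values here are illustrative, not from the post):

```python
import numpy as np

def model1_forward(v1, v2, A, b):
    """Architecture 1: y_A(v1, v2) = sigmoid(v1^T A v2 + b).

    A is a full n x n matrix of learned parameters and b a scalar bias,
    so there are n^2 + 1 free parameters in total. Nothing about this
    parameterisation forces the output to be symmetric in v1 and v2.
    """
    logits = v1 @ A @ v2 + b
    return 1.0 / (1.0 + np.exp(-logits))

n = 4
rng = np.random.default_rng(0)
A = rng.normal(size=(n, n))  # stand-in for trained parameters
b = 0.1
v1, v2 = rng.normal(size=n), rng.normal(size=n)
p = model1_forward(v1, v2, A, b)
```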
Alternatively, we could use an architecture yU(v1,v2)=σ(v1^T ((U+U^T)/2) v2+b), where
σ is the sigmoid function
U is an n×n upper-triangular matrix of learned parameters
b∈R is a learned bias
This model has n^2/2+n/2+1 free parameters.
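A matching sketch of the second architecture (again illustrative numpy, not the post's code), with the parameter count made explicit:

```python
import numpy as np

def model2_forward(v1, v2, U, b):
    """Architecture 2: y_U(v1, v2) = sigmoid(v1^T ((U + U^T)/2) v2 + b).

    Symmetrising U means the logit is unchanged when v1 and v2 swap,
    whatever values the parameters take.
    """
    S = (U + U.T) / 2.0  # symmetric by construction
    logits = v1 @ S @ v2 + b
    return 1.0 / (1.0 + np.exp(-logits))

n = 4
rng = np.random.default_rng(0)
# Only the upper triangle of U holds free parameters:
U = np.triu(rng.normal(size=(n, n)))
b = 0.1
v1, v2 = rng.normal(size=n), rng.normal(size=n)

# n(n+1)/2 entries of U plus the bias = n^2/2 + n/2 + 1 free parameters.
n_free = n * (n + 1) // 2 + 1
```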
Each model has a vector of free parameters θ∈Θ. If we train the model via SGD on a dataset (or via some other method), we will end up with a trained model yθ:Rn+n→R, where y_:Θ→(Rn+n→R) is the architecture.
Anyway, we now have two different NN models, and we want to ascribe beliefs to each of them. Consider the proposition ϕ that siblingness is symmetric, i.e. every person is the sibling of their siblings. What does it mean to say that a model knows or believes that ϕ?
Let's start with a black-box definition of knowledge or belief: when we say that a model knows or believes that ϕ, we mean that yθ(v1,v2)=yθ(v2,v1) for all v1,v2∈Rn which look sufficiently like faces. According to this black-box definition, both trained models believe ϕ.
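This black-box test can be checked numerically (illustrative numpy with hand-picked toy parameters rather than trained ones):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def y_A(v1, v2, A, b):
    """Architecture 1: full matrix A."""
    return sigmoid(v1 @ A @ v2 + b)

def y_U(v1, v2, U, b):
    """Architecture 2: symmetrised upper-triangular U."""
    return sigmoid(v1 @ ((U + U.T) / 2.0) @ v2 + b)

b = 0.0
v1 = np.array([1.0, 0.0])
v2 = np.array([0.0, 1.0])

# Architecture 2 passes the black-box symmetry test for *every* theta:
U = np.array([[0.3, 1.0],
              [0.0, -0.2]])
sym_holds = np.isclose(y_U(v1, v2, U, b), y_U(v2, v1, U, b))

# Architecture 1 passes only for special theta (e.g. a symmetric A
# that training may or may not find); this generic A fails the test:
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
sym_fails = not np.isclose(y_A(v1, v2, A, b), y_A(v2, v1, A, b))
```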
But if we peer inside the black box, we can see that NN Model 1 believes ϕ in a very different way than how NN Model 2 believes ϕ.
For NN Model 1, the belief is encoded in the learned parameters θ∈Θ.
For NN Model 2, the belief is encoded in the architecture itself y_.
These are two different kinds of belief.
2. Symbolic Logic
Suppose we use GOFAI/symbolic logic to determine whether two children are siblings.
Our model consists of three things
A language L consisting of names and binary familial relations.
A knowledge-base Γ consisting of L-formulae.
A deductive system ⊢ which takes a set of L-formulae (premises) to a larger set of L-formulae (conclusions).
There are two ways we can do this.
We could use a system (L,Γ,⊢) , where
The language L has names for every character and familial relations parent, child, sibling, grandparent, grandchild, cousin
The knowledge-base Γ has axioms {sibling(Jack,Jill), ∀x∀y(sibling(x,y)→sibling(y,x))}
The deductive system ⊢ corresponds to first-order predicate logic.
Alternatively, we could use a system (L,Γ,⊢), where
The language L has names for every character and familial relations parent, child, sibling, grandparent, grandchild, cousin
The knowledge-base Γ has axioms {sibling(Jack,Jill)}
The deductive system ⊢ corresponds to first-order predicate logic with an additional logical rule sibling(x,y)⊢sibling(y,x).
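To make the contrast concrete, here is a toy sketch (illustrative Python, not anything from the post) in which System 1 stores the symmetry of sibling as (ground instances of) an axiom in Γ and closes under modus ponens alone, while System 2 stores only the base fact and builds the symmetry rule into ⊢:

```python
def forward_chain(facts, rules):
    """Close a set of facts under a list of inference rules."""
    facts = set(facts)
    while True:
        new = set().union(*(rule(facts) for rule in rules)) - facts
        if not new:
            return facts
        facts |= new

def modus_ponens(facts):
    """From phi and ("implies", phi, psi), derive psi."""
    return {f[2] for f in facts if f[0] == "implies" and f[1] in facts}

def symmetry_rule(facts):
    """Built-in rule of |- : sibling(x, y) |- sibling(y, x)."""
    return {("sibling", f[2], f[1]) for f in facts if f[0] == "sibling"}

NAMES = ["Jack", "Jill"]

# System 1: the symmetry axiom lives in the knowledge-base Gamma
# (represented here by its ground instances); |- is plain modus ponens.
gamma1 = {("sibling", "Jack", "Jill")} | {
    ("implies", ("sibling", x, y), ("sibling", y, x))
    for x in NAMES for y in NAMES
}
theorems1 = forward_chain(gamma1, [modus_ponens])

# System 2: Gamma holds only the base fact; symmetry is part of |- itself.
gamma2 = {("sibling", "Jack", "Jill")}
theorems2 = forward_chain(gamma2, [symmetry_rule])

# Black-box equivalent: both systems derive sibling(Jill, Jack).
assert ("sibling", "Jill", "Jack") in theorems1
assert ("sibling", "Jill", "Jack") in theorems2
```

Either way the same theorem comes out, but the symmetry belief is located in Γ in one system and in ⊢ in the other.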
In this situation, we have two different SL models, and we want to ascribe beliefs to each of them. Consider the proposition ϕ that siblingness is symmetric, i.e. every person is the sibling of their siblings.
Let's start with a black-box definition of knowledge or belief: when we say that a model knows or believes that ϕ, we mean that Γ⊢sibling(τ1,τ2)→sibling(τ2,τ1) for every pair of closed L-terms τ1,τ2. According to this black-box definiti...