LT2, William Gates Building
\[\text{data} + \text{model} \stackrel{\text{compute}}{\rightarrow} \text{prediction}\]
\[ \text{odds} = \frac{p(\text{bought})}{p(\text{not bought})} \]
\[ \log \text{odds} = w_0 + w_1 \text{age} + w_2 \text{latitude}.\]
\[ p(\text{bought}) = \sigma\left(w_0 + w_1 \text{age} + w_2 \text{latitude}\right).\]
\[ p(\text{bought}) = \sigma\left(\mathbf{ w}^\top \mathbf{ x}\right).\]
\[ y= f\left(\mathbf{ x}, \mathbf{ w}\right).\]
We call \(f(\cdot)\) the prediction function.
\[E(\mathbf{ w}, \mathbf{Y}, \mathbf{X})\]
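As a concrete sketch of the prediction function and objective above: the code below implements \(p(\text{bought}) = \sigma(\mathbf{w}^\top \mathbf{x})\) and a negative log-likelihood as the error \(E(\mathbf{w}, \mathbf{Y}, \mathbf{X})\). The design matrix layout (a column of ones for \(w_0\), then age and latitude) and the choice of log loss are illustrative assumptions, not part of the original notes.

```python
import numpy as np

def sigmoid(z):
    """Logistic function: maps log-odds to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, X):
    """Prediction function f(x, w) = sigma(w^T x).

    X is assumed to have columns [1, age, latitude] so that w[0]
    plays the role of the bias w_0.
    """
    return sigmoid(X @ w)

def error(w, y, X):
    """Negative log-likelihood E(w, y, X) for binary labels y in {0, 1}."""
    p = predict(w, X)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Hypothetical weights and two customers (bias, age, latitude).
w = np.array([0.0, 0.1, -0.05])
X = np.array([[1.0, 30.0, 52.0],
              [1.0, 60.0, 10.0]])
y = np.array([1.0, 0.0])
probs = predict(w, X)      # probabilities of "bought" for each customer
loss = error(w, y, X)      # objective to be minimised over w
```

Learning then amounts to adjusting \(\mathbf{w}\) to reduce this error, for example by gradient descent.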
\[ p(\text{bought}) = \sigma\left(w_0 + w_1 \text{age} + w_2 \text{latitude}\right).\]
\[ p(\text{bought}) = \sigma\left(\beta_0 + \beta_1 \text{age} + \beta_2 \text{latitude}\right).\]
These are interpretable models, which is vital in applications such as disease modelling.
Modern machine learning methods are less interpretable.
Example: face recognition
Outline of the DeepFace architecture: a front-end of a single convolution-pooling-convolution filtering on the rectified input, followed by three locally-connected layers and two fully-connected layers. Colour illustrates the feature maps produced at each layer. The network includes more than 120 million parameters, of which more than 95% come from the locally- and fully-connected layers.
There is a lot of evidence that probabilities aren’t interpretable.
See e.g. Thompson (1989)
LLMs are already being used for robot planning Huang et al. (2023)
Ambiguities are reduced when the machine has had large scale access to human cultural understanding.
“ ‘When someone seeks,’ said Siddhartha, ‘then it easily happens that his eyes see only the thing that he seeks, and he is able to find nothing, to take in nothing. […] Seeking means: having a goal. But finding means: being free, being open, having no goal.’ ”
Hermann Hesse
book: The Atomic Human
twitter: @lawrennd
The Atomic Human pages: MONIAC 232-233, 266, 343; human-analogue machines (HAMs) 343-347, 359-359, 365-368.
podcast: The Talking Machines
newspaper: Guardian Profile Page
blog posts: