Tuesday, October 3, 2023

Errors in McCoy et al.'s "Embers of Autoregression"

The thesis of McCoy et al.'s Embers of Autoregression: Understanding Large Language Models Through the Problem They Are Trained to Solve is interesting. However, I spotted two errors early on that make me question the authors' attention to detail, as well as their claims in areas that I am less familiar with.

First, on page 1, Embers mischaracterizes the central claim of Bubeck et al.'s Sparks of Artificial General Intelligence and the reasoning behind it. According to Embers (emphasis added):

Virtually any task can be framed in the form of linguistic queries, so LLMs could be applied to virtually any task—from summarizing text to generating computer code. This flexibility is exciting: it led one recent paper to argue that LLMs display “sparks of artificial general intelligence” (Bubeck et al., 2023).

But the flexibility of LLMs is not what triggered the "sparks of AGI" paper. If that were the case, the paper could just as well have been written about GPT-2 or GPT-3.5, which are equally flexible in the sense of being able to perform tasks "framed in the form of linguistic queries". In fact, the paper was about GPT-4 and its impressive performance across a variety of problem domains. As the abstract of Sparks states:

Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.

Then, on page 2, Embers mischaracterizes the term "autoregression" in an ML context (emphasis added):

The crucial question to ask, then, is: What problem(s) do LLMs need to solve, and how do these pressures influence them? Here we focus on perhaps the most salient pressure that defines any machine learning system, namely the task that it was trained to perform. For the LLMs that have been the focus of recent attention in AI, this task is autoregression—next-word prediction (Elman, 1991; Bengio, Ducharme, and Vincent, 2000; Radford et al., 2018)—performed over Internet text.

Recent large language models have indeed been trained to predict the next word or token, but that is not what "autoregression" means. Autoregression has to do with generating output based on previous outputs. In statistics, an "autoregressive model describes a system whose status (dependent variable) depends linearly on its own status in the past". In machine learning, "autoregression" refers to the process by which a model generates outputs in a loop, one output at a time, where each new output is appended to the input and fed back into the process to generate the next output.
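
To make the distinction concrete, here is a minimal sketch of an autoregressive generation loop in Python. The "model" argument is a hypothetical stand-in for any trained next-token predictor; the feedback loop, not the training objective, is what makes the process autoregressive:

     def generate(model, prompt_tokens, num_new_tokens):
         # model: a function mapping a token sequence to one predicted next token
         tokens = list(prompt_tokens)
         for _ in range(num_new_tokens):
             next_token = model(tokens)   # predict from all previous outputs
             tokens.append(next_token)    # feed the new output back as input
         return tokens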


Monday, March 20, 2023

An undecidable Diophantine equation

As shown in Jones (1982), it is undecidable whether, for given positive integers x and v, the equation below has a solution in the remaining variables over the positive integers:

$$\begin{alignat*}{2}
& &&(elg^2 + \alpha - (b - xy)q^2)^2\\
&+ &&(q - b^{5^{60}})^2\\
&+ &&(\lambda + q^4 - 1 - \lambda b^5)^2\\
&+ &&(\theta + 2z - b^5)^2\\
&+ &&(l - u - t\theta)^2\\
&+ &&(e - y - m\theta)^2\\
&+ &&(n - q^{16})^2\\
&+ &&(\\
& &&\quad r\\
& &&\quad - [g + eq^3 + lq^5 + (2(e - z\lambda)(1 + xb^5 + g)^4 + \lambda b^5 + \lambda b^5 q^4)q^4][n^2 - n]\\
& &&\quad - [q^3 - bl + l + \theta\lambda q^3 + (b^5 - 2)q^5][n^2 - 1]\\
& &&)^2\\
&+ &&(p - 2ws^2r^2n^2)^2\\
&+ &&(p^2k^2 - k^2 + 1 - \tau^2)^2\\
&+ &&(4(c - ksn^2)^2 + \eta - k^2)^2\\
&+ &&(k - r - 1 - hp + h)^2\\
&+ &&(a - (wn^2 + 1)rsn^2)^2\\
&+ &&(c - 2r - 1 - \varphi)^2\\
&+ &&(d - bw - ca + 2c - 4a\gamma + 5\gamma)^2\\
&+ &&(d^2 - (a^2 - 1)c^2 - 1)^2\\
&+ &&(f^2 - (a^2 - 1)i^2c^4 - 1)^2\\
&+ &&((d + of)^2 - ((a + f^2(d^2 - a))^2 - 1)(2r + 1 + jc)^2 - 1)^2\\
&+ &&(v - ((zuy)^2 + u)^2 - y)^2\\
&= &&~0
\end{alignat*}$$

That equation corresponds to the 19-equation system in Theorem 3 of Jones (1982). Jones' system of equations

     L₁ = R₁, L₂ = R₂, …, L₁₈ = R₁₈, Lᵥ = Rᵥ

is equivalent to:

     (L₁ – R₁)² + (L₂ – R₂)² + … + (L₁₈ – R₁₈)² + (Lᵥ – Rᵥ)² = 0

Over the integers, the system and the single equation are equivalent because a sum of squares is zero exactly when every summand is zero. The single equation has a solution precisely when x is a member of the vth semidecidable set, where v indexes the semidecidable sets in an enumeration determined by the equation. Some semidecidable sets are not decidable. Therefore, an algorithm capable of deciding, for any given x and v, whether the equation is solvable would be able to decide membership in undecidable sets.
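
To sketch why solvability is semidecidable even though it is not decidable, the following Python snippet enumerates assignments of positive integers under a growing bound and halts exactly when a solution exists. The polynomial at the bottom is a toy stand-in, not Jones' equation, but it has the same sum-of-squares form:

     from itertools import product

     def semidecide(poly, num_vars):
         # Enumerate positive-integer assignments under a growing bound.
         # Halts (returning a solution) if and only if poly has a zero.
         bound = 1
         while True:
             for assignment in product(range(1, bound + 1), repeat=num_vars):
                 if poly(*assignment) == 0:
                     return assignment
             bound += 1

     # Toy stand-in: (x - 2y)^2 + (y - 3)^2 = 0 forces y = 3 and x = 6.
     print(semidecide(lambda x, y: (x - 2*y)**2 + (y - 3)**2, 2))  # (6, 3)

No procedure of this kind can be strengthened into one that also halts, with a negative answer, on every unsolvable instance; that is exactly what the undecidability result rules out.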

Reference: Jones, James P. (1982). Universal Diophantine Equation. The Journal of Symbolic Logic, 47(3), 549-571. https://doi.org/10.2307/2273588.


