I should also note that I'm oversimplifying many of these concepts. The AIs don't have an internal monologue; I'm just using that as a device for demonstration. What they actually have is information encoded in huge matrices of numbers. Those matrices don't do anything on their own; nothing happens until you prompt the model with a question, which kicks off a series of matrix multiplications. So when people talk about "the AI is going to take over," it's really quite silly. What, are the matrices going to start multiplying themselves? It's like saying we need to limit how large the numbers your calculator can multiply are, because we're afraid that if they get too big, it might start multiplying numbers all by itself (see how that makes zero sense?).
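To make the "inert matrices" point concrete, here's a minimal sketch in Python with NumPy. The weights and layer sizes are made up for illustration (real models have billions of numbers, not dozens), but the core idea is the same: the weights just sit in memory as arrays, and "asking a question" means someone supplying an input and multiplying it through them.

```python
import numpy as np

# A toy "model": two layers of weights. These are just inert
# arrays of numbers sitting in memory; they never run on their own.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 8))   # hypothetical layer-1 weights
W2 = rng.standard_normal((8, 3))   # hypothetical layer-2 weights

def forward(prompt_vector):
    """'Prompting' the model = pushing an input through the matrices."""
    hidden = np.maximum(prompt_vector @ W1, 0)  # matrix multiply + ReLU
    return hidden @ W2                          # another matrix multiply

# Nothing happens until WE supply an input:
x = np.array([1.0, 0.0, -1.0, 0.5])
answer = forward(x)
```

Until `forward` is called with some input, `W1` and `W2` are no more active than a spreadsheet full of numbers.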