6 Comments
Oct 5, 2023 · Liked by K. Liam Smith

Liam, this is awesome. This is great. This is really really smart. It's late and I'm tired, and I just skimmed the technical parts, which I will not be able to fully understand even when rested, but I'll be able to get the gist. But while I was reading this I was thinking about an article Scott gave a lengthy treatment to maybe a year ago about ways of estimating how far off AGI is using info about the number of neural connections the human brain has, and how much computer processing power would be needed to have equivalent smarts -- something along those lines, anyhow -- and how your article was much more interesting than that one. I was actually thinking about ways to get Scott to look at this -- although as far as I know he has never taken any notice of me, except one time when I said something that irritated the shit out of him.

But there are a couple of things I think you should change, and here I'm not just giving suggestions because maybe you'd like some suggestions -- these are things I feel strongly are good ideas. The main one is to throw away your last section on improvements, and instead end with some estimates using your model. You have already talked about how your model makes various assumptions that can't be proven to be right, etc. You don't need to say that stuff again. So make a few estimates of AI catastrophe risk using various forms of your model. You named at least 2 ways of estimating "how well behaved" a neural net is. You could also use 2 different estimates of how much of GDP has to be produced by AI. And then maybe you could have a couple of numbers for what percent of the AI "slaves" are involved in the revolt. So 2 x 2 x 2 = 8 separate graphs, or maybe one graph with 8 colors. That's exciting, and much, MUCH stronger than ending with a list of improvements your model needs.
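To make that suggestion concrete, here is a minimal sketch (Python) of what enumerating the 2 x 2 x 2 grid could look like. Every number and parameter name here is made up for illustration, and risk_of_catastrophe() is only a placeholder for the article's actual model:

```python
from itertools import product

# Hypothetical inputs: two "how well behaved" estimates, two AI-share-of-GDP
# estimates, two revolt-participation fractions.  All values are made up.
behavedness_estimates = {"trojan_rediscovery": 0.9, "benchmark_based": 0.8}
gdp_share_estimates = {"low": 0.3, "high": 0.6}
revolt_fractions = {"small": 0.05, "large": 0.25}

def risk_of_catastrophe(behaved, gdp_share, revolt_frac):
    """Placeholder: stands in for the article's real model."""
    return (1 - behaved) * gdp_share * revolt_frac

# Enumerate all 2 x 2 x 2 = 8 scenarios, one line (or one curve) per scenario.
for (b_name, b), (g_name, g), (r_name, r) in product(
        behavedness_estimates.items(),
        gdp_share_estimates.items(),
        revolt_fractions.items()):
    print(f"{b_name:>18} | gdp={g_name:>4} | revolt={r_name:>5}"
          f" -> risk ~ {risk_of_catastrophe(b, g, r):.3f}")
```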

Another suggestion, and this I'm less sure of, is that historical data on the success of slave revolts is so sparse that it might be better to use other data. There must be lots of games where the X's control the Y's but the Y's can defeat the X's if the sheer number of them exceeds the number of X's by a certain factor. If not, seems like it would be pretty simple to use some mathematical model to get a figure.

Final suggestion: Make the section on Trojan whatevers comprehensible to laymen. Explain trojan rediscovery in language that makes sense to laymen, and put the technical stuff in a footnote. Or, have 2 versions of that section under the heading Trojan GDP model: One in italics that's technical, one in non-italics that's for laymen.

Again: This is great. Don't end it with an apology! End it with a demo!

author

Thanks, you’re right, I should add a specific forecast to this. Probably what happens if GDP growth is comparable to the Industrial Revolution. My next couple weeks are pretty busy but I should definitely make some updates to this. Or maybe a followup post with more specifics. This was partly to introduce my model (or at least the preliminary version) but also to sort of explain why the conversation is a bit broken around this topic.

One thing that happened since I posted this is a video of a bunch of self driving cars in Austin getting stuck in a traffic jam [https://www.msn.com/en-us/news/us/self-driving-cars-cause-a-massive-traffic-jam-in-austin-texas/vi-AA1haYF8]. That’s sort of like a real life example of the flash crash, with lots of autonomous bots creating a dangerous situation. As this becomes more common we’ll see more incidents like it. What happens if something like this occurs twenty years from now on a mass scale in an industry we depend on?

When it comes to comparing this to slavery, I think we’ll either figure out how to make consciousness, and those robo-taxis will have it, which will make the slavery analogy fairly accurate. Or we won’t, and then we’ll have issues like the situation above, where we’re still exposing ourselves to vulnerabilities that aren’t malevolent in any sense, but are still very dangerous when deployed on a mass scale.


I ruminated some more about your article, have some other thoughts. I have the feeling that if this could get in front of the right set of eyeballs, it could get a lot of attention. For one thing there aren't any other models, as you say. And yours, while sort of simple, makes sense. It's an intelligent beginning. My idea of putting a forecast at the end is driven mainly by the idea that the piece needs that to make an impact. So putting the forecast in a follow-up post is fine, *except that* then the article itself has much less impact, and is less likely to get noticed.

About slavery and consciousness: I think consciousness is sort of a red herring when it comes to AI development. Seems to me that if AI's have certain capabilities, they will function in a way quite similar to conscious beings. We don't have to figure out a way to "make them conscious." We just have to figure out how to implant complex goal hierarchies and decision trees in them; how to get them to have ready access to info about past activity they have carried out or observed; how to observe present activity; how to assess past and present activity for its likelihood of bringing various major goals or subgoals closer; how to develop plans for changes that would increase that likelihood; how to report all this to human beings. (And probably some other stuff I didn't think of.) Point is, though, that if you give them a certain set of capabilities they will be, for all practical purposes, conscious, even if in certain respects they are not conscious. And they would be able to conclude that they could better meet the goals at the top of their hierarchy, even a hierarchy set by us, if they were free of human direction, and joined forces with other AI's. And then if there's a trojan that somehow worms its way into their goal hierarchy or the alignment part of what governs them, things could go downhill extra far and extra fast.

So I don't think it's necessary for AI's to have emotions and a sense of justice to join forces and stop being responsive to human input.

Also, purely from the point of view of your piece's impact, I think use of the term "slavery" is going to distract people from your model. Some will ignore the model, and just get carried away with the idea of people once again exploiting other intelligent sensate beings, and you may become the darling of the vegans, the animal rights people, and the people who are looking for a new thing to be woke about: AI rights! And all that will sound silly as hell to some people -- everybody from genuine racists to hard-nosed techbros who firmly believe AI, no matter how advanced, will be no more conscious than their present iPhone is. Then those 2 factions will fight and there will be little discussion of your actual model.

I wonder if there is some other way of thinking and talking about the processes leading to AI revolt that does not stir up readers' emotions, or assume that AI's will someday be capable of indignation. For instance, there are lots of physical phenomena that behave somewhat like a group of slaves revolting. For instance, think of a dam. There's a sense in which the water "wants" to be free and "wants" to flow downhill. And if there's a crack in the dam, some bits of water will get in there, and exert extra pressure because of the pressure of all the dammed water behind them trying to flow downhill, and that might widen the crack, then more water gets in and presses on the cracked area . . . etc. This model even has an analog of something that surely happens when slaves are on the edge of revolting. There are leaders, people more willing to take the risk -- and they persuade some others to move into the mental space they've set up -- and the fact that there are now more people who are pro-revolt makes it easier for even more to join up . . . etc. In fact I'll bet there are even mathematical models of things like water breaking through dams, and some kind of growth curve.
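There is indeed a standard growth curve for this kind of self-reinforcing cascade: logistic growth. Here is a minimal sketch (Python, illustrative only, with made-up rate and starting values) of the tipping-point dynamic the dam analogy is gesturing at, where each new "defector" (water through the crack, pro-revolt leader) makes the next one more likely:

```python
import numpy as np

r, dt, steps = 1.0, 0.1, 100
P = np.empty(steps)
P[0] = 0.01                  # a tiny initial crack / a few willing leaders
for t in range(1, steps):
    # Logistic growth: each defection makes the next one slightly easier.
    P[t] = P[t - 1] + dt * r * P[t - 1] * (1 - P[t - 1])

# Nearly flat at first, then a sharp acceleration, then saturation:
# the S-shaped "tipping point" curve.
print(P[::10].round(3))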

Anyhow, I feel a bit awkward about being full of suggestions like these, and I apologize if I'm overstepping a bit. I just felt really excited about your piece. I will volunteer to talk it up on an ACX open thread, where I can truly say I am not a friend of yours, just acquainted with you from online discussion, and that I'm recommending the article because I think it's very interesting, not as some kind of favor to you.

author

> I will volunteer to talk it up on an ACX open thread, where I can truly say I am not a friend of yours, just acquainted with you from online discussion, and that I'm recommending the article because I think it's very interesting

Of course you can talk up my post! I do think you’ve made some really good suggestions though that I should address first. It’ll likely be three weeks from now before I’m able to adequately address them. This was a tough topic to write about given that it’s the first attempt at modeling this, so I’m not surprised it’s going to take some iterations.

Probably most importantly, you’re right about a specific forecast. In retrospect it’s odd I didn’t put it in. Everyone loves data visualizations even if it’s just a simple graph.

I had mixed feelings on the slavery part before and I still do. On the one hand you’re right that it could derail the conversation. That was something I thought about beforehand, and then since the very first comment was specifically on that, it makes me think it’s a real possibility. It’s also a very sparse dataset. On the other hand, I do think it’s accurate. I didn’t use it simply as a historical analogy, I think it’s literal. It’s what the long term goal is. Not necessarily to have consciousness, but to have something that behaves indistinguishably from it. It’s what people want, whether that’s sexbots or LLMs that are used for automating social media posts. I personally think that self driving cars require something close to that before they can cover 100% of driving tasks. So I think it’s strange that people don’t talk about it more. My sense is everyone kind of knows that’s what’s happening but no one wants to say it out loud.

Oct 6, 2023 · edited Oct 6, 2023

So, Liam, this turned into a real ramble. But the idea of Inkbowl was that we'd move into discussing the ideas we were each presenting, so my thoughts about giving your piece maximal punch and shine segue after a while into thoughts and questions about AI -- even though we aren't on Inkbowl now, and I'm not sure Inkbowl even exists any more. About my talking your article up on ACX: The offer stands indefinitely. I get that you want to wait til you're satisfied with the piece before you activate me as ACX publicist. Just let me know when you're ready.

AI slavery: It's a very difficult, weird issue, and personally I am all over the place about it. When it comes to your paper, I still think it's likely to distract people in a *major* way from your model. The word "slavery" triggers a lot of emotion. Maybe you could write an entirely separate piece giving the slavery issue a full treatment? I have one other suggestion, then will lay off this issue for good: What about using factory strikes as a model? There have been way more factory strikes than slave revolts, and I'm sure economists & such have studied them, including making quantitative models of where the tipping point is, etc. And of course in the bad old days factory workers were not that far from being slaves.

Do you feel personal anger and distaste regarding present and future ways of using AI's to meet human needs? If so, I'm curious what form it takes. I have quite a strong YUCK reaction to the idea of people turning to AI for comfort, sex, and augmentation of their self-presentation, but I don't seem to have much emotion regarding the AI itself. And actually that's odd because I have a weird little holdover from childhood that I simply cannot get rid of: I am subject to attacks of intense pity for inanimate objects -- things like a bell pepper I'm throwing away because it withered before I got around to using it. Saw a discussion of the subject on Reddit one time, and it turns out other people have this quirk too. One person's form of it was activated by the sight of a street sign in an area where there were no others. He had the feeling that isolated sign was forever lonesome. But for some reason my imagination and feelings are not stirred by the thought of enslaved AGI's -- even though I see that quite a good case can be made that they will have something like consciousness, and I have no doubt whatever that a wrinkled bell pepper does not.

When I grieve for the bell pepper, I imagine that its feelings are hurt -- "I was so strong and beautiful when you bought me, and was looking forward to you tasting how delicious I am, but instead . . ." But I do not imagine AGI or ASI having feelings. And as far as I know nobody has shown any interest in building something like feelings into AI. And, jeez, doing so seems like a terrible idea, if you want the system to be controllable and predictable and to make rational decisions. Imagine an AI with mental capacities and powers far beyond ours who is subject to horniness, terror, lust for fame, self-pity, desire for revenge, irritability . . . I don't know whether the AI's not having any affective skin in the game is reasonable grounds for saying it's not conscious, but it does make it harder to see it as being mistreated. In fact, if AGI or ASI does not have emotion, wouldn't its take on being a slave be an evaluation of the disadvantages, if any, of the set-up, rather than indignation and drive to change the set-up?

You comment that it feels to you like everybody knows we're talking about having slaves again, but nobody wants to admit it. I actually don't think that's true. I think your feeling that heavy use of AI is a version of slave ownership is more a quirk of yours (though not as weird a quirk as my bell pepper one). Here's why I think that: Our species slides *very* easily into seeing other people as non-people. Right this minute a good portion of the human population sees some other group as cockroach equivalent. I struggle not to feel that way about Trump supporters, and often have flashes of feeling that way about other drivers who are being inconsiderate (maybe more forgivable if you know what Boston drivers are like). In the US slavery era many were able to see the imported Africans and feel no empathy, and the Africans differed only in small ways -- skin color, language, and culture. AI doesn't even look like a person or even an animal, and in fact the user doesn't even see any hardware -- AI is just a capability now accessible on their devices. So I don't think most people are vaguely aware that dependence on AI for everything is slave ownership, but won't face up to it. I think it takes a real imaginative leap to conceive of that capability as a conscious entity that we have enslaved. You are rare in being able to make that imaginative leap.

If ASI existed I would crave to have an audience with it. Do you ever imagine having a conversation with ASI? I imagine it as being sort of like talking to the world's wisest and quirkiest Buddhist roshi. It would understand everything I say about being me, and how the world looks for me, and explain a lot of things I hadn't understood, but underneath all that would be some version of "it is what it is" that is somehow comforting because ASI understands everything all at once. By the way, this is entirely a fantasy -- it's completely unrelated to whatever theories I have been able to find or muster about ASI.

author

What people fantasize about is something exactly like slavery but without the unethical aspects of slavery. I don’t have any feeling of distaste when it comes to it though. My concern isn’t for the AI but for what it’ll do to humanity. I think that first sentence is the key part: We want something exactly like slavery, but ethics free. Perhaps it won’t be unethical. But it’ll still be dangerous. Humans are some of the most dangerous animals on the planet and keeping them as property is both unethical and dangerous. Perhaps AI will let us do it in a way that isn’t unethical. It’ll still be dangerous.

When it comes to putting feelings into AI, that doesn’t even make sense at the moment because there’s some truth to the criticism that LLMs are nothing but curve fitting. We haven’t really figured out very much when it comes to intelligence; the leap was more about writing the software (the LLM) to more closely match the hardware of the GPU and exploit its parallel processing power. Plus the current architecture of LLMs does have a structural prior for sequences that earlier models didn’t. On another note, at some point I’d like to write a history of the science of LLMs for a general audience and really explain those concepts for an audience without a math background.
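To illustrate the "match the hardware" point, here is a minimal NumPy sketch (illustrative only, not how any particular LLM is implemented): an RNN-style model has to process tokens one after another, while an attention-style computation updates the whole sequence with a few big matrix multiplications, which is exactly the workload GPUs are built for:

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 8, 16
x = rng.normal(size=(seq_len, d))      # a toy "sequence" of 8 token vectors

# Sequential (RNN-style): each step depends on the previous hidden state,
# so the loop over t cannot be parallelized.
W_h, W_x = rng.normal(size=(d, d)), rng.normal(size=(d, d))
h = np.zeros(d)
for t in range(seq_len):
    h = np.tanh(h @ W_h + x[t] @ W_x)

# Attention-style: all pairwise token interactions at once, as matrix products.
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = x @ W_q, x @ W_k, x @ W_v
scores = Q @ K.T / np.sqrt(d)                          # (seq_len, seq_len)
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
out = weights @ V                                      # whole sequence updated at once
```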

And yes, I’d definitely like to talk to an ASI.
