Exclusive interview with Stephen Wolfram on AI

This afternoon, we have a special edition of the Bay Area Times, an exclusive interview with Stephen Wolfram, the founder and CEO of Wolfram Research, creator of Wolfram Language, Wolfram|Alpha, Mathematica, and now Wolfram plugins for ChatGPT and Bard.

Below are the main insights of the interview, which you can read in full at the end.

Highlights of our interview with Stephen Wolfram

Superintelligent AGI is a “mushy kind of concept”

  • “There’s a traditional basic science fiction idea of [this] thing, it’s going to be like humans, but it’s going to be smarter than humans. And it’s always been rather unclear to me what it is. It's sort of a philosophically mushy kind of concept.”

  • “It’s very easy to make something we don’t understand”; it’s much harder to make something useful.

  • Stephen suggests LLMs are very good at reproducing human language, but they are likely not the best AI tool for every problem.

On AI killing us all: “hope that won’t happen”

  • These theories “reminded me [of] what theologians used to say, when they were arguing for the existence of God, by giving these kinds of rankings of ‘there must always be a greater such and such.’ And eventually there has to be a limit to that. And that's just not something that mathematically works that way.”

Best AI products used every day

Wolfram Language, the AI product that Stephen Wolfram uses every day.

  • “The things I built for the last 40 years. I mean, I use Wolfram Language all the time. And you know, is it AI? I don't know. Does it do things which, back when I started building it, people said, oh, well, when you're able to do that, then we know we have AI?”

AI products that Stephen wishes existed: tiny LLMs

  • “A tiny LLM that [does] the linguistic processing of an LLM and the common sense of an LLM in [a] form factor that can just run locally on my computer.”

Open source vs. closed source AI

The largest companies releasing LLMs, divided between open source and closed source.

  • It has to make money: “If you're going to make something real and good, it's going to cost money. [People] who work on it are going to have to eat, etc. So, somewhere, somehow, money's got to be flowing.”

  • Open source requires a 2nd product: “You can have something where you say, my software is open source, I make all my money on consulting. My software is open source, I make all my money on patent licensing. My software is open source, I make my money from advertisers, for example.”

  • He prefers charging for products. “I've tended to favor you just make stuff and you sell it to people who find it useful. And I think [that has] allowed us [to] maintain a kind of thread of innovation over now 36 years.”

  • Building a 2nd LLM is much easier than building the 1st. “Nobody knew, including the people at OpenAI, [that] ChatGPT was going to work at the level that it does work. All the money that went into OpenAI [could have] run out of steam before it delivered anything terribly useful. It was really a serious gamble.”

Full transcript

This transcript has been only lightly edited.

We've seen generative AI used frequently for productivity gains, for taking things from 1 to n. How do you see AI being used for creating new things from 0 to 1, for example, new drugs, new physics theories, etc.?

Well, I mean, at some level, computational creativity is cheap.

You know, for 40-something years, I've explored all sorts of simple programs where you kind of pick the program at random, or you enumerate lots of different simple programs. And what you see is lots of stuff you didn't expect to see, lots of stuff where you say, wow, that's something new and different and creative, so to speak.
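
As a concrete illustration of "enumerating lots of different simple programs" (a minimal Python sketch, not the code actually used): the elementary cellular automata Wolfram has long studied are indexed by a rule number from 0 to 255, so enumerating them is just a loop over rule numbers, running each one from a single "on" cell.

```python
def step(cells, rule):
    """One synchronous update of an elementary cellular automaton (cyclic boundary)."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, width=64, steps=32):
    # Start from a single "on" cell and evolve the rule for a number of steps.
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        row = step(row, rule)
        history.append(row)
    return history

# Enumerate the simple programs: run every one of the 256 elementary rules.
for rule in range(256):
    final = run(rule)[-1]
    print(rule, "".join("#" if c else "." for c in final))
```

Even this tiny enumeration turns up rules whose behavior looks far richer than their one-line update suggests, which is the kind of unexpected structure being described.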

I think the challenge is, when you do those creative things, do you make things which we humans care about or not? We can make all kinds of elaborate computational processes. But the question is whether those computational processes are things that are useful to us, for our technological or other purposes. So, you know, you talk about drugs: you can make a random protein if you want to, and it'll fold up in some complicated way. And the question is, is that protein actually useful for something?

We don't even know whether the proteins that exist in biology are, in some sense, random. We know that biology has managed to recruit them for all sorts of purposes that let us do all our biological things. So it's this issue of: creativity is complicated. If you go too far, you end up with something that you can't use. And I think that, if you say, can one come up with creative solutions to problems that we humans have defined? The answer is absolutely.

Large language models… well, they have some degree of creativity, and, you know, you crank up the temperature and they get more creative. If you crank up the temperature too high, they start sounding bonkers, so to speak. Perhaps very creatively so, but in a way that doesn't really resonate with what we humans expect to see.

What is the alternative to LLMs?

It depends on what you're doing.

And, you know, a lot of it depends on what you want to do. LLMs are the best thing we know right now for reproducing human linguistic kinds of things. If you're interested in finding an optimal algorithm for this or that kind of computational task, you might be better off just enumerating all possible algorithms, or possible simple programs, and seeing which one works best, which is a very different kind of thing than what you get in a trained neural net.

So, it depends on what kind of thing you're interested in doing.

And in terms of pure language processing, you know, we've discovered that with a big enough neural net we can do a good job of generating and processing human language. And that's a fascinating scientific discovery. Having made that discovery, it's probably possible to make something much simpler than that kind of neural net that will be successful at processing human language, but that's a piece of future science, so to speak, to figure out quite how to do that.

Do you think that, to create new drugs or new physics theories, the path is LLMs, or is it a new technology?

I mean, look, what does it mean to create new drugs?

You know, what's typical in the pharmaceutical industry is, you say: here's a target, here's the thing we want a molecule to do. Now, can we find a molecule that does that? Which part of that pipeline are you talking about? There's the question of finding what we want the drug to do. There are different pieces to that; often the finding of targets is done in part by mining the literature.

I guess some people believe that LLMs could get so intelligent that they would start to refine themselves, and do things we don't even comprehend, and get to superintelligence.

It's very easy to get something… I mean, you know, pick a random cellular automaton rule, run it, and it does something we don't understand. It's very easy to make something we don't understand. And with an LLM left to its own devices, wandering around in the space of all possible LLMs, you will certainly get to an LLM that does things we don't understand.

And I think the question is, you know, is what the LLM does useful, harmful, or whatever else. Computation makes it really easy to generate structure that doesn't happen to be anything we humans have ever seen before. So it's a complicated question, and I don't know exactly what you mean by making new drugs. If you mean, given a verbal description like "I want a molecule that will have a manganese atom in a cage in the middle of it," figure out that molecule, then yes, people have played around with LLMs for that. And there are other kinds of neural net technologies that allow you to extrapolate from what we know and from existing molecules.

With regards to the Wolfram Physics Project, do you use AI or LLMs for that?

I use them a bit.

I mean, not LLMs, because they didn't exist at the time when I was doing that project. I used them in trying to understand the space of possible rules and the consequences of different rules. There's a question of how you can classify what different possible rules do, and the most efficient apparatus I have for doing that is looking at them and using my own human visual processing system.

But I did use machine learning and neural nets and so on a bit, to do a version of what my human visual system does, without me personally having to do it, so to speak, and over larger amounts of data, trying to classify behaviors and so on, and that was fairly useful.

The kind of thing one might hope for from an LLM is something like reading the whole literature of physics and being able to pull out some effect that people haven't noticed, some subtle thing: seeing 1000 different physics papers all mentioning one particular thing pointing in one particular direction, and getting that kind of statistical analysis of the language of lots of things people have written about physics. That's the kind of thing one might think could be useful. I haven't managed to make that successful yet. But my feeling about something like that is: if you have a question, you can potentially get it answered that way. But if you just say, go out and figure this out, without knowing what you're figuring out, you will not get anything that's useful.

More generally, how do you see artificial general intelligence, in the sense of how close are we and how probable it is? AGI meaning something that could give us a theory of everything by itself, something much, much smarter than humans.

You know, we have the physical world, the physical world does what it does. What is the theory of everything? It's something that has to connect what actually happens in the existing physical world with some narrative that we humans can understand.

So, you know, that's what we're trying to do. And I think we have at least the elements of how a theory of physics works. One of the things that's nontrivial about it is that at the lowest level, operating at the level of the atoms of space and so on, there's nothing much that's human comprehensible. To relate to things that we experience requires going a long way away from that, and requires identifying things like space and time and so on that we humans are familiar with.

There may very well be aspects of the universe that our current sensory apparatus just doesn't happen to be sensitive to, and we won't yet know to analyze a theory of physics in those terms. That's what has happened in the history of physics plenty of times: people just didn't know to look at this or that thing, because it wasn't something that could be measured at the time.

But, you know, there's a traditional basic science fiction idea of: we're going to have this thing, it's going to be like humans, but it's going to be smarter than humans. And it's always been rather unclear to me what it is. It's sort of a philosophically mushy kind of concept.

And one needs to tighten up what one's talking about, and then one can address what's possible and what's not. I mean, it was a big surprise that something like LLMs could do as well as they can do with human language. I don't think anybody could have predicted that at this moment, with this engineering and so on, we would get to the point where an LLM can write credible human-like text. So I'm not going to be able to tell you we're X number of years away from this thing, because I don't know what the thing is.

So I guess you also don't believe in theories like the ones from Eliezer Yudkowsky, that this sort of magical AGI will kill us all somehow?

Well, hope that won't happen.

You know, I like to give the analogy between computational processes that go on in AIs and computational processes that go on in the natural world. In both cases it's possible we could be wiped out; nature could wipe us out too. I think it's a complicated story, because there's the question of: what is the goal of an AI? What's it trying to do?

There's no abstractly defined goal for an AI. Perhaps when we built it, we gave it some goal. Perhaps we gave it, you know, world domination as a goal. But really AI, like all technology, is a means of automating getting to goals which have been defined.

If it isn't that, it's more like nature, where it just runs and does what it does. And, as I say, then the question of whether it happens to wipe us out is similar to the question of whether nature happens to wipe us out. There's no necessary path that leads to that.

One of the things that is tricky is that we are the result of a history of biological evolution, which has all been about the survival of the fittest and the struggle for life and so on. And insofar as the AIs that we build are imprinted with that same sort of biological imperative, that will lead to some effects.

I mean, I think the concept that one will always get to a better, stronger sort of AI is a strangely flawed concept, because it suggests... it's like, if you take humans, people might imagine that there's a single thing, the general intelligence quotient, little g it used to be called, where there's just a scale and some people have a higher g and some people have a lower g.

Well, it's pretty obvious that the comparison of humans is not on a linear scale like that. And it's not even clear there's any useful linear scale of that kind.

And similarly with AIs: you make an AI, you make another AI, and it's a whole complicated network. You could, I don't know, put the AIs in some kind of battle to the death with each other.

Probably which AI will win will depend on lots of details, and it is very unlikely that there'll be a single ranking of AIs. It makes me wonder, actually: certainly in, I don't know, a game like chess, for example, one has these notions of Elo scores and things like this, which represent the result of progressive competitions like that. I suppose one could imagine making such a ranking for AIs.
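
For reference on the Elo idea mentioned above, this is the standard chess rating scheme rather than anything specific to ranking AIs; one round of it can be written as:

```latex
% Expected score of player A against player B, given current ratings R_A and R_B
E_A = \frac{1}{1 + 10^{(R_B - R_A)/400}}
% Update after a game: S_A is 1 for a win, 1/2 for a draw, 0 for a loss,
% and K is the update step size (commonly 16 or 32)
R_A' = R_A + K\,(S_A - E_A)
```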

But, you know, I haven't seen Eliezer for years, and his stories may have evolved since I last saw him. I enjoyed hearing his thinking, although I have to say it reminded me rather much of what theologians used to say when they were arguing for the existence of God, by giving these kinds of rankings of "there must always be a greater such and such," and eventually there has to be a limit to that. And that's just not something that mathematically works that way.

Let’s move on to some more concrete, practical things. What are the best AI products that you use on a day-to-day basis and that are useful for you?

The things I built for the last 40 years.

I mean, I use Wolfram Language all the time. And, you know, is it AI? I don't know. Does it do things which, back when I started building it, people said, oh, well, when you're able to do that, then we know we have AI?

Well, you know, back when I started building these things, people thought that as soon as you could do any kind of symbolic math, you were doing AI. Then people said that when you can do question answering from natural language, you are doing AI. That's what we achieved with Wolfram|Alpha back in 2009.

So, you know, the thing I personally use all the time, every day, is Wolfram Language; it's my medium for thinking about things. Just as one uses human language as a way to concretize thoughts, I use computational language as my medium for crystallizing thoughts about things, which has the nice feature that, well, then my computer can actually help work those things out.

But in terms of things we're building: a thing we're just now developing, which will come out in another week or so, hopefully, is packaging LLM technology within our computational language.

That opens up a rather interesting set of possibilities, where you can define some computational function within the language in terms of these precise computational constructs. It might be a thing that relates to human natural language. It might be a thing that relates to geography. It might be a thing that makes use of all sorts of computable data that we've accumulated over a long period of time.

But the question is, can you also have an LLM in the loop as part of doing the computations one wants to do? And the answer is yes. One can have a fine-grained way of using an LLM.

So, you know, you might just have something where you say "change this sentence into active voice," which is something an LLM can do. But that could be part of a much bigger pipeline of processing of some piece of text, or "summarize this piece of text," and then you take some things from that.
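
To make the "LLM in the loop" idea concrete, here is a minimal Python sketch of the general pattern being described; it is not Wolfram's actual API (which had not shipped at the time of the interview), and call_llm is a hypothetical placeholder for whatever LLM client is available.

```python
from typing import List

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: swap in a real LLM client (local model or hosted API).
    # Here it just echoes the input text back so the pipeline runs end to end.
    return prompt.splitlines()[-1]

def to_active_voice(sentence: str) -> str:
    # The LLM handles the one step that needs linguistic judgment...
    return call_llm(f"Rewrite this sentence in the active voice:\n{sentence}")

def process_document(text: str) -> List[str]:
    # ...while the surrounding pipeline stays ordinary, precisely defined computation.
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    rewritten = [to_active_voice(s) for s in sentences]
    # Downstream steps (filtering, counting, storing) remain deterministic.
    return [s for s in rewritten if len(s.split()) > 3]

if __name__ == "__main__":
    print(process_document("The ball was thrown by Alice. It was caught by Bob."))
```

The point is that the LLM call is one small, replaceable step inside an otherwise conventional program, rather than being the whole program.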

It's always important, when you think about using these sorts of machine learning-based systems, to have a problem where, if it gets it right, you're a winner, and if it gets it wrong, it's not a disaster. That's different from the typical experience of doing computation, where the goal is to just get it right; it's not "oh, we got 10 results and three of them are what we wanted, and so that's good." It's a different kind of thinking about what you're trying to do in computation and so on.

Okay, so you've developed plugins for ChatGPT that can access Wolfram.

Yes, that's right.

We also released a ChatGPT plugin kit, so you can add your own plugins to ChatGPT, for example, plugins that call Wolfram Language functionality. And that has the interesting feature that from, say, ChatGPT on the web, you can actually call back into your local computer… which gets a little bit scary, because do you really want the LLM controlling things that happen on your local computer? Well, if you define what it is it does, then that's okay. If you let it go free range, you can start worrying it's going to delete all your files or something.
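
The "define what it is it does" point can be illustrated with a generic allowlist pattern. This is a minimal sketch of that general idea, not the plugin kit's actual mechanism, and the tool names are purely illustrative.

```python
from typing import Callable, Dict

# Only these named operations can ever be triggered by the model.
ALLOWED_TOOLS: Dict[str, Callable[[str], str]] = {
    "word_count": lambda text: str(len(text.split())),
    "uppercase": lambda text: text.upper(),
}

def run_tool_request(tool_name: str, argument: str) -> str:
    """Execute a model-requested operation only if it is on the allowlist."""
    if tool_name not in ALLOWED_TOOLS:
        # Anything outside the allowlist (say, "delete_files") is refused outright.
        raise ValueError(f"tool '{tool_name}' is not permitted")
    return ALLOWED_TOOLS[tool_name](argument)

# A request like ("uppercase", "hello") returns "HELLO"; a request for an
# unlisted tool raises an error instead of touching the machine.
```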

What’s the best LLM-based product that you wish existed? Perhaps something that you guys will build in the future, or that you'd like someone else to build.

Oh, I'd love to have a tiny LLM that does the linguistic processing of an LLM and the common sense of an LLM in a form factor that can just run locally on my computer. That would be a very nice thing to have.

Like LLaMA, the Meta LLM, something like that?

Well, I mean, obviously we've been doing experiments with that. It's not quite there yet.

How do you see open source versus closed source in terms of LLMs?

Well, I mean, that's a complicated question.

I think that in terms of software in general, it's always a question of what's the business model?

Because in the end, if you're going to make something real and good, it's going to cost money. You know, people who work on it are going to have to eat, etc. So, somewhere, somehow, money's got to be flowing.

And, you know, people have played the shell game of moving around where the money is coming from. You can have something where you say: my software is open source, I make all my money on consulting; my software is open source, I make all my money on patent licensing; my software is open source, I make my money from advertisers, for example.

My own personal approach for the last 40 years or so has been to try as much as possible to align the source of revenue with the place where value [is going]. And so I've tended to favor: you just make stuff and you sell it to people who find it useful. And I think that's allowed us to maintain a kind of thread of innovation over now 36 years of my current company.

That is really only possible because we have this alignment between where the revenue comes from and where the value is going. In the end, if people are going to make valuable technology, well, maybe there'll be some other model that comes along, but I think that typically first-run innovation tends to happen in the situation where the value and the revenue are aligned.

By the time you're at "well, such and such a thing already exists, let's make a clone of it," it's a very different economic proposition, not least because it's a lot lower risk and cheaper to do that.

So I mean, you know, once an LLM exists, it's a lot easier to build the second LLM than it was to build the first one. Nobody knew, including the people at OpenAI, that, you know, ChatGPT was going to work at the level that it does work.

And that was a high-risk proposition. I mean, with all the money that went into OpenAI, the thing could have run out of steam before it delivered anything terribly useful. It was really a serious gamble. So now the question is: is there a business model which will drive innovation when whoever does that innovation doesn't get the benefit from it? I don't know. I've not seen one. In the things I've seen in software development in general, that's not been a good long-term driver.

As I say, the situation we're in right now is that an LLM that works quite well exists. Come on, make other ones; that's surely doable. You know, it's a detailed question whether it's like with operating systems. For a long time it was kind of like: well, you can make an operating system and it's open source, but if you want good graphic design to have been done, and nice user experience work to have been done, that turns out to be something that was just too grindy for people and required too much global organization for it to be a thing that happened quickly in the open-source world. And I think one can expect here that there will be things that involve too much coordinated, grinding engineering for it to be easy to execute them in a kind of crowdsourced way.

I guess OpenAI now has the advantage of having a whole ecosystem of companies building plugins like yours.

Yeah, I mean, you know, obviously we're active members of this particular ecosystem of [LLMs]. And so yes, we tend to know, I think, most of them. I don't know, I'm not at the frontlines of all of it.

But I think we know most of the players, and, you know, everybody is still trying to figure out the right business models. How will this evolve? How do different players fit into things? I think it's been really nice that the things we've built over the last 40 years or so turn out to be just a wonderfully unique and valuable thing in this emergent ecosystem.

And, you know, all the effort that I put into building our computational language to be as usable as possible by humans seems, not that I expected this, to have the wonderful extra benefit that it also makes it easy for AIs to use.

And it's really interesting to me that all the effort we put into naming, you know, 7000 built-in functions in Wolfram Language pays off here too: just as humans can figure out what these functions do from the name, so can the AIs, and it's really neat to see that happen.

And so, as I say, I certainly built this for humans, so to speak; it turns out it also works really well for AIs. In a sense, what we've tried to build is the computational stack, which is very different from the kind of statistical learning thing that an LLM is all about.

And in a sense, it's really pretty nice to see that on both sides one has gotten to a really good point: both with what statistical, LLM-like technology can do, and with what computational technology can do.

Now, of course, there's some mixing, because inside our whole computational stack there's plenty of machine learning being used in heuristic choices of algorithms and things like this. It's like what I was saying before: if the underlying algorithm is all machine learning all the way down, it might not get the right answer.

But if the AI is used to pick which of 10 possible algorithms you should use, all of which would get the right answer but at different speeds, that's a safe way to use AI in a computational context. And there's lots of that in what we've built over the last few decades.
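
Here is a minimal sketch of that pattern, with illustrative names and a trivial size threshold standing in for a learned model (not Wolfram's internals): the heuristic only chooses among algorithms that all return the correct result, so a poor choice costs speed, never correctness.

```python
from typing import Callable, List, Sequence

def insertion_sort(xs: Sequence[int]) -> List[int]:
    # Simple method: fast for short inputs, slow for long ones.
    out: List[int] = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

def merge_based_sort(xs: Sequence[int]) -> List[int]:
    # Stand-in for a heavier method with better asymptotic behavior.
    return sorted(xs)

def heuristic_choice(xs: Sequence[int]) -> Callable[[Sequence[int]], List[int]]:
    # A learned model could sit here; a size threshold stands in for it.
    return insertion_sort if len(xs) < 32 else merge_based_sort

def safe_sort(xs: Sequence[int]) -> List[int]:
    # Whichever algorithm the heuristic picks, the answer is correct; only speed varies.
    return heuristic_choice(xs)(xs)
```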

Plus there are some things which are purely heuristic, where it's a question of, say, producing a visual output where the labels on some plots don't overlap. There are probably many ways to solve that, and any one of them is okay. That's another kind of place where one can use machine learning-type methods safely, so to speak. But it's really quite a wonderful thing to see that the things we've been building for so long fit so nicely, and are really a complement to what's been made possible with LLMs.

Thank you Stephen for your time and this great interview.

